Select the method or formula of your choice.

In matrix terms, these are the formulas for the different sums of squares:

- SS Regression = b'X'Y – (1/n)Y'JY
- SS Error = Y'Y – b'X'Y
- SS Total = Y'Y – (1/n)Y'JY

Minitab breaks down the SS Regression or SS Treatments component into the amount of variation explained by each term using both the sequential sum of squares and adjusted sum of squares.

Term | Description
---|---
b | vector of coefficients
X | design matrix
Y | vector of response values
n | number of observations
J | n by n matrix of 1s
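As a concrete illustration of the matrix formulas, this sketch computes the three sums of squares for a simple linear regression in plain Python. The data and variable names are made up for the example; the identity SS Regression + SS Error = SS Total serves as a check.

```python
# Illustrative data for y = b0 + b1*x (not from Minitab).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(y)

# Solve the normal equations (X'X) b = X'Y for X = [1, x].
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
det = n * sxx - sx * sx
b0 = (sxx * sy - sx * sxy) / det          # intercept
b1 = (n * sxy - sx * sy) / det            # slope

yty = sum(v * v for v in y)               # Y'Y
yjy_over_n = sy * sy / n                  # (1/n) Y'JY
btxty = b0 * sy + b1 * sxy                # b'X'Y

ss_total = yty - yjy_over_n
ss_regression = btxty - yjy_over_n
ss_error = yty - btxty
```

The three quantities satisfy SS Regression + SS Error = SS Total, which is a useful sanity check for any implementation.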

Minitab breaks down the SS Regression or Treatments component of variance into sequential sums of squares for each factor. The sequential sums of squares depend on the order the factors or predictors are entered into the model. The sequential sum of squares is the unique portion of SS Regression explained by a factor, given any previously entered factors.

For example, if you have a model with three factors or predictors, X1, X2, and X3, the sequential sum of squares for X2 shows how much of the remaining variation X2 explains, given that X1 is already in the model. To obtain a different sequence of factors, repeat the analysis and enter the factors in a different order.
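The order dependence can be sketched numerically: fit the model with X1 alone, then with X1 and X2, and take the difference in SS Regression. This pure-Python sketch is illustrative only; the data and the helper `fit_ss_regression` are invented for the example and are not Minitab's implementation.

```python
# Sequential sum of squares for X2, given X1 already in the model.

def fit_ss_regression(columns, y):
    """Least squares of y on a constant plus the given predictor columns;
    returns SS Regression = b'X'Y - (1/n)(sum y)^2."""
    n = len(y)
    X = [[1.0] + [col[i] for col in columns] for i in range(n)]
    k = len(X[0])
    # Normal equations A b = v with A = X'X, v = X'Y.
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    v = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    v0 = list(v)                      # keep X'Y for the SS formula
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (v[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return sum(bi * vi for bi, vi in zip(b, v0)) - sum(y) ** 2 / n

x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [2.0, 1.0, 4.0, 3.0, 6.0, 5.0]
y = [3.0, 4.1, 8.9, 9.2, 15.1, 14.8]

ssr_x1 = fit_ss_regression([x1], y)           # X1 entered first
ssr_both = fit_ss_regression([x1, x2], y)     # X1, then X2
seq_ss_x2 = ssr_both - ssr_x1                 # unique portion for X2 given X1
```

Because the models are nested, SS Regression can only grow as predictors are added, so the sequential sum of squares is never negative.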

The degrees of freedom for each component of the model are:

Sources of variation | DF
---|---
Regression | p
Error | n – p – 1
Total | n – 1
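As a quick arithmetic check of this table (the sample sizes here are made up for illustration):

```python
# Hypothetical model with n = 20 observations and p = 3 coefficients
# (not counting the constant).
n, p = 20, 3
df_regression = p            # Regression
df_error = n - p - 1         # Error
df_total = n - 1             # Total
print(df_regression, df_error, df_total)  # 3 16 19
```

Note that the Regression and Error degrees of freedom always sum to the Total degrees of freedom.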

If your data meet certain criteria and the model includes at least one continuous predictor or more than one categorical predictor, then Minitab uses some degrees of freedom for the lack-of-fit test. The criteria are as follows:

- The data contain multiple observations with the same predictor values.
- The data contain the correct points to estimate additional terms that are not in the model.

Term | Description
---|---
n | number of observations
p | number of coefficients in the model, not counting the constant

The formula for the Mean Square (MS) of the regression is:

MS Regression = Σ(ŷ_{i} – ȳ)^{2} / p

Term | Description
---|---
ȳ | mean response
ŷ_{i} | i^{th} fitted response
p | number of terms in the model

The Mean Square of the error (also abbreviated as MS Error or MSE, and denoted as s^{2}) is the variance around the fitted regression line. The formula is:

MS Error = Σ(y_{i} – ŷ_{i})^{2} / (n – p – 1)

Term | Description
---|---
y_{i} | i^{th} observed response value
ŷ_{i} | i^{th} fitted response
n | number of observations
p | number of coefficients in the model, not counting the constant

The formula for the total Mean Square (MS) is:

MS Total = Σ(y_{i} – ȳ)^{2} / (n – 1)

Term | Description
---|---
ȳ | mean response
y_{i} | i^{th} observed response value
n | number of observations

The formulas for the F-statistics are as follows:

- F(Regression) = MS Regression / MS Error
- F(Term) = MS Term / MS Error
- F(Lack-of-fit) = MS Lack-of-fit / MS Pure error

Term | Description
---|---
MS Regression | A measure of the variation in the response that the current model explains.
MS Error | A measure of the variation that the model does not explain.
MS Term | A measure of the amount of variation that a term explains after accounting for the other terms in the model.
MS Lack-of-fit | A measure of variation in the response that could be modeled by adding more terms to the model.
MS Pure error | A measure of the variation in replicated response data.
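The quantities above combine as simple ratios. A minimal numeric sketch, where the SS values and sample sizes are invented for illustration:

```python
# Hypothetical ANOVA quantities for a model with p = 3 coefficients
# (not counting the constant) fit to n = 20 observations.
ss_regression, ss_error = 120.0, 30.0
n, p = 20, 3

ms_regression = ss_regression / p         # MS Regression
ms_error = ss_error / (n - p - 1)         # MS Error, i.e. s^2
f_regression = ms_regression / ms_error   # F(Regression)
```

A large F(Regression) indicates that the model explains much more variation per degree of freedom than is left unexplained.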

The p-value is a probability that is calculated from an F-distribution with the degrees of freedom (DF) as follows:

- Numerator DF: sum of the degrees of freedom for the term or the terms in the test
- Denominator DF: degrees of freedom for error

1 − P(*F* ≤ *f*)

Term | Description
---|---
P(F ≤ f) | cumulative distribution function for the F-distribution
f | f-statistic for the test
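One way to evaluate this probability is through the regularized incomplete beta function, since P(F ≤ f) = I_{z}(d1/2, d2/2) with z = d1·f / (d1·f + d2). The sketch below implements this in pure Python using a standard continued-fraction algorithm; it is illustrative, not Minitab's internal code, and the function names are invented.

```python
import math

def _betacf(a, b, x, max_iter=200, eps=3e-12, fpmin=1e-300):
    """Continued fraction for the incomplete beta function (Lentz's method)."""
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    if abs(d) < fpmin:
        d = fpmin
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        if abs(d) < fpmin:
            d = fpmin
        c = 1.0 + aa / c
        if abs(c) < fpmin:
            c = fpmin
        d = 1.0 / d
        h *= d * c
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        if abs(d) < fpmin:
            d = fpmin
        c = 1.0 + aa / c
        if abs(c) < fpmin:
            c = fpmin
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def reg_inc_beta(a, b, x):
    """Regularized incomplete beta function I_x(a, b)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    bt = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                  + a * math.log(x) + b * math.log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return bt * _betacf(a, b, x) / a
    return 1.0 - bt * _betacf(b, a, 1.0 - x) / b

def f_p_value(f, df_num, df_den):
    """p-value 1 - P(F <= f) for an F(df_num, df_den) distribution."""
    cdf = reg_inc_beta(df_num / 2.0, df_den / 2.0,
                       df_num * f / (df_num * f + df_den))
    return 1.0 - cdf
```

For example, `f_p_value(1.0, 2, 2)` is 0.5, since 1 is the median of an F-distribution with 2 and 2 degrees of freedom.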

To calculate the pure error lack-of-fit test, Minitab calculates:

- The pure error sum of squares (SS PE): the sum of squared deviations of the response from the mean within each set of replicates, added across all replicate sets.
- The pure error mean square, MS PE = SS PE / (n – m), where n = number of observations and m = number of distinct x-level combinations.
- The lack-of-fit sum of squares, SS LOF = SS Error – SS PE.
- The lack-of-fit mean square, MS LOF = SS LOF / (m – p – 1), where p = number of coefficients in the model, not counting the constant.
- The test statistic, F = MS LOF / MS PE.

Large F-values and small p-values suggest that the model is inadequate.
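The steps above can be sketched for a simple linear regression with replicated x values. All data and names below are made up for illustration; the response is deliberately close to quadratic in x, so the straight-line fit shows lack of fit.

```python
from collections import defaultdict

# Replicated data: two observations at each of four x levels.
x = [1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0]
y = [1.1, 0.9, 4.2, 3.8, 8.9, 9.1, 16.2, 15.8]  # roughly y = x^2
n = len(y)

# Fit y = b0 + b1*x by least squares.
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
det = n * sxx - sx * sx
b1 = (n * sxy - sx * sy) / det
b0 = (sy - b1 * sx) / n
ss_error = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))

# Pure error: squared deviations from the mean within each replicate set.
groups = defaultdict(list)
for xi, yi in zip(x, y):
    groups[xi].append(yi)
m = len(groups)                      # number of distinct x-level combinations
ss_pe = sum(sum((yi - sum(g) / len(g)) ** 2 for yi in g)
            for g in groups.values())
ms_pe = ss_pe / (n - m)

# Lack of fit: the remainder of SS Error.
p = 1                                # coefficients, not counting the constant
ss_lof = ss_error - ss_pe
ms_lof = ss_lof / (m - p - 1)
f_lof = ms_lof / ms_pe               # large F suggests an inadequate model
```

Here the within-replicate scatter is small while the group means bend away from the fitted line, so the lack-of-fit F is large, signaling that a straight line is inadequate for these data.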

This p-value is for the test of the null hypothesis that the coefficients are 0 for any terms that can be estimated from these data but are not in the model. The p-value is the probability from an F-distribution with degrees of freedom (DF) as follows:

- Numerator DF: degrees of freedom for lack-of-fit
- Denominator DF: degrees of freedom for pure error

1 − P(*F* ≤ *f_{j}*)

Term | Description
---|---
P(F ≤ f_{j}) | cumulative distribution function for the F-distribution
f_{j} | f-statistic for the test