Mean square values are variance estimates. These values are used in ANOVA and regression analyses to determine whether model terms are significant.

A mean square is an estimate of population variance. It is calculated by dividing the corresponding sum of squares by its degrees of freedom.

In regression, mean squares are used to determine whether terms in the model are significant.

- The term mean square is obtained by dividing the term sum of squares by the degrees of freedom.
- The mean square of the error (MSE) is obtained by dividing the sum of squares of the residual error by the degrees of freedom. The MSE is the variance (s²) around the fitted regression line.

Dividing the MS (term) by the MSE gives the F-statistic, which follows the F-distribution with the term's degrees of freedom and the error degrees of freedom.
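The regression calculation above can be sketched in Python. This is a minimal illustration with hypothetical data, not Minitab's internal procedure: fit a one-predictor model by least squares, form the term and error mean squares from their sums of squares, and compare their ratio to the F-distribution.

```python
import numpy as np
from scipy import stats

# Hypothetical data: simple linear regression with one predictor
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2, 13.8, 16.1])
n = len(y)

# Fit y = b0 + b1*x by least squares
X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
fitted = X @ b

# Sums of squares and their degrees of freedom
ss_reg = np.sum((fitted - y.mean()) ** 2)  # regression SS, df = 1 (one term)
ss_err = np.sum((y - fitted) ** 2)         # residual SS,   df = n - 2
ms_reg = ss_reg / 1                        # term mean square
mse = ss_err / (n - 2)                     # mean square error = s^2

# F-statistic and its p-value from the F-distribution
f_stat = ms_reg / mse
p_value = stats.f.sf(f_stat, 1, n - 2)
print(f"MS(term) = {ms_reg:.3f}, MSE = {mse:.3f}, F = {f_stat:.2f}, p = {p_value:.4g}")
```

A small p-value indicates that the term explains significantly more variation than would be expected from the residual error alone.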

In ANOVA, mean squares are used to determine whether factors (treatments) are significant.

- The treatment mean square is obtained by dividing the treatment sum of squares by the degrees of freedom. The treatment mean square represents the variation between the sample means.
- The mean square of the error (MSE) is obtained by dividing the sum of squares of the residual error by the degrees of freedom. The MSE represents the variation within the samples.

For example, you do an experiment to test the effectiveness of three laundry detergents. You collect 20 observations for each detergent. The variation in means between Detergent 1, Detergent 2, and Detergent 3 is represented by the treatment mean square. The variation within the samples is represented by the mean square of the error.
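The detergent example can be worked through numerically. The sketch below uses simulated data (the group means and seed are assumptions, not values from the text): it computes the treatment mean square from the between-group variation and the MSE from the within-group variation, then cross-checks the resulting F against SciPy's one-way ANOVA.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical detergent experiment: 20 observations per detergent,
# with assumed true means 10.0, 10.5, and 12.0
groups = [rng.normal(loc=m, scale=1.0, size=20) for m in (10.0, 10.5, 12.0)]
k = len(groups)
n = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

# Treatment mean square: variation between the sample means
ss_trt = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_trt = ss_trt / (k - 1)

# MSE: variation within the samples
ss_err = sum(((g - g.mean()) ** 2).sum() for g in groups)
mse = ss_err / (n - k)

f_stat = ms_trt / mse
# Cross-check against scipy's one-way ANOVA
f_check, p = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f} (scipy: {f_check:.3f}), p = {p:.4g}")
```

The hand-computed F matches `scipy.stats.f_oneway`, confirming that the F-statistic is just the ratio of the two mean squares.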

Adjusted mean squares are calculated by dividing the adjusted sum of squares by the degrees of freedom. The adjusted sum of squares does not depend on the order in which the factors are entered into the model. It is the unique portion of SS Regression explained by a factor, given all the other factors in the model, regardless of the order in which they were entered.

For example, if you have a model with three factors, X1, X2, and X3, the adjusted sum of squares for X2 shows how much of the remaining variation X2 explains, assuming that X1 and X3 are also in the model.
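One way to see this is as an extra-sum-of-squares comparison. The sketch below (simulated data; the coefficients are assumptions for illustration) computes the adjusted sum of squares for X2 as the drop in residual sum of squares when X2 joins a model that already contains X1 and X3:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
X1, X2, X3 = rng.normal(size=(3, n))
y = 1.0 + 2.0 * X1 + 1.5 * X2 + 0.5 * X3 + rng.normal(scale=0.5, size=n)

def sse(*cols):
    """Residual sum of squares for a least-squares fit with an intercept."""
    X = np.column_stack([np.ones(n), *cols])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return resid @ resid

# Adjusted SS for X2: the reduction in residual SS when X2 is added
# to a model that already contains X1 and X3 -- entry order is irrelevant.
adj_ss_x2 = sse(X1, X3) - sse(X1, X2, X3)
print(f"Adjusted SS for X2 = {adj_ss_x2:.3f}")
```

Because the comparison always conditions on all other factors, the same value results no matter where X2 appears in the model statement.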

If you do not specify any factors to be random, Minitab assumes that they are fixed. In this case, the denominator for F-statistics will be the MSE. However, for models that include random terms, the MSE is not always the correct error term. You can examine the expected mean squares to determine the error term that was used in the F-test.

When you perform General Linear Model, Minitab displays a table of expected mean squares, estimated variance components, and the error term (the denominator mean square) used in each F-test by default. The expected mean squares are the expected values of these terms with the specified model. If there is no exact F-test for a term, Minitab solves for the appropriate error term in order to construct an approximate F-test. This test is called a synthesized test.

The estimates of variance components are the unbiased ANOVA estimates. They are obtained by setting each calculated mean square equal to its expected mean square, which gives a system of linear equations in the unknown variance components that is then solved. Unfortunately, this approach can cause negative estimates, which should be set to zero. Minitab, however, displays the negative estimates because they sometimes indicate that the model being fit is inappropriate for the data. Variance components are not estimated for fixed terms.
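For a balanced one-way random-effects design, this equating of mean squares to their expectations can be solved by hand. The sketch below uses simulated data (the number of levels, replicates, and true variances are assumptions): since E[MS treatment] = σ² + n·σ²_τ and E[MSE] = σ², the ANOVA estimates follow directly, and the treatment component can come out negative when MS treatment is smaller than the MSE.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical balanced one-way random-effects design:
# a = 5 randomly chosen levels, n = 4 observations per level
a, n = 5, 4
level_effects = rng.normal(scale=2.0, size=a)  # true sigma_tau = 2
data = level_effects[:, None] + rng.normal(scale=1.0, size=(a, n))

grand_mean = data.mean()
ms_trt = n * ((data.mean(axis=1) - grand_mean) ** 2).sum() / (a - 1)
mse = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (a * (n - 1))

# Expected mean squares: E[MS_trt] = sigma^2 + n * sigma_tau^2, E[MSE] = sigma^2.
# Set each observed mean square equal to its expectation and solve.
sigma2_hat = mse
sigma_tau2_hat = (ms_trt - mse) / n  # can be negative for some samples
print(f"sigma^2 = {sigma2_hat:.3f}, sigma_tau^2 = {sigma_tau2_hat:.3f}")
```

A persistently negative estimate of σ²_τ is the kind of signal mentioned above: it may indicate that the random term, or the model as a whole, is inappropriate for the data.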