Find definitions and interpretations for every statistic in the Analysis of Variance table.

The total degrees of freedom (DF) represent the amount of information in your data. The analysis uses that information to estimate the values of unknown population parameters. The total DF is determined by the number of observations in your sample. The DF for a term shows how much information that term uses. Increasing your sample size provides more information about the population, which increases the total DF. Increasing the number of terms in your model uses more information, which decreases the DF available to estimate the variability of the parameter estimates.
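This accounting can be sketched with made-up numbers (a hypothetical example, not Minitab output): with n observations, the total DF is n − 1, each term consumes its own DF, and the remainder is left to estimate error.

```python
# Sketch of the degrees-of-freedom accounting described above.
# Hypothetical example: 30 observations and three predictors,
# each using 1 DF.
n_observations = 30
term_df = {"A": 1, "B": 1, "C": 1}   # DF used by each term

total_df = n_observations - 1        # total information in the data
model_df = sum(term_df.values())     # information the terms use
error_df = total_df - model_df       # DF left for the error estimate

print(total_df, model_df, error_df)  # 29 3 26
```

Adding a fourth term would raise `model_df` to 4 and lower `error_df` to 25; collecting more observations raises `total_df` instead.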

Adjusted sums of squares are measures of variation for different components of the model. The order of the predictors in the model does not affect the calculation of the adjusted sums of squares. In the Analysis of Variance table, Minitab separates the sums of squares into different components that describe the variation due to different sources.

- Adj SS Term
- The adjusted sum of squares for a term is the increase in the regression sum of squares compared to a model with only the other terms. It quantifies the amount of variation in the response data that is explained by each term in the model.
- Adj SS Error
- The error sum of squares is the sum of the squared residuals. It quantifies the variation in the data that the predictors do not explain.
- Adj SS Total
- The total sum of squares is the sum of the term sum of squares and the error sum of squares. It quantifies the total variation in the data.
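The decomposition above can be verified numerically. The following sketch fits a simple one-predictor regression to made-up data (so it shows the ordinary, not adjusted, sums of squares, but the same partition idea) and checks that the model and error sums of squares add up to the total.

```python
# Sketch: SS Total = SS Model + SS Error for a least-squares fit
# on hypothetical data.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(x)

mean_x = sum(x) / n
mean_y = sum(y) / n
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x
fits = [intercept + slope * xi for xi in x]

ss_model = sum((fi - mean_y) ** 2 for fi in fits)          # explained variation
ss_error = sum((yi - fi) ** 2 for yi, fi in zip(y, fits))  # squared residuals
ss_total = sum((yi - mean_y) ** 2 for yi in y)             # total variation

# For a least-squares fit with an intercept, the parts sum to the total.
assert abs(ss_total - (ss_model + ss_error)) < 1e-9
```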

Minitab uses the adjusted sums of squares to calculate the p-value for a term. Minitab also uses the sums of squares to calculate the R² statistic. Usually, you interpret the p-values and the R² statistic instead of the sums of squares.

Adjusted mean squares (MS) measure how much variation a term or a model explains, assuming that all other terms are in the model, regardless of the order in which they were entered. Unlike the adjusted sums of squares, the adjusted mean squares consider the degrees of freedom.

The adjusted mean square of the error (also called MSE or s²) is the variance around the fitted values.
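In other words, each adjusted mean square is the adjusted sum of squares divided by its degrees of freedom. A minimal sketch with hypothetical values (these numbers are illustrative, not from any real analysis):

```python
# Sketch: adjusted MS = adjusted SS / DF, with made-up values
# for a term with 3 DF and an error with 26 DF.
adj_ss_term, df_term = 90.0, 3
adj_ss_error, df_error = 52.0, 26

adj_ms_term = adj_ss_term / df_term    # variation explained per DF
mse = adj_ss_error / df_error          # s², variance around the fits

# The F-value for the term is the ratio of the two mean squares.
f_value = adj_ms_term / mse
print(adj_ms_term, mse, f_value)  # 30.0 2.0 15.0
```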

Minitab uses the adjusted mean square to calculate the p-value for a term. Minitab also uses the adjusted mean squares to calculate the adjusted R² statistic. Usually, you interpret the p-values and the adjusted R² statistic instead of the adjusted mean squares.

An F-value appears for each term in the Analysis of Variance table. The F-value is the test statistic used to determine whether the term is associated with the response.

Minitab uses the F-value to calculate the p-value, which you use to make a decision about the statistical significance of the terms and model. The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.

A sufficiently large F-value indicates that the term or model is significant.

If you want to use the F-value to determine whether to reject the null hypothesis, compare the F-value to your critical value. You can calculate the critical value in Minitab or find the critical value from an F-distribution table in most statistics books. For more information on using Minitab to calculate the critical value, go to Using the inverse cumulative distribution function (ICDF) and click "Use the ICDF to calculate critical values".
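Outside Minitab, the same critical value can be found with any inverse cumulative distribution function for the F distribution. A sketch using SciPy (the DF values are hypothetical):

```python
# Sketch: the F critical value at significance level alpha is the
# inverse CDF of the F distribution at 1 - alpha.
from scipy.stats import f

alpha = 0.05
df_term, df_error = 3, 26            # numerator and denominator DF

f_critical = f.ppf(1 - alpha, df_term, df_error)
# Reject the null hypothesis when the term's F-value exceeds f_critical.
print(round(f_critical, 2))
```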

To determine whether the association between the response and each term in the model is statistically significant, compare the p-value for the term to your significance level to assess the null hypothesis. The null hypothesis is that there is no association between the term and the response. Usually, a significance level (denoted as α or alpha) of 0.05 works well. A significance level of 0.05 indicates a 5% risk of concluding that an association exists when there is no actual association.

- P-value ≤ α: The association is statistically significant
- If the p-value is less than or equal to the significance level, you can conclude that there is a statistically significant association between the response variable and the term.
- P-value > α: The association is not statistically significant
- If the p-value is greater than the significance level, you cannot conclude that there is a statistically significant association between the response variable and the term. You may want to refit the model without the term.
- If there are multiple predictors without a statistically significant association with the response, you can reduce the model by removing terms one at a time. For more information on removing terms from the model, go to Model reduction.
- For this analysis in Minitab, the model must be hierarchical. In a hierarchical model, all lower-order terms that comprise the higher-order terms also appear in the model. For example, a model that includes the interaction term A*B*C is hierarchical if it includes these terms: A, B, C, A*B, A*C, and B*C.

If a model term is statistically significant, the interpretation depends on the type of term. The interpretations are as follows:

- If a fixed factor is significant, you can conclude that not all the level means are equal.
- If a random factor is significant, you can conclude that the factor contributes to the amount of variation in the response.
- If an interaction term is significant, the relationship between a factor and the response depends on the other factors in the term. In this case, you should not interpret the main effects without considering the interaction effect.

Use the Means table to understand the statistically significant differences between the factor levels in your data. The mean of each group provides an estimate of each population mean. Look for differences between group means for terms that are statistically significant.