The total degrees of freedom (DF) represent the amount of information in your data. The analysis uses that information to estimate the values of unknown population parameters. The total DF is determined by the number of observations in your sample, and the DF for a term shows how much information that term uses. Increasing your sample size provides more information about the population, which increases the total DF. Increasing the number of terms in your model uses more information, which decreases the DF available to estimate the variability of the parameter estimates.
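The following sketch illustrates this DF bookkeeping with hypothetical values (the sample size and the model terms below are assumptions, not output from any particular analysis):

```python
# A minimal sketch of the DF bookkeeping described above (hypothetical numbers).
n_observations = 30          # rows in the sample
df_total = n_observations - 1

# Suppose the model contains one 3-level factor and one covariate (assumed terms).
df_factor = 3 - 1            # a categorical term uses (levels - 1) DF
df_covariate = 1             # a continuous term uses 1 DF

# Whatever the terms use is no longer available to estimate error variability.
df_error = df_total - (df_factor + df_covariate)
print(df_total, df_factor + df_covariate, df_error)   # 29 3 26
```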
Adjusted sums of squares are measures of variation for different components of the model. The order of the predictors in the model does not affect the calculation of the adjusted sums of squares. In the Analysis of Variance table, Minitab separates the sums of squares into different components that describe the variation due to different sources.
Minitab uses the adjusted sums of squares to calculate the p-value for a term. Minitab also uses the sums of squares to calculate the R² statistic. Usually, you interpret the p-values and the R² statistic instead of the sums of squares.
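As a sketch of how R² relates to the sums of squares, assume a design in which the component sums of squares add up to the total; the values below are hypothetical, not taken from any Minitab run:

```python
# Illustrative only: R-squared from sums of squares in an ANOVA table
# (hypothetical values; assumes the components sum to the total SS).
ss_term_a = 120.0
ss_term_b = 45.0
ss_error = 60.0

ss_total = ss_term_a + ss_term_b + ss_error
r_squared = 1 - ss_error / ss_total     # equivalently (ss_total - ss_error) / ss_total
print(round(r_squared, 3))              # 0.733
```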
Adjusted mean squares (MS) measure how much variation a term or a model explains, assuming that all other terms are in the model, regardless of the order in which they were entered. Unlike the adjusted sums of squares, the adjusted mean squares account for the degrees of freedom.
The adjusted mean square of the error (also called MSE or s²) is the variance around the fitted values.
Minitab uses the adjusted mean square to calculate the p-value for a term. Minitab also uses the adjusted mean squares to calculate the adjusted R² statistic. Usually, you interpret the p-values and the adjusted R² statistic instead of the adjusted mean squares.
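Continuing the hypothetical values from the earlier sketches, the following shows how the mean squares follow from the sums of squares and DF, and how the adjusted R² uses both:

```python
# Sketch only: mean squares and adjusted R-squared from hypothetical SS and DF.
ss_error, df_error = 60.0, 26
ss_total, df_total = 225.0, 29

ms_error = ss_error / df_error          # MSE, the variance around the fitted values
adj_r_squared = 1 - (ss_error / df_error) / (ss_total / df_total)
print(round(ms_error, 3), round(adj_r_squared, 3))   # roughly 2.308 0.703
```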
An F-value appears for each term in the Analysis of Variance table. The F-value is the test statistic used to determine whether the term is associated with the response.
Minitab uses the F-value to calculate the p-value, which you use to make a decision about the statistical significance of the terms and model. The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.
A sufficiently large F-value indicates that the term or model is statistically significant.
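As a sketch of this calculation, the F-value for a term is its adjusted mean square divided by the MSE, and the p-value is the upper-tail probability from the F-distribution with the term DF and error DF. The numbers below are hypothetical, and scipy is used here only for illustration:

```python
# Sketch: F-value for a term and its p-value from the F-distribution
# (hypothetical values continuing the earlier examples).
from scipy import stats

ms_term, df_term = 10.0, 2      # adjusted MS and DF for the term
ms_error, df_error = 2.308, 26  # MSE and error DF

f_value = ms_term / ms_error
p_value = stats.f.sf(f_value, df_term, df_error)   # upper-tail probability
print(round(f_value, 2), p_value)
```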
If you want to use the F-value to determine whether to reject the null hypothesis, compare the F-value to your critical value. You can calculate the critical value in Minitab or find the critical value from an F-distribution table in most statistics books. For more information on using Minitab to calculate the critical value, go to Using the inverse cumulative distribution function (ICDF) and click "Use the ICDF to calculate critical values".
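A minimal sketch of that comparison, using scipy's inverse CDF in place of Minitab's ICDF; the significance level, DF values, and F-value below are assumptions for illustration:

```python
# Sketch of the critical-value comparison via the inverse CDF (ICDF idea).
from scipy import stats

alpha = 0.05
df_term, df_error = 2, 26
f_critical = stats.f.ppf(1 - alpha, df_term, df_error)   # roughly 3.37

f_value = 4.33                               # F-value read from the ANOVA table
print(f_critical, f_value > f_critical)      # reject H0 if the F-value exceeds it
```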
Use the Means table to understand the statistically significant differences between the factor levels in your data. The mean of each group provides an estimate of each population mean. Look for differences between group means for terms that are statistically significant.
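As an illustration, the following sketch summarizes made-up data for three hypothetical factor levels; each group mean estimates the corresponding population mean:

```python
# Minimal sketch of a Means-table-style summary (made-up data).
import numpy as np

groups = {
    "A": np.array([5.1, 4.8, 5.5, 5.0]),
    "B": np.array([6.2, 6.0, 6.6, 6.4]),
    "C": np.array([5.2, 5.1, 4.9, 5.3]),
}

for level, values in groups.items():
    # Each group mean estimates the corresponding population mean.
    print(level, round(values.mean(), 2))
```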