Interpret all statistics and graphs for Two-way ANOVA

Find definitions and interpretation guidance for every statistic and graph that is provided with two-way ANOVA.

Adj MS

Adjusted mean squares measure how much variation a term or a model explains, assuming that all other terms are in the model, regardless of the order they were entered. Unlike the adjusted sums of squares, the adjusted mean squares consider the degrees of freedom.

The adjusted mean square of the error (also called MSE or s2) is the variance around the fitted values.

Interpretation

Minitab uses the adjusted mean squares to calculate the p-value for a term. Minitab also uses the adjusted mean squares to calculate the adjusted R2 statistic. Usually, you interpret the p-values and the adjusted R2 statistic instead of the adjusted mean squares.

Adj SS

Adjusted sums of squares are measures of variation for different components of the model. The order of the predictors in the model does not affect the calculation of the adjusted sum of squares. In the Analysis of Variance table, Minitab separates the sums of squares into different components that describe the variation due to different sources.

Adj SS Term
The adjusted sum of squares for a term is the increase in the regression sum of squares compared to a model with only the other terms. It quantifies the amount of variation in the response data that is explained by each term in the model.
Adj SS Error
The error sum of squares is the sum of the squared residuals. It quantifies the variation in the data that the predictors do not explain.
Adj SS Total
The total sum of squares is the sum of the term sum of squares and the error sum of squares. It quantifies the total variation in the data.
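The decomposition can be sketched numerically. The data below are hypothetical, and a single factor is used for simplicity; the same identity (SS Term + SS Error = SS Total) holds for each term in a two-way model.

```python
import numpy as np

# Hypothetical response values grouped by one factor with three levels
groups = {
    "low":  np.array([10.0, 12.0, 11.0]),
    "mid":  np.array([14.0, 15.0, 13.0]),
    "high": np.array([18.0, 17.0, 19.0]),
}

y = np.concatenate(list(groups.values()))
grand_mean = y.mean()

# SS Term: variation of the group means around the grand mean
ss_term = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())

# SS Error: variation of observations around their own group mean
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups.values())

# SS Total: total variation; equals SS Term + SS Error
ss_total = ((y - grand_mean) ** 2).sum()

print(ss_term, ss_error, ss_total)
```

With these invented numbers, SS Term = 74, SS Error = 6, and SS Total = 80.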

Interpretation

Minitab uses the adjusted sums of squares to calculate the p-value for a term. Minitab also uses the sums of squares to calculate the R2 statistic. Usually, you interpret the p-values and the R2 statistic instead of the sums of squares.

DF

The total degrees of freedom (DF) are the amount of information in your data. The analysis uses that information to estimate the values of unknown population parameters. The total DF is determined by the number of observations in your sample. The DF for a term show how much information that term uses. Increasing your sample size provides more information about the population, which increases the total DF. Increasing the number of terms in your model uses more information, which decreases the DF available to estimate the variability of the parameter estimates.

If two conditions are met, then Minitab partitions the DF for error. The first condition is that there must be terms you can fit with the data that are not included in the current model. For example, if you have a continuous predictor with 3 or more distinct values, you can estimate a quadratic term for that predictor. If the model does not include the quadratic term, then a term that the data can fit is not included in the model and this condition is met.

The second condition is that the data contain replicates. Replicates are observations where each predictor has the same value. For example, if you have 3 observations where pressure is 5 and temperature is 25, then those 3 observations are replicates.

If the two conditions are met, then the DF for error are partitioned into two parts: lack-of-fit and pure error. The DF for lack-of-fit allow a test of whether the model form is adequate. The more DF for pure error, the greater the power of the lack-of-fit test.
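The DF bookkeeping for a balanced two-way layout can be sketched as simple arithmetic; the factor sizes and replicate count below are invented for illustration.

```python
# Hypothetical balanced two-way layout: factor A with a levels,
# factor B with b levels, and r replicates per cell
a, b, r = 3, 2, 4
n = a * b * r                        # total observations

df_total = n - 1                     # total DF
df_A = a - 1                         # main effect of A
df_B = b - 1                         # main effect of B
df_AB = (a - 1) * (b - 1)            # interaction term
df_error = df_total - (df_A + df_B + df_AB)

# With replicates and the full factorial model, all error DF are pure error
df_pure_error = a * b * (r - 1)

print(df_total, df_error, df_pure_error)
```

Here the full model leaves 18 error DF, all of which come from the replicates (pure error); any unfit higher-order terms would instead contribute lack-of-fit DF.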

Fits

Fitted values are also called fits or ŷ. The fitted values are point estimates of the mean response for given values of the factor levels.

Interpretation

Fitted values are calculated by entering the specific x-values for each observation in the data set into the model equation.
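A minimal sketch with invented data: under a full two-way model with the interaction term, the fitted value for each observation reduces to its cell mean (the mean of all observations that share the same factor level combination).

```python
import numpy as np

# Hypothetical two-way data; each (temp, press) pair defines a cell
temp  = np.array(["low", "low", "high", "high", "low", "high"])
press = np.array(["p1",  "p1",  "p1",   "p1",   "p2",  "p2"])
y     = np.array([10.0,  12.0,  15.0,   17.0,   11.0,  14.0])

fits = np.empty_like(y)
for t, p in set(zip(temp, press)):
    mask = (temp == t) & (press == p)
    fits[mask] = y[mask].mean()   # cell mean is the fitted value

print(fits)
```

Both observations in the ("low", "p1") cell get the same fit (11.0), the cell mean of 10 and 12.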

Observations with fitted values that are very different from the observed value may be unusual or influential. If Minitab determines that your data include unusual values, your output includes the table of Fits and Diagnostics for Unusual Observations, which identifies the unusual observations. For more information on unusual values, go to Unusual observations.

Fitted means

Fitted means use the least squares estimation method to predict the mean response values of a balanced design for each group. Data means are the raw response variable means for each factor level combination.

Therefore, the two types of means are identical for balanced designs but can be different for unbalanced designs. Fitted means are useful for observing response differences that are caused by changes in factor levels rather than differences caused by the disproportionate influence of unbalanced experimental conditions.

Interpretation

The fitted means calculated from the sample are estimates of the population mean for each group.

F-value

An F-value appears for each term in the Analysis of Variance table:
F-value for the model or the terms
The F-value is the test statistic used to determine whether the term is associated with the response.
F-value for the lack-of-fit test
The F-value is the test statistic used to determine whether the model is missing higher-order terms that include the predictors in the current model.

Interpretation

Minitab uses the F-value to calculate the p-value, which you use to make a decision about the statistical significance of the terms and model. The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.

A sufficiently large F-value indicates that the term or model is significant.

If you want to use the F-value to determine whether to reject the null hypothesis, compare the F-value to your critical value. You can calculate the critical value in Minitab or find the critical value from an F-distribution table in most statistics books. For more information on using Minitab to calculate the critical value, go to Using the inverse cumulative distribution function (ICDF) and click "Use the ICDF to calculate critical values".
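Outside of Minitab, the same critical value (via the inverse CDF) and p-value (via the upper tail) can be sketched with SciPy's F-distribution; the F-value and DF below are hypothetical.

```python
from scipy.stats import f

# Hypothetical test statistic: F = 5.61 with 2 numerator and 18 denominator DF
f_value, df_num, df_den = 5.61, 2, 18
alpha = 0.05

# Critical value: the point with 1 - alpha cumulative probability (the ICDF)
f_crit = f.ppf(1 - alpha, df_num, df_den)

# p-value: the probability of an F-value at least this large under the null
p_value = f.sf(f_value, df_num, df_den)

print(f_crit, p_value)
```

Because the F-value exceeds the critical value, the p-value falls below alpha; both comparisons lead to the same decision.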

Histogram of residuals

The histogram of the residuals shows the distribution of the residuals for all observations.

Interpretation

Use the histogram of the residuals to determine whether the data are skewed or include outliers. The following patterns may indicate that the model does not meet the model assumptions:
  • A long tail in one direction: skewness
  • A bar that is far away from the other bars: an outlier

Because the appearance of a histogram depends on the number of intervals used to group the data, don't use a histogram to assess the normality of the residuals. Instead, use a normal probability plot.

A histogram is most effective when you have approximately 20 or more data points. If the sample is too small, then each bar on the histogram does not contain enough data points to reliably show skewness or outliers.

Normal probability plot of the residuals

The normal plot of the residuals displays the residuals versus their expected values when the distribution is normal.

Interpretation

Use the normal probability plot of residuals to verify the assumption that the residuals are normally distributed. The normal probability plot of the residuals should approximately follow a straight line.

The following patterns violate the assumption that the residuals are normally distributed.

  • An S-curve implies a distribution with long tails.
  • An inverted S-curve implies a distribution with short tails.
  • A downward curve implies a right-skewed distribution.
  • A few points lying away from the line imply a distribution with outliers.

If you see a nonnormal pattern, use the other residual plots to check for other problems with the model, such as missing terms or a time order effect. If the residuals do not follow a normal distribution, prediction intervals can be inaccurate. If the residuals do not follow a normal distribution and the data have fewer than 15 observations, then confidence intervals for predictions, confidence intervals for coefficients, and p-values for coefficients can be inaccurate.

P-value – Term

The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.

Interpretation

To determine whether each main effect and the interaction effect is statistically significant, compare the p-value for each term to your significance level to assess the null hypothesis. Usually, a significance level (denoted as α or alpha) of 0.05 works well. A significance level of 0.05 indicates a 5% risk of concluding that an effect exists when there is no actual effect.
  • The null hypothesis for a main effect is that the response means for all factor levels are equal.
  • The null hypothesis for an interaction effect is that the effect of one factor does not depend on the level of the other factor.
The statistical significance of the effect depends on the p-value, as follows:
  • If the p-value is greater than the significance level you selected, the effect is not statistically significant.
  • If the p-value is less than or equal to the significance level you selected, then the effect for the term is statistically significant.

The following shows how to interpret significant main effects and interaction effects:
  • If the main effect of a factor is significant, the differences between some of the factor level means are statistically significant.
  • If an interaction term is statistically significant, the relationship between a factor and the response differs by the level of the other factor. In this case, you should not interpret the main effects without considering the interaction effect.

P-value – Lack-of-fit

The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.

Interpretation

To determine whether the model correctly specifies the relationship between the response and the predictors, compare the p-value for the lack-of-fit test to your significance level to assess the null hypothesis. The null hypothesis for the lack-of-fit test is that the model correctly specifies the relationship between the response and the predictors. Usually, a significance level (denoted as alpha or α) of 0.05 works well. A significance level of 0.05 indicates a 5% risk of concluding that the model correctly specifies the relationship between the response and the predictors when the model does not.
P-value ≤ α: The lack-of-fit is statistically significant
If the p-value is less than or equal to the significance level, you conclude that the model does not correctly specify the relationship. To improve the model, you may need to add terms or transform your data.
P-value > α: The lack-of-fit is not statistically significant

If the p-value is larger than the significance level, the test does not detect any lack-of-fit.

Resid

A residual (ei) is the difference between an observed value (y) and the corresponding fitted value (ŷ), which is the value predicted by the model.

This scatterplot displays the weight versus the height for a sample of adult males. The fitted regression line represents the relationship between height and weight. If the height equals 6 feet, the fitted value for weight is 190 pounds. If the actual weight is 200 pounds, the residual is 10.
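The height and weight example reduces to simple arithmetic:

```python
# From the example: at a height of 6 feet, the model fits 190 pounds
fitted = 190.0
observed = 200.0   # actual weight in pounds

residual = observed - fitted   # 200 - 190
print(residual)                # prints 10.0
```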

Interpretation

Plot the residuals to determine whether your model is adequate and meets the assumptions of regression. Examining the residuals can provide useful information about how well the model fits the data. In general, the residuals should be randomly distributed with no obvious patterns and no unusual values. If Minitab determines that your data include unusual observations, it identifies those observations in the Fits and Diagnostics for Unusual Observations table in the output. The observations that Minitab labels as unusual do not follow the proposed regression equation well. However, it is expected that you will have some unusual observations. For example, based on the criteria for large residuals, you would expect roughly 5% of your observations to be flagged as having a large residual. For more information on unusual values, go to Unusual observations.

Residuals versus fits

The residuals versus fits graph plots the residuals on the y-axis and the fitted values on the x-axis.

Interpretation

Use the residuals versus fits plot to verify the assumption that the residuals are randomly distributed and have constant variance. Ideally, the points should fall randomly on both sides of 0, with no recognizable patterns in the points.

The following patterns may indicate that the model does not meet the model assumptions:
  • Fanning or uneven spreading of residuals across fitted values: nonconstant variance
  • A curvilinear pattern: a missing higher-order term
  • A point that is far away from zero: an outlier
  • A point that is far away from the other points in the x-direction: an influential point
The following graphs show an outlier and a violation of the assumption that the residuals have constant variance.
Plot with outlier

One of the points is much larger than all of the other points. Therefore, the point is an outlier. If there are too many outliers, the model may not be acceptable. You should try to identify the cause of any outlier. Correct any data entry or measurement errors. Consider removing data values that are associated with abnormal, one-time events (special causes). Then, repeat the analysis.

Plot with nonconstant variance

The variance of the residuals increases with the fitted values. Notice that, as the value of the fits increases, the scatter among the residuals widens. This pattern indicates that the variances of the residuals are unequal (nonconstant).

If you identify any patterns or outliers in your residual versus fits plot, consider the following solutions:

  • Nonconstant variance: consider using Stat > ANOVA > General Linear Model in Minitab Statistical Software. Click Options. Under Box-Cox transformation, select Optimal λ.
  • An outlier or influential point:
      1. Verify that the observation is not a measurement error or data-entry error.
      2. Consider performing the analysis without this observation to determine how it impacts your results.

Residuals versus order

The residual versus order plot displays the residuals in the order that the data were collected.

Interpretation

Use the residuals versus order plot to verify the assumption that the residuals are independent from one another. Independent residuals show no trends or patterns when displayed in time order. Patterns in the points may indicate that residuals near each other may be correlated, and thus, not independent. Ideally, the residuals on the plot should fall randomly around the center line.
If you see a pattern, investigate the cause. The following types of patterns may indicate that the residuals are dependent:
  • Trend
  • Shift
  • Cycle

Residuals versus variables

The residuals versus variables plot displays the residuals versus another variable. The variable could already be included in your model. Or, the variable may not be in the model, but you suspect it affects the response.

Interpretation

If you see a non-random pattern in the residuals, it indicates that the variable affects the response in a systematic way. Use Stat > ANOVA > General Linear Model in Minitab Statistical Software to refit the model with a term for this variable.

R-sq

R2 is the percentage of variation in the response that is explained by the model. It is calculated as 1 minus the ratio of the error sum of squares (which is the variation that is not explained by the model) to the total sum of squares (which is the total variation in the data).
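As a sketch with hypothetical sums of squares from an ANOVA table:

```python
# Hypothetical ANOVA-table values
ss_error = 6.0     # variation the model does not explain
ss_total = 80.0    # total variation in the response

r_sq = 1 - ss_error / ss_total
print(f"{r_sq:.1%}")   # prints 92.5%
```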

Interpretation

Use R2 to determine how well the model fits your data. The higher the R2 value, the better the model fits your data. R2 is always between 0% and 100%.

You can use a fitted line plot to graphically illustrate different R2 values. The first plot illustrates a simple regression model that explains 85.5% of the variation in the response. The second plot illustrates a model that explains 22.6% of the variation in the response. The more variation that is explained by the model, the closer the data points fall to the fitted regression line. Theoretically, if a model could explain 100% of the variation, the fitted values would always equal the observed values and all of the data points would fall on the fitted line.
Consider the following issues when interpreting the R2 value:
  • R2 always increases when you add additional predictors to a model. For example, the best five-predictor model will always have an R2 that is at least as high as that of the best four-predictor model. Therefore, R2 is most useful when you compare models of the same size.

  • Small samples do not provide a precise estimate of the strength of the relationship between the response and predictors. If you need R2 to be more precise, you should use a larger sample (typically, 40 or more).

  • R2 is just one measure of how well the model fits the data. Even when a model has a high R2, you should check the residual plots to verify that the model meets the model assumptions.

R-sq (adj)

Adjusted R2 is the percentage of the variation in the response that is explained by the model, adjusted for the number of predictors in the model relative to the number of observations. Adjusted R2 is calculated as 1 minus the ratio of the mean square error (MSE) to the mean square total (MS Total).
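The mean-square version of the formula can be sketched with hypothetical values; the DF split below (9 observations, a term with 2 DF) is invented for illustration.

```python
# Hypothetical ANOVA-table values
ss_error, ss_total = 6.0, 80.0
df_error, df_total = 6, 8        # e.g. n = 9 observations, 2 DF used by the term

ms_error = ss_error / df_error   # MSE
ms_total = ss_total / df_total   # MS Total

r_sq_adj = 1 - ms_error / ms_total
print(r_sq_adj)                  # prints 0.9
```

Note that dividing by the DF penalizes the model for the terms it uses: here adjusted R2 (90%) is below plain R2 (1 − 6/80 = 92.5%).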

Interpretation

Use adjusted R2 when you want to compare models that have different numbers of predictors. R2 always increases when you add a predictor to the model, even when there is no real improvement to the model. The adjusted R2 value incorporates the number of predictors in the model to help you choose the correct model.

For example, you work for a potato chip company that examines the factors that affect the percentage of crumbled potato chips per container. You receive the following results as you add the predictors in a forward stepwise approach:
  • Step 1 (% Potato): R2 = 52%, adjusted R2 = 51%, p-value = 0.000
  • Step 2 (% Potato, Cooling rate): R2 = 63%, adjusted R2 = 62%, p-value = 0.000
  • Step 3 (% Potato, Cooling rate, Cooking temp): R2 = 65%, adjusted R2 = 62%, p-value = 0.000

The first step yields a statistically significant regression model. The second step adds cooling rate to the model. Adjusted R2 increases, which indicates that cooling rate improves the model. The third step, which adds cooking temperature to the model, increases the R2 but not the adjusted R2. These results indicate that cooking temperature does not improve the model. Based on these results, you consider removing cooking temperature from the model.

R-sq (pred)

Predicted R2 is calculated with a formula that is equivalent to systematically removing each observation from the data set, estimating the regression equation, and determining how well the model predicts the removed observation. The value of predicted R2 ranges between 0% and 100%.
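The leave-one-out idea can be sketched with invented one-factor data, where "refitting" the model without an observation reduces to recomputing its group mean; the quantity accumulated below is commonly called the PRESS statistic.

```python
import numpy as np

# Hypothetical one-factor data, three groups of three observations
groups = [np.array([10.0, 12.0, 11.0]),
          np.array([14.0, 15.0, 13.0]),
          np.array([18.0, 17.0, 19.0])]

y = np.concatenate(groups)
ss_total = ((y - y.mean()) ** 2).sum()

press = 0.0
for g in groups:
    for i in range(len(g)):
        loo_mean = np.delete(g, i).mean()   # refit without observation i
        press += (g[i] - loo_mean) ** 2     # squared prediction error

r_sq_pred = 1 - press / ss_total
print(r_sq_pred)
```

Because each held-out point is predicted without using its own value, PRESS exceeds the ordinary error sum of squares, so predicted R2 is below plain R2.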

Interpretation

Use predicted R2 to determine how well your model predicts the response for new observations. Models that have larger predicted R2 values have better predictive ability.

A predicted R2 that is substantially less than R2 may indicate that the model is over-fit. An over-fit model occurs when you add terms for effects that are not important in the population, although they may appear important in the sample data. The model becomes tailored to the sample data and therefore, may not be useful for making predictions about the population.

Predicted R2 can also be more useful than adjusted R2 for comparing models because it is calculated with observations that are not included in the model calculation.

For example, an analyst at a financial consulting company develops a model to predict future market conditions. The model looks promising because it has an R2 of 87%. However, the predicted R2 is only 52%, which indicates that the model may be over-fit.

S

S represents the standard deviation of how far the data values fall from the fitted values. S is measured in the units of the response.

Interpretation

Use S to assess how well the model describes the response. S is measured in the units of the response variable and represents the standard deviation of how far the data values fall from the fitted values. The lower the value of S, the better the model describes the response. However, a low S value by itself does not indicate that the model meets the model assumptions. You should check the residual plots to verify the assumptions.

For example, you work for a potato chip company that examines the factors that affect the percentage of crumbled potato chips per container. You reduce the model to the significant predictors, and S is calculated as 1.79. This result indicates that the standard deviation of the data points around the fitted values is 1.79. If you are comparing models, values that are lower than 1.79 indicate a better fit, and higher values indicate a worse fit.
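As a sketch with hypothetical ANOVA-table values, S is the square root of the MSE:

```python
import math

# Hypothetical ANOVA-table values
ss_error = 6.0
df_error = 6

s = math.sqrt(ss_error / df_error)   # S = sqrt(MSE)
print(s)                             # prints 1.0
```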

SE Mean

The standard error of the mean (SE Mean) estimates the variability between sample means that you would obtain if you took samples from the same population again and again. Whereas the standard error of the mean estimates the variability between samples, the standard deviation measures the variability within a single sample.

For example, you have a mean delivery time of 3.80 days, with a standard deviation of 1.43 days, from a random sample of 312 delivery times. These numbers yield a standard error of the mean of 0.08 days (1.43 divided by the square root of 312). If you took multiple random samples of the same size, from the same population, the standard deviation of those different sample means would be around 0.08 days.
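The delivery-time arithmetic in code:

```python
import math

# From the example: SE Mean = standard deviation / sqrt(sample size)
s, n = 1.43, 312

se_mean = s / math.sqrt(n)
print(round(se_mean, 2))   # prints 0.08
```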

Interpretation

Use the standard error of the mean to determine how precisely the sample mean estimates the population mean.

A smaller value of the standard error of the mean indicates a more precise estimate of the population mean. Usually, a larger standard deviation results in a larger standard error of the mean and a less precise estimate of the population mean. A larger sample size results in a smaller standard error of the mean and a more precise estimate of the population mean.

Minitab uses the standard error of the mean to calculate the confidence interval, which is a range of values likely to include the population mean.

Std Resid

The standardized residual equals the value of a residual (ei) divided by an estimate of its standard deviation.

Interpretation

Use the standardized residuals to help you detect outliers. Standardized residuals greater than 2 and less than −2 are usually considered large. The Fits and Diagnostics for Unusual Observations table identifies these observations with an 'R'. The observations that Minitab labels do not follow the proposed regression equation well. However, it is expected that you will have some unusual observations. For example, based on the criteria for large standardized residuals, you would expect roughly 5% of your observations to be flagged as having a large standardized residual. For more information, go to Unusual observations.

Standardized residuals are useful because raw residuals might not be good indicators of outliers. The variance of each raw residual can differ by the x-values associated with it. This unequal variation causes it to be difficult to assess the magnitudes of the raw residuals. Standardizing the residuals solves this problem by converting the different variances to a common scale.
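A simplified sketch with invented residuals: each residual is divided by a single pooled estimate of the residual standard deviation. Note that Minitab's standardized residuals also account for each observation's leverage, which this rough version omits.

```python
import numpy as np

# Hypothetical raw residuals from a fitted model
residuals = np.array([0.5, -1.2, 0.8, 5.0, -0.9, -1.1])

# Crude common-scale estimate of the residual standard deviation
s = np.sqrt((residuals ** 2).sum() / (len(residuals) - 1))

std_resid = residuals / s
flagged = np.abs(std_resid) > 2   # "large" standardized residuals
print(flagged)
```

Only the fourth residual is flagged: on the common scale it exceeds 2, the usual cutoff for a large standardized residual.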

Term

The terms provide the factor levels and factor level combinations for the fitted means in the means table.

Interpretation

For terms that represent main effects, the table displays the groups within each factor and their fitted means. For terms that represent interaction effects, the table displays all possible combinations of groups across both factors. Use the p-values in the ANOVA table to determine whether these effects are statistically significant.
