Adjusted mean squares measure how much variation a term or a model explains, assuming that all other terms are in the model, regardless of the order they were entered. Unlike the adjusted sums of squares, the adjusted mean squares consider the degrees of freedom.
The adjusted mean square of the error (also called MSE or s^{2}) is the variance around the fitted values.
Minitab uses the adjusted mean squares to calculate the p-value for a term. Minitab also uses the adjusted mean squares to calculate the adjusted R^{2} statistic. Usually, you interpret the p-values and the adjusted R^{2} statistic instead of the adjusted mean squares.
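Minitab computes these quantities for you; as a sketch of the arithmetic only, using hypothetical adjusted sums of squares and degrees of freedom:

```python
# Hypothetical adjusted sums of squares and degrees of freedom
# (not from any real Minitab output).
adj_ss = {"Factor": 281.7, "Error": 153.8}   # adjusted sums of squares
df     = {"Factor": 3,     "Error": 20}      # degrees of freedom

# Adjusted mean square = adjusted sum of squares / degrees of freedom
adj_ms = {term: adj_ss[term] / df[term] for term in adj_ss}
mse = adj_ms["Error"]     # MSE, the variance around the fitted values (s^2)
s = mse ** 0.5            # standard deviation around the fitted values
```

Dividing by the degrees of freedom is what distinguishes the adjusted mean squares from the adjusted sums of squares.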
The adjusted p-value indicates which pairs within a family of comparisons are significantly different. The adjustment limits the family error rate to the alpha level that you specify. If you use a regular p-value for multiple comparisons, the family error rate increases with each additional comparison.
It is important to consider the family error rate when making multiple comparisons because your chance of committing a type I error for a series of comparisons is greater than the error rate for any one comparison alone.
If the adjusted p-value is less than alpha, reject the null hypothesis and conclude that the difference between a pair of group means is statistically significant. The adjusted p-value also represents the smallest family error rate at which a particular null hypothesis is rejected.
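The idea behind adjusted p-values can be sketched with the Bonferroni adjustment, the simplest of the adjustment methods. Minitab's Tukey and Fisher methods use different formulas, but serve the same purpose; the raw p-values below are hypothetical.

```python
# Hypothetical raw p-values for three pairwise comparisons.
raw_p = [0.011, 0.045, 0.380]
m = len(raw_p)

# Inflate each raw p-value so that the family error rate for the whole
# set of comparisons stays at or below alpha.
adjusted = [min(1.0, p * m) for p in raw_p]

alpha = 0.05
significant = [p < alpha for p in adjusted]   # only the first pair survives
```

Note that the second comparison is significant at alpha = 0.05 using its raw p-value (0.045) but not after adjustment (0.135), which is exactly the protection the adjustment provides.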
Adjusted sums of squares are measures of variation for different components of the model. The order of the predictors in the model does not affect the calculation of the adjusted sum of squares. In the Analysis of Variance table, Minitab separates the sums of squares into different components that describe the variation due to different sources.
Minitab uses the adjusted sums of squares to calculate the p-value for a term. Minitab also uses the sums of squares to calculate the R^{2} statistic. Usually, you interpret the p-values and the R^{2} statistic instead of the sums of squares.
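The sum-of-squares decomposition for one-way ANOVA can be sketched directly, using made-up data for three groups of three observations each:

```python
from statistics import mean

# Made-up data: three groups of three observations.
groups = [[4.0, 5.0, 6.0], [7.0, 8.0, 9.0], [5.0, 6.0, 7.0]]
values = [x for g in groups for x in g]
grand_mean = mean(values)

ss_total = sum((x - grand_mean) ** 2 for x in values)                # total variation
ss_error = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)   # unexplained
ss_factor = ss_total - ss_error                                      # explained by the factor

r_squared = 1 - ss_error / ss_total   # fraction of variation the model explains
```

Here the factor accounts for 14 of the 20 total sum-of-squares units, giving R^{2} = 70%.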
A boxplot provides a graphical summary of the distribution of each sample. The boxplot makes it easy to compare the shape, the central tendency, and the variability of the samples.
Use a boxplot to examine the spread of the data and to identify any potential outliers. Boxplots are best when the sample size is greater than 20.
Examine the spread of your data to determine whether your data appear to be skewed. When data are skewed, the majority of the data are located on the high or low side of the graph. Skewed data indicates that the data might not be normally distributed. Often, skewness is easiest to detect with an individual value plot, a histogram, or a boxplot.
Data that are severely skewed can affect the validity of the pvalue if your sample is small (< 20 values). If your data are severely skewed and you have a small sample, consider increasing your sample size.
Outliers, which are data values that are far away from other data values, can strongly affect your results. Often, outliers are easiest to identify on a boxplot.
Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (special causes). Then, repeat the analysis.
These confidence intervals (CI) are ranges of values that are likely to contain the true mean of each population. The confidence intervals are calculated using the pooled standard deviation.
Because samples are random, two samples from a population are unlikely to yield identical confidence intervals. But, if you repeat your sample many times, a certain percentage of the resulting confidence intervals contain the unknown population parameter. The percentage of these confidence intervals that contain the parameter is the confidence level of the interval.
Use the confidence interval to assess the estimate of the population mean for each group.
For example, with a 95% confidence level, you can be 95% confident that the confidence interval contains the group mean. The confidence interval helps you assess the practical significance of your results. Use your specialized knowledge to determine whether the confidence interval includes values that have practical significance for your situation. If the interval is too wide to be useful, consider increasing your sample size.
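The arithmetic behind a group-mean confidence interval built from the pooled standard deviation can be sketched as follows. All numbers are hypothetical; the critical value is the two-sided 95% t value for 20 error degrees of freedom, read from a t table.

```python
from math import sqrt

pooled_sd = 2.77           # hypothetical pooled standard deviation
n = 6                      # observations in this group
group_mean = 18.07         # hypothetical group mean
t_crit = 2.086             # t with alpha/2 = 0.025 and 20 error DF (from a table)

half_width = t_crit * pooled_sd / sqrt(n)
ci = (group_mean - half_width, group_mean + half_width)   # the 95% CI
```

Increasing n shrinks the half-width, which is why a larger sample gives a narrower, more useful interval.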
In these results, each blend has a confidence interval for its mean hardness. The multiple comparison results for these data show that Blend 4 is significantly harder than Blend 2. That Blend 4 is harder than Blend 2 does not show that Blend 4 is hard enough for the intended use of the paint. The confidence interval for the group mean is better for judging whether Blend 4 is hard enough.
The total degrees of freedom (DF) are the amount of information in your data. The analysis uses that information to estimate the values of unknown population parameters. The total DF is determined by the number of observations in your sample. The DF for a term show how much information that term uses. Increasing your sample size provides more information about the population, which increases the total DF. Increasing the number of terms in your model uses more information, which decreases the DF available to estimate the variability of the parameter estimates.
If two conditions are met, then Minitab partitions the DF for error. The first condition is that there must be terms you can fit with the data that are not included in the current model. For example, if you have a continuous predictor with 3 or more distinct values, you can estimate a quadratic term for that predictor. If the model does not include the quadratic term, then a term that the data can fit is not included in the model and this condition is met.
The second condition is that the data contain replicates. Replicates are observations where each predictor has the same value. For example, if you have 3 observations where pressure is 5 and temperature is 25, then those 3 observations are replicates.
If the two conditions are met, then the two parts of the DF for error are lack-of-fit and pure error. The DF for lack-of-fit allow a test of whether the model form is adequate. The more DF for pure error, the greater the power of the lack-of-fit test.
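The partition can be sketched with hypothetical counts: a model that estimates 4 parameters, fit to 12 observations taken at 6 distinct predictor settings (the replicates collapse the 12 rows into 6 settings).

```python
# Hypothetical counts for the error-DF partition.
n_obs = 12        # total observations
n_params = 4      # parameters the model estimates
n_distinct = 6    # distinct predictor settings

df_error = n_obs - n_params                 # 8
df_pure_error = n_obs - n_distinct          # 6, contributed by replicates only
df_lack_of_fit = df_error - df_pure_error   # 2, available to test the model form
```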
This value is the difference between the sample means of two groups.
The differences between the sample means of the groups are estimates of the differences between the populations of these groups.
Because each mean difference is based on data from a sample and not from the entire population, you cannot be certain that it equals the population difference. To better understand the differences between population means, use the confidence intervals.
Minitab assumes that the population standard deviations for all groups are equal.
Look in the standard deviation (StDev) column of the one-way ANOVA output to determine whether the standard deviations are approximately equal.
If you cannot assume equal variances, use Welch's ANOVA, which is an option for one-way ANOVA that is available in Minitab Statistical Software.
Use the individual confidence intervals to identify statistically significant differences between the group means, to determine likely ranges for the differences, and to determine whether the differences are practically significant. Fisher's individual tests table displays a set of confidence intervals for the difference between pairs of means.
The individual confidence level is the percentage of times that a single confidence interval includes the true difference between one pair of group means, if you repeat the study. Individual confidence intervals are available only for Fisher's method. All of the other comparison methods produce simultaneous confidence intervals.
Controlling the individual confidence level is uncommon because it does not control the simultaneous confidence level, which often increases to unacceptable levels. If you do not control the simultaneous confidence level, the chance that at least one confidence interval does not contain the true difference increases with the number of comparisons.
The confidence interval of the difference is composed of the following two parts:
Use the confidence intervals to assess the differences between group means.
Minitab uses the F-value to calculate the p-value, which you use to make a decision about the statistical significance of the terms and model. The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.
A sufficiently large F-value indicates that the term or model is significant.
If you want to use the F-value to determine whether to reject the null hypothesis, compare the F-value to your critical value. You can calculate the critical value in Minitab or find the critical value from an F-distribution table in most statistics books. For more information on using Minitab to calculate the critical value, go to Using the inverse cumulative distribution function (ICDF) and click "Use the ICDF to calculate critical values".
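A sketch of this decision rule, with a hypothetical F-value and a critical value read from an F-distribution table (alpha = 0.05 with 3 and 20 degrees of freedom is approximately 3.10):

```python
f_value = 6.02   # hypothetical F-value from the ANOVA output
f_crit = 3.10    # F critical value, alpha = 0.05, DF = (3, 20), from a table

reject_null = f_value > f_crit   # True: the term or model is significant
```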
Use the grouping information table to quickly determine whether the mean difference between any pair of groups is statistically significant.
The grouping column of the Grouping Information table contains letters that group the factor levels. Groups that do not share a letter have a mean difference that is statistically significant.
If the grouping table identifies differences that are statistically significant, use the confidence intervals of the differences to determine whether the differences are practically significant.
In these results, the table shows that group A contains Blends 1, 3, and 4, and group B contains Blends 1, 2, and 3. Blends 1 and 3 are in both groups. Differences between means that share a letter are not statistically significant. Blends 2 and 4 do not share a letter, which indicates that Blend 4 has a significantly higher mean than Blend 2.
The histogram of the residuals shows the distribution of the residuals for all observations.
Pattern | What the pattern may indicate
A long tail in one direction | Skewness
A bar that is far away from the other bars | An outlier
Because the appearance of a histogram depends on the number of intervals used to group the data, do not use a histogram to assess the normality of the residuals. Instead, use a normal probability plot.
A histogram is most effective when you have approximately 20 or more data points. If the sample is too small, then each bar on the histogram does not contain enough data points to reliably show skewness or outliers.
An individual value plot displays the individual values in each sample. The individual value plot makes it easy to compare the samples. Each circle represents one observation. An individual value plot is especially useful when your sample size is small.
Use an individual value plot to examine the spread of the data and to identify any potential outliers. Individual value plots are best when the sample size is less than 50.
Examine the spread of your data to determine whether your data appear to be skewed. When data are skewed, the majority of the data are located on the high or low side of the graph. Skewed data indicate that the data might not be normally distributed. Often, skewness is easiest to detect with an individual value plot, a histogram, or a boxplot.
Outliers, which are data values that are far away from other data values, can strongly affect your results. Often, outliers are easy to identify on an individual value plot.
Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (special causes). Then, repeat the analysis.
Use the interval plot to display the mean and confidence interval for each group.
Interpret these intervals carefully because your rate of type I error increases when you make multiple comparisons. That is, the more comparisons you make, the higher the probability that at least one comparison will incorrectly conclude that one of the observed differences is significantly different.
In these results, Blend 2 has the lowest mean and Blend 4 has the highest. You cannot determine from this graph whether any differences are statistically significant. To determine statistical significance, assess the confidence intervals for the differences of means.
Use the confidence intervals to determine likely ranges for the differences and to assess the practical significance of the differences. The graph displays a set of confidence intervals for the difference between pairs of means. Confidence intervals that do not contain zero indicate a mean difference that is statistically significant.
Depending on the comparison method you chose, the plot compares different pairs of groups and displays one of the following types of confidence intervals.
Individual confidence level
The percentage of times that a single confidence interval would include the true difference between one pair of group means if the study were repeated multiple times.
Simultaneous confidence level
The percentage of times that a set of confidence intervals would include the true differences for all group comparisons if the study were repeated multiple times.
Controlling the simultaneous confidence level is particularly important when you perform multiple comparisons. If you do not control the simultaneous confidence level, the chance that at least one confidence interval does not contain the true difference increases with the number of comparisons.
The mean of the observations within each group. The mean describes each group with a single value that identifies the center of the data. It is the sum of all the observations within a group divided by the number of observations in that group.
The mean of each sample provides an estimate of each population mean. The differences between sample means are the estimates of the difference between the population means.
Because the differences between the group means are based on data from a sample and not the entire population, you cannot be certain that they equal the population differences. To get a better sense of the population differences, you can use the confidence intervals.
The sample size (N) is the total number of observations in each group.
The sample size affects the confidence interval and the power of the test.
Usually, a larger sample yields a narrower confidence interval. A larger sample size also gives the test more power to detect a difference. For more information, go to What is power?.
The normal plot of the residuals displays the residuals versus their expected values when the distribution is normal.
Use the normal probability plot of residuals to verify the assumption that the residuals are normally distributed. The normal probability plot of the residuals should approximately follow a straight line.
If your one-way ANOVA design meets the guidelines for sample size, the results are not substantially affected by departures from normality.
If you see a nonnormal pattern, use the other residual plots to check for other problems with the model, such as missing terms or a time order effect. If the residuals do not follow a normal distribution and the data do not meet the sample size guidelines, the confidence intervals and p-values can be inaccurate.
One-way ANOVA is a hypothesis test that evaluates two mutually exclusive statements about two or more population means. These two statements are called the null hypothesis and the alternative hypothesis. A hypothesis test uses sample data to determine whether to reject the null hypothesis.
Compare the p-value to the significance level to determine whether to reject the null hypothesis.
The pooled standard deviation is an estimate of the common standard deviation for all levels. The pooled standard deviation is the standard deviation of all data points around their group mean (not around the overall mean). Larger groups have a proportionally greater influence on the overall estimate of the pooled standard deviation.
A higher standard deviation value indicates greater spread in the data. A higher value also produces less precise (wider) confidence intervals and lower statistical power.
Minitab uses the pooled standard deviation to create the confidence intervals for both the group means and the differences between group means.
Group | Mean | Standard Deviation | N
1 | 9.7 | 2.5 | 50
2 | 12.1 | 2.9 | 50
3 | 14.5 | 3.2 | 50
4 | 17.3 | 6.8 | 200
The first three groups are equal in size (n=50) with standard deviations around 3. The fourth group is much larger (n=200) and has a higher standard deviation (6.8). Because the pooled standard deviation uses a weighted average, its value (5.488) is closer to the standard deviation of the largest group.
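The pooled value of 5.488 can be verified by hand: the pooled variance is a weighted average of the group variances, weighted by each group's degrees of freedom.

```python
from math import sqrt

# (n, standard deviation) for the four groups in the table above.
groups = [(50, 2.5), (50, 2.9), (50, 3.2), (200, 6.8)]

ss_error = sum((n - 1) * sd ** 2 for n, sd in groups)   # pooled sum of squares
df_error = sum(n - 1 for n, _ in groups)                # pooled degrees of freedom
pooled_sd = sqrt(ss_error / df_error)
print(round(pooled_sd, 3))  # 5.488, pulled toward the largest group
```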
The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.
Use the p-value in the ANOVA output to determine whether the differences between some of the means are statistically significant.
The residuals versus fits graph plots the residuals on the y-axis and the fitted values on the x-axis.
Use the residuals versus fits plot to verify the assumption that the residuals are randomly distributed and have constant variance. Ideally, the points should fall randomly on both sides of 0, with no recognizable patterns in the points.
Pattern | What the pattern may indicate
Fanning or uneven spreading of residuals across fitted values | Nonconstant variance
A point that is far away from zero | An outlier
If you identify any patterns or outliers in your residuals versus fits plot, consider the following solutions:
Issue | Possible solution
Nonconstant variance | Consider using Welch's ANOVA, which is an option for one-way ANOVA in Minitab Statistical Software. Under Options, uncheck Assume equal variances.
An outlier or influential point | Verify that the value is not a measurement error or a data-entry error. Consider removing data values for abnormal, one-time events (special causes). Then, repeat the analysis.
The residual versus order plot displays the residuals in the order that the data were collected.
The residual versus variables plot displays the residuals versus another variable. The variable could already be included in your model. Or, the variable may not be in the model, but you suspect it affects the response.
If you see a nonrandom pattern in the residuals, it indicates that the variable affects the response in a systematic way. Consider including this variable in an analysis.
R^{2} is the percentage of variation in the response that is explained by the model. It is calculated as 1 minus the ratio of the error sum of squares (which is the variation that is not explained by the model) to the total sum of squares (which is the total variation in the data).
Use R^{2} to determine how well the model fits your data. The higher the R^{2} value, the better the model fits your data. R^{2} is always between 0% and 100%.
R^{2} always increases when you add additional predictors to a model. For example, the best five-predictor model will always have an R^{2} that is at least as high as that of the best four-predictor model. Therefore, R^{2} is most useful when you compare models of the same size.
Small samples do not provide a precise estimate of the strength of the relationship between the response and predictors. If you need R^{2} to be more precise, you should use a larger sample (typically, 40 or more).
R^{2} is just one measure of how well the model fits the data. Even when a model has a high R^{2}, you should check the residual plots to verify that the model meets the model assumptions.
Adjusted R^{2} is the percentage of the variation in the response that is explained by the model, adjusted for the number of predictors in the model relative to the number of observations. Adjusted R^{2} is calculated as 1 minus the ratio of the mean square error (MSE) to the mean square total (MS Total).
Use adjusted R^{2} when you want to compare models that have different numbers of predictors. R^{2} always increases when you add a predictor to the model, even when there is no real improvement to the model. The adjusted R^{2} value incorporates the number of predictors in the model to help you choose the correct model.
Step | % Potato | Cooling rate | Cooking temp | R^{2} | Adjusted R^{2} | P-value
1 | X | | | 52% | 51% | 0.000
2 | X | X | | 63% | 62% | 0.000
3 | X | X | X | 65% | 62% | 0.000
The first step yields a statistically significant regression model. The second step adds cooling rate to the model. Adjusted R^{2} increases, which indicates that cooling rate improves the model. The third step, which adds cooking temperature to the model, increases the R^{2} but not the adjusted R^{2}. These results indicate that cooking temperature does not improve the model. Based on these results, you consider removing cooking temperature from the model.
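The adjusted R^{2} formula stated above (1 minus MSE over MS Total) can be sketched with hypothetical sums of squares and degrees of freedom:

```python
# Hypothetical sums of squares and degrees of freedom.
ss_total, df_total = 20.0, 8
ss_error, df_error = 6.0, 6

mse = ss_error / df_error          # mean square error
ms_total = ss_total / df_total     # mean square total
adj_r_squared = 1 - mse / ms_total   # 0.6, versus a plain R^2 of 0.7
```

The division by degrees of freedom is the penalty that keeps adjusted R^{2} from rising automatically when predictors are added.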
Predicted R^{2} is calculated with a formula that is equivalent to systematically removing each observation from the data set, estimating the regression equation, and determining how well the model predicts the removed observation. The value of predicted R^{2} ranges between 0% and 100%.
Use predicted R^{2} to determine how well your model predicts the response for new observations. Models that have larger predicted R^{2} values have better predictive ability.
A predicted R^{2} that is substantially less than R^{2} may indicate that the model is overfit. An overfit model occurs when you add terms for effects that are not important in the population, although they may appear important in the sample data. The model becomes tailored to the sample data and therefore, may not be useful for making predictions about the population.
Predicted R^{2} can also be more useful than adjusted R^{2} for comparing models because it is calculated with observations that are not included in the model calculation.
For example, an analyst at a financial consulting company develops a model to predict future market conditions. The model looks promising because it has an R^{2} of 87%. However, the predicted R^{2} is only 52%, which indicates that the model may be overfit.
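The leave-one-out idea behind predicted R^{2} can be sketched for a simple straight-line fit: remove each observation, refit, predict the held-out point, and accumulate the squared prediction errors (often called PRESS). The data below are made up.

```python
from statistics import mean

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]   # made-up, nearly linear data

def fit_line(xs, ys):
    """Least-squares intercept and slope for y = b0 + b1 * x."""
    xb, yb = mean(xs), mean(ys)
    b1 = sum((a - xb) * (b - yb) for a, b in zip(xs, ys)) / sum(
        (a - xb) ** 2 for a in xs)
    return yb - b1 * xb, b1

press = 0.0
for i in range(len(x)):
    # Refit without observation i, then predict it.
    b0, b1 = fit_line(x[:i] + x[i + 1:], y[:i] + y[i + 1:])
    press += (y[i] - (b0 + b1 * x[i])) ** 2

ss_total = sum((v - mean(y)) ** 2 for v in y)
pred_r_squared = 1 - press / ss_total   # high here, since the data are nearly linear
```

An overfit model would predict its held-out points poorly, inflating PRESS and pulling predicted R^{2} well below R^{2}.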
S represents the standard deviation of how far the data values fall from the fitted values. S is measured in the units of the response.
Use S to assess how well the model describes the response. The lower the value of S, the better the model describes the response. However, a low S value by itself does not indicate that the model meets the model assumptions. You should check the residual plots to verify the assumptions.
For example, you work for a potato chip company that examines the factors that affect the percentage of crumbled potato chips per container. You reduce the model to the significant predictors, and S is calculated as 1.79. This result indicates that the standard deviation of the data points around the fitted values is 1.79. If you are comparing models, values that are lower than 1.79 indicate a better fit, and higher values indicate a worse fit.
The standard error of the difference between means (SE of Difference) estimates the variability of the difference between sample means that you would obtain if you took repeated samples from the same populations.
Use the standard error of the difference between means to determine how precisely the differences between the sample means estimate the differences between the population means. A lower standard error value indicates a more precise estimate.
Minitab uses the standard error of the difference to calculate the confidence intervals of the differences between means, which are ranges of values that are likely to include the population differences.
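When a common (pooled) standard deviation is assumed, the standard error of the difference between two group means can be sketched as follows; the numbers are hypothetical.

```python
from math import sqrt

pooled_sd = 3.95    # hypothetical pooled standard deviation
n1, n2 = 50, 200    # sizes of the two groups being compared

se_diff = pooled_sd * sqrt(1 / n1 + 1 / n2)
# Larger samples shrink se_diff, making the estimated difference more precise.
```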
The significance level (denoted by alpha or α) is the maximum acceptable level of risk for rejecting the null hypothesis when the null hypothesis is true (type I error).
Use the significance level to decide whether to reject or fail to reject the null hypothesis (H_{0}). When the p-value is less than the significance level, the usual interpretation is that the results are statistically significant, and you reject H_{0}.
For oneway ANOVA, you reject the null hypothesis when there is sufficient evidence to conclude that not all of the means are equal.
Use the simultaneous confidence intervals to identify mean differences that are statistically significant, to determine likely ranges for the differences, and to assess the practical significance of the differences. The table displays a set of confidence intervals for the difference between pairs of means. Confidence intervals that do not contain zero indicate a mean difference that is statistically significant.
The simultaneous confidence level is the percentage of times that a set of confidence intervals includes the true differences for all group comparisons if the study were repeated multiple times.
Controlling the simultaneous confidence level is particularly important when you perform multiple comparisons. If you do not control the simultaneous confidence level, the chance that at least one confidence interval does not contain the true difference increases with the number of comparisons.
The confidence interval of the difference is composed of the following two parts:
Use the confidence intervals to assess the differences between group means.
The standard deviation is the most common measure of dispersion, or how spread out the data are around the mean. The symbol σ (sigma) is often used to represent the standard deviation of a population. The symbol s is used to represent the standard deviation of a sample.
The sample standard deviation of a group is an estimate of the population standard deviation of that group. The standard deviations are used to calculate the confidence intervals and the pvalues. Larger sample standard deviations result in less precise (wider) confidence intervals and lower statistical power.
Analysis of variance assumes that the population standard deviations for all levels are equal. If you cannot assume equal variances, use Welch's ANOVA, which is an option for one-way ANOVA that is available in Minitab Statistical Software.
The t-value in one-way ANOVA is a test statistic that measures the ratio between the difference in means and the standard error of the difference.
You can use the t-value to determine whether to reject the null hypothesis, which states that the difference in means is 0. However, the p-value is used more often because it is easier to interpret. For more information on using the critical value, go to Using the t-value to determine whether to reject the null hypothesis.
Minitab uses the t-value to calculate the p-value.
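The ratio itself is simple arithmetic; a sketch with hypothetical numbers:

```python
difference = 3.0        # hypothetical difference between two sample means
se_difference = 1.5     # hypothetical standard error of that difference

t_value = difference / se_difference
print(t_value)  # 2.0
```

A t-value far from 0 in either direction indicates that the observed difference is large relative to its standard error.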