Interpret the key results for Fit General Linear Model

Complete the following steps to interpret a general linear model. Key output includes the p-value, the coefficients, R2, and the residual plots.

Step 1: Determine whether the association between the response and the term is statistically significant

To determine whether the association between the response and each term in the model is statistically significant, compare the p-value for the term to your significance level to assess the null hypothesis. The null hypothesis is that there is no association between the term and the response. Usually, a significance level (denoted as α or alpha) of 0.05 works well. A significance level of 0.05 indicates a 5% risk of concluding that an association exists when there is no actual association.
P-value ≤ α: The association is statistically significant
If the p-value is less than or equal to the significance level, you can conclude that there is a statistically significant association between the response variable and the term.
P-value > α: The association is not statistically significant
If the p-value is greater than the significance level, you cannot conclude that there is a statistically significant association between the response variable and the term. You may want to refit the model without the term.
If there are multiple predictors without a statistically significant association with the response, you can reduce the model by removing terms one at a time. For more information on removing terms from the model, go to Model reduction.
If a model term is statistically significant, the interpretation depends on the type of term. The interpretations are as follows:
  • If a fixed factor is significant, you can conclude that not all the level means are equal.
  • If a random factor is significant, you can conclude that the factor contributes to the amount of variation in the response.
  • If an interaction term is significant, the relationship between a factor and the response depends on the other factors in the term. In this case, you should not interpret the main effects without considering the interaction effect.
  • If a covariate is statistically significant, you can conclude that changes in the value of the covariate are associated with changes in the mean response value.
  • If a polynomial term is significant, you can conclude that the data contain curvature.
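Outside of Minitab, the same check of term p-values against the significance level can be sketched with a general linear model fit in Python. The example below is a minimal illustration, not the Minitab analysis itself; the file name and the columns LightOutput, GlassType, and Temperature are hypothetical placeholders.

    # Minimal sketch (not Minitab output): fit a general linear model with a
    # categorical factor, a covariate, a polynomial term, and their interactions,
    # then compare each term's p-value to alpha. Names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.read_csv("glass_experiment.csv")  # columns: LightOutput, GlassType, Temperature

    model = smf.ols(
        "LightOutput ~ C(GlassType, Sum) * (Temperature + I(Temperature**2))",
        data=df,
    ).fit()

    alpha = 0.05
    anova = anova_lm(model, typ=3)            # one row (and p-value) per model term
    for term, p in anova["PR(>F)"].dropna().items():
        verdict = "significant" if p <= alpha else "not significant"
        print(f"{term}: p = {p:.3f} -> {verdict} at alpha = {alpha}")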

Coefficients

Term                                  Coef  SE Coef  T-Value  P-Value       VIF
Constant                             -4969      191   -25.97    0.000
Temperature                          83.87     3.13    26.82    0.000    301.00
GlassType
  1                                   1323      271     4.89    0.000   3604.00
  2                                   1554      271     5.74    0.000   3604.00
Temperature*Temperature            -0.2852   0.0125   -22.83    0.000    301.00
Temperature*GlassType
  1                                 -24.40     4.42    -5.52    0.000  15451.33
  2                                 -27.87     4.42    -6.30    0.000  15451.33
Temperature*Temperature*GlassType
  1                                 0.1124   0.0177     6.36    0.000   4354.00
  2                                 0.1220   0.0177     6.91    0.000   4354.00
Key Results: P-Value, Coefficients

In these results, the main effects for glass type and temperature are statistically significant at the significance level of 0.05. You can conclude that changes in these variables are associated with changes in the response variable.

Of the three types of glass in the experiment, the output displays the coefficients for two types. By default, Minitab removes one factor level to avoid perfect multicollinearity. Because the analysis uses the −1, 0, +1 coding scheme, the coefficients for the main effects represent the difference between each level mean and the overall mean. For example, glass type 1 is associated with light output that is 1323 units greater than the overall mean.
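As a quick illustration of the −1, 0, +1 coding, the following sketch uses made-up data for a single three-level factor (not the glass data). Under effects coding with balanced data, each displayed factor coefficient equals that level's mean minus the overall mean.

    # Illustrative only: under effects (-1, 0, +1) coding, each factor coefficient
    # equals that level's mean minus the overall mean (balanced, made-up data).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "y":     [10, 12, 11, 20, 22, 21, 30, 31, 29],
        "level": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    })

    fit = smf.ols("y ~ C(level, Sum)", data=df).fit()
    print(fit.params)  # intercept = overall mean; one coefficient per displayed level

    level_means = df.groupby("level")["y"].mean()
    print(level_means - df["y"].mean())  # matches the displayed coefficients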

Temperature is a covariate in this model. The coefficient for the main effect represents the change in the mean response for a one-unit increase in the covariate, while the other terms in the model are held constant. For each one-degree increase in temperature, the mean light output increases by 83.87 units.

Both glass type and temperature are included in higher-order terms that are statistically significant.

The two-way and three-way interaction terms for glass type and temperature are statistically significant. These interactions indicate that the relationship between each variable and the response depends on the value of the other variable. For example, the effect of glass type on light output depends on the temperature.

The polynomial term, Temperature*Temperature, indicates that the curvature in the relationship between temperature and light output is statistically significant.

You should not interpret the main effects without considering the interaction effects and curvature. To obtain a better understanding of the main effects, interaction effects, and curvature in your model, go to Factorial Plots and Response Optimizer.
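If you also work outside of Minitab, a rough view of an interaction can be obtained by tracing the mean response against the covariate separately for each factor level. The sketch below uses statsmodels' interaction_plot with the same hypothetical column names as in the Step 1 sketch; it is not a substitute for Minitab's Factorial Plots or Response Optimizer.

    # Rough interaction visualization (hypothetical column names): the mean response
    # is traced against temperature for each glass type, so non-parallel or crossing
    # lines suggest an interaction.
    import pandas as pd
    import matplotlib.pyplot as plt
    from statsmodels.graphics.factorplots import interaction_plot

    df = pd.read_csv("glass_experiment.csv")

    interaction_plot(
        x=df["Temperature"],
        trace=df["GlassType"].astype(str),
        response=df["LightOutput"],
    )
    plt.xlabel("Temperature")
    plt.ylabel("Mean light output")
    plt.show()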

Step 2: Determine how well the model fits your data

To determine how well the model fits your data, examine the goodness-of-fit statistics in the Model Summary table.

S

Use S to assess how well the model describes the response. Use S instead of the R2 statistics to compare the fit of models that have no constant.

S is measured in the units of the response variable and represents how far the data values fall from the fitted values. The lower the value of S, the better the model describes the response. However, a low S value by itself does not indicate that the model meets the model assumptions. You should check the residual plots to verify the assumptions.
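In general, S is the square root of the mean square error, S = sqrt(SS Error / DF Error), so it estimates the standard deviation of the error term in the same units as the response.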

R-sq

The higher the R2 value, the better the model fits your data. R2 is always between 0% and 100%.

R2 always increases when you add additional predictors to a model. For example, the best five-predictor model will always have an R2 that is at least as high as the best four-predictor model. Therefore, R2 is most useful when you compare models of the same size.
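In this context, R2 = 1 − SS Error / SS Total, the proportion of the total variation in the response that the model explains.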

R-sq (adj)

Use adjusted R2 when you want to compare models that have different numbers of predictors. R2 always increases when you add a predictor to the model, even when there is no real improvement to the model. The adjusted R2 value incorporates the number of predictors in the model to help you choose the correct model.
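A standard formulation is adjusted R2 = 1 − (SS Error / DF Error) / (SS Total / DF Total), where DF Total = n − 1. Because the error degrees of freedom shrink as terms are added, adjusted R2 can decrease when a new term improves the fit less than expected by chance.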

R-sq (pred)

Use predicted R2 to determine how well your model predicts the response for new observations. Models that have larger predicted R2 values have better predictive ability.

A predicted R2 that is substantially less than R2 may indicate that the model is over-fit. An over-fit model occurs when you add terms for effects that are not important in the population. The model becomes tailored to the sample data and, therefore, may not be useful for making predictions about the population.

Predicted R2 can also be more useful than adjusted R2 for comparing models because it is calculated with observations that are not included in the model calculation.
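Predicted R2 is typically calculated from the prediction error sum of squares (PRESS), in which each observation is removed in turn and then predicted from the remaining data: predicted R2 = 1 − PRESS / SS Total.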

AICc and BIC
When you show the details for each step of a stepwise method or when you show the expanded results of the analysis, Minitab shows two more statistics. These statistics are the corrected Akaike’s Information Criterion (AICc) and the Bayesian Information Criterion (BIC). Use these statistics to compare different models. For each statistic, smaller values are desirable. Minitab does not show these statistics or perform stepwise methods when the data include random factors.
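For a model fit by least squares with n observations and p estimated parameters, these criteria typically take the form AIC = n × ln(SS Error / n) + 2p (up to an additive constant), with AICc = AIC + 2p(p + 1) / (n − p − 1) and BIC = n × ln(SS Error / n) + p × ln(n). The small-sample correction in AICc matters most when n is not large relative to p.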
Consider the following points when you interpret the goodness-of-fit statistics:
  • Small samples do not provide a precise estimate of the strength of the relationship between the response and predictors. For example, if you need R2 to be more precise, you should use a larger sample (typically, 40 or more).

  • Goodness-of-fit statistics are just one measure of how well the model fits the data. Even when a model has a desirable value, you should check the residual plots to verify that the model meets the model assumptions.

Model Summary

      S    R-sq  R-sq(adj)  R-sq(pred)
19.1185  99.73%     99.61%      99.39%
Key Results: S, R-sq, R-sq (adj), R-sq (pred)

In these results, the model explains 99.73% of the variation in the light output of the face-plate glass samples. For these data, the R2 value indicates the model provides a good fit to the data. If additional models are fit with different predictors, use the adjusted R2 values and the predicted R2 values to compare how well the models fit the data.

Step 3: Determine whether your model meets the assumptions of the analysis

Use the residual plots to help you determine whether the model is adequate and meets the assumptions of the analysis. If the assumptions are not met, the model may not fit the data well and you should use caution when you interpret the results.

For more information on how to handle patterns in the residual plots, go to Residual plots for Fit General Linear Model and click the name of the residual plot in the list at the top of the page.
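As a rough sketch of how the three residual plots discussed below might be drawn outside of Minitab, the following uses a small synthetic ordinary least squares fit as a stand-in; in practice you would use the residuals and fitted values from your own fitted model.

    # Minimal sketch of the three residual plots discussed below. A small synthetic
    # OLS fit stands in for the general linear model; substitute your own model's
    # residuals and fitted values in practice.
    import numpy as np
    import matplotlib.pyplot as plt
    import statsmodels.api as sm
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.uniform(100, 200, size=40)
    y = 5.0 + 0.8 * x + rng.normal(scale=3.0, size=40)
    model = sm.OLS(y, sm.add_constant(x)).fit()

    residuals = model.resid
    fitted = model.fittedvalues

    fig, axes = plt.subplots(1, 3, figsize=(12, 4))

    # Residuals versus fits: look for random scatter around zero.
    axes[0].scatter(fitted, residuals)
    axes[0].axhline(0, linestyle="--")
    axes[0].set(title="Versus fits", xlabel="Fitted value", ylabel="Residual")

    # Residuals versus order: look for trends, shifts, or cycles over time.
    axes[1].plot(range(1, len(residuals) + 1), residuals, marker="o")
    axes[1].axhline(0, linestyle="--")
    axes[1].set(title="Versus order", xlabel="Observation order", ylabel="Residual")

    # Normal probability plot: points should roughly follow a straight line.
    stats.probplot(residuals, dist="norm", plot=axes[2])
    axes[2].set_title("Normal probability plot")

    plt.tight_layout()
    plt.show()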

Residuals versus fits plot

Use the residuals versus fits plot to verify the assumption that the residuals are randomly distributed and have constant variance. Ideally, the points should fall randomly on both sides of 0, with no recognizable patterns in the points.

The patterns in the following table may indicate that the model does not meet the model assumptions.
Pattern                                                               What the pattern may indicate
Fanning or uneven spreading of residuals across fitted values         Nonconstant variance
Curvilinear                                                           A missing higher-order term
A point that is far away from zero                                    An outlier
A point that is far away from the other points in the x-direction     An influential point
In this residuals versus fits plot, the data appear to be randomly distributed about zero. There is no evidence that the value of the residual depends on the fitted value.

Residuals versus order plot

Use the residuals versus order plot to verify the assumption that the residuals are independent from one another. Independent residuals show no trends or patterns when displayed in time order. Patterns in the points may indicate that residuals near each other may be correlated, and thus, not independent. Ideally, the residuals on the plot should fall randomly around the center line.
If you see a pattern, investigate the cause. The following types of patterns may indicate that the residuals are dependent:
  • Trend
  • Shift
  • Cycle
In this residuals versus order plot, the residuals appear to fall randomly around the centerline. There is no evidence that the residuals are not independent.

Normal probability plot of the residuals

Use the normal probability plot of the residuals to verify the assumption that the residuals are normally distributed. The normal probability plot of the residuals should approximately follow a straight line.

The patterns in the following table may indicate that the model does not meet the model assumptions.
Pattern                                  What the pattern may indicate
Not a straight line                      Nonnormality
A point that is far away from the line   An outlier
Changing slope                           An unidentified variable
In this normal probability plot, the points generally follow a straight line. There is no evidence of nonnormality, outliers, or unidentified variables.