An effect describes the size and direction of the relationship between a term and the response variable. Minitab calculates effects for factors and interactions among factors.
The effect for a factor represents the predicted change in the mean response when the factor changes from the low level to the high level. Effects are twice the value of the coded coefficients. The sign of the effect indicates the direction of the relationship between the term and the response.
The more factors an interaction includes, the more difficult its effect is to interpret. For factors and interactions among factors, the size of the effect is usually a good way to assess the practical significance of the effect that a term has on the response variable.
The size of the effect does not indicate whether a term is statistically significant because the calculations for significance also consider the variation in the response data. To determine statistical significance, examine the p-value for the term.
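For example, in a single-factor two-level design, the effect is simply the mean response at the high level minus the mean response at the low level. The following sketch uses hypothetical data and NumPy (not Minitab output) to compute the effect and to confirm that it equals twice the coded coefficient:

```python
import numpy as np

# Factor A in coded units: -1 = low level, +1 = high level (hypothetical data)
A = np.array([-1, -1, +1, +1, -1, -1, +1, +1])
y = np.array([10.2, 9.8, 14.1, 13.9, 10.0, 10.3, 14.3, 13.8])

# Effect of A: mean response at the high level minus mean at the low level
effect_A = y[A == 1].mean() - y[A == -1].mean()

# Coded coefficient for A from a least-squares fit of y on [intercept, A]
X = np.column_stack([np.ones(len(A)), A])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

print(effect_A)     # ~3.95
print(2 * beta[1])  # ~3.95: the effect is twice the coded coefficient
```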
The coefficient describes the size and direction of the relationship between a term in the model and the response variable. To minimize multicollinearity among the terms, the coefficients are all in coded units.
The coefficient for a term represents the change in the mean response associated with an increase of one coded unit in that term, while the other terms are held constant. The sign of the coefficient indicates the direction of the relationship between the term and the response.
The size of the coefficient is half the size of the effect. The effect represents the change in the predicted mean response when a factor changes from its low level to its high level.
As with the effect, use the size of the coefficient to assess practical significance; it does not indicate whether the term is statistically significant. To determine statistical significance, examine the p-value for the term.
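Coded units rescale each factor so that its low level maps to -1 and its high level maps to +1. A minimal sketch of that conversion, assuming a hypothetical temperature factor run at 100 (low) and 200 (high) degrees:

```python
def to_coded(x, low, high):
    """Map a natural (uncoded) factor value onto the -1 to +1 coded scale."""
    center = (low + high) / 2
    half_range = (high - low) / 2
    return (x - center) / half_range

print(to_coded(100, 100, 200))  # -1.0, the low level
print(to_coded(200, 100, 200))  # +1.0, the high level
print(to_coded(150, 100, 200))  #  0.0, the center point
```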
The standard error of the coefficient estimates the variability between coefficient estimates that you would obtain if you took samples from the same population again and again. The calculation assumes that the experimental design and the coefficients to estimate would remain the same if you sampled again and again.
Use the standard error of the coefficient to measure the precision of the estimate of the coefficient. The smaller the standard error, the more precise the estimate. Dividing the coefficient by its standard error calculates a t-value. If the p-value associated with this t-value is less than your significance level, you conclude that the coefficient is statistically significant.
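A minimal sketch of that ratio, using hypothetical values for the coefficient and its standard error:

```python
coef = 1.975     # hypothetical coded coefficient
se_coef = 0.072  # hypothetical standard error of the coefficient

t_value = coef / se_coef
print(t_value)   # ~27.4; a larger |t| is stronger evidence the coefficient is not 0
```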
These confidence intervals (CI) are ranges of values that are likely to contain the true value of the coefficient for each term in the model.
Because samples are random, two samples from a population are unlikely to yield identical confidence intervals. However, if you take many random samples, a certain percentage of the resulting confidence intervals contain the unknown population parameter. The percentage of these confidence intervals that contain the parameter is the confidence level of the interval.
Use the confidence interval to assess the estimate of the population coefficient for each term in the model.
For example, with a 95% confidence level, you can be 95% confident that the confidence interval contains the value of the coefficient for the population. The confidence interval helps you assess the practical significance of your results. Use your specialized knowledge to determine whether the confidence interval includes values that have practical significance for your situation. If the interval is too wide to be useful, consider increasing your sample size.
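A minimal sketch of how such an interval is formed from a coefficient, its standard error, and a t critical value, assuming hypothetical estimates and using SciPy for the quantile:

```python
from scipy import stats

coef, se_coef, df_error = 1.975, 0.072, 5  # hypothetical values
alpha = 0.05                               # 95% confidence level

t_crit = stats.t.ppf(1 - alpha / 2, df_error)
lower = coef - t_crit * se_coef
upper = coef + t_crit * se_coef
print(lower, upper)  # an interval that excludes 0 suggests a significant term
```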
The t-value measures the ratio between the coefficient and its standard error.
Minitab uses the t-value to calculate the p-value, which you use to test whether the coefficient is significantly different from 0.
You can use the t-value to determine whether to reject the null hypothesis. However, the p-value is used more often because the threshold for the rejection of the null hypothesis does not depend on the degrees of freedom. For more information on using the t-value, go to Using the t-value to determine whether to reject the null hypothesis.
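A minimal sketch of that calculation, assuming hypothetical values for the t-value and the error degrees of freedom and using SciPy's t-distribution:

```python
from scipy import stats

t_value, df_error = 27.4, 5  # hypothetical values
p_value = 2 * stats.t.sf(abs(t_value), df_error)  # two-sided p-value
print(p_value)  # compare to your significance level, for example 0.05
```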
The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.
To determine whether a coefficient is statistically different from 0, compare the p-value for the term to your significance level to assess the null hypothesis. The null hypothesis is that the coefficient equals 0, which implies that there is no association between the term and the response.
Usually, a significance level (denoted as α or alpha) of 0.05 works well. A significance level of 0.05 indicates a 5% risk of concluding that the coefficient differs from 0 when the coefficient is actually 0.
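A minimal sketch of that comparison, continuing with hypothetical values:

```python
alpha = 0.05       # significance level
p_value = 1.0e-06  # hypothetical p-value

if p_value < alpha:
    print("Reject the null hypothesis: the coefficient differs from 0")
else:
    print("Fail to reject the null hypothesis")
```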
The variance inflation factor (VIF) indicates how much the variance of a coefficient is inflated due to correlations among the predictors in the model.
Use the VIF to describe how much multicollinearity (correlation between predictors) exists in a model. In most factorial designs, all the VIF values are 1, which indicates that the predictors have no multicollinearity. The absence of multicollinearity simplifies the determination of statistical significance. Two common ways that VIF values increase are the inclusion of covariates in the model and the occurrence of botched runs during data collection; higher VIF values complicate the interpretation of statistical significance. Also, for binary responses, the VIF values are often greater than 1. Use the following guidelines to interpret the VIF values:
| VIF | Status of predictor |
| --- | --- |
| VIF = 1 | Not correlated |
| 1 < VIF < 5 | Moderately correlated |
| VIF > 5 | Highly correlated |
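The VIF for a term is 1 / (1 − R²), where R² comes from regressing that term's column of the design matrix on the columns of all the other terms. The sketch below implements that definition with NumPy (it is illustrative, not Minitab's implementation) and confirms that an orthogonal two-level factorial design gives VIF values of 1:

```python
import numpy as np

def vif(X):
    """VIF for each column of X (model terms, without the intercept column)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    result = []
    for j in range(k):
        y = X[:, j]
        # Regress column j on the intercept and all other columns
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        y_hat = others @ np.linalg.lstsq(others, y, rcond=None)[0]
        r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
        result.append(1 / (1 - r2))
    return result

# Orthogonal 2x2 factorial: columns for A, B, and the AB interaction
A = np.array([-1, -1, +1, +1])
B = np.array([-1, +1, -1, +1])
print(vif(np.column_stack([A, B, A * B])))  # ~[1.0, 1.0, 1.0]
```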
Be cautious when you use statistical significance to choose terms to remove from a model in the presence of multicollinearity. Add and remove only one term at a time from the model. Monitor changes in the model summary statistics, as well as the tests of statistical significance, as you change the model.