Step 1: Determine whether the association between the response and the predictor is statistically significant
To determine whether the association between the response variable and the predictor variable in the model is statistically significant, compare the p-value for the predictor to your significance level to assess the null hypothesis. The null hypothesis is that the predictor's coefficient is equal to zero, which indicates that there is no association between the predictor and the response. Usually, a significance level (denoted as α or alpha) of 0.05 works well. A significance level of 0.05 indicates a 5% risk of concluding that an association exists when there is no actual association.
P-value ≤ α: The association is statistically significant
If the p-value is less than or equal to the significance level, you can conclude that there is a statistically significant association between the response variable and the predictor.
P-value > α: The association is not statistically significant
If the p-value is greater than the significance level, you cannot conclude that there is a statistically significant association between the response variable and the predictor.
Key Result: P-Value
In these results, the p-value for dose is 0.000, which is less than the significance level of 0.05. These results indicate that the association between the dose and the presence of bacteria at the end of treatment is statistically significant.
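The decision rule in Step 1 can be sketched in a few lines of Python. The p-value below is the reported value for dose; note that a p-value displayed as 0.000 is rounded to three decimals, meaning it is below 0.0005, not exactly zero.

```python
# A minimal sketch of the significance decision rule.
alpha = 0.05          # the usual significance level
p_value_dose = 0.000  # reported p-value for dose (rounded to three decimals)

if p_value_dose <= alpha:
    decision = "statistically significant"
else:
    decision = "not statistically significant"

print(decision)
```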
Step 2: Understand the effects of the predictor
Use the odds ratio to understand the effect of a predictor. Odds ratios that are greater than 1 indicate that the event is more likely to occur as the predictor increases. Odds ratios that are less than 1 indicate that the event is less likely to occur as the predictor increases.
Odds Ratios for Continuous Predictor
Key Result: Odds Ratio
In these results, the model uses the dosage level of a medicine to predict the presence or absence of bacteria in adults. The odds ratio is approximately 38, which indicates that for every 1 mg increase in the dosage level, the odds that no bacteria are present increase by a factor of approximately 38.
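The odds ratio is the exponential of the predictor's coefficient, and it multiplies the odds of the event, not the probability. A minimal sketch, using a hypothetical coefficient back-solved so the odds ratio matches the approximate value of 38, and a hypothetical baseline probability:

```python
import math

# Hypothetical dose coefficient, chosen so exp(beta) is about 38.
beta_dose = math.log(38.0)
odds_ratio = math.exp(beta_dose)          # odds ratio per 1 mg increase in dose

# The odds ratio multiplies the odds, not the probability.
p_current = 0.10                          # hypothetical P(no bacteria) at dose d
odds_current = p_current / (1 - p_current)
odds_next = odds_current * odds_ratio     # odds at dose d + 1 mg
p_next = odds_next / (1 + odds_next)      # back to a probability (about 0.81)
```

Note that the probability rises from 0.10 to about 0.81, not to 38 times 0.10: a 38-fold increase applies to the odds, which is why the odds ratio is the natural effect measure for logistic regression.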
Use the fitted line plot to examine the relationship between the response variable and the predictor variable.
Step 3: Determine how well the model fits your data
The higher the deviance R2, the better the model fits your data. Deviance R2 is always between 0% and 100%.
Deviance R2 always increases when you add additional predictors to a model. For example, the best 5-predictor model will always have an R2 that is at least as high as the best 4-predictor model. Therefore, deviance R2 is most useful when you compare models of the same size.
For binary logistic regression, the format of the data affects the deviance R2 value. The deviance R2 is usually higher for data in Event/Trial format. Deviance R2 values are comparable only between models that use the same data format.
Deviance R2 is just one measure of how well the model fits the data. Even when a model has a high R2, you should check the residual plots to assess how well the model fits the data.
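Deviance R2 compares the fitted model's residual deviance to the deviance of a model with no predictors (the null deviance). A minimal sketch, with hypothetical deviance values chosen to produce a value near 96%:

```python
# Deviance R2 = 1 - (residual deviance / null deviance).
# Both deviance values below are hypothetical.
def deviance_r_squared(residual_deviance, null_deviance):
    return 1 - residual_deviance / null_deviance

r2 = deviance_r_squared(7.23, 182.7)   # about 0.96, i.e. 96%
```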
Deviance R-sq (adj)
Use adjusted deviance R2 to compare models that have different numbers of predictors. Deviance R2 always increases when you add a predictor to the model. The adjusted deviance R2 value incorporates the number of predictors in the model to help you choose the correct model.
Use AIC to compare different models. The smaller the AIC, the better the model fits the data. However, the model with the smallest AIC does not necessarily fit the data well. Also use the residual plots to assess how well the model fits the data.
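AIC balances fit against model complexity: with k estimated parameters and maximized log-likelihood ln(L), AIC = 2k - 2 ln(L), so a model is penalized for each extra parameter. A sketch with hypothetical log-likelihoods for two candidate models:

```python
# AIC = 2k - 2*ln(L); smaller values indicate a better trade-off
# between fit and complexity. Log-likelihoods here are hypothetical.
def aic(n_parameters, log_likelihood):
    return 2 * n_parameters - 2 * log_likelihood

aic_small = aic(2, -3.61)   # intercept + dose
aic_large = aic(3, -3.55)   # intercept + dose + an extra predictor
```

Here the extra predictor raises the log-likelihood only slightly, so the larger model's AIC is higher and the smaller model is preferred.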
In these results, the model explains 96.04% of the deviance in the response variable. For these data, the deviance R2 value indicates the model provides a good fit to the data. If you fit additional models with different predictors, use the adjusted deviance R2 value and the AIC value to compare how well the models fit the data.
Step 4: Determine whether your model meets the assumptions of the analysis
Use the residual plots to help you determine whether the model is adequate and meets the assumptions of the analysis. If the assumptions are not met, the model may not fit the data well and you should use caution when you interpret the results.
For more information on how to handle patterns in the residual plots, click the name of the residual plot in the list at the top of the page.
Residuals versus fits plot
Use the residuals versus fits plot to verify the assumption that the residuals are randomly distributed. Ideally, the points should fall randomly on both sides of 0, with no recognizable patterns in the points.
The residuals versus fits plot is only available when the data are in Event/Trial format.
The patterns in the following table may indicate that the model does not meet the model assumptions.

Pattern | What the pattern may indicate
Fanning or uneven spreading of residuals across fitted values | An inappropriate link function
Curvilinear pattern | A missing higher-order term or an inappropriate link function
A point that is far away from zero | An outlier
A point that is far away from the other points in the x-direction | An influential point
If the pattern indicates that you should fit the model with a different link function, you should use Binary Fitted Line Plot in Minitab Statistical Software.
In this residuals versus fits plot, the data appear to be randomly distributed about zero. There is no evidence that the value of the residual depends on the fitted value.
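For Event/Trial data, each point on this plot is a deviance residual for one row of trials. A minimal sketch of the standard binomial deviance residual formula, with hypothetical counts and a hypothetical fitted probability:

```python
import math

# Deviance residual for one Event/Trial row: y events in n trials with
# fitted probability p_hat (all values below are hypothetical).
def deviance_residual(y, n, p_hat):
    fitted = n * p_hat
    def term(observed, expected):
        return observed * math.log(observed / expected) if observed > 0 else 0.0
    d_squared = 2 * (term(y, fitted) + term(n - y, n - fitted))
    # The residual takes the sign of (observed - fitted).
    return math.copysign(math.sqrt(max(d_squared, 0.0)), y - fitted)

r = deviance_residual(3, 10, 0.5)   # observed count below fitted, so negative
```

Plotting these residuals against the fitted values (for example, with a scatter plot) and checking for random scatter about zero is what the residuals versus fits plot automates.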
Residuals versus order plot
Use the residuals versus order plot to verify the assumption that the residuals are independent from one another. Independent residuals show no trends or patterns when displayed in time order. Patterns in the points may indicate that residuals near each other are correlated, and thus not independent. Ideally, the residuals on the plot should fall randomly around the center line.
If you see a pattern, investigate the cause. The following types of patterns may indicate that the residuals are dependent.
In this residuals versus order plot, the residuals appear to fall randomly around the centerline. There is no evidence that the residuals are not independent.