Step 1: Determine whether the association between the response and the term is statistically significant
To determine whether the association between the response and each term in the model is statistically significant, compare the p-value for the term to your significance level to assess the null hypothesis. The null hypothesis is that there is no association between the term and the response. Usually, a significance level (denoted as α or alpha) of 0.05 works well. A significance level of 0.05 indicates a 5% risk of concluding that an association exists when there is no actual association.
P-value ≤ α: The association is statistically significant
If the p-value is less than or equal to the significance level, you can conclude that there is a statistically significant association between the response variable and the term.
P-value > α: The association is not statistically significant
If the p-value is greater than the significance level, you cannot conclude that there is a statistically significant association between the response variable and the term. You may want to refit the model without the term.
If there are multiple predictors without a statistically significant association with the response, you can reduce the model by removing terms one at a time. For more information on removing terms from the model, go to Model reduction.
If a model term is statistically significant, the interpretation depends on the type of term. The interpretations are as follows:
If a continuous predictor is significant, you can conclude that the coefficient for the predictor is different from zero.
If a categorical predictor is significant, you can conclude that not all of the levels of the factor have the same probability of the event.
If an interaction term is significant, you can conclude that the relationship between a predictor and the probability of the event depends on the other predictors in the term.
If a polynomial term is significant, you can conclude that the relationship between a predictor and the probability of the event depends on the magnitude of the predictor.
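As a minimal sketch of the comparison in this step, the two-sided p-value for a single coefficient can be computed from a Wald z statistic using the normal approximation. The coefficient and standard error below are made-up values, not output for any particular dataset:

```python
import math

def wald_p_value(coef, se):
    """Two-sided p-value for H0: coefficient = 0, from the Wald
    z statistic (z = coef / se) and the normal approximation."""
    z = abs(coef / se)
    # Standard normal survival function via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Hypothetical term: coefficient 0.85 with standard error 0.31.
p = wald_p_value(0.85, 0.31)
alpha = 0.05
print(f"p-value = {p:.4f}; significant at alpha = {alpha}: {p <= alpha}")
```

Because the p-value (about 0.006) is less than 0.05, this hypothetical term would be judged statistically significant.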
Step 2: Understand the effects of the predictors
Use the odds ratio to understand the effect of a predictor. The interpretation of the odds ratio depends on whether the predictor is categorical or continuous. Minitab calculates odds ratios when the model uses the logit link function.
Odds Ratios for Continuous Predictors
Odds ratios that are greater than 1 indicate that the event is more likely to occur as the predictor increases. Odds ratios that are less than 1 indicate that the event is less likely to occur as the predictor increases.
Odds Ratios for Categorical Predictors
For categorical predictors, the odds ratio compares the odds of the event occurring at 2 different levels of the predictor. Minitab sets up the comparison by listing the levels in 2 columns, Level A and Level B. Level B is the reference level for the factor. Odds ratios that are greater than 1 indicate that the event is less likely at level B. Odds ratios that are less than 1 indicate that the event is more likely at level B. For information on how to select the reference level for the analysis, go to Specify the coding scheme for Fit Binary Logistic Model.
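Under the logit link, each odds ratio is the exponential of the corresponding coefficient on the log-odds scale. A minimal sketch with made-up coefficients:

```python
import math

# Hypothetical coefficients from a logit-link model (log-odds scale).
coef_continuous = 0.40   # per 1-unit increase in a continuous predictor
coef_level_a = -0.69     # Level A relative to the reference Level B

# exp(coefficient) gives the odds ratio.
or_continuous = math.exp(coef_continuous)  # > 1: event more likely as the predictor increases
or_level_a = math.exp(coef_level_a)        # < 1: event more likely at Level B than at Level A

print(f"continuous OR = {or_continuous:.2f}, Level A vs. Level B OR = {or_level_a:.2f}")
```

Here the odds of the event at Level A are about half the odds at Level B, so the event is more likely at the reference level.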
Step 3: Determine how well the model fits your data
Deviance R-sq
The higher the deviance R2, the better the model fits your data. Deviance R2 is always between 0% and 100%.
Deviance R2 always increases when you add additional predictors to a model. For example, the best 5-predictor model will always have an R2 that is at least as high as the best 4-predictor model. Therefore, deviance R2 is most useful when you compare models of the same size.
For binary logistic regression, the format of the data affects the deviance R2 value. The deviance R2 is usually higher for data in Event/Trial format. Deviance R2 values are comparable only between models that use the same data format.
Deviance R2 is just one measure of how well the model fits the data. Even when a model has a high R2, you should check the residual plots and goodness-of-fit tests to assess how well a model fits the data.
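For binary responses, deviance R2 can be sketched from the log-likelihood of the fitted model and of the null (intercept-only) model, where the null model's fitted probability is the overall event rate. The responses and fitted probabilities below are made up for illustration:

```python
import math

def log_likelihood(y, p):
    """Binomial log-likelihood for binary responses y and fitted probabilities p."""
    return sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
               for yi, pi in zip(y, p))

# Hypothetical binary responses and fitted probabilities from some model.
y = [1, 0, 1, 1, 0, 0, 1, 0]
p_model = [0.8, 0.3, 0.7, 0.9, 0.2, 0.4, 0.6, 0.1]

# Null model: every fitted probability equals the overall event rate.
p_bar = sum(y) / len(y)
deviance_model = -2 * log_likelihood(y, p_model)
deviance_null = -2 * log_likelihood(y, [p_bar] * len(y))

deviance_r2 = 1 - deviance_model / deviance_null
print(f"Deviance R-sq = {100 * deviance_r2:.1f}%")
```

A smaller model deviance relative to the null deviance yields a higher deviance R2.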
Deviance R-sq (adj)
Use adjusted deviance R2 to compare models that have different numbers of predictors. Deviance R2 always increases when you add a predictor to the model. The adjusted deviance R2 value incorporates the number of predictors in the model to help you choose the correct model.
AIC
Use AIC to compare different models. The smaller the AIC, the better the model fits the data. However, the model with the smallest AIC for a set of predictors does not necessarily fit the data well. Also use goodness-of-fit tests and residual plots to assess how well a model fits the data.
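The standard definition is AIC = −2 × (log-likelihood) + 2 × (number of estimated parameters), so the penalty grows with model size. A minimal sketch with hypothetical log-likelihood values:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: smaller is better."""
    return -2 * log_likelihood + 2 * n_params

# Hypothetical fits: a 4-predictor model vs. a 5-predictor model
# (parameter counts include the intercept).
aic_small = aic(log_likelihood=-52.3, n_params=5)
aic_large = aic(log_likelihood=-51.9, n_params=6)

# The extra predictor improves the likelihood slightly, but not enough
# to offset the penalty, so the smaller model has the smaller AIC.
best = "4-predictor" if aic_small < aic_large else "5-predictor"
print(aic_small, aic_large, best)
```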
Step 4: Determine whether the model does not fit the data
Use the goodness-of-fit tests to determine whether the predicted probabilities deviate from the observed probabilities in a way that the binomial distribution does not predict. If the p-value for the goodness-of-fit test is less than your significance level, conclude that such a deviation exists. This list provides common reasons for the deviation:
Incorrect link function
Omitted higher-order term for variables in the model
Omitted predictor that is not in the model
If the deviation is statistically significant, you can try a different link function or change the terms in the model.
For binary logistic regression, the format of the data affects the p-value because it changes the number of trials per row.
Deviance: The p-value for the deviance test tends to be lower for data that are in the Binary Response/Frequency format compared to data in the Event/Trial format. For data in Binary Response/Frequency format, the Hosmer-Lemeshow results are more trustworthy.
Pearson: The approximation to the chi-square distribution that the Pearson test uses is inaccurate when the expected number of events per row in the data is small. Thus, the Pearson goodness-of-fit test is inaccurate when the data are in Binary Response/Frequency format.
Hosmer-Lemeshow: The Hosmer-Lemeshow test does not depend on the number of trials per row in the data as the other goodness-of-fit tests do. When the data have few trials per row, the Hosmer-Lemeshow test is a more trustworthy indicator of how well the model fits the data.
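The Hosmer-Lemeshow idea can be sketched roughly as follows: sort cases by fitted probability, split them into groups (10 by convention), and compare observed vs. expected event counts across groups with a chi-square statistic on groups − 2 degrees of freedom. This is a simplified illustration, not Minitab's exact implementation, and the data are simulated:

```python
import math
import random

def chi2_sf(x, df):
    """Chi-square survival function P(X > x), closed form for even df."""
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow statistic: sort cases by fitted probability,
    split into groups, and compare observed vs. expected events."""
    pairs = sorted(zip(p, y))
    n = len(pairs)
    stat = 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        observed = sum(yi for _, yi in chunk)
        expected = sum(pi for pi, _ in chunk)
        variance = expected * (1 - expected / len(chunk))
        if variance > 0:
            stat += (observed - expected) ** 2 / variance
    return stat

# Simulated data from a well-calibrated model: responses are drawn
# from the fitted probabilities, so the test should rarely reject.
random.seed(1)
p = [random.random() for _ in range(200)]
y = [1 if random.random() < pi else 0 for pi in p]

stat = hosmer_lemeshow(y, p, groups=10)
p_value = chi2_sf(stat, df=10 - 2)  # df = groups - 2
print(f"HL statistic = {stat:.2f}, p-value = {p_value:.3f}")
```

A p-value above the significance level, as expected here, gives no evidence that the fitted probabilities are miscalibrated.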