# Interpret the key results for Nominal Logistic Regression

Complete the following steps to interpret a nominal logistic regression model. Key output includes the p-value, the coefficients, and the log-likelihood.

## Step 1: Determine whether the association between the response and the terms is statistically significant

To determine whether the association between the response and each term in the model is statistically significant, compare the p-value for the term to your significance level to assess the null hypothesis. The null hypothesis is that there is no association between the term and the response. Usually, a significance level (denoted as α or alpha) of 0.05 works well. A significance level of 0.05 indicates a 5% risk of concluding that an association exists when there is no actual association.
- **P-value ≤ α: The association is statistically significant.** If the p-value is less than or equal to the significance level, you can conclude that there is a statistically significant association between the response variable and the term.
- **P-value > α: The association is not statistically significant.** If the p-value is greater than the significance level, you cannot conclude that there is a statistically significant association between the response variable and the term. You may want to refit the model without the term.
If there are multiple predictors without a statistically significant association with the response, you can reduce the model by removing terms one at a time. For more information on removing terms from the model, go to Model reduction.

For a categorical factor with more than 2 levels, the hypothesis for each coefficient is whether that level of the factor differs from the reference level. To assess the statistical significance of the factor as a whole, use the test for terms with more than 1 degree of freedom. For more information on how to display this test, go to Select the results to display for Nominal Logistic Regression.

## Step 2: Determine how well the model fits your data

To determine how well the model fits the data, examine the log-likelihood. Larger values of the log-likelihood indicate a better fit to the data. Because log-likelihood values are negative, values closer to 0 are larger. The log-likelihood depends on the sample data, so you cannot use the log-likelihood to compare models from different data sets.

The log-likelihood cannot decrease when you add terms to a model. For example, a model with 5 terms has a log-likelihood at least as high as any of the 4-term models you can make with the same terms. Therefore, log-likelihood is most useful when you compare models of the same size. To make decisions about individual terms, you usually look at the p-values for the term in the different logits.
