Minitab displays results for both the training and test data sets. The test results indicate whether the model can adequately predict the response values for new observations, or properly summarize the relationships between the response and the predictor variables. Because the model was built from the training data, the training results are usually more optimistic than the model's actual performance and are for reference only.
Click Select an Alternative Tree to open an interactive plot that includes a table of model summary statistics. Use the plot to investigate smaller trees with similar performance.
Typically, a tree with fewer terminal nodes gives a clearer picture of how each predictor variable affects the response values. A smaller tree also makes it easier to identify a few target groups for further studies. If the difference in prediction accuracy for a smaller tree is negligible, you can use the smaller tree to evaluate the relationships between the response and the predictor variables.
The total number of predictors available for the classification tree. This is the sum of the continuous and categorical predictors that you specify.
The number of important predictors in the classification tree. Important predictors are the variables that are used as primary or surrogate splitters.
You can use the Relative Variable Importance plot to display the order of relative variable importance. For instance, if 10 of 20 predictors are important in the classification tree, the Relative Variable Importance plot displays those 10 variables in order of importance.
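Minitab does not publish its exact computation here, but relative variable importance is commonly expressed by scaling each predictor's raw improvement score so that the most important variable equals 100%. The predictor names and scores below are hypothetical, for illustration only:

```python
# Sketch: convert raw importance scores to relative importance,
# scaled so the most important predictor equals 100%.
# Predictor names and raw scores are hypothetical.
raw_importance = {
    "Income": 8.2,
    "Age": 6.1,
    "Education": 2.0,
    "Region": 0.0,   # predictors never used as splitters score zero
}

max_score = max(raw_importance.values())
relative = {
    name: 100 * score / max_score
    for name, score in raw_importance.items()
}

# Display in order of importance, as the plot does
for name, pct in sorted(relative.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} {pct:6.1f}%")
```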
A terminal node is a final node that cannot be split further.
Terminal nodes are the final purer groups identified using the classification tree method. You can use terminal node information to make predictions.
The minimum terminal node size is the number of cases in the terminal node with the fewest cases.
By default, Minitab sets the minimum number of cases allowed for a terminal node to 3; however, your tree may have a minimum terminal node size larger than 3. You can change this threshold in the Options subdialog box.
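As an analogy (not Minitab's implementation), scikit-learn's decision trees expose the same kind of threshold through the `min_samples_leaf` parameter. This sketch fits a tree on a bundled data set and confirms that no terminal node falls below 3 cases:

```python
# Sketch: enforcing a minimum terminal (leaf) node size of 3 cases,
# using scikit-learn's analogous min_samples_leaf setting.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
tree = DecisionTreeClassifier(min_samples_leaf=3, random_state=0).fit(X, y)

# Count the training cases that land in each terminal node;
# none should contain fewer than 3 cases.
leaf_ids = tree.apply(X)
counts = np.bincount(leaf_ids)
smallest = counts[counts > 0].min()
print("minimum terminal node size:", smallest)
```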
Minitab calculates the average of the negative log-likelihood function when the response is binary.
Compare the average –log-likelihood values for the test data from different models to determine the model with the best fit. A lower average –log-likelihood value indicates a better fit.
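For a binary response, the average –log-likelihood is the mean of –log(p) across cases, where p is the predicted probability of the class that was actually observed. A minimal sketch, using hypothetical observed classes and predicted probabilities:

```python
import math

# Hypothetical test cases: observed class (0 or 1) and the tree's
# predicted probability that the class equals 1.
observed = [1, 0, 1, 1, 0]
predicted_p1 = [0.9, 0.2, 0.7, 0.6, 0.4]

# Average negative log-likelihood across cases:
# -log(p) for events (y = 1), -log(1 - p) for non-events (y = 0).
avg_nll = sum(
    -math.log(p if y == 1 else 1 - p)
    for y, p in zip(observed, predicted_p1)
) / len(observed)

print(f"average -log-likelihood: {avg_nll:.4f}")
```

A model that assigns high probability to the observed classes drives this average toward 0; confidently wrong predictions inflate it.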
The ROC curve plots the true positive rate (TPR), also known as power, on the y-axis, and the false positive rate (FPR), also known as the type I error rate, on the x-axis. The area under the ROC curve indicates whether the classification tree is a good classifier.
For classification trees, the area under the ROC curve values range from 0.5 to 1. When a classification tree can perfectly separate the classes, then the area under the curve is 1. When a classification tree cannot separate the classes better than a random assignment, then the area under the curve is 0.5.
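The area under the ROC curve can be computed from the observed classes and the model's predicted event probabilities. A sketch using scikit-learn's `roc_auc_score` on hypothetical values (not Minitab output):

```python
# Sketch: area under the ROC curve for hypothetical predicted
# probabilities from a classification tree.
from sklearn.metrics import roc_auc_score

observed = [1, 1, 1, 0, 0, 1, 0, 0]
predicted = [0.9, 0.8, 0.7, 0.6, 0.4, 0.4, 0.2, 0.1]

auc = roc_auc_score(observed, predicted)
print(f"area under ROC curve: {auc:.3f}")

# A model no better than random assignment scores about 0.5;
# perfect separation of the classes scores 1.
```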
Minitab displays lift when the response is binary. The lift is the cumulative lift for the 10% of the data with the best chance of correct classification.
Lift is the ratio of the target response rate to the average response rate. When lift is greater than 1, a segment of the data has a greater than expected response.
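Cumulative lift for the top 10% can be sketched as follows: rank the cases by predicted event probability, take the best decile, and divide its event rate by the overall event rate. The data below are hypothetical:

```python
# Sketch: cumulative lift for the 10% of cases with the highest
# predicted probability of the event. Data are hypothetical.
observed = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0,
            1, 0, 0, 0, 0, 0, 1, 0, 0, 0]
predicted = [0.95, 0.90, 0.88, 0.85, 0.80, 0.70, 0.65, 0.60, 0.55, 0.50,
             0.45, 0.40, 0.35, 0.30, 0.25, 0.20, 0.15, 0.10, 0.05, 0.02]

n_top = max(1, round(0.10 * len(observed)))      # top 10% of cases
ranked = sorted(zip(predicted, observed), reverse=True)
top_rate = sum(y for _, y in ranked[:n_top]) / n_top
overall_rate = sum(observed) / len(observed)

lift = top_rate / overall_rate
print(f"cumulative lift (top 10%): {lift:.2f}")
```

Here the best decile is all events (rate 1.0) while the overall event rate is 0.35, so the lift exceeds 1, indicating the model concentrates events in its top-ranked cases.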
The misclassification cost shown is a relative misclassification cost. The cost is relative to a tree that predicts the most common outcome for every case, and it accounts for both the error rate and the weighted cost of each type of error.
The misclassification cost under Test represents the misclassification cost that occurs across all levels when Minitab uses the tree in the results instead of another tree to predict response values for new observations. Smaller values indicate that the tree in the results performs better. Values less than 1 indicate that the model in the results costs less than a model that predicts the most common outcome for every case.
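With equal costs for each type of error, the relative misclassification cost reduces to the model's error count divided by the error count of a baseline that always predicts the most common outcome. A minimal sketch on hypothetical data:

```python
# Sketch: relative misclassification cost versus a baseline that
# predicts the most common outcome for every case. Equal
# misclassification costs are assumed; data are hypothetical.
from collections import Counter

observed  = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
predicted = [1, 0, 0, 1, 0, 1, 0, 0, 0, 0]

# Baseline: predict the majority class for every case
majority = Counter(observed).most_common(1)[0][0]
baseline_errors = sum(y != majority for y in observed)
model_errors = sum(y != p for y, p in zip(observed, predicted))

relative_cost = model_errors / baseline_errors
print(f"relative misclassification cost: {relative_cost:.2f}")
```

The baseline always predicts 0 (7 of 10 cases) and makes 3 errors; the tree makes 2, so the relative cost is below 1 and the tree outperforms the most-common-outcome model.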