Confusion matrix for Fit Model and Discover Key Predictors with TreeNet® Classification

Note

This command is available with the Predictive Analytics Module.

Find definitions and interpretations for every statistic in the Confusion matrix.
The Confusion matrix shows how well the model separates the classes, using these metrics:
  • True positive rate (TPR) — the probability that an event case is predicted correctly
  • False positive rate (FPR) — the probability that a nonevent case is predicted incorrectly
  • False negative rate (FNR) — the probability that an event case is predicted incorrectly
  • True negative rate (TNR) — the probability that a nonevent case is predicted correctly
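All four rates follow directly from the confusion-matrix counts. A minimal sketch in Python, using the training counts from the example below (the variable names TP, FP, FN, and TN are illustrative, not part of the software):

```python
# Confusion-matrix counts (training data from the example below):
# TP = events predicted as events, FN = events predicted as nonevents,
# FP = nonevents predicted as events, TN = nonevents predicted as nonevents.
TP, FN = 124, 15   # 139 actual events (Yes)
FP, TN = 8, 156    # 164 actual nonevents (No)

tpr = TP / (TP + FN)   # true positive rate (sensitivity or power)
fpr = FP / (FP + TN)   # false positive rate (type I error)
fnr = FN / (TP + FN)   # false negative rate (type II error)
tnr = TN / (FP + TN)   # true negative rate (specificity)

print(f"TPR {tpr:.2%}  FPR {fpr:.2%}  FNR {fnr:.2%}  TNR {tnr:.2%}")
# -> TPR 89.21%  FPR 4.88%  FNR 10.79%  TNR 95.12%
```

Note that TPR + FNR = 100% (both divide by the number of actual events) and FPR + TNR = 100% (both divide by the number of actual nonevents).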

Interpretation

Confusion Matrix

                        Predicted Class (Training)    Predicted Class (Test)
Actual Class    Count   Yes    No    % Correct        Yes    No    % Correct
Yes (Event)     139     124     15   89.21            110     29   79.14
No              164       8    156   95.12             24    140   85.37
All             303     132    171   92.41            134    169   82.51

Assign a row to the event class if the event probability for the row exceeds 0.5.
Statistic                                    Training (%)   Test (%)
True positive rate (sensitivity or power)    89.21          79.14
False positive rate (type I error)            4.88          14.63
False negative rate (type II error)          10.79          20.86
True negative rate (specificity)             95.12          85.37

In this example, the total number of Yes (event) cases is 139, and the total number of No (nonevent) cases is 164.
  • In the training data, the number of correctly predicted events (Yes) is 124, which is 89.21% correct.
  • In the training data, the number of correctly predicted nonevents (No) is 156, which is 95.12% correct.
  • In the test data, the number of correctly predicted events (Yes) is 110, which is 79.14% correct.
  • In the test data, the number of correctly predicted nonevents (No) is 140, which is 85.37% correct.
Overall, the % Correct is 92.41% for the training data and 82.51% for the test data. Use the results for the test data to evaluate the prediction accuracy for new observations.
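The overall % Correct values can be reproduced from the counts in the confusion matrix: correctly classified cases (true positives plus true negatives) divided by the total. A quick check in Python, using the counts from this example:

```python
total = 139 + 164  # 139 events + 164 nonevents = 303 cases

# Correctly classified cases = true positives + true negatives
train_correct = 124 + 156
test_correct = 110 + 140

print(f"Training % Correct: {train_correct / total:.2%}")  # -> 92.41%
print(f"Test % Correct:     {test_correct / total:.2%}")   # -> 82.51%
```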

A low value for % Correct usually indicates a deficient fitted model, which can have several causes. If the % Correct is very low, consider whether class weights may help. Class weights can produce a more accurate model when observations from one class are more important than observations from another class. You can also change the probability threshold that is required for a case to be classified as the event.
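To illustrate the last point: lowering the classification threshold below 0.5 assigns more cases to the event class, which raises the true positive rate at the cost of a higher false positive rate. A minimal sketch (the probabilities and the `classify` helper are hypothetical, for illustration only):

```python
def classify(event_probabilities, threshold=0.5):
    """Assign each case to the event class (Yes) when its event
    probability exceeds the threshold; otherwise assign No."""
    return ["Yes" if p > threshold else "No" for p in event_probabilities]

probs = [0.72, 0.48, 0.31, 0.55, 0.40]  # hypothetical event probabilities

print(classify(probs))                  # default 0.5 threshold
print(classify(probs, threshold=0.3))   # lower threshold -> more Yes cases
```

With the default threshold, only the cases with probabilities 0.72 and 0.55 are classified as events; at a threshold of 0.3, all five are.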