Example of Discover Best Model (Binary Response)

Note

This command is available with the Predictive Analytics Module. Click here for more information about how to activate the module.

Search for the best type of model

A team of researchers collects and publishes detailed information about factors that affect heart disease. Variables include age, sex, cholesterol levels, maximum heart rate, and more. This example is based on a public heart disease data set; the original data are from archive.ics.uci.edu.

The researchers want to find the model that makes the most accurate predictions possible. The researchers use Discover Best Model (Binary Response) to compare the predictive performance of 4 types of models: binary logistic regression, TreeNet®, Random Forests®, and CART®. The researchers plan to further explore the type of model with the best predictive performance.

  1. Open the sample data, HeartDiseaseBinaryBestModel.MTW.
  2. Choose Predictive Analytics Module > Automated Machine Learning > Discover Best Model (Binary Response).
  3. In Response, enter 'Heart Disease'.
  4. In Continuous predictors, enter Age, 'Rest Blood Pressure', Cholesterol, 'Max Heart Rate', and 'Old Peak'.
  5. In Categorical predictors, enter Sex, 'Chest Pain Type', 'Fasting Blood Sugar', 'Rest ECG', 'Exercise Angina', Slope, 'Major Vessels', and Thal.
  6. Click OK.

Interpret the results

The Model Selection table compares the performance of the different types of models. The Random Forests® model has the minimum value of the average -loglikelihood. The results that follow are for the best Random Forests® model.
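
Minitab automates this search, but the idea behind the comparison can be sketched outside the software. The following Python sketch is not Minitab's implementation: it fits rough scikit-learn analogues of the four model types and compares them by 5-fold cross-validated average -loglikelihood. The file name, column names, and model settings are assumptions for illustration only.

```python
# Illustrative sketch (not Minitab's implementation): compare scikit-learn
# stand-ins for the four model types by cross-validated average -loglikelihood.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import make_column_transformer
from sklearn.pipeline import make_pipeline

data = pd.read_csv("heart_disease.csv")      # assumed export of the worksheet
y = data["Heart Disease"]
X = data.drop(columns=["Heart Disease"])
categorical = ["Sex", "Chest Pain Type", "Fasting Blood Sugar", "Rest ECG",
               "Exercise Angina", "Slope", "Major Vessels", "Thal"]

prep = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), categorical),
    remainder="passthrough")

candidates = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Boosted trees (TreeNet-like)": GradientBoostingClassifier(),
    "Random forest": RandomForestClassifier(n_estimators=300),
    "Single tree (CART-like)": DecisionTreeClassifier(min_samples_leaf=8),
}

for name, model in candidates.items():
    pipe = make_pipeline(prep, model)
    # scoring="neg_log_loss" is the negative of the average -loglikelihood,
    # so the model with the smallest average -loglikelihood scores best.
    score = cross_val_score(pipe, X, y, cv=5, scoring="neg_log_loss").mean()
    print(f"{name}: average -loglikelihood = {-score:.4f}")
```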

The Misclassification Rate vs Number of Trees Plot shows the entire curve over the number of trees grown. The misclassification rate is approximately 0.16.

The Model summary table shows that the average negative loglikelihood is approximately 0.39.
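
The average negative loglikelihood is the mean of -[y ln(p) + (1 - y) ln(1 - p)] over the rows, where y is the actual class (1 for the event) and p is the predicted event probability. A minimal sketch of the calculation, using made-up probabilities rather than the model's out-of-bag predictions:

```python
import numpy as np

def average_neg_loglikelihood(y, p_event):
    """Average -loglikelihood for a binary response.

    y        : 0/1 actual classes (1 = event)
    p_event  : predicted probability of the event for each row
    """
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p_event, dtype=float), 1e-15, 1 - 1e-15)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy illustration with made-up probabilities (not the heart disease data):
print(average_neg_loglikelihood([1, 0, 1, 1], [0.9, 0.2, 0.6, 0.8]))
```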

The Relative Variable Importance graph plots the predictors in order of their effect on model improvement when splits are made on a predictor over the sequence of trees. The most important predictor variable is Thal. If the contribution of the top predictor variable, Thal, is 100%, then the next most important variable, Major Vessels, has a contribution of 98.9%. This means Major Vessels is 98.9% as important as Thal in this classification model.
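
One way to reproduce this scaling is to divide each predictor's raw importance score by the score of the strongest predictor. The sketch below uses hypothetical importance scores, not Minitab's internal values, only to show the calculation:

```python
import numpy as np

# Hypothetical raw importance scores for a few predictors (illustrative numbers,
# not Minitab's internal scores); larger means more improvement from splits.
predictors = ["Thal", "Major Vessels", "Chest Pain Type", "Max Heart Rate"]
raw_importance = np.array([0.262, 0.259, 0.231, 0.118])

# Relative importance: scale the scores so the strongest predictor is 100%.
relative = 100 * raw_importance / raw_importance.max()
for name, value in sorted(zip(predictors, relative), key=lambda t: -t[1]):
    print(f"{name:18s} {value:6.1f}%")
```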

The confusion matrix shows how well the model correctly separates the classes. In this example, the probability that an event is predicted correctly is approximately 87%. The probability that a nonevent is predicted correctly is approximately 81%.

The misclassification rate helps indicate whether the model will accurately predict new observations. For the prediction of events, the out-of-bag misclassification error is approximately 13%. For the prediction of nonevents, the misclassification error is approximately 19%. Overall, the misclassification error for the out-of-bag data is approximately 16%.
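
These percentages follow directly from the counts in the confusion matrix and misclassification tables. The following check uses the out-of-bag counts reported in the output below (143, 22, 26, and 112):

```python
import numpy as np

# Out-of-bag counts from the confusion matrix in the output below:
# rows are the actual class (1 = event, 0 = nonevent); columns are the
# predicted class (1, 0).
confusion = np.array([[143,  22],
                      [ 26, 112]])

events_correct    = confusion[0, 0] / confusion[0].sum()                    # 143/165
nonevents_correct = confusion[1, 1] / confusion[1].sum()                    # 112/138
overall_error     = (confusion[0, 1] + confusion[1, 0]) / confusion.sum()   # 48/303

print(f"True positive rate:     {events_correct:.4f}")     # about 0.87
print(f"True negative rate:     {nonevents_correct:.4f}")  # about 0.81
print(f"Misclassification rate: {overall_error:.4f}")      # about 0.16
```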

The area under the ROC curve for the Random Forests® model is approximately 0.90 for the out-of-bag data.
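
The area under the ROC curve can be computed from the actual classes and the predicted event probabilities. A minimal sketch with toy values, not the model's out-of-bag predictions:

```python
from sklearn.metrics import roc_auc_score

# Toy values, not the model's out-of-bag predictions.
y_actual = [1, 0, 1, 1, 0, 0, 1, 0]
p_event  = [0.92, 0.30, 0.75, 0.64, 0.48, 0.10, 0.83, 0.55]
print(roc_auc_score(y_actual, p_event))   # area under the ROC curve
```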

Discover Best Model (Binary Response): Heart Disease vs Age, Rest Blood Pressure, Cholesterol, Max Heart Rate, Old Peak, Sex, Chest Pain Type, Fasting Blood Sugar, Rest ECG, Exercise Angina, Slope, Major Vessels, Thal

Method

Fit a stepwise logistic regression model with linear terms and terms of order 2.
Fit 6 TreeNet® Classification model(s).
Fit 3 Random Forests® Classification model(s) with bootstrap sample size same as training data size of 303.
Fit an optimal CART® Classification model.
Select the model with maximum loglikelihood from 5-fold cross-validation.
Total number of rows: 303
Rows used for logistic regression model: 303
Rows used for tree-based models: 303

Binary Response Information

Variable       Class      Count        %
Heart Disease  1 (Event)    165    54.46
               0            138    45.54
               All          303   100.00

Model Selection

Best Model within Type    Average -Loglikelihood    Area Under ROC Curve    Misclassification Rate
Random Forests®*                          0.3904                  0.9048                    0.1584
TreeNet®                                  0.3907                  0.9032                    0.1520
Logistic Regression                       0.4671                  0.9142                    0.1518
CART®                                     1.8072                  0.7991                    0.2080
* Best model across all model types with minimum average -loglikelihood. Output for the best model follows.

Hyperparameters for Best Random Forests® Model

Number of bootstrap samples                         300
    Sample size                                     Same as training data size of 303
Number of predictors selected for node splitting    Square root of the total number of predictors = 3
Minimum internal node size                          8

Model Summary

Total predictors        13
Important predictors    13

Statistics                    Out-of-Bag
Average -loglikelihood            0.3904
Area under ROC curve              0.9048
    95% CI              (0.8706, 0.9389)
Lift                              1.7758
Misclassification rate            0.1584

Confusion Matrix


                           Predicted Class (Out-of-Bag)
Actual Class    Count      1      0    % Correct
1 (Event)         165    143     22        86.67
0                 138     26    112        81.16
All               303    169    134        84.16

Statistics                                   Out-of-Bag (%)
True positive rate (sensitivity or power)             86.67
False positive rate (type I error)                    18.84
False negative rate (type II error)                   13.33
True negative rate (specificity)                      81.16

Misclassification


                              Out-of-Bag
Actual Class    Count    Misclassed    % Error
1 (Event)         165            22      13.33
0                 138            26      18.84
All               303            48      15.84

Select an alternative model

The researchers can look at results for other models from the search for the best model. For a TreeNet® model, you can select a model that was part of the search or specify hyperparameters for a new model.

  1. After the Model Selection table, click Select an Alternative Model.
  2. In Model Type, select TreeNet®.
  3. In Select an existing model, choose the third model, which has the minimum average -loglikelihood among the TreeNet® models.
  4. Click Display Results.

Interpret the results

This analysis grows 300 trees and the optimal number of trees is 46. The model uses a learning rate of 0.1 and a subsample fraction of 0.5. The maximum number of terminal nodes per tree is 6.

The Average -Loglikelihood vs Number of Trees Plot shows the entire curve over the number of trees grown. The optimal value for the test data is 0.3907 when the number of trees is 46.
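
TreeNet® is a proprietary gradient boosting implementation, so the sketch below is only an approximate stand-in: a scikit-learn gradient boosting classifier with the hyperparameters reported above, followed by a search over the staged test predictions for the tree count that minimizes the average -loglikelihood. The file name and column names are assumptions, and the categorical predictors are assumed to be numerically coded.

```python
# Sketch (not TreeNet itself): a boosted-tree stand-in with the reported
# hyperparameters, plus a search for the optimal number of trees.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

data = pd.read_csv("heart_disease.csv")      # assumed export of the worksheet
y = data["Heart Disease"]
X = data.drop(columns=["Heart Disease"])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

boost = GradientBoostingClassifier(
    n_estimators=300,      # number of trees grown
    learning_rate=0.1,     # learning rate
    subsample=0.5,         # subsample fraction
    max_leaf_nodes=6,      # approximate stand-in for 6 terminal nodes per tree
).fit(X_train, y_train)

# Average -loglikelihood on the held-out data after each additional tree.
test_loss = [log_loss(y_test, proba) for proba in boost.staged_predict_proba(X_test)]
best = int(np.argmin(test_loss)) + 1
print(f"Optimal number of trees: {best}, test average -loglikelihood: {min(test_loss):.4f}")
```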

Model Summary

Total predictors            13
Important predictors        13
Number of trees grown      300
Optimal number of trees     46

Statistics                        Training                Test
Average -loglikelihood              0.2088              0.3907
Area under ROC curve                0.9842              0.9032
    95% CI                (0.9721, 0.9964)    (0.8683, 0.9381)
Lift                                1.8364              1.7744
Misclassification rate              0.0726              0.1520

When the number of trees is 46, the Model summary table indicates that the average negative loglikelihood is approximately 0.21 for the training data and approximately 0.39 for the test data.

The Relative Variable Importance graph plots the predictors in order of their effect on model improvement when splits are made on a predictor over the sequence of trees. The most important predictor variable is Chest Pain Type. If the contribution of the top predictor variable, Chest Pain Type, is 100%, then the next most important variable, Thal, has a contribution of 95.8%. This means that Thal is 95.8% as important as Chest Pain Type in this classification model.

Confusion Matrix



                           Predicted Class (Training)        Predicted Class (Test)
Actual Class    Count      1      0    % Correct            1      0    % Correct
1 (Event)         165    156      9        94.55          147     18        89.09
0                 138     13    125        90.58           28    110        79.71
All               303    169    134        92.74          175    128        84.82
Assign a row to the event class if the event probability for the row exceeds 0.5.

Statistics                                   Training (%)    Test (%)
True positive rate (sensitivity or power)           94.55       89.09
False positive rate (type I error)                   9.42       20.29
False negative rate (type II error)                  5.45       10.91
True negative rate (specificity)                    90.58       79.71

The confusion matrix shows how well the model correctly separates the classes. In this example, the probability that an event is predicted correctly is approximately 89%. The probability that a nonevent is predicted correctly is approximately 80%.

Misclassification



                                  Training                  Test
Actual Class    Count    Misclassed    % Error    Misclassed    % Error
1 (Event)         165             9       5.45            18      10.91
0                 138            13       9.42            28      20.29
All               303            22       7.26            46      15.18
Assign a row to the event class if the event probability for the row exceeds 0.5.

The misclassification rate helps to indicate whether the model will accurately predict new observations. For the prediction of events, the test misclassification error is approximately 11%. For the prediction of nonevents, the misclassification error is approximately 20%. Overall, the misclassification error for the test data is approximately 15%.

The area under the ROC curve when the number of trees is 46 is approximately 0.98 for the training data and is approximately 0.90 for the test data.

In this example, the gain chart shows a sharp increase above the reference line, then a flattening. In this case, approximately 60% of the data account for approximately 90% of the true positives. This difference is the extra gain from using the model.

In this example, the lift chart shows a large increase above the reference line that begins to decline faster after approximately 50% of the total count.
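
Both charts can be derived from the predicted event probabilities: sort the rows from highest to lowest probability, then track what fraction of all events is captured as more of the data is included. A sketch with toy values, not the model's test-set predictions:

```python
import numpy as np

# Toy values, not the model's test-set predictions.
y_actual = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
p_event  = np.array([0.95, 0.20, 0.80, 0.70, 0.40, 0.15, 0.85, 0.55, 0.60, 0.30])

order = np.argsort(-p_event)                   # rank rows from highest to lowest probability
hits = np.cumsum(y_actual[order])              # cumulative events captured
frac_data = np.arange(1, len(y_actual) + 1) / len(y_actual)
gain = hits / y_actual.sum()                   # fraction of all events captured so far (gain chart)
lift = gain / frac_data                        # improvement over selecting rows at random (lift chart)

for f, g, v in zip(frac_data, gain, lift):
    print(f"{f:5.0%} of data -> {g:4.0%} of events, lift {v:.2f}")
```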

Use the partial dependence plots to gain insight into how the important variables or pairs of variables affect the fitted response values. The fitted response values are on the 1/2 log odds scale. The partial dependence plots show whether the relationship between the response and a variable is linear, monotonic, or more complex.

For example, in the partial dependence plot of the chest pain type, the 1/2 log odds is highest at the value of 3. Click Select More Predictors to Plot to produce plots for other variables.
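
For readers working outside Minitab, scikit-learn offers an analogous partial dependence display. The sketch below continues the boosted-tree stand-in fit earlier (`boost` and `X_train` from that sketch); the column name "Chest Pain Type" is an assumption, and the vertical scale is not the 1/2 log odds scale described above, so only the shape of the relationship is comparable.

```python
# Continues the boosted-tree stand-in fit above (`boost` and `X_train`).
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

# One-variable partial dependence for the assumed "Chest Pain Type" column.
PartialDependenceDisplay.from_estimator(boost, X_train, features=["Chest Pain Type"])
plt.show()
```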