Interpretability

Train interpretable classification models and interpret complex classification models

Use inherently interpretable classification models, such as linear models, decision trees, and generalized additive models, or use interpretability features to interpret complex classification models that are not inherently interpretable.

To learn how to interpret classification models, see Interpret Machine Learning Models.
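For example, a shallow decision tree can be inspected directly, while an ensemble typically requires the interpretability features listed below. A minimal sketch on Fisher's iris data (the data set and model choices here are illustrative, not part of this page):

load fisheriris
treeMdl = fitctree(meas,species,"MaxNumSplits",3);  % inherently interpretable: shallow tree
view(treeMdl,"Mode","graph")                        % the tree rules are human-readable
ensMdl = fitcensemble(meas,species,"Method","Bag"); % not inherently interpretable: use
                                                    % lime, shapley, or partialDependence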

Functions

Local Interpretable Model-Agnostic Explanations (LIME)

lime - Local interpretable model-agnostic explanations (LIME)
fit - Fit simple model of local interpretable model-agnostic explanations (LIME)
plot - Plot results of local interpretable model-agnostic explanations (LIME)
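
A minimal sketch of the LIME workflow, assuming a bagged tree ensemble trained on Fisher's iris data as the complex model (an illustrative choice, not prescribed by this page):

load fisheriris
mdl = fitcensemble(meas,species,"Method","Bag"); % complex model to explain
explainer = lime(mdl);                           % create LIME explainer from the model
explainer = fit(explainer,meas(1,:),2);          % fit simple model at a query point,
                                                 % using 2 important predictors
plot(explainer)                                  % compare complex and simple model predictions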

Shapley Values

shapley - Shapley values
fit - Compute Shapley values for query point
plot - Plot Shapley values
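
A minimal sketch of the Shapley values workflow, again assuming a bagged ensemble on Fisher's iris data as the complex model:

load fisheriris
mdl = fitcensemble(meas,species,"Method","Bag"); % complex model to explain
explainer = shapley(mdl);                        % create shapley explainer
explainer = fit(explainer,meas(1,:));            % compute Shapley values for a query point
plot(explainer)                                  % plot each predictor's contribution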

Partial Dependence

partialDependence - Compute partial dependence
plotPartialDependence - Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots
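
A minimal sketch of the partial dependence workflow, assuming the same illustrative iris ensemble; the predictor index and class label are arbitrary examples:

load fisheriris
mdl = fitcensemble(meas,species,"Method","Bag");     % complex model
pd = partialDependence(mdl,1,"setosa");              % partial dependence of class
                                                     % "setosa" on predictor 1
plotPartialDependence(mdl,1,"setosa","Conditional","absolute") % PDP plus ICE curves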

Interpretable Models

fitcgam - Fit generalized additive model (GAM) for binary classification
fitclinear - Fit binary linear classifier to high-dimensional data
fitctree - Fit binary decision tree for multiclass classification
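
A minimal sketch of training the inherently interpretable model types, assuming the ionosphere data set that ships with the toolbox (X holds 34 predictors, Y holds binary labels 'b'/'g'):

load ionosphere          % illustrative binary classification data
gamMdl  = fitcgam(X,Y);  % generalized additive model
linMdl  = fitclinear(X,Y); % linear classifier for high-dimensional data
treeMdl = fitctree(X,Y); % binary decision tree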

Objects

ClassificationGAM - Generalized additive model (GAM) for binary classification
ClassificationLinear - Linear model for binary classification of high-dimensional data
ClassificationTree - Binary decision tree for multiclass classification

Topics

Model Interpretation

Interpretable Models