
Train interpretable regression models and interpret complex regression models

Use inherently interpretable regression models, such as linear models, decision trees, and generalized additive models, or use interpretability features to interpret complex regression models that are not inherently interpretable.

To learn how to interpret regression models, see Interpret Machine Learning Models.



Local Interpretable Model-Agnostic Explanations (LIME)

lime - Local interpretable model-agnostic explanations (LIME) (Since R2020b)
fit - Fit simple model of local interpretable model-agnostic explanations (LIME) (Since R2020b)
plot - Plot results of local interpretable model-agnostic explanations (LIME) (Since R2020b)
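As a minimal sketch of this workflow (assuming the carsmall sample data set and a black-box ensemble model chosen here for illustration):

```matlab
load carsmall
Tbl = table(Weight,Horsepower,Acceleration,MPG);
mdl = fitrensemble(Tbl,"MPG");          % black-box regression model (illustrative choice)
explainer = lime(mdl);                  % create LIME explainer from the trained model
explainer = fit(explainer,Tbl(1,:),3);  % fit a simple local model at a query point,
                                        % using the 3 most important predictors
plot(explainer)                         % bar graph of the simple model's coefficients
```

The simple model approximates the black-box model only near the query point, so refit the explainer for each observation you want to explain.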

Shapley Values

shapley - Shapley values (Since R2021a)
fit - Compute Shapley values for query points (Since R2021a)
plot - Plot Shapley values using bar graphs (Since R2021a)
boxchart - Visualize Shapley values using box charts (box plots) (Since R2024a)
swarmchart - Visualize Shapley values using swarm scatter charts (Since R2024a)
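A minimal sketch of computing Shapley values for one query point (assuming the carsmall sample data set; the ensemble model is an illustrative choice):

```matlab
load carsmall
Tbl = table(Weight,Horsepower,Acceleration,MPG);
mdl = fitrensemble(Tbl,"MPG");        % black-box regression model (illustrative choice)
explainer = shapley(mdl);             % create explainer from the trained model
explainer = fit(explainer,Tbl(1,:));  % compute Shapley values for a query point
plot(explainer)                       % bar graph of each predictor's contribution
```

Each Shapley value is the contribution of one predictor to the deviation of the prediction at the query point from the average prediction.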

Partial Dependence

partialDependence - Compute partial dependence (Since R2020b)
plotPartialDependence - Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots
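A minimal sketch of a partial dependence plot (assuming the carsmall sample data set; the tree model is an illustrative choice):

```matlab
load carsmall
Tbl = table(Weight,Horsepower,MPG);
mdl = fitrtree(Tbl,"MPG");                     % regression model (illustrative choice)
pd = partialDependence(mdl,"Weight");          % partial dependence values for Weight
plotPartialDependence(mdl,"Weight")            % PDP: averaged effect of Weight on MPG
plotPartialDependence(mdl,"Weight", ...
    "Conditional","absolute")                  % ICE: one curve per observation
```

The PDP shows the marginalized effect of a predictor on the prediction; ICE plots disaggregate it into one curve per observation, which can reveal interactions the average hides.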

Interpretable Models

fitlm - Fit linear regression model
fitrgam - Fit generalized additive model (GAM) for regression (Since R2021a)
fitrlinear - Fit linear regression model to high-dimensional data
fitrtree - Fit binary decision tree for regression


LinearModel - Linear regression model
RegressionGAM - Generalized additive model (GAM) for regression (Since R2021a)
RegressionLinear - Linear regression model for high-dimensional data
RegressionTree - Regression tree
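A minimal sketch of two inherently interpretable models (assuming the carsmall sample data set; variable choices are illustrative):

```matlab
load carsmall
Tbl = table(Weight,Horsepower,MPG);
lm = fitlm(Tbl,"MPG ~ Weight + Horsepower");  % linear model: coefficients are
disp(lm.Coefficients)                         % directly interpretable effects
tree = fitrtree(Tbl,"MPG","MaxNumSplits",5);  % shallow regression tree
view(tree,"Mode","graph")                     % display the split rules
```

Limiting the number of splits keeps the tree shallow enough to read, trading some accuracy for interpretability.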


Model Interpretation

Interpretable Models