Use inherently interpretable regression models, such as linear models, decision trees, and generalized additive models, or use interpretability features to interpret complex regression models that are not inherently interpretable.
To learn how to interpret regression models, see Interpret Machine Learning Models.
Interpret Trained Model
Local Interpretable Model-Agnostic Explanations (LIME)
|lime|Local interpretable model-agnostic explanations (LIME)|
|fit|Fit simple model of local interpretable model-agnostic explanations (LIME)|
|plot|Plot results of local interpretable model-agnostic explanations (LIME)|
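The typical workflow with these functions is: create a lime object for a trained model, fit a simple interpretable model around a query point, then plot the resulting predictor importance. A minimal sketch, assuming the carsmall sample data set that ships with Statistics and Machine Learning Toolbox:

```matlab
% Train a regression tree on the carsmall data, then explain one
% prediction with LIME.
load carsmall
tbl = rmmissing(table(Weight,Horsepower,Displacement,MPG));
mdl = fitrtree(tbl,'MPG');

% Create a lime object, fit a simple model local to a query point
% using the three most important predictors, and plot the result.
explainer = lime(mdl);
queryPoint = tbl(1,1:3);
explainer = fit(explainer,queryPoint,3);
plot(explainer)
```

The plot shows the coefficients (or predictor importance) of the simple model fitted around the query point, which approximates how the complex model behaves locally.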
- Interpret Machine Learning Models
Explain model predictions using the lime and shapley objects and the plotPartialDependence function.
- Shapley Values for Machine Learning Model
Compute Shapley values for a machine learning model using two algorithms: kernelSHAP and the extension to kernelSHAP.
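Shapley values attribute a prediction's deviation from the average prediction to each predictor. A minimal sketch of computing and plotting them for one query point, assuming the carsmall sample data set:

```matlab
% Train a regression tree, then compute Shapley values for one
% observation. The 'Method' name-value argument selects between the
% two algorithms: 'interventional' (kernelSHAP) and 'conditional'
% (the extension to kernelSHAP).
load carsmall
tbl = rmmissing(table(Weight,Horsepower,MPG));
mdl = fitrtree(tbl,'MPG');
explainer = shapley(mdl,'QueryPoint',tbl(1,1:2));
plot(explainer)  % bar chart of Shapley values per predictor
```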
- Introduction to Feature Selection
Learn about feature selection algorithms and explore the functions available for feature selection.
- Interpret Regression Models Trained in Regression Learner App
Determine how features are used in trained regression models by using partial dependence plots.
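Partial dependence plots are also available at the command line for models exported from the app (or trained programmatically). A minimal sketch, assuming the carsmall sample data set:

```matlab
% Plot how the predicted response varies with one predictor,
% averaged over the other predictors.
load carsmall
tbl = rmmissing(table(Weight,Horsepower,MPG));
mdl = fitlm(tbl,'MPG ~ Weight + Horsepower');
plotPartialDependence(mdl,'Weight')
```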
- Train Linear Regression Model
Train a linear regression model using fitlm to analyze in-memory data and out-of-memory data.
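A minimal sketch of the in-memory case, assuming the carsmall sample data set (for out-of-memory data, fitlm also accepts tall arrays backed by a datastore):

```matlab
% Fit a linear regression model with a Wilkinson formula and
% inspect the estimated coefficients and their p-values.
load carsmall
tbl = rmmissing(table(Weight,Horsepower,MPG));
mdl = fitlm(tbl,'MPG ~ Weight + Horsepower');
disp(mdl.Coefficients)
```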
- Train Generalized Additive Model for Regression
Train a generalized additive model (GAM) with optimal parameters, assess predictive performance, and interpret the trained model.
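Because a GAM is a sum of per-predictor shape functions, a trained model can be interpreted directly. A minimal sketch, assuming the carsmall sample data set:

```matlab
% Train a GAM for regression, then show how each term contributes
% to the prediction for one query point.
load carsmall
tbl = rmmissing(table(Weight,Horsepower,MPG));
mdl = fitrgam(tbl,'MPG');
queryPoint = tbl(1,1:2);
yhat = predict(mdl,queryPoint);
plotLocalEffects(mdl,queryPoint)  % per-term local effects
```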
- Train Regression Trees Using Regression Learner App
Create and compare regression trees, and export trained models to make predictions for new data.