predict

Predict labels using discriminant analysis classifier

Description


label = predict(Mdl,X) returns a vector of predicted class labels for the predictor data in the table or matrix X, based on the trained discriminant analysis classifier Mdl.


[label,score,cost] = predict(Mdl,X) also returns:

• A matrix of classification scores (score) indicating the likelihood that a label comes from a particular class. For discriminant analysis, scores are posterior probabilities.

• A matrix of expected classification costs (cost). For each observation in X, the predicted class label corresponds to the minimum expected classification cost among all classes.

Examples


Load Fisher's iris data set. Determine the sample size.

load fisheriris
N = size(meas,1);

Partition the data into training and test sets. Hold out 10% of the data for testing.

rng(1); % For reproducibility
cvp = cvpartition(N,'Holdout',0.1);
idxTrn = training(cvp); % Training set indices
idxTest = test(cvp);    % Test set indices

Store the training data in a table.

tblTrn = array2table(meas(idxTrn,:));
tblTrn.Y = species(idxTrn);

Train a discriminant analysis model using the training set and default options.

Mdl = fitcdiscr(tblTrn,'Y');

Predict labels for the test set. You trained Mdl using a table of data, but you can predict labels using a matrix.

labels = predict(Mdl,meas(idxTest,:));

Construct a confusion matrix for the test set.

confusionchart(species(idxTest),labels)

Mdl misclassifies one versicolor iris as virginica in the test set.

Load Fisher's iris data set. Consider training using the petal lengths and widths only.

load fisheriris
X = meas(:,3:4);

Train a quadratic discriminant analysis model using the entire data set.

Mdl = fitcdiscr(X,species,'DiscrimType','quadratic');

Define a grid of values in the observed predictor space. Predict the posterior probabilities for each instance in the grid.

xMax = max(X);
xMin = min(X);
d = 0.01;
[x1Grid,x2Grid] = meshgrid(xMin(1):d:xMax(1),xMin(2):d:xMax(2));

[~,score] = predict(Mdl,[x1Grid(:),x2Grid(:)]);
Display the order of the classes in Mdl.

Mdl.ClassNames
ans = 3x1 cell
{'setosa'    }
{'versicolor'}
{'virginica' }

score is a matrix of class posterior probabilities. The columns correspond to the classes in Mdl.ClassNames. For example, score(j,1) is the posterior probability that observation j is a setosa iris.

Plot the posterior probability of versicolor classification for each observation in the grid and plot the training data.

figure;
contourf(x1Grid,x2Grid,reshape(score(:,2),size(x1Grid,1),size(x1Grid,2)));
h = colorbar;
clim([0 1]);
colormap jet;
hold on
gscatter(X(:,1),X(:,2),species,'mcy','.x+');
axis tight
title('Posterior Probability of versicolor');
hold off

The posterior probability region exposes a portion of the decision boundary.

Input Arguments


Mdl — Discriminant analysis classifier
ClassificationDiscriminant model object | CompactClassificationDiscriminant model object

Trained discriminant analysis classifier, specified as a ClassificationDiscriminant model object trained with fitcdiscr, or a CompactClassificationDiscriminant model object created with compact.

X — Predictor data
numeric matrix | table

Predictor data to be classified, specified as a numeric matrix or a table.

Each row of X corresponds to one observation, and each column corresponds to one variable. All predictor variables in X must be numeric vectors.

• For a numeric matrix, the variables that make up the columns of X must have the same order as the predictor variables used to train Mdl.

• For a table:

• predict does not support multicolumn variables and cell arrays other than cell arrays of character vectors.

• If you trained Mdl using a table (for example, Tbl), then all predictor variables in X must have the same variable names and data types as those used to train Mdl (stored in Mdl.PredictorNames). However, the column order of X does not need to correspond to the column order of Tbl. Tbl and X can contain additional variables (response variables, observation weights, and so on), but predict ignores them.

• If you trained Mdl using a numeric matrix, then the predictor names in Mdl.PredictorNames and corresponding predictor variable names in X must be the same. To specify predictor names during training, use the PredictorNames name-value argument of fitcdiscr. X can contain additional variables (response variables, observation weights, and so on), but predict ignores them.

Data Types: table | double | single
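Because table variables are matched by name rather than position, you can pass predictors in a different column order than the training table. A minimal sketch using Fisher's iris data (the variable names SL, SW, PL, and PW are illustrative, not required):

```matlab
% Train on a table, then predict with the columns reordered.
load fisheriris
tbl = array2table(meas,'VariableNames',{'SL','SW','PL','PW'});
tbl.Y = species;
Mdl = fitcdiscr(tbl,'Y');

% predict matches predictors to the model by variable name,
% so the column order of the new table does not matter.
tblNew = tbl(:,{'PW','PL','SW','SL'});
labels = predict(Mdl,tblNew);
```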

Output Arguments


label — Predicted class labels
categorical array | character array | logical vector | numeric vector | cell array of character vectors

Predicted class labels, returned as a categorical or character array, logical or numeric vector, or cell array of character vectors.

For each observation in X, the predicted class label corresponds to the minimum expected classification cost among all classes. For an observation with NaN scores, the function classifies the observation into the majority class, which makes up the largest proportion of the training labels.

• label has the same data type as the observed class labels (Y) used to train Mdl. (The software treats string arrays as cell arrays of character vectors.)

• The length of label is equal to the number of rows of X.

score — Predicted class posterior probabilities
numeric matrix

Predicted class posterior probabilities, returned as a numeric matrix of size N-by-K. N is the number of observations (rows) in X, and K is the number of classes (in Mdl.ClassNames). score(i,j) is the posterior probability that observation i in X is of class j in Mdl.ClassNames.

cost — Expected classification costs
numeric matrix

Expected classification costs, returned as a numeric matrix of size N-by-K. N is the number of observations (rows) in X, and K is the number of classes (in Mdl.ClassNames). cost(i,j) is the cost of classifying row i of X as class j in Mdl.ClassNames.

More About

Posterior Probability

The posterior probability that a point x belongs to class k is the product of the prior probability and the multivariate normal density. The density function of the multivariate normal with 1-by-d mean μk and d-by-d covariance Σk at a 1-by-d point x is

$P\left(x|k\right)=\frac{1}{{\left({\left(2\pi \right)}^{d}|{\Sigma }_{k}|\right)}^{1/2}}\mathrm{exp}\left(-\frac{1}{2}\left(x-{\mu }_{k}\right){\Sigma }_{k}^{-1}{\left(x-{\mu }_{k}\right)}^{T}\right),$

where $|{\Sigma }_{k}|$ is the determinant of Σk, and ${\Sigma }_{k}^{-1}$ is the inverse matrix.

Let P(k) represent the prior probability of class k. Then the posterior probability that an observation x is of class k is

$\stackrel{^}{P}\left(k|x\right)=\frac{P\left(x|k\right)P\left(k\right)}{P\left(x\right)},$

where P(x) is a normalization constant, the sum over k of P(x|k)P(k).
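The two equations above can be checked directly against the score output of predict. A minimal sketch, assuming a quadratic discriminant model so that Mdl.Sigma stores one covariance matrix per class:

```matlab
% Reproduce the posterior probabilities that predict returns,
% using the stored class means, covariances, and priors.
load fisheriris
Mdl = fitcdiscr(meas,species,'DiscrimType','quadratic');
x = meas(1,:);                    % one observation

K = numel(Mdl.ClassNames);
lik = zeros(1,K);
for k = 1:K
    % P(x|k): multivariate normal density for class k
    lik(k) = mvnpdf(x,Mdl.Mu(k,:),Mdl.Sigma(:,:,k));
end
% P(k|x) = P(x|k)P(k) / sum_k P(x|k)P(k)
post = lik.*Mdl.Prior ./ sum(lik.*Mdl.Prior);

[~,score] = predict(Mdl,x);
% post agrees with score to within numerical precision
```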

Prior Probability

The prior probability is one of three choices:

• 'uniform' — The prior probability of class k is one over the total number of classes.

• 'empirical' — The prior probability of class k is the number of training samples of class k divided by the total number of training samples.

• Custom — The prior probability of class k is the kth element of the prior vector. See fitcdiscr.

After creating a classification model (Mdl) you can set the prior using dot notation:

Mdl.Prior = v;

where v is a vector of positive elements representing the relative frequency with which each class occurs. You do not need to retrain the classifier after you set a new prior.

Cost

The expected classification cost for each observation is the posterior-probability-weighted average of the misclassification costs:

$\text{cost}\left(i,j\right)=\sum _{k=1}^{K}\stackrel{^}{P}\left(k|{x}_{i}\right)C\left(j|k\right),$

where $C\left(j|k\right)$ is the cost of classifying an observation into class j when its true class is k, as stored in Mdl.Cost.

Predicted Class Label

predict classifies so as to minimize the expected classification cost:

$\stackrel{^}{y}=\underset{y=1,...,K}{\mathrm{arg}\mathrm{min}}\sum _{k=1}^{K}\stackrel{^}{P}\left(k|x\right)C\left(y|k\right),$

where

• $\stackrel{^}{y}$ is the predicted classification.

• K is the number of classes.

• $\stackrel{^}{P}\left(k|x\right)$ is the posterior probability of class k for observation x.

• $C\left(y|k\right)$ is the cost of classifying an observation as y when its true class is k.
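This decision rule can be verified from the outputs of predict themselves. A minimal sketch: because Mdl.Cost(k,j) is the cost of classifying into class j when the true class is k, the matrix product score*Mdl.Cost reproduces the cost output, and the column-wise argmin reproduces label.

```matlab
% Recover label from score using the expected-cost rule.
load fisheriris
Mdl = fitcdiscr(meas,species);
[label,score,cost] = predict(Mdl,meas);

% cost(i,j) = sum_k P(k|x_i)*C(j|k), i.e., score * Mdl.Cost
expCost = score * Mdl.Cost;           % matches the cost output
[~,idx] = min(expCost,[],2);          % argmin over classes
labelFromCost = Mdl.ClassNames(idx);  % matches label
```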

Alternative Functionality

To integrate the prediction of a discriminant analysis classification model into Simulink®, you can use the ClassificationDiscriminant Predict block in the Statistics and Machine Learning Toolbox™ library or a MATLAB® Function block with the predict function. For examples, see Predict Class Labels Using ClassificationDiscriminant Predict Block and Predict Class Labels Using MATLAB Function Block.

When deciding which approach to use, consider the following:

• If you use the Statistics and Machine Learning Toolbox library block, you can use the Fixed-Point Tool (Fixed-Point Designer) to convert a floating-point model to fixed point.

• Support for variable-size arrays must be enabled for a MATLAB Function block with the predict function.

• If you use a MATLAB Function block, you can use MATLAB functions for preprocessing or post-processing before or after predictions in the same MATLAB Function block.

Version History

Introduced in R2011b