Compute performance metrics for average receiver operating characteristic (ROC) curve in multiclass problem

[FPR,TPR,Thresholds,AUC] = average(rocObj,type) computes the averages of performance metrics stored in the rocmetrics object rocObj for a multiclass classification problem using the averaging method specified in type. The function returns the average false positive rate (FPR) and the average true positive rate (TPR) for each threshold value in Thresholds. The function also returns AUC, the area under the ROC curve composed of FPR and TPR.
Find Average ROC Curve
Compute the performance metrics for a multiclass classification problem by creating a rocmetrics object, and then compute the average values for the metrics by using the average function. Plot the average ROC curve using the outputs of average.

Load the fisheriris data set. The matrix meas contains flower measurements for 150 different flowers. The vector species lists the species for each flower. species contains three distinct flower names.

load fisheriris
Train a classification tree that classifies observations into one of the three labels. Cross-validate the model using 10-fold cross-validation.
rng("default") % For reproducibility
Mdl = fitctree(meas,species,Crossval="on");
Compute the classification scores for validation-fold observations.
[~,Scores] = kfoldPredict(Mdl);
size(Scores)
ans = 1×2

   150     3
Scores is a matrix of size 150-by-3. The column order of Scores follows the class order in Mdl, stored in Mdl.ClassNames.

Create a rocmetrics object by using the true labels in species and the classification scores in Scores. Specify the column order of Scores using Mdl.ClassNames.

rocObj = rocmetrics(species,Scores,Mdl.ClassNames);
rocmetrics computes the FPR and TPR at different thresholds and finds the AUC value for each class.
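As a quick check before averaging, you can inspect the per-class values that average operates on. This sketch assumes the rocObj created above and uses the AUC and Metrics properties of the rocmetrics object:

```matlab
% Per-class AUC values, one per class in the order of Mdl.ClassNames
rocObj.AUC

% Per-class FPR and TPR at each threshold, stored as a table
head(rocObj.Metrics)
```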
Compute the average performance metric values, including the FPR and TPR at different thresholds and the AUC value, using the macro-averaging method.
[FPR,TPR,Thresholds,AUC] = average(rocObj,"macro");
Plot the average ROC curve and display the average AUC value. Include (0,0) so that the curve starts from the origin.
plot([0;FPR],[0;TPR])
xlabel("False Positive Rate")
ylabel("True Positive Rate")
title("Average ROC Curve")
hold on
plot([0,1],[0,1],"k--")
legend(join(["Macro-average (AUC =",AUC,")"]), ...
    Location="southeast")
axis padded
hold off
Alternatively, you can create the average ROC curve by using the plot function. Specify AverageROCType="macro" to compute the metrics for the average ROC curve using the macro-averaging method.
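For example, with the rocObj created above, the call can be sketched as:

```matlab
% Plot the per-class ROC curves together with the macro-average curve
plot(rocObj,AverageROCType="macro")
```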
type — Averaging method
"micro" | "macro" | "weighted"

Averaging method, specified as one of these values:

"micro" (micro-averaging) — average finds the average performance metrics by treating all one-versus-all binary classification problems as one binary classification problem. The function computes the confusion matrix components for the combined binary classification problem, and then computes the average FPR and TPR using the values of the confusion matrix.

"macro" (macro-averaging) — average computes the average values for FPR and TPR by averaging the values of all one-versus-all binary classification problems.

"weighted" (weighted macro-averaging) — average computes the weighted average values for FPR and TPR using the macro-averaging method and using the prior class probabilities (the Prior property of rocObj) as weights.
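To illustrate how the choice of type affects the result, here is a minimal sketch (assuming the rocObj from the example above) that computes the average AUC under each method:

```matlab
% Compare the average AUC values from the three averaging methods
[~,~,~,aucMicro]    = average(rocObj,"micro");
[~,~,~,aucMacro]    = average(rocObj,"macro");
[~,~,~,aucWeighted] = average(rocObj,"weighted");
[aucMicro aucMacro aucWeighted]
```

When the classes are balanced, as in the fisheriris data set, the macro and weighted averages coincide; they differ when the prior class probabilities are unequal.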
FPR — Average false positive rates
Average false positive rates, returned as a numeric vector.
TPR — Average true positive rates
Average true positive rates, returned as a numeric vector.
Receiver Operating Characteristic (ROC) Curve
Area Under ROC Curve (AUC)
One-Versus-All (OVA) Coding Design
Adjusted Scores for Multiclass Classification Problem
You can use the plot function to create the average ROC curve. The function returns a ROCCurve object containing the XData, YData, Thresholds, and AUC properties, which correspond to the output arguments FPR, TPR, Thresholds, and AUC of the average function, respectively. For an example, see Plot Average ROC Curve for Multiclass Classifier.
Introduced in R2022a