What is a good performance metric for pixel classification tasks?
Memo Remo on 7 Dec 2021
Commented: Constantino Carlos Reyes-Aldasoro on 7 Dec 2021
Hi everyone,
I recently developed a machine-learning-based algorithm to identify specific regions of interest in a series of images. All the images have the same resolution, but the size of the regions of interest varies across the stack. After performing this classification, I tried to validate the algorithm's performance using ROC curves.
The problem is that these curves cannot be used to compare how well the classification performs across different images: in images where the targeted regions are small, the huge number of true-negative pixels can spuriously inflate the AUC regardless of how well the algorithm identifies the true-positive pixels. As a result, an image with a small region of interest and a poor classification result may receive a higher AUC than an image with a large region of interest and a very good classification result.
Does anyone know how we can overcome this limitation of ROC curves?
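The effect can be reproduced with a small synthetic example. The sketch below is not from my actual data (the pixel counts, score distributions, and threshold are made up for illustration); it builds a tiny ground-truth region inside a large image, assigns mediocre classifier scores, and compares the pixel-level AUC from perfcurve (Statistics and Machine Learning Toolbox) with the overlap at a fixed threshold:

rng(0);
nPixels = 100000;                 % total pixels in the image
roiSize = 200;                    % tiny region of interest
labels  = false(nPixels,1);
labels(1:roiSize) = true;         % ground-truth positive pixels

% Simulated classifier scores: ROI pixels score only slightly higher than
% the background, so any thresholded mask overlaps the ROI poorly.
scores            = rand(nPixels,1)*0.6;        % background scores
scores(1:roiSize) = 0.4 + rand(roiSize,1)*0.6;  % ROI scores

[~,~,~,auc] = perfcurve(labels, scores, true);
fprintf('AUC = %.3f\n', auc);     % high (about 0.94) despite poor overlap

% Overlap at a fixed threshold tells a very different story: the false
% positives outnumber the true positives by roughly 100:1, yet the FPR
% stays modest because it is normalised by the huge negative class.
pred = scores > 0.5;
tp = sum(pred &  labels);
fp = sum(pred & ~labels);
fn = sum(~pred & labels);
fprintf('Jaccard = %.3f\n', tp/(tp+fp+fn));   % close to 0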
Accepted Answer
Constantino Carlos Reyes-Aldasoro on 7 Dec 2021
Edited: 7 Dec 2021
One good metric is the Jaccard index, also known as the Intersection over Union. It divides the true positives by the sum of true positives, false positives and false negatives, so the true negatives are ignored entirely. The Dice index conveys essentially the same information with a slightly different formula, 2*TP / (2*TP + FP + FN), and the two are monotonically related.
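A minimal sketch of how both indices can be computed for a single image, assuming predicted and groundTruth are logical masks of the same size (the variable names are illustrative):

tp = sum(predicted(:) &  groundTruth(:));   % true positives
fp = sum(predicted(:) & ~groundTruth(:));   % false positives
fn = sum(~predicted(:) & groundTruth(:));   % false negatives

jaccardIdx = tp / (tp + fp + fn);           % intersection over union
diceIdx    = 2*tp / (2*tp + fp + fn);       % Dice coefficient

% With the Image Processing Toolbox, jaccard() and dice() give the same
% values directly:
% jaccardIdx = jaccard(predicted, groundTruth);
% diceIdx    = dice(predicted, groundTruth);

Because the true negatives never enter either formula, these scores stay comparable between images with small and large regions of interest.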