Performance Measures for Classifiers: Precision, Recall, and F1
Here is a new, simple tutorial on how to evaluate the quality of a classifier. The attached doc shows you how to construct a confusion matrix, compute precision, recall, and F1 scores for a classifier, and build a precision/recall chart in R to compare the relative strengths and weaknesses of different classifiers.
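If you'd like a quick preview of the workflow before opening the doc, here is a minimal sketch in base R. Everything in it is illustrative rather than taken from the tutorial: the labels, predictions, and the two classifiers' scores are made up, and the positive class is assumed to be "pos".

```r
# Hypothetical ground truth and predictions for a binary classifier
actual    <- factor(c("pos", "pos", "neg", "pos", "neg", "neg", "pos", "neg"),
                    levels = c("pos", "neg"))
predicted <- factor(c("pos", "neg", "neg", "pos", "pos", "neg", "pos", "neg"),
                    levels = c("pos", "neg"))

# Confusion matrix: rows = predicted class, columns = actual class
cm <- table(Predicted = predicted, Actual = actual)
print(cm)

tp <- cm["pos", "pos"]  # true positives
fp <- cm["pos", "neg"]  # false positives
fn <- cm["neg", "pos"]  # false negatives

precision <- tp / (tp + fp)  # of everything flagged positive, how much was right
recall    <- tp / (tp + fn)  # of all true positives, how many did we catch
f1        <- 2 * precision * recall / (precision + recall)  # harmonic mean

cat(sprintf("precision = %.3f, recall = %.3f, F1 = %.3f\n",
            precision, recall, f1))

# Precision/recall chart comparing two classifiers (made-up scores)
scores <- data.frame(
  classifier = c("Classifier A", "Classifier B"),
  precision  = c(0.80, 0.65),
  recall     = c(0.55, 0.90)
)
plot(scores$recall, scores$precision, xlim = c(0, 1), ylim = c(0, 1),
     xlab = "Recall", ylab = "Precision", pch = 19,
     main = "Precision vs. Recall")
text(scores$recall, scores$precision, labels = scores$classifier, pos = 3)
```

Plotting precision against recall this way makes the trade-off visible at a glance: a point near the top-left is conservative (precise but misses cases), while a point near the bottom-right catches nearly everything at the cost of more false alarms.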
Granted, these measures are not perfect. Powers (2011), in the Journal of Machine Learning Technologies, advises that they should not be used without a clear understanding of their biases, especially the extent to which a classifier's apparent skill reflects intelligent prediction rather than chance guessing. Still, they provide a decent basis for practitioners to compare different classification strategies. (Notice that you don't even need algorithms to do this... you can generate a confusion matrix from any plant operation or business activity where classification is performed!)