To judge the quality of a classifier in statistics, we need to count the mistakes it makes. There are two kinds of mistakes:
- false positives (FP): negative items incorrectly classified as positive
- false negatives (FN): positive items incorrectly classified as negative
We relate them to the number of correctly classified items:
- true positives (TP): positive items correctly classified as positive
- true negatives (TN): negative items correctly classified as negative
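As a minimal sketch of how these four counts come about, consider tallying them from binary labels. The lists `y_true` and `y_pred` here are made-up example data, not taken from the post:

```python
# Tally the four confusion-matrix counts for a binary classifier.
# y_true holds the ground-truth labels, y_pred the classifier's output;
# both lists are hypothetical example data (1 = positive, 0 = negative).
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # correctly flagged positives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # correctly rejected negatives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # negatives flagged as positive
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # positives that were missed

print(tp, tn, fp, fn)  # 3 2 2 1
```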
Depending on our application, there are several reasonable ways to relate those four numbers. The following graphic visualizes the most common combinations: false discovery rate (FDR), false positive rate (FPR), precision, recall (a synonym for the true positive rate, TPR), accuracy, and specificity (a synonym for the true negative rate, TNR).
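The combinations themselves are simple ratios of the four counts. A sketch of the standard definitions, reusing the counts from above:

```python
# Standard definitions of the combinations shown in the graphic,
# computed from the four counts tallied above.
precision   = tp / (tp + fp)                   # fraction of positive predictions that are correct
fdr         = fp / (tp + fp)                   # false discovery rate = 1 - precision
recall      = tp / (tp + fn)                   # = true positive rate (TPR)
fpr         = fp / (fp + tn)                   # false positive rate = 1 - specificity
specificity = tn / (tn + fp)                   # = true negative rate (TNR)
accuracy    = (tp + tn) / (tp + tn + fp + fn)  # fraction of all items classified correctly

print(f"precision={precision:.2f} recall={recall:.2f} accuracy={accuracy:.2f}")
```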
This post is based on the Wikipedia article on the confusion matrix, which lists even more combinations.