Image Detection Classic Evaluation Methods: PR Curve, ROC Curve

Keywords: PR curve, ROC curve, machine learning, image processing
To make this concrete, suppose we need to detect a person in an image: the classifier labels every pixel as human or non-human, and the person is the target. Human pixels that are detected as human are true positives (TP); non-human pixels that are correctly rejected are true negatives (TN); non-human pixels that are wrongly reported as human are false positives (FP, false alarms); and human pixels that go undetected are false negatives (FN, misses).
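As a minimal sketch of this pixel-wise bookkeeping (the masks gt_mask and pred_mask below are hypothetical examples, and NumPy is assumed), the four counts can be tallied like this:

```python
import numpy as np

# Hypothetical binary masks: True where a pixel is (predicted to be) human.
# gt_mask is the ground truth, pred_mask is the classifier's output.
gt_mask = np.array([[1, 1, 0],
                    [0, 1, 0],
                    [0, 0, 0]], dtype=bool)
pred_mask = np.array([[1, 0, 0],
                      [0, 1, 1],
                      [0, 0, 0]], dtype=bool)

tp = np.sum(pred_mask & gt_mask)    # human pixels correctly detected
fp = np.sum(pred_mask & ~gt_mask)   # non-human pixels reported as human (false alarms)
tn = np.sum(~pred_mask & ~gt_mask)  # non-human pixels correctly rejected
fn = np.sum(~pred_mask & gt_mask)   # human pixels that were missed

print(tp, fp, tn, fn)  # 2 1 5 1
```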
These four counts are the basis of the commonly used evaluation methods. From their values we can compute one point in ROC space and one point in PR space; varying the decision threshold (or evaluating many images) yields many such points, and connecting them gives the ROC curve and the PR curve.
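As one way to see how the curves arise, here is a small sketch, assuming scikit-learn is available and using hypothetical per-pixel labels and scores; each threshold on the scores contributes one point to each curve:

```python
import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve

# Hypothetical flattened ground-truth labels and classifier scores per pixel.
labels = np.array([1, 1, 1, 0, 0, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1, 0.05])

# Sweeping all decision thresholds traces out the two curves.
fpr, tpr, roc_thresholds = roc_curve(labels, scores)
precision, recall, pr_thresholds = precision_recall_curve(labels, scores)

print(list(zip(fpr, tpr)))           # points on the ROC curve
print(list(zip(recall, precision)))  # points on the PR curve
```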
ROC space (measured against the ground truth)
Horizontal axis: False Positive Rate (FPR) = FP / (FP + TN), the fraction of non-target pixels wrongly reported as target (the smaller the better)
Vertical axis: True Positive Rate (TPR) = TP / (TP + FN), the fraction of target pixels correctly detected (the larger the better)
PR space (measured against the detection results)
Horizontal axis: Recall = TP / (TP + FN), the fraction of actual target pixels correctly detected, identical to TPR (the larger the better)
Vertical axis: Precision = TP / (TP + FP), the fraction of detected target pixels that are actually targets, i.e. the detection accuracy (the larger the better)
All four metrics follow directly from the confusion-matrix counts, as in the sketch below.
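Continuing the counting sketch above, a small helper (the function name and the zero-denominator guards are my own choices) computes all four coordinates from the counts:

```python
def detection_metrics(tp, fp, tn, fn):
    """Compute ROC- and PR-space coordinates from confusion-matrix counts."""
    fpr = fp / (fp + tn) if (fp + tn) else 0.0        # x-axis of ROC space
    tpr = tp / (tp + fn) if (tp + fn) else 0.0        # y-axis of ROC space
    recall = tpr                                      # x-axis of PR space (same as TPR)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # y-axis of PR space
    return fpr, tpr, recall, precision

# Using the counts from the mask example above (tp=2, fp=1, tn=5, fn=1):
print(detection_metrics(2, 1, 5, 1))  # (0.1666..., 0.6666..., 0.6666..., 0.6666...)
```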
Summary chart (image not reproduced here), from the paper "The Relationship Between Precision-Recall and ROC Curves" (Davis & Goadrich, ICML 2006).
A detailed discussion can be found in the paper; the key paragraphs are excerpted below.
2. Review of ROC and Precision-Recall

In a binary decision problem, a classifier labels examples as either positive or negative. The decision made by the classifier can be represented in a structure known as a confusion matrix or contingency table. The confusion matrix has four categories: True Positives (TP) are examples correctly labeled as positives. False Positives (FP) refer to negative examples incorrectly labeled as positive. True Negatives (TN) correspond to negatives correctly labeled as negative. Finally, False Negatives (FN) refer to positive examples incorrectly labeled as negative.

A confusion matrix is shown in Figure 2 (a). The confusion matrix can be used to construct a point in either ROC space or PR space. Given the confusion matrix, we are able to define the metrics used in each space as in Figure 2 (b). In ROC space, one plots the False Positive Rate (FPR) on the x-axis and the True Positive Rate (TPR) on the y-axis. The FPR measures the fraction of negative examples that are misclassified as positive. The TPR measures the fraction of positive examples that are correctly labeled. In PR space, one plots Recall on the x-axis and Precision on the y-axis. Recall is the same as TPR, whereas Precision measures that fraction of examples classified as positive that are truly positive. Figure 2 (b) gives the definitions for each metric. We will treat the metrics as functions that act on the underlying confusion matrix which defines a point in either ROC space or PR space. Thus, given a confusion matrix A, RECALL(A) returns the Recall associated with A.
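A minimal sketch of that convention, with the class and function names being my own rather than the paper's code: each metric is a function that takes a confusion matrix A and returns the corresponding value.

```python
from dataclasses import dataclass

@dataclass
class ConfusionMatrix:
    tp: int
    fp: int
    tn: int
    fn: int

def RECALL(a: ConfusionMatrix) -> float:
    """Recall associated with confusion matrix a (the paper's RECALL(A))."""
    return a.tp / (a.tp + a.fn)

def PRECISION(a: ConfusionMatrix) -> float:
    return a.tp / (a.tp + a.fp)

def FPR(a: ConfusionMatrix) -> float:
    return a.fp / (a.fp + a.tn)

A = ConfusionMatrix(tp=2, fp=1, tn=5, fn=1)
print(RECALL(A), PRECISION(A), FPR(A))  # 0.666..., 0.666..., 0.1666...
```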