Article Author: Tyan
Blog: noahsnail.com | CSDN | Jianshu

1. Basic Concepts

1.1 ROC and AUC
ROC curves and AUC are often used to evaluate the quality of a binary classifier. The ROC curve is called the receiver operating characteristic curve, also known as the sensitivity curve, and AUC (Area Under Curve) is the area under the ROC curve. Before computing a ROC curve, there are some basic concepts to understand first. Taking the diagnosis of a disease as an example, a binary classification model has four possible prediction outcomes:

True positive (TP): diagnosed as diseased, and actually diseased.
False positive (FP): diagnosed as diseased, but actually not diseased.
True negative (TN): diagnosed as not diseased, and actually not diseased.
False negative (FN): diagnosed as not diseased, but actually diseased.
The relationship is shown in the following figure:
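As a concrete illustration, here is a minimal sketch (assuming scikit-learn is available; the labels are made-up toy data) of how the four counts can be read off a confusion matrix:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy data: 1 = diseased (positive), 0 = healthy (negative).
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])

# For binary labels (0, 1), confusion_matrix returns [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}, FP={fp}, TN={tn}, FN={fn}")
```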
The ROC space defines the false positive rate (FPR) as the x-axis and the true positive rate (TPR) as the y-axis.

TPR: among all samples that are actually positive, the proportion correctly judged to be positive, $TPR = \frac{TP}{TP + FN}$.
FPR: among all samples that are actually negative, the proportion wrongly judged to be positive, $FPR = \frac{FP}{FP + TN}$.
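Continuing the sketch above and reusing the tp, fp, tn, fn counts, the two rates follow directly:

```python
# TPR: fraction of actual positives that were predicted positive.
tpr = tp / (tp + fn)
# FPR: fraction of actual negatives that were predicted positive.
fpr = fp / (fp + tn)
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")
```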
1.2 Precision, Recall and F1

Another common set of evaluation metrics for binary classification problems is precision, recall, and the F1 value. Precision is the proportion of true positives among the samples predicted to be positive, defined as $P = \frac{TP}{TP + FP}$. Recall is the proportion of actual positive samples that are predicted to be positive, defined as $R = \frac{TP}{TP + FN}$. The F1 value is the harmonic mean of precision and recall: $F1 = \frac{2PR}{P + R}$. When both precision and recall are high, the F1 value is also high. In general, precision and recall are in tension with each other.
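These three formulas can likewise be computed from the same counts (scikit-learn's precision_score, recall_score, and f1_score give the same results):

```python
precision = tp / (tp + fp)            # P = TP / (TP + FP)
recall = tp / (tp + fn)               # R = TP / (TP + FN), same as TPR
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
print(f"P={precision:.2f}, R={recall:.2f}, F1={f1:.2f}")
```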
2. Curve Introduction

2.1 ROC Curve

The ROC curve coordinate system is shown in the figure below. The dashed line represents random guessing, where the probability of guessing correctly equals the probability of guessing wrong. Ideally, we would like FPR to be 0 (no false positives) and TPR to be 1 (all true positives); then all samples are correctly classified and the point lies at the upper left corner (0, 1). This is the perfect case, which is basically unattainable in practice. If a point lies below the dashed line, such as C, the classifier makes more wrong classifications than correct ones, which is not what we want. If a point lies above the dashed line, such as $C'$, the classifier makes fewer wrong classifications than correct ones, which is what we want; therefore we want the ROC curve to be as close to the upper left corner as possible. For a particular classifier and test set, only one classification result is obtained, i.e., one point in the ROC coordinate system, so how do we get a whole ROC curve? In classification problems, we often obtain the probability that a sample is positive, and judge whether it is positive by comparing this probability against a threshold. Different thresholds yield different TPR and FPR values, i.e., a series of points; drawing them in the figure and connecting them gives the ROC curve. The more thresholds used, the smoother the ROC curve.
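A minimal sketch of this threshold sweep, using scikit-learn's roc_curve on made-up scores:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Toy labels and predicted probabilities of being positive.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9, 0.6, 0.3])

# roc_curve sweeps the thresholds and returns one (FPR, TPR) point per threshold.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f} -> FPR={f:.2f}, TPR={t:.2f}")
```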
AUC is the area under the ROC curve, so its value is never greater than 1. Since the ROC curve generally lies above the line $y = x$, the AUC value usually falls in the range (0.5, 1). Because it is often hard to tell from the ROC curve alone which classifier model is better, the AUC value is used to evaluate and compare classifier models. In general, the larger the AUC value, the better the classifier performance.
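The area itself can be computed directly from the same scores (a sketch reusing y_true and y_score from above):

```python
from sklearn.metrics import roc_auc_score

# A classifier better than random guessing gives an AUC in (0.5, 1).
auc = roc_auc_score(y_true, y_score)
print(f"AUC={auc:.3f}")
```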
In the basic concepts we mentioned precision, recall, and the F1 value. Since those are also evaluation metrics for binary classification, why use ROC and AUC? Because the ROC curve has a very nice property: when the distribution of positive and negative samples in the test set changes, i.e., when the numbers of positive and negative samples vary significantly, the ROC curve remains essentially unchanged. Class imbalance occurs frequently in real data sets, and the distribution of positive and negative samples in the test data may vary over time. The following figure compares the ROC curves of two classifier models (algorithms):
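This invariance can be checked empirically. In the sketch below (an illustration, not a proof), every negative sample is replicated tenfold: the class ratio changes drastically, but since TPR and FPR are per-class rates, the ROC curve and AUC stay the same:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9, 0.6, 0.3])

# Replicate the negatives 10x without changing their score distribution.
neg = y_true == 0
y_true_imb = np.concatenate([y_true, np.repeat(y_true[neg], 9)])
y_score_imb = np.concatenate([y_score, np.repeat(y_score[neg], 9)])

print(roc_auc_score(y_true, y_score))          # original class ratio
print(roc_auc_score(y_true_imb, y_score_imb))  # imbalanced: same AUC
```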
2.2 P-R Curve
In the P-R curve, recall is the horizontal axis and precision is the vertical axis. In the ROC curve, the closer the curve is to the upper left corner, the better; in the P-R curve, the closer the curve is to the upper right corner, the better. Whether a model is good or bad according to its P-R curve depends on the specific situation: some projects require higher recall, others require higher precision. A P-R curve is drawn in the same way as a ROC curve: different thresholds give different precision and recall values, yielding a series of points, which are plotted in the P-R diagram and then connected to obtain the P-R curve. An example comparing the P-R curves of two classifier models (algorithms) is shown in the following figure:
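The corresponding threshold sweep for the P-R curve (a sketch with scikit-learn, reusing the toy y_true and y_score from above):

```python
from sklearn.metrics import precision_recall_curve

# One (precision, recall) point per threshold, analogous to roc_curve.
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
for p, r in zip(precision, recall):
    print(f"precision={p:.2f}, recall={r:.2f}")
```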
2.3 ROC vs. P-R
From the formulas it can be seen that the true positive rate TPR in the ROC curve is computed in the same way as recall in the P-R curve; the two are the same quantity under different names in different contexts. When the gap between positive and negative sample counts is not large, the ROC curve and the P-R curve show a similar trend, but when negative samples far outnumber positive ones, the ROC curve still looks good while the P-R curve degrades noticeably.

3. Demo
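A minimal end-to-end sketch of such a demo (assumptions: scikit-learn and matplotlib are available; the synthetic data set and logistic regression model are illustrative choices only):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve, roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# Illustrative imbalanced data set: roughly 90% negatives, 10% positives.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_score = model.predict_proba(X_test)[:, 1]  # probability of the positive class

fpr, tpr, _ = roc_curve(y_test, y_score)
precision, recall, _ = precision_recall_curve(y_test, y_score)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(fpr, tpr, label=f"AUC = {roc_auc_score(y_test, y_score):.3f}")
ax1.plot([0, 1], [0, 1], "k--")  # the random-guess diagonal
ax1.set_xlabel("FPR"); ax1.set_ylabel("TPR"); ax1.set_title("ROC curve")
ax1.legend()
ax2.plot(recall, precision)
ax2.set_xlabel("Recall"); ax2.set_ylabel("Precision"); ax2.set_title("P-R curve")
plt.show()
```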