Machine Learning: the ROC Curve as a Classifier Performance Indicator
Before introducing the ROC curve, let's cover the confusion matrix and two formulas, because these are the basis for computing the ROC curve.
1. Example confusion matrix (predicting whether a user clicks an advertisement):

                         Predicted: clicked    Predicted: not clicked
    Actual: clicked             TP                      FN
    Actual: not clicked         FP                      TN

Note:
TP (true positive): predicted clicked, and the user actually clicked.
FP (false positive): predicted clicked, but the user did not click.
FN (false negative): predicted not clicked, but the user actually clicked.
TN (true negative): predicted not clicked, and the user did not click.
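These four counts can be computed with scikit-learn's confusion_matrix. A minimal sketch, using hypothetical click labels (1 = clicked, 0 = not clicked) that are not from the article:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical data: 1 = clicked, 0 = not clicked
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # actual outcomes
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])  # model predictions

# sklearn orders rows/columns by label value, so ravel() yields tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp, fp, fn, tn)  # 3 1 1 3
```

Note that sklearn's matrix is ordered by label value, so the positive-class cell (TP) is in the bottom-right corner, not the top-left as in the table above.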
2. Two formulas:
1) True positive rate (TPR):
TPR = TP / (TP + FN)
2) False positive rate (FPR):
FPR = FP / (FP + TN)
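The two formulas above can be sketched directly in code. The counts here are hypothetical examples, not taken from the article:

```python
# Hypothetical confusion-matrix counts
tp, fp, fn, tn = 3, 1, 1, 3

tpr = tp / (tp + fn)  # true positive rate: fraction of actual clicks caught
fpr = fp / (fp + tn)  # false positive rate: fraction of non-clicks flagged
print(tpr, fpr)  # 0.75 0.25
```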
3. The ROC curve plots the true positive rate (TPR) against the false positive rate (FPR) as the decision threshold varies. The following code demonstrates this (scikit-learn provides the relevant functions):
# Import the required packages
import numpy as np
from sklearn import metrics
import matplotlib.pyplot as plt

# Actual labels (2 is the positive class)
y = np.array([1, 1, 2, 2])
# Predicted scores
pred = np.array([0.1, 0.4, 0.35, 0.8])

# Compute the curve points; note the order of the returned values
fpr, tpr, thresholds = metrics.roc_curve(y, pred, pos_label=2)
# Compute the area under the curve
roc_auc = metrics.auc(fpr, tpr)

# Plot
plt.clf()
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend(loc="lower right")
plt.show()
Result: (ROC curve plot; the dashed diagonal is the random-classifier reference line)
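It is instructive to inspect the arrays that roc_curve returns for the data above before plotting. The sketch below prints them; the first threshold entry varies by scikit-learn version (older versions use max score + 1, newer ones use inf), so it is not shown:

```python
import numpy as np
from sklearn import metrics

# Same data as the plotting example above
y = np.array([1, 1, 2, 2])
pred = np.array([0.1, 0.4, 0.35, 0.8])

fpr, tpr, thresholds = metrics.roc_curve(y, pred, pos_label=2)
print(fpr)   # [0.  0.  0.5 0.5 1. ]
print(tpr)   # [0.  0.5 0.5 1.  1. ]

# Each (fpr, tpr) pair is one point on the curve; auc integrates under them
print(metrics.auc(fpr, tpr))  # 0.75
```

Reading the pairs off in order traces the curve from (0, 0) to (1, 1); the trapezoidal area under those segments gives the AUC of 0.75.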
4. ROC curve
1) The dashed diagonal line is the ROC curve of a random classifier. It is usually drawn on the graph as a reference line.
2) For a perfect classifier, the ROC curve goes from (0, 0) straight up to (0, 1), then horizontally across to (1, 1).
3) The closer the ROC curve is to the upper left corner, the better the classification effect.
5. AUC
1) AUC is the area under the ROC curve.
2) For a perfect classifier, the AUC is 1.
3) For a random classifier (the dashed line in the figure), the AUC is 0.5.
4) The larger the AUC, the better the classifier.
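The two reference values above can be checked with scikit-learn's roc_auc_score. A minimal sketch with hypothetical labels and scores: a scorer that ranks all positives above all negatives gets AUC 1.0, while one that assigns every sample the same score is indistinguishable from random guessing and gets 0.5:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1])  # hypothetical labels

# Perfect ranking: every positive scores higher than every negative
perfect = roc_auc_score(y_true, np.array([0.1, 0.2, 0.8, 0.9]))

# Uninformative scorer: all samples tied, equivalent to random guessing
random_like = roc_auc_score(y_true, np.array([0.5, 0.5, 0.5, 0.5]))

print(perfect, random_like)  # 1.0 0.5
```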