Evaluation Index:
1. Rank-n: The probability that the correct match appears among the top n (highest-confidence) results in the ranked search list.
CMC curve: plots the top-k hit probability; it is mainly used to evaluate rank accuracy in the closed-set setting.
For example: the label is m1, and it is searched against 100 samples.
If the recognition results are m1, m2, m3, m4, m5, ..., then the rank-1 accuracy is 100%, the rank-2 accuracy is 100%, and the rank-5 accuracy is also 100%. If the recognition results are m2, m1, m3, m4, m5, ..., then the rank-1 accuracy is 0%, the rank-2 accuracy is 100%, and the rank-5 accuracy is also 100%. If the recognition results are m2, m3, m4, m5, m1, ..., then the rank-1 accuracy is 0%, the rank-2 accuracy is 0%, and the rank-5 accuracy is 100%.
When there are many faces to be identified, the accuracies are averaged over all queries.
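The averaging described above can be sketched as follows; the data reproduces the three ranked result lists from the worked example (the function name and inputs are illustrative, not from the original):

```python
# Rank-k accuracy averaged over a set of queries.
# A query "hits" at rank k if its true ID appears among the first k results.

def rank_k_accuracy(ranked_ids, true_ids, k):
    hits = sum(true_id in ids[:k] for ids, true_id in zip(ranked_ids, true_ids))
    return hits / len(true_ids)

# The three ranked result lists from the example above (label m1):
ranked = [
    ["m1", "m2", "m3", "m4", "m5"],
    ["m2", "m1", "m3", "m4", "m5"],
    ["m2", "m3", "m4", "m5", "m1"],
]
labels = ["m1", "m1", "m1"]

print(rank_k_accuracy(ranked, labels, 1))  # 1 of 3 queries hits at rank-1
print(rank_k_accuracy(ranked, labels, 5))  # all 3 queries hit by rank-5
```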
2. Recall: among all samples whose ground truth is 1, the probability that the output is also 1: recall = TP / (TP + FN).
Precision: among all samples whose output is 1, the probability that the ground truth is also 1: precision = TP / (TP + FP).
F-score: the harmonic mean of recall and precision: F = 2 * P * R / (P + R).
PR curve: the precision and recall values over all samples are plotted together in one diagram.
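A minimal sketch of the three quantities above, computed from binary prediction/label pairs (the function name and the sample data are hypothetical):

```python
# Precision, recall, and F-score from binary predictions vs. ground truth.

def precision_recall_f1(preds, labels):
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))  # predicted 1, actually 1
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))  # predicted 1, actually 0
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))  # predicted 0, actually 1
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
print(p, r, f)  # here TP=2, FP=1, FN=1, so all three equal 2/3
```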
3. mAP: the area under the PR curve.
For example: query-id = 1, query-cam = 1, and the gallery contains 5 images in total. Compute the recall and precision at each position as shown in the following figure; with recall as the horizontal axis and precision as the vertical axis, draw the PR curve. The area under this curve is the AP. When more than one person needs to be retrieved, the mAP is the mean of the APs over all queries.
There are several ways to compute the area under the curve; one common incremental (trapezoidal) form is AP = AP + (recall - old_recall) * ((old_precision + precision) / 2).
AP measures the quality of the learned model on a single category, while mAP measures the quality of the learned model over all categories.
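The incremental formula above can be sketched as follows, on a hypothetical ranked gallery of 5 images of which 2 are correct matches (the function name and the relevance list are illustrative):

```python
# AP via the incremental trapezoid rule:
#   AP += (recall - old_recall) * (old_precision + precision) / 2

def average_precision(relevance):
    """relevance: ranked list of 0/1 flags, 1 = correct match at that position."""
    total_relevant = sum(relevance)
    ap = 0.0
    old_recall, old_precision = 0.0, 1.0
    hits = 0
    for i, rel in enumerate(relevance, start=1):
        hits += rel
        recall = hits / total_relevant   # fraction of matches retrieved so far
        precision = hits / i             # fraction of retrieved items that match
        ap += (recall - old_recall) * (old_precision + precision) / 2
        old_recall, old_precision = recall, precision
    return ap

print(average_precision([1, 0, 1, 0, 0]))  # matches at ranks 1 and 3 -> 19/24
```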
4. CMC
The CMC curve is a plot of the top-k hit probability against k.
In the single-gallery-shot setting, for each query the gallery samples are sorted by similarity and the rank of the first gallery sample matching the query's ID is recorded; gallery samples that share both the query's ID and the query's camera are excluded beforehand.
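The procedure above can be sketched as follows; the gallery, the query, and the precomputed similarity ranking are all hypothetical, and the same-ID/same-camera exclusion is applied while scanning the ranking:

```python
# Single-gallery-shot CMC: for each query, find the rank of the first
# gallery sample with the same ID, skipping same-ID same-camera samples.

def cmc_curve(queries, gallery, max_rank):
    # queries: list of (query_id, query_cam, gallery indices sorted by similarity)
    # gallery: list of (id, cam)
    hits = [0] * max_rank
    for qid, qcam, ranking in queries:
        rank = 0
        found = None
        for idx in ranking:
            gid, gcam = gallery[idx]
            if gid == qid and gcam == qcam:
                continue  # same ID under the same camera: excluded
            if gid == qid:
                found = rank
                break
            rank += 1
        if found is not None:
            for k in range(found, max_rank):  # a hit at rank r counts for all k >= r
                hits[k] += 1
    return [h / len(queries) for h in hits]

gallery = [("A", 1), ("A", 2), ("B", 1), ("C", 2)]
queries = [("A", 1, [0, 2, 1, 3])]  # gallery[0] is excluded; match found at rank 1
print(cmc_curve(queries, gallery, 3))  # [0.0, 1.0, 1.0]
```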
5. ROC:
Each point on the curve reflects the false positive rate (FPR) and true positive rate (TPR) obtained at a particular threshold. Typically, the closer the ROC curve gets to the (0, 1) corner, the better the performance.
TP (true positive): predicted 1, actually 1; TN (true negative): predicted 0, actually 0.
FP (false positive): predicted 1, actually 0; FN (false negative): predicted 0, actually 1.
TPR = TP / (TP + FN) = recall.
FPR = FP / (FP + TN); intuitively, FPR is the proportion of actually negative ("good") samples that are wrongly predicted as positive ("bad").
The ROC curve is obtained by plotting FPR on the horizontal axis and TPR on the vertical axis.
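The threshold sweep described above can be sketched as follows; the scores and labels are hypothetical, and each distinct score is used once as a threshold:

```python
# ROC points: sweep a decision threshold over the scores and record
# (FPR, TPR) at each threshold.

def roc_points(scores, labels):
    points = []
    for thresh in sorted(set(scores), reverse=True):
        preds = [1 if s >= thresh else 0 for s in scores]
        tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
        fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
        fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
        tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
        tpr = tp / (tp + fn)  # = recall
        fpr = fp / (fp + tn)
        points.append((fpr, tpr))
    return points

scores = [0.9, 0.8, 0.4, 0.3]
labels = [1, 0, 1, 0]
print(roc_points(scores, labels))
# [(0.0, 0.5), (0.5, 0.5), (0.5, 1.0), (1.0, 1.0)]
```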