0-1 Predictions for a Test Set
Accuracy: the number of correct predictions divided by the total number of predictions, counting both correctly predicted 1s and correctly predicted 0s. Use: measures the overall fit of the model; the higher, the more accurate the model.
Precision: of the samples predicted to be 1, the fraction that are actually 1. Use: measures how trustworthy the positive (1) predictions are.
Recall: of the samples that are actually 1, the fraction that are predicted to be 1. Use: measures how completely the model covers the true 1s.
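The three definitions above can be written directly as code. This is a minimal sketch over paired lists of true labels and predictions (the example labels at the bottom are made up for illustration):

```python
def accuracy(y_true, y_pred):
    # correct predictions (both 0s and 1s) / total predictions
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred):
    # of the samples predicted 1, the fraction that are truly 1
    predicted_pos = sum(p == 1 for p in y_pred)
    true_pos = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    return true_pos / predicted_pos

def recall(y_true, y_pred):
    # of the samples that are truly 1, the fraction predicted 1
    actual_pos = sum(t == 1 for t in y_true)
    true_pos = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    return true_pos / actual_pos

# Hypothetical labels, just to exercise the functions:
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))   # 4 of 6 correct -> 0.666...
print(precision(y_true, y_pred))  # 2 of 3 predicted 1s are true -> 0.666...
print(recall(y_true, y_pred))     # 2 of 3 true 1s were found -> 0.666...
```

Note that the three functions divide by different denominators: total samples, predicted positives, and actual positives, respectively.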
Example:
Suppose we are recommending stocks. Analyst A wants to predict both which stocks will rise and which will fall, so he cares about accuracy. Analyst B does not care about overall accuracy; he only wants the stocks he recommends to actually rise, so he cares about precision. Analyst C wants to catch every stock that rises, so he cares about recall.
Suppose there are 100 stocks on the market: 20 rise and 80 fall. Label a rise as 1 and a fall as 0.
Analyst A: predicts 20 stocks up and 80 down; 15 of the predicted rises and 75 of the predicted falls are correct, leaving 10 wrong. His accuracy is 90%, precision (on predicted rises) 75%, and recall 75%.
Analyst B: predicts 10 stocks up and 90 down; all 10 predicted rises are correct and 80 of the predicted falls are correct, leaving 10 wrong. His accuracy is 90%, precision 100%, and recall 50%.
Analyst C: predicts 30 stocks up and 70 down; 20 of the predicted rises and all 70 predicted falls are correct, leaving 10 wrong. His accuracy is 90%, precision 66.7%, and recall 100%.
We can see that although all three analysts have the same accuracy, if our goal is to make money buying stocks we would choose B: in different situations, the metric we care about is different.
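The three analysts' numbers can be recomputed from their confusion-matrix counts, which are derived directly from the example above:

```python
# (true positives, false positives, false negatives, true negatives)
analysts = {
    "A": (15, 5, 5, 75),   # predicts 20 up (15 right), 80 down (75 right)
    "B": (10, 0, 10, 80),  # predicts 10 up (all right), 90 down (80 right)
    "C": (20, 10, 0, 70),  # predicts 30 up (20 right), 70 down (all right)
}

for name, (tp, fp, fn, tn) in analysts.items():
    acc = (tp + tn) / (tp + fp + fn + tn)  # correct / total
    prec = tp / (tp + fp)                  # correct rises / predicted rises
    rec = tp / (tp + fn)                   # correct rises / actual rises
    print(f"Analyst {name}: accuracy={acc:.0%}, "
          f"precision={prec:.1%}, recall={rec:.1%}")
```

All three rows print 90% accuracy, while precision and recall differ: A trades them off evenly, B maximizes precision, C maximizes recall.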
F1 combines the two: F1 = 2 * (precision * recall) / (precision + recall). A higher F1 is generally better, but which metric matters still depends on the specific problem.
Analyst A: 0.75
Analyst B: 0.67
Analyst C: 0.80