Precision and Recall


Source: http://blog.csdn.net/wangran51/article/details/7579100

Recently I have been doing research and application work on recommendation, and the two concepts of recall and precision come up from time to time. I know what they mean, but sometimes it still takes a moment to explain them clearly to others.

Recall and precision are two concepts and metrics that frequently appear in data mining, prediction, Internet search engines, and similar areas.

Recall: in Chinese this metric has two common names, one literally meaning "call back" and one meaning "completeness of retrieval"; the latter is easier to remember and better reflects what the metric really measures.

Precision: also called the "precision rate" or "correctness rate".

Taking search as an example, the situation can be laid out as follows:

                 Relevant    Not relevant
Retrieved            A             B
Not retrieved        C             D

A: retrieved and relevant (found and wanted)
B: retrieved but not relevant (found but useless)
C: not retrieved but relevant (not found, though actually wanted)
D: not retrieved and not relevant (not found and not wanted)

If we want to retrieve as much of the relevant content as possible, we are pursuing "recall", that is, A/(A+C): the larger, the better.

If we want as many as possible of the retrieved documents to be ones we actually want, that is, more relevant results and fewer irrelevant ones, we are pursuing "precision", that is, A/(A+B): the larger, the better.
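A minimal sketch of these two formulas in Python (the counts below are hypothetical values for the four cells of the table above):

```python
# Hypothetical counts for the four cells of the table above:
# a = retrieved and relevant, b = retrieved but irrelevant,
# c = relevant but not retrieved, d = irrelevant and not retrieved.
a, b, c, d = 30, 10, 20, 40

recall = a / (a + c)     # fraction of all relevant items that were retrieved
precision = a / (a + b)  # fraction of retrieved items that are relevant

print(f"recall    = {recall:.2f}")     # 30 / 50 = 0.60
print(f"precision = {precision:.2f}")  # 30 / 40 = 0.75
```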

Although the "recall rate" and "accuracy rate" are not necessarily related (as can be seen from the above formula), in practical applications, are mutually restrictive.

To find a balance according to the actual needs.

People often find it hard to react quickly to the term "recall rate". I think this has to do with its literal meaning: the word "recall" does not directly reveal what the metric measures.

In Chinese, "recall" is usually rendered as "to call something back". The "recall rate" corresponds to the English word recall, which besides the sense of "order something to return" also means "remember":

Recall: the ability to remember something that you have learned or something that happened in the past.

When we ask a system to retrieve all the details about something (by entering a search query), recall measures how many details of that thing the retrieval system can "call back to mind"; colloquially, its "ability to remember". "The number of details recalled" divided by "all the details the system knows about the matter" is this "memory rate", which is exactly recall. Put simply, recall can be understood as the system's power of recollection.

The following is written from my own knowledge. The definitions themselves should certainly be right, though some of the phrasing may contain errors.
Suppose the original samples belong to two classes, where:

1: There are P samples of class 1 in total; assume class 1 is the positive class.
2: There are N samples of class 0 in total; assume class 0 is the negative class.

After classification:

3: TP samples of class 1 are correctly judged by the system as class 1, and FN samples of class 1 are wrongly judged as class 0; clearly P = TP + FN.
4: FP samples of class 0 are wrongly judged by the system as class 1, and TN samples of class 0 are correctly judged as class 0; clearly N = FP + TN.

So:

Precision:
Precision = TP/(TP+FP); reflects the proportion of true positives among the samples the classifier judged positive.

Accuracy:
A = (TP+TN)/(P+N) = (TP+TN)/(TP+FN+FP+TN); reflects the classifier's ability to judge the whole sample set correctly: positives judged positive and negatives judged negative.

Recall, also known as the true positive rate:
R = TP/(TP+FN) = 1 - FN/P; reflects the proportion of all positive samples that are correctly judged positive.

Specificity (I am not sure this translation is right; the metric is not used much), also known as the true negative rate:
S = TN/(TN+FP) = 1 - FP/N; this is the counterpart of recall, measuring the same kind of ability for class 0.

F-measure, or balanced F-score:
F = 2 * recall * precision / (recall + precision); this is the traditional F1 measure. There are also other F-measures; see the links below.
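As a sanity check, here is a small Python sketch that computes all five metrics from the four confusion-matrix counts (the counts themselves are hypothetical):

```python
# Hypothetical confusion-matrix counts; positives P = TP + FN, negatives N = FP + TN.
TP, FN, FP, TN = 80, 20, 10, 90

precision   = TP / (TP + FP)                   # true positives among predicted positives
accuracy    = (TP + TN) / (TP + FN + FP + TN)  # correct judgments over all samples
recall      = TP / (TP + FN)                   # true positive rate, 1 - FN/P
specificity = TN / (TN + FP)                   # true negative rate, 1 - FP/N
f1          = 2 * recall * precision / (recall + precision)

print(f"precision={precision:.3f} accuracy={accuracy:.3f} "
      f"recall={recall:.3f} specificity={specificity:.3f} f1={f1:.3f}")
```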

For more on the above, see:
http://en.wikipedia.org/wiki/Precision_and_recall
and also: http://en.wikipedia.org/wiki/Accuracy_and_precision

Why are there so many metrics?

Because pattern classification and machine learning need them. Judging a classifier's ability on a sample set, or serving different applications, calls for different metrics. Suppose there are 100 samples in total (P+N=100) and only one positive example (P=1). If we consider only accuracy, no model training is needed at all: simply judge every test sample to be negative, and A reaches 99%. That looks very high, but it does not reflect the real ability of the model.

In addition, in statistical signal analysis, the penalties for misjudging different classes are not the same. For example, suppose a radar receives 100 incoming missile signals, of which only 3 are real missiles and the remaining 97 are decoy signals simulated by the enemy. If the system judges 98 of them (the 97 decoys plus one real missile) to be decoys, then accuracy = 99/100 = 99%, which is very high; the remaining two are judged to be missiles and are intercepted, so recall = 2/3 = 66.67% and precision = 2/2 = 100%, also very high. But the one real missile that slipped through will cause a disaster.

Therefore, in statistical signal analysis there are two additional metrics that measure the consequences of a classifier's wrong judgments:

Missing alarm probability (Missing Alarm):
MA = FN/(TP+FN) = 1 - TP/P = 1 - R; reflects how many positive cases are falsely judged negative. (Here one real missile signal was judged to be a decoy, so MA = 1/3 = 33.33%, far too high.)

False alarm probability (False Alarm):
FA = FP/(TP+FP) = 1 - Precision; reflects how many of the samples judged positive are actually negative.


In statistical signal analysis, we want both of these error probabilities to be as small as possible, and the total penalty assigned to the classifier is a combination of the two errors weighted by penalty factors:

Cost = C_MA * MA + C_FA * FA

Different situations and needs penalize different errors differently. Here we naturally want to punish missed alarms more heavily, so the missing-alarm term gets the larger penalty factor C_MA.
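A quick Python sketch of this cost model applied to the missile example above (the penalty factors C_MA and C_FA are hypothetical values, chosen only to illustrate punishing missed alarms more heavily):

```python
# Missile example: 3 real missiles (positives), 97 decoys (negatives).
# The system intercepts 2 real missiles and lets 1 through; no decoy is intercepted.
TP, FN, FP, TN = 2, 1, 0, 97

MA = FN / (TP + FN)  # missing alarm probability = 1 - recall -> 1/3
FA = FP / (TP + FP)  # false alarm probability   = 1 - precision -> 0

# Hypothetical penalty factors: a missed missile is far worse than a false alarm.
C_MA, C_FA = 100.0, 1.0
cost = C_MA * MA + C_FA * FA

print(f"MA={MA:.4f}  FA={FA:.4f}  cost={cost:.2f}")  # MA=0.3333  FA=0.0000  cost=33.33
```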

Personal view: although the metrics above can be converted into one another, in pattern classification the three metrics P, R, and A are generally used rather than MA and FA, while in statistical signal analysis R is seldom seen.

Well, I am not actually an IR expert, but I like IR. In recent years quite a few people in China have been researching this area, and the strength of Google and Baidu also shows the value of this direction. Of course, if you study IR you do not need to read the basics I am writing here. But if you are a beginner, or come from another field, and want to pick up this popular-science-level knowledge, then the "Information Retrieval x Science" series I plan to write may help you. (I may not write it very quickly; forgive me.)

Why is there a letter x in the middle of the name?

Why talk about precision and recall first? Because many algorithms in IR are evaluated using precision and recall to judge how good they are. So before I introduce the "good guys", I have to tell you what counts as a "good guy".

Precision and Recall

The following diagram makes this easy to understand; the specific analysis comes after it. Below, P stands for Precision and R for Recall.

In layman's terms, precision is the fraction of the retrieved entries (such as web pages) that are relevant, and recall is the fraction of all relevant entries that were retrieved.

The diagram also illustrates the common concepts of true positive, false negative, and so on, with which P and R are often associated.

Of course we want the retrieval results to have both high P and high R, but in fact the two are contradictory in some cases. In the extreme case, if we retrieve only one result and it is relevant, then P is 100% but R is very low; if we return all results, then R is 100% but P is very low.

Therefore, in different situations you have to decide whether you want P to be higher or R to be higher. If you are doing an experimental study, you can plot precision-recall curves to help with the analysis (something I should introduce later).
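A minimal sketch of how such a precision-recall curve can be traced by sweeping a score threshold (the scores and labels below are hypothetical):

```python
# Hypothetical relevance scores from a retrieval system, with true labels
# (1 = relevant, 0 = irrelevant). Sweeping the threshold trades P against R.
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30]
labels = [1,    1,    0,    1,    0,    1,    0,    0]
total_relevant = sum(labels)

for threshold in sorted(set(scores), reverse=True):
    retrieved = [label for s, label in zip(scores, labels) if s >= threshold]
    tp = sum(retrieved)
    precision = tp / len(retrieved)
    recall = tp / total_relevant
    print(f"threshold={threshold:.2f}  P={precision:.2f}  R={recall:.2f}")
```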

F1 Measure

As already mentioned, the P and R metrics sometimes conflict, so is there a way to take both into account? There must be many methods; the most common is the F-measure, in some places also called the F-score. They are the same thing.

The F-measure is the weighted harmonic mean of precision and recall:

F = (a^2 + 1) * P * R / (a^2 * P + R)

When the parameter a = 1, this becomes the most common F1:

F1 = 2 * P * R / (P + R)

It is easy to see that F1 combines the results of P and R: F1 is high only when both P and R are high.
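A small sketch of this general formula, showing how the parameter a shifts the weight between P and R (the values of P and R below are made up for illustration):

```python
def f_measure(p: float, r: float, a: float = 1.0) -> float:
    """Weighted harmonic mean of precision p and recall r; a = 1 gives F1."""
    return (a**2 + 1) * p * r / (a**2 * p + r)

p, r = 0.75, 0.60  # hypothetical precision and recall values

print(f"F1   = {f_measure(p, r):.3f}")         # equal weight: 2*p*r/(p+r) = 0.667
print(f"F2   = {f_measure(p, r, a=2.0):.3f}")  # weights recall more heavily: 0.625
print(f"F0.5 = {f_measure(p, r, a=0.5):.3f}")  # weights precision more heavily: 0.714
```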
