"Reprint" Bayesian inference and its application in the Internet (i): A brief introduction to theorem

Source: Internet
Author: User

Nanyi

Original link: http://www.ruanyifeng.com/blog/2011/08/bayesian_inference_part_one.html

I. What is Bayesian inference?

Bayesian inference is a statistical method used to estimate some property of a statistical quantity.

It is an application of Bayes' theorem, which was first proposed by the British mathematician Thomas Bayes in a paper published in 1763.

Bayesian inference differs greatly from other methods of statistical inference. It is based on subjective judgment: you can first estimate a value without any objective evidence, and then keep revising that estimate according to actual results. Precisely because it is so subjective, it was long criticized by many statisticians.

Bayesian inference requires a great deal of computation, so for a long time it could not be widely applied; it only received real attention after the computer was born. People found that many statistical quantities cannot be judged objectively in advance, while the large data sets of the Internet era, together with high-speed computing power, make it convenient to verify these quantities. This has also created the conditions for applying Bayesian inference, and its power is becoming increasingly apparent.

II. Bayes' theorem

To understand Bayesian inference, you must first understand Bayes' theorem, which is really just a formula for computing "conditional probability".

The so-called "conditional probability" is the probability that event A occurs given that event B has occurred, written P(A|B).

From the Venn diagram, it is clear that when event B has occurred, the probability that event A also occurs is P(A∩B) divided by P(B).

Therefore,

P(A|B) = P(A∩B) / P(B)

and so

P(A∩B) = P(A|B) P(B)

By the same reasoning,

P(A∩B) = P(B|A) P(A)

So

P(A|B) P(B) = P(B|A) P(A)

that is,

P(A|B) = P(B|A) P(A) / P(B)

This is the formula for calculating the conditional probabilities.
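As a quick sanity check (not part of the original post), here is a minimal Python sketch with made-up numbers for a small joint distribution over A and B; it verifies that P(A|B) = P(A∩B)/P(B) and that Bayes' theorem relates P(A|B) and P(B|A) as derived above.

    # Toy joint distribution over A / not-A and B / not-B (made-up numbers summing to 1)
    p = {
        ("A", "B"): 0.12, ("A", "notB"): 0.18,
        ("notA", "B"): 0.28, ("notA", "notB"): 0.42,
    }

    p_b = p[("A", "B")] + p[("notA", "B")]   # P(B) = 0.40
    p_a = p[("A", "B")] + p[("A", "notB")]   # P(A) = 0.30

    p_a_given_b = p[("A", "B")] / p_b        # P(A|B) = P(A∩B)/P(B) = 0.3
    p_b_given_a = p[("A", "B")] / p_a        # P(B|A) = P(A∩B)/P(A) = 0.4

    # Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
    assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12
    print(p_a_given_b)                       # ~0.3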

III. The total probability formula

Besides the conditional probability formula, we also derive the total probability formula here, for later use.

Assume that the sample space S is the union of two events A and A'.

In the accompanying figure, the red part is event A and the green part is event A'; together they make up the sample space S.

In this case, event B can be divided into two parts.

That is,

P(B) = P(B∩A) + P(B∩A')

From the derivation in the previous section, we know that

P(B∩A) = P(B|A) P(A)

So

P(B) = P(B|A) P(A) + P(B|A') P(A')

This is the total probability formula. It says that if A and A' form a partition of the sample space, then the probability of event B equals the sum, over A and A', of the probability of each event multiplied by the conditional probability of B given that event.

Substituting this formula into the conditional probability formula of the previous section gives another way of writing the conditional probability:

P(A|B) = P(B|A) P(A) / [ P(B|A) P(A) + P(B|A') P(A') ]
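For readers who want to experiment, here is a minimal Python sketch of this expanded form; the function name posterior and the argument names are my own, not from the original post.

    def posterior(p_a, p_b_given_a, p_b_given_not_a):
        # P(A|B) = P(B|A) P(A) / [ P(B|A) P(A) + P(B|A') P(A') ], with P(A') = 1 - P(A)
        p_not_a = 1.0 - p_a
        p_b = p_b_given_a * p_a + p_b_given_not_a * p_not_a  # total probability formula
        return p_b_given_a * p_a / p_b

    print(posterior(0.5, 0.75, 0.5))  # 0.6 -- matches the candy example in section V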

IV. The meaning of Bayesian inference

Rearranging the conditional probability formula gives the following form:

P(A|B) = P(A) × P(B|A) / P(B)

We call P(A) the "prior probability": it is our judgment of the probability of event A before event B occurs. P(A|B) is called the "posterior probability": it is our re-evaluation of the probability of event A after event B has occurred. P(B|A)/P(B) is called the "likelihood function": it is an adjustment factor that brings the estimated probability closer to the true probability.

Therefore, the conditional probability can be understood as the following equation:

Posterior probability = Prior probability × Adjustment factor

This is what Bayesian inference means: we first estimate a "prior probability", then incorporate the experimental result to see whether the experiment strengthens or weakens this prior, and thereby obtain a "posterior probability" that is closer to the truth.

Here, if the "likelihood function" P(B|A)/P(B) > 1, the "prior probability" is strengthened and event A becomes more likely; if the "likelihood function" = 1, event B gives no help in judging the probability of event A; if the "likelihood function" < 1, the "prior probability" is weakened and event A becomes less likely.
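To make the three cases concrete, here is a small illustration with made-up numbers (my own, not from the original post), computing the adjustment factor P(B|A)/P(B) and the resulting posterior:

    # Made-up numbers: prior P(A), likelihoods P(B|A) and P(B|A')
    p_a, p_b_given_a, p_b_given_not_a = 0.3, 0.8, 0.4

    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)  # total probability: 0.52
    adjustment = p_b_given_a / p_b                          # P(B|A)/P(B) ~ 1.54 > 1
    p_a_given_b = p_a * adjustment                          # prior x adjustment ~ 0.46

    print(adjustment, p_a_given_b)  # observing B strengthens the prior: 0.3 -> ~0.46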

V. Example: the fruit candy problem

To deepen our understanding of Bayesian inference, we look at two examples.

The first example: there are two identical bowls. Bowl No. 1 contains 30 fruit candies and 10 chocolates; bowl No. 2 contains 20 fruit candies and 20 chocolates. A bowl is chosen at random and one candy is drawn from it, which turns out to be a fruit candy. What is the probability that this fruit candy came from bowl No. 1?

Let H1 denote choosing bowl No. 1 and H2 denote choosing bowl No. 2. Since the two bowls are identical, P(H1) = P(H2): before the candy is drawn, the two bowls are equally likely to be chosen. So P(H1) = 0.5. We call this probability the "prior probability": before the experiment, the probability that the candy comes from bowl No. 1 is 0.5.

Now let E denote drawing a fruit candy. The question becomes: given that E has occurred, what is the probability that the candy came from bowl No. 1, i.e. P(H1|E)? We call this probability the "posterior probability": it is the correction to P(H1) after event E has occurred.

According to the conditional probability formula, we get

P(H1|E) = P(H1) × P(E|H1) / P(E)

We already know that P(H1) equals 0.5, and that P(E|H1), the probability of drawing a fruit candy from bowl No. 1, equals 0.75 (30 out of 40 candies; H1 is taken as given here, so we do not multiply by 0.5 again). So we only need P(E) to get the answer. According to the total probability formula,

P(E) = P(E|H1) P(H1) + P(E|H2) P(H2)

So

P(E) = 0.75 × 0.5 + 0.5 × 0.5 = 0.625

Substituting the numbers into the equation above gives

P(H1|E) = 0.5 × 0.75 / 0.625 = 0.6

This shows that the probability that the candy came from bowl No. 1 is 0.6. In other words, after the fruit candy is drawn, the likelihood of hypothesis H1 is strengthened.
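The same arithmetic as a short Python sketch (illustrative only; the variable names are mine):

    # Bowl No. 1: 30 fruit candies + 10 chocolates; bowl No. 2: 20 + 20
    p_h1 = p_h2 = 0.5             # prior: both bowls equally likely to be chosen
    p_e_given_h1 = 30 / 40        # P(E|H1): fruit candy from bowl No. 1 -> 0.75
    p_e_given_h2 = 20 / 40        # P(E|H2): fruit candy from bowl No. 2 -> 0.5

    p_e = p_e_given_h1 * p_h1 + p_e_given_h2 * p_h2  # total probability: 0.625
    p_h1_given_e = p_h1 * p_e_given_h1 / p_e         # posterior
    print(p_h1_given_e)                              # 0.6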

VI. Example: the false positive problem

The second example is a common medical problem, closely related to everyday life.

It is known that the incidence of a certain disease is 0.001, i.e. 1 in 1,000 people has it. There is a reagent that can test whether a person has the disease, and its accuracy is 0.99: when the person really is ill, there is a 99% chance of a positive result. Its false positive rate is 5%: when the person is not ill, there is still a 5% chance of a positive result. Now a patient's test result is positive. How likely is it that he actually has the disease?

Let event A denote being ill, so P(A) = 0.001. This is the "prior probability": our estimate of the probability of illness before the test is done. Let event B denote testing positive; what we want to compute is P(A|B). This is the "posterior probability": our estimate of the probability of illness after the test has been done.

According to the conditional probability formula,

P(A|B) = P(A) × P(B|A) / P(B)

Using the total probability formula to rewrite the denominator,

P(A|B) = P(A) P(B|A) / [ P(B|A) P(A) + P(B|A') P(A') ]

Putting in the numbers,

P(A|B) = 0.001 × 0.99 / (0.99 × 0.001 + 0.05 × 0.999) ≈ 0.019

We get a surprising result: P(A|B) is approximately 0.019. In other words, even if the test comes back positive, the probability that the patient is ill rises only from 0.1% to about 2%. This is the so-called "false positive" problem: a positive result alone is not enough to show that the patient is ill.

Why is that? Why is the credibility of the result less than 2% when the test's accuracy is as high as 99%? The answer lies in the high false positive rate. (Exercise: if the false positive rate drops from 5% to 1%, what is the probability that the patient is ill, given a positive test?)
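Here is the computation, together with the exercise, as a short Python sketch (illustrative; the function and argument names are my own):

    def p_ill_given_positive(prevalence, sensitivity, false_positive_rate):
        # P(A|B) = P(B|A) P(A) / [ P(B|A) P(A) + P(B|A') P(A') ]
        numerator = sensitivity * prevalence
        return numerator / (numerator + false_positive_rate * (1 - prevalence))

    print(p_ill_given_positive(0.001, 0.99, 0.05))  # ~0.019, as in the text
    print(p_ill_given_positive(0.001, 0.99, 0.01))  # the exercise: ~0.09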

Interested readers can also work out the "false negative" problem: the probability that the test result is negative while the patient is in fact ill. Then ask yourself: between "false positives" and "false negatives", which is the greater risk in medical testing?

===================================

That is all for the principles of Bayesian inference. In the next part, we will see how to use Bayesian inference to filter spam.
