Derivation of Naive Bayes theory and three common models

Naive Bayes is a simple classification algorithm whose classic applications are well known: text classification (such as spam filtering). Many textbooks start from these cases, so this article does not repeat them. Instead, it focuses on the theoretical derivation (don't be scared off by the word "theoretical"), the three common models, and their implementation in Python.

If you are not interested in the theoretical derivation, you can skip directly to the three common models and their implementation, but I suggest you read the theoretical basis section first.

In addition, all of the code in this article can be obtained from my GitHub.

1. The theoretical basis of Naive Bayes

The Naive Bayes algorithm is a classification method based on Bayes' theorem and the assumption that features are conditionally independent.

Bayes' theorem and the feature conditional independence assumption are the two theoretical foundations of Naive Bayes.

1.1 Bayes' theorem

First, let us look at what conditional probability is.

P(A|B) denotes the probability that event A occurs given that event B has already occurred; it is called the conditional probability of A given B. Its basic formula is: P(A|B) = \frac{P(AB)}{P(B)}
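As a quick sanity check, here is a minimal Python sketch that estimates P(A|B) from joint counts; the events and numbers are made up purely for illustration:

# Hypothetical counts over 100 observations (illustrative numbers only)
n_total = 100
n_B = 40         # observations where event B occurred
n_AB = 10        # observations where both A and B occurred

p_B = n_B / n_total       # P(B)
p_AB = n_AB / n_total     # P(AB), the joint probability
p_A_given_B = p_AB / p_B  # P(A|B) = P(AB) / P(B)

print(p_A_given_B)  # 0.25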

Bayes' theorem builds on conditional probability: it uses P(A|B) to obtain P(B|A):

P(B|A) = \frac{P(A|B) P(B)}{P(A)}

Incidentally, the denominator P(A) in the formula above can be expanded according to the law of total probability:

P(A) = \sum_{i=1}^{n} P(B_i) P(A|B_i)
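The following minimal Python sketch puts Bayes' theorem and the total probability expansion together on a toy spam-filtering example; the prior and likelihood values are invented purely for illustration:

# Hypothetical values for a toy spam filter (numbers are made up)
p_spam = 0.3              # prior P(spam)
p_ham = 0.7               # prior P(ham)
p_word_given_spam = 0.8   # likelihood P(word | spam)
p_word_given_ham = 0.1    # likelihood P(word | ham)

# Denominator P(word) expanded by the law of total probability
p_word = p_word_given_spam * p_spam + p_word_given_ham * p_ham

# Bayes' theorem: posterior P(spam | word)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(p_spam_given_word)  # ~0.774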

1.2 The feature conditional independence assumption

This section introduces the theory of Naive Bayes itself, from which you will gain a clear understanding of what the feature conditional independence assumption is.

Given a training dataset (X, y), each sample x has n-dimensional features, that is, x = (x_1, x_2, x_3, ..., x_n), and the set of class labels contains k categories, that is, y = (y_1, y_2, ..., y_k).

If a new sample x arrives, how do we determine its category? From a probabilistic point of view, the question is: given x, which category has the greatest probability? The problem therefore becomes finding the largest among P(y_1|x), P(y_2|x), ..., P(y_k|x), that is, outputting the class with the maximum posterior probability: \arg\max_{y_i} P(y_i|x)
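To make the decision rule concrete, here is a minimal Python sketch of the argmax-of-posteriors rule under the conditional independence assumption, i.e. P(y|x) is proportional to P(y) multiplied by the product of P(x_j|y) over all features. The toy priors, likelihood tables, and feature values below are assumptions made up for illustration, not part of the original article:

# Toy model with 2 classes and 2 binary features (all numbers invented for illustration)
priors = {"y1": 0.6, "y2": 0.4}   # P(y)
likelihoods = {                   # P(x_j = 1 | y) for each feature j
    "y1": [0.9, 0.2],
    "y2": [0.3, 0.7],
}

def posterior_scores(x):
    """Return P(y) * prod_j P(x_j | y) for each class (proportional to the posterior)."""
    scores = {}
    for y, prior in priors.items():
        score = prior
        for j, x_j in enumerate(x):
            p1 = likelihoods[y][j]
            score *= p1 if x_j == 1 else (1.0 - p1)
        scores[y] = score
    return scores

def predict(x):
    """argmax over y of P(y|x); the denominator P(x) is the same for every class, so it can be ignored."""
    scores = posterior_scores(x)
    return max(scores, key=scores.get)

print(predict([1, 0]))  # -> 'y1' with these toy numbers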
