Naive Bayes is a simple classification algorithm whose classic applications are well known: text categorization, such as spam filtering. Many textbooks start from these cases, so this article does not repeat them; instead it focuses on the theoretical derivation (don't be scared off by the word "theoretical"), the three common models, and their implementation in Python.
If you are not interested in the theoretical derivation, you can skip directly to the part on the three common models and their implementations, but I suggest you first read the section on the theoretical basis.
In addition, all of the code in this article can be obtained from my GitHub.

1. The theoretical basis of naive Bayes
Naive Bayes is a classification method based on Bayes' theorem and the feature conditional independence assumption; these are its two important theoretical foundations.

1.1 Bayes' theorem
First, let us look at what conditional probability is.
$P(A|B)$ denotes the probability that event A occurs given that event B has already occurred; it is called the conditional probability of A given B. The basic formula is:

$$P(A|B) = \frac{P(AB)}{P(B)}$$
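To make the formula concrete, here is a minimal sketch that estimates both sides from frequency counts; the trial outcomes are hypothetical toy data, not from the original article.

```python
# Estimate P(A|B) = P(AB) / P(B) from event indicators (hypothetical toy data).
trials = [
    {"A": True,  "B": True},
    {"A": False, "B": True},
    {"A": True,  "B": False},
    {"A": True,  "B": True},
    {"A": False, "B": False},
]

p_b = sum(t["B"] for t in trials) / len(trials)               # P(B) = 3/5
p_ab = sum(t["A"] and t["B"] for t in trials) / len(trials)   # P(AB) = 2/5

print("P(A|B) =", p_ab / p_b)  # 2/3: among the trials where B occurred, A occurred twice
```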
Bayes' theorem builds on conditional probability: knowing $P(A|B)$, it lets us obtain $P(B|A)$:

$$P(B|A) = \frac{P(A|B)P(B)}{P(A)}$$
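As a quick sanity check, here is a minimal sketch that plugs hypothetical numbers into Bayes' theorem for the spam-filtering setting mentioned at the start; all three probabilities below are assumptions for illustration only.

```python
# Bayes' theorem on hypothetical numbers:
# B = "email is spam", A = "email contains the word 'free'".
p_b = 0.2          # prior P(B): assumed fraction of spam
p_a_given_b = 0.6  # likelihood P(A|B): assumed chance that spam contains "free"
p_a = 0.16         # evidence P(A): assumed overall chance of seeing "free"

# P(B|A) = P(A|B) * P(B) / P(A)
p_b_given_a = p_a_given_b * p_b / p_a
print("P(spam | contains 'free') =", p_b_given_a)  # 0.75
```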
Incidentally, the denominator $P(A)$ in the formula above can be expanded according to the law of total probability:

$$P(A) = \sum_{i=1}^{n} P(B_i) P(A|B_i)$$
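Continuing the hypothetical spam example, the sketch below computes the evidence $P(A)$ by summing over the partition {spam, non-spam}; it reproduces the value 0.16 assumed above.

```python
# Law of total probability over a partition B_1 (spam), B_2 (non-spam),
# using the same hypothetical numbers as before.
priors = [0.2, 0.8]        # P(B_1), P(B_2) (assumed)
likelihoods = [0.6, 0.05]  # P(A|B_1), P(A|B_2) (assumed)

# P(A) = sum_i P(B_i) * P(A|B_i)
p_a = sum(p * l for p, l in zip(priors, likelihoods))
print("P(A) =", p_a)  # 0.2*0.6 + 0.8*0.05 = 0.16
```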
1.2 The feature conditional independence assumption

This section starts from the theory behind naive Bayes; from it you will gain a deeper understanding of what the feature conditional independence assumption is.
Given a training dataset (X, y), where each sample x consists of n-dimensional features, i.e. $x = (x_1, x_2, x_3, \ldots, x_n)$, and the set of class labels contains k categories, i.e. $y = \{y_1, y_2, \ldots, y_k\}$.
If a new sample x now arrives, how do we judge its category? From a probabilistic point of view, the problem is: given x, which category does it belong to with the greatest probability? The problem is thus converted into finding the largest among $P(y_1|x), P(y_2|x), \ldots, P(y_k|x)$, that is, outputting the class with the maximum posterior probability:

$$\arg\max_{y_k} P(y_k|x)$$
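The decision rule itself is just an argmax over the k posteriors. Here is a minimal sketch with hypothetical posterior values; how those posteriors are actually computed is exactly what the rest of the derivation addresses.

```python
# Posterior maximization: pick the class y_k with the largest P(y_k|x).
posteriors = {"y1": 0.15, "y2": 0.70, "y3": 0.15}  # hypothetical P(y_k|x) values

predicted = max(posteriors, key=posteriors.get)  # argmax over the classes
print("predicted class:", predicted)  # y2
```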