import numpy as np

def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
    # p0Vec and p1Vec hold log word probabilities, so the products in
    # Bayes' rule become sums of logs; pClass1 is the prior probability
    # that a document belongs to class 1 (insulting).
    p1 = np.sum(vec2Classify * p1Vec) + np.log(pClass1)
    p0 = np.sum(vec2Classify * p0Vec) + np.log(1.0 - pClass1)
    if p1 > p0:
        return 1
    else:
        return 0
Note: the training function returns the probability vectors as logarithms:

p1Vect = log(p1Num / p1Denom)
p0Vect = log(p0Num / p0Denom)
>>> p0v
array([ 0.04166667,  0.04166667,  0.04166667,  0.        ,  0.        ,
        ...
        0.04166667,  0.        ,  0.04166667,  0.        ,  0.04166667,
        0.04166667,  0.125     ])
>>> p1v
array([ 0.        ,  0.        ,  0.        ,  0.05263158,  0.05263158,
        ...
        0.        ,  0.15789474,  0.        ,  0.05263158,  0.        ,
        0.        ,  0.        ])
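The vectors shown above are the raw frequency version, computed before the smoothing and log transform of the final training function (each entry is a word count divided by the class's total word count; for example 0.04166667 = 1/24 and 0.15789474 = 3/19). Below is a sketch of the training step in the spirit of the book's trainNB0, with counts initialized to 1 and denominators to 2 so that a word unseen in one class never forces the whole product to zero:

import numpy as np

def trainNB0(trainMatrix, trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = np.sum(trainCategory) / float(numTrainDocs)  # prior P(class 1)
    p0Num = np.ones(numWords); p1Num = np.ones(numWords)
    p0Denom = 2.0; p1Denom = 2.0
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:        # insulting document
            p1Num += trainMatrix[i]
            p1Denom += np.sum(trainMatrix[i])
        else:                            # non-insulting document
            p0Num += trainMatrix[i]
            p0Denom += np.sum(trainMatrix[i])
    p1Vect = np.log(p1Num / p1Denom)     # log word probabilities for class 1
    p0Vect = np.log(p0Num / p0Denom)     # log word probabilities for class 0
    return p0Vect, p1Vect, pAbusive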
P(w0, w1, w2, ..., wN | ci) = P(w0|ci) P(w1|ci) P(w2|ci) ... P(wN|ci)
This is the naive conditional independence assumption. In this example, ci takes two values, insulting (class 1) and non-insulting (class 0), and w0, w1, ..., wN is the word vector of a document (one entry per word in the vocabulary built from all training documents). p0v and p1v are computed from the training documents: p1v counts how often each vocabulary word occurs across the insulting documents and turns those counts into a probability vector, and p0v is computed in the same way from the non-insulting documents. A sketch of the vectorization step follows.
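This minimal sketch, in the style of the book's createVocabList and setOfWords2Vec, shows how documents are turned into such word vectors (the set-of-words model: entry i is 1 if vocabulary word i appears in the document, 0 otherwise):

def createVocabList(dataSet):
    # Union of all words across the training documents.
    vocabSet = set()
    for document in dataSet:
        vocabSet = vocabSet | set(document)
    return list(vocabSet)

def setOfWords2Vec(vocabList, inputSet):
    # 1 at position i if vocabList[i] occurs in the document, else 0.
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1
    return returnVec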
Bayes' theorem is as follows:

P(ci|w) = P(w|ci) P(ci) / P(w)
        = P(w0, w1, w2, ..., wN | ci) P(ci) / P(w)
        = P(w0|ci) P(w1|ci) P(w2|ci) ... P(wN|ci) P(ci) / P(w)
To decide whether a specific document w belongs to class 1 (insulting) or class 0 (non-insulting), look up the probability of each word in the document in p1v and p0v, multiply those probabilities together, i.e. P(w0|ci) P(w1|ci) ... P(wN|ci), and multiply by the class prior P(ci). The denominator P(w) is the same for both classes, so it can be ignored; this yields two values, and the larger one determines the final category of the document. Because the probability vectors are stored as logarithms, classifyNB replaces the product with a sum of logs, which also avoids numerical underflow when many small probabilities are multiplied.
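Putting the pieces together, here is an end-to-end sketch in the spirit of the book's test routine; the toy posts and labels below are illustrative stand-ins for the book's actual training data:

import numpy as np

# Illustrative training posts and labels (1 = insulting, 0 = not).
postingList = [['my', 'dog', 'is', 'cute'],
               ['stupid', 'worthless', 'dog'],
               ['stop', 'posting', 'please'],
               ['stupid', 'garbage', 'stop']]
classVec = [0, 1, 0, 1]

vocabList = createVocabList(postingList)
trainMat = [setOfWords2Vec(vocabList, post) for post in postingList]
p0v, p1v, pAb = trainNB0(np.array(trainMat), np.array(classVec))

testDoc = np.array(setOfWords2Vec(vocabList, ['stupid', 'dog']))
print(classifyNB(testDoc, p0v, p1v, pAb))  # expected: 1 (insulting)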
Machine Learning in Action - Learning to Read Python Code (5)