Suppose you want to classify emails as spam or non-spam. To describe an email as a vector, we collect every word that appears in the training set. Suppose there are n distinct words in total (say, 50,000; stop words such as "the", "a", and "is" are usually not counted because they carry little useful information). An email can then be represented by an n-dimensional vector: if the i-th word of the dictionary appears in the email, the i-th component of the vector is 1, otherwise it is 0. For example:
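(The vocabulary below is only illustrative.) An email that contains the words "a" and "buy" but not "aardvark" would be encoded as

\[
x = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix}
\quad
\begin{matrix} \text{a} \\ \text{aardvark} \\ \vdots \\ \text{buy} \\ \vdots \\ \text{zymurgy} \end{matrix}
\]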
The class label y takes only the values 0 and 1, that is, y ∈ {0, 1}.
For a new email, to decide whether it is spam we compute P(y = 1 | x) and P(y = 0 | x) and see which is larger. Bayesian classification uses Bayes' formula to compute P(y | x); the algorithm actually belongs to the family of generative learning algorithms. By Bayes' formula (to put it bluntly, the prior is used to compute the posterior):
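\[
P(y \mid x) = \frac{P(x \mid y)\, P(y)}{P(x)}
\]

Here P(y) is the prior, P(x | y) is the class-conditional likelihood, and P(x) is the evidence.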
P(y) is estimated by dividing the number of training samples in a class by the total number of training samples. For example, if y = 1 for m1 of the m training samples, then P(y = 1) = m1/m. P(x | y), however, is not so easy to compute directly: x is an n-dimensional binary vector, so it can take 2^n possible values, far too many to estimate a probability for each.
Naive Bayes assumption: P(x1, x2, ..., xn | y) = P(x1 | y) ··· P(xn | y), where x1, x2, ..., xn are the components of x. That is, the components are conditionally independent: for i ≠ j, P(xi | y, xj) = P(xi | y); once y is given, the occurrence of xi is unrelated to xj. With this assumption we can generally write (the formula below assumes there are C class labels in total, not limited to two):
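\[
P(y = c \mid x) = \frac{P(y = c) \prod_{i=1}^{n} P(x_i \mid y = c)}{\sum_{c'=1}^{C} P(y = c') \prod_{i=1}^{n} P(x_i \mid y = c')}
\]

The denominator follows from the law of total probability: P(x) = Σ_{c'} P(x | y = c') P(y = c').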
Because the denominator is P(x) no matter what y is (note the case where the numerator is 0: sometimes a word never appears in the training set for a class but does appear in the new email, so its estimated probability would be 0; this is fixed by Laplace smoothing), we only need to compute the numerator for each class and take the class label that makes the numerator largest:
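\[
\hat{y} = \arg\max_{c \in \{1,\dots,C\}} \; P(y = c) \prod_{i=1}^{n} P(x_i \mid y = c)
\]

As a side note on the smoothing just mentioned: for binary features, the Laplace-smoothed conditional estimate is typically

\[
P(x_i = 1 \mid y = c) = \frac{(\text{number of class-}c\ \text{emails containing word } i) + 1}{(\text{number of class-}c\ \text{emails}) + 2},
\]

adding 1 to the count and 2 (the number of values x_i can take) to the denominator, so that no probability is ever exactly 0.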
Alas, I'll fill in the rest later ~~~
Recently I've been reading Machine Learning for Hackers and wrote some code:
For convenience, we do not process the mail headers: we extract the text after the first blank line (the message body always begins there) and paste the lines together into a single character string. The R code is as follows:
get.msg <- function(path) {
  con <- file(path, open = "rt", encoding = "latin1")
  text <- readLines(con)
  # The message always begins after the first full line break
  msg <- text[seq(which(text == "")[1] + 1, length(text), 1)]
  close(con)
  return(paste(msg, collapse = "\n"))
}
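As a quick sketch of how get.msg might be applied to a whole directory of raw messages (the path spam.path below is hypothetical; point it at your own corpus):

spam.path <- "data/spam/"
spam.docs <- dir(spam.path)
# Read every message body in the directory into one character vector
all.spam <- sapply(spam.docs,
                   function(p) get.msg(file.path(spam.path, p)))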