1. Naive Bayes hypothesis
To deal with this situation where the dimensionality is too high, we assume that each dimension of x is conditionally independent of the others given the class. This is the naive Bayes assumption.
Under this conditional independence assumption, we can easily write P(d|c) as:

P(d|c) = ∏i P(ti|c)

where d is the document, ti is each word in the document, and c is the class.
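A toy illustration of this factorization (the word probabilities below are made-up numbers for one hypothetical class, not trained values):

```python
import math

# Made-up P(t|c) values for one class, for illustration only.
p_word_given_c = {"ball": 0.4, "game": 0.3, "team": 0.3}

doc = ["ball", "game", "ball"]  # words may repeat

# Multiply the per-word probabilities; summing logs avoids
# floating-point underflow on long documents.
log_p = sum(math.log(p_word_given_c[t]) for t in doc)
p_doc_given_c = math.exp(log_p)  # equals 0.4 * 0.3 * 0.4
```

In practice the product of many small probabilities underflows, which is why the log-sum form is used.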
2. Naive Bayes classifier
The naive Bayes classifier is a supervised learning method. There are two common variants: the multinomial model and the Bernoulli model.
In "Introduction to Information Retrieval", the prior probability is estimated from the number of documents under class c, while some blogs differentiate the two models in the following two forms.
2.1. Multinomial model
In the multinomial model, a document is d = (t1, t2, ..., tk), where the tk are the words appearing in the document, with repetition allowed:

Prior probability: P(c) = total number of words under class c / total number of words in the entire training sample.

Class-conditional probability: P(tk|c) = (number of occurrences of word tk across all documents of class c + 1) / (total number of words under class c + |V|). Here V is the vocabulary of the training sample (each distinct word is counted once, no matter how many times it appears), and |V| is the number of distinct words it contains.
P(tk|c) can be seen as the evidence word tk provides that d belongs to class c, while P(c) measures how large a share of the whole training sample class c accounts for (how likely it is a priori).
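A minimal sketch of these multinomial counts in Python. The `docs_by_class` structure (mapping each class label to a list of token lists) is an assumption made for illustration:

```python
from collections import Counter

def train_multinomial(docs_by_class):
    """Estimate multinomial NB parameters with add-one smoothing.

    docs_by_class: {class_label: [list of token lists]} -- an assumed
    toy structure, not a standard API.
    """
    vocab = {t for docs in docs_by_class.values() for d in docs for t in d}
    total_words = sum(len(d) for docs in docs_by_class.values() for d in docs)
    prior, cond = {}, {}
    for c, docs in docs_by_class.items():
        counts = Counter(t for d in docs for t in d)
        n_c = sum(counts.values())  # total number of words under class c
        prior[c] = n_c / total_words  # word-count prior, as in the text
        # (occurrences of t in class c + 1) / (words in class c + |V|)
        cond[c] = {t: (counts[t] + 1) / (n_c + len(vocab)) for t in vocab}
    return prior, cond

docs = {"sports": [["ball", "game"], ["ball"]], "tech": [["code"]]}
prior, cond = train_multinomial(docs)
# prior["sports"] = 3/4 (3 of the 4 training words belong to "sports")
# cond["sports"]["ball"] = (2 + 1) / (3 + 3) = 0.5
```

Note the granularity: both the prior and the denominator count individual word occurrences, not documents.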
2.2. Bernoulli model
Prior probability: P(c) = number of files under class c / total number of files in the entire training sample.

Class-conditional probability: P(tk|c) = (number of files under class c containing word tk + 1) / (number of files under class c + 2).
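The Bernoulli counts can be sketched the same way (again assuming the same toy `docs_by_class` mapping; here only the presence or absence of a word in a file matters, not its frequency):

```python
def train_bernoulli(docs_by_class):
    """Estimate Bernoulli NB parameters with add-one smoothing.

    docs_by_class: {class_label: [list of token lists]} -- an assumed
    toy structure, not a standard API.
    """
    total_docs = sum(len(docs) for docs in docs_by_class.values())
    vocab = {t for docs in docs_by_class.values() for d in docs for t in d}
    prior, cond = {}, {}
    for c, docs in docs_by_class.items():
        n_c = len(docs)  # number of files (documents) in class c
        prior[c] = n_c / total_docs  # document-count prior
        # (files in class c containing t + 1) / (files in class c + 2)
        cond[c] = {t: (sum(t in d for d in docs) + 1) / (n_c + 2)
                   for t in vocab}
    return prior, cond

docs = {"sports": [["ball", "game"], ["ball"]], "tech": [["code"]]}
prior, cond = train_bernoulli(docs)
# prior["sports"] = 2/3 (2 of the 3 files belong to "sports")
# cond["sports"]["ball"] = (2 + 1) / (2 + 2) = 0.75
```

The "+2" in the denominator is add-one smoothing over the two outcomes (a word is either present in a file or absent).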
The two models differ in computational granularity: the multinomial model counts at the word level, while the Bernoulli model counts at the document level, so both the prior and the class-conditional probabilities are calculated differently.