Category (i): Naive Bayesian text classification

Source: Internet
Author: User

1. Naive Bayes hypothesis

To cope with this situation where the dimensionality is too high, we assume that each dimension of x is independent of the others. This is the naive Bayes assumption.

Under this conditional-independence assumption, we can easily write P(d|c) as follows:

P(d|c) = ∏ P(t_i|c)

Here d is the document, t_i is each word in the document, and c is the class.
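The factorization above turns classification into summing per-word log probabilities. A minimal sketch, with hypothetical hand-set parameters (the probability tables and class names below are made up for illustration):

```python
import math

# Hypothetical toy parameters: P(t|c) for two classes, assuming independence.
cond_prob = {
    "spam": {"free": 0.4, "win": 0.3, "hello": 0.3},
    "ham":  {"free": 0.1, "win": 0.1, "hello": 0.8},
}
prior = {"spam": 0.5, "ham": 0.5}

def score(doc, c):
    # log P(c) + sum_i log P(t_i|c): the naive Bayes factorization in log space,
    # which avoids underflow when multiplying many small probabilities.
    return math.log(prior[c]) + sum(math.log(cond_prob[c][t]) for t in doc)

doc = ["free", "win", "free"]
best = max(prior, key=lambda c: score(doc, c))  # class with the highest score
```

Working in log space is standard practice here: the product of hundreds of probabilities below 1 would underflow to zero in floating point.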


2. Naive Bayesian classifier

The naive Bayes classifier is a supervised learning method. Two models are common: the multinomial model and the Bernoulli model.

In "Introduction to Information Retrieval", the prior probability is measured by the number of documents under class c, while some blogs distinguish the two models using the following forms.

2.1. Multinomial model

In the multinomial model, a document d = (t1, t2, ..., tk) is generated, where tk is a word that appears in the document, and repetition is allowed:

      1. Prior probability: P(c) = total number of words under class c / total number of words in the entire training sample.

      2. Class-conditional probability: P(tk|c) = (number of occurrences of the word tk summed over all documents in class c + 1) / (total number of words under class c + |V|). V is the vocabulary of the training sample (each distinct word is counted once, no matter how many times it appears), and |V| is the number of words it contains.

P(tk|c) can be seen as the evidence the word tk provides that d belongs to class c, while P(c) can be regarded as the share of category c in the overall corpus (how likely it is a priori).
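The two multinomial estimates can be sketched as follows. The tiny corpus and class labels are hypothetical, and the prior is computed at word granularity as this variant describes (note that "Introduction to Information Retrieval" itself uses document counts for the prior):

```python
from collections import Counter

# Hypothetical tiny training corpus: (tokens, class) pairs.
train = [
    (["chinese", "beijing", "chinese"], "yes"),
    (["chinese", "chinese", "shanghai"], "yes"),
    (["chinese", "macao"], "yes"),
    (["tokyo", "japan", "chinese"], "no"),
]

vocab = {t for doc, _ in train for t in doc}   # distinct words only
V = len(vocab)                                 # |V|

def multinomial_estimates(c):
    docs = [doc for doc, label in train if label == c]
    counts = Counter(t for doc in docs for t in doc)
    total = sum(counts.values())               # total words under class c
    # Word-granularity prior, as in this blog's variant.
    prior = total / sum(len(doc) for doc, _ in train)
    # Laplace-smoothed class-conditional probabilities: (count + 1) / (total + |V|)
    cond = {t: (counts[t] + 1) / (total + V) for t in vocab}
    return prior, cond

prior_yes, cond_yes = multinomial_estimates("yes")
```

With this corpus, class "yes" contains 8 word tokens and |V| = 6, so P(chinese|yes) = (5 + 1) / (8 + 6) = 3/7.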


2.2. Bernoulli model

P(c) = number of files under class c / total number of files in the entire training sample

P(tk|c) = (number of files in class c that contain the word tk + 1) / (total number of files under class c + 2)
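The Bernoulli estimates only ask whether a word occurs in a document, not how often. A minimal sketch over the same kind of hypothetical corpus, with each document reduced to its set of distinct words:

```python
# Hypothetical tiny corpus: each document is a *set* of distinct words,
# since the Bernoulli model only records presence or absence.
train = [
    ({"chinese", "beijing"}, "yes"),
    ({"chinese", "shanghai"}, "yes"),
    ({"chinese", "macao"}, "yes"),
    ({"tokyo", "japan", "chinese"}, "no"),
]

def bernoulli_estimates(c):
    docs = [d for d, label in train if label == c]
    n_c = len(docs)
    prior = n_c / len(train)  # document-count prior
    vocab = {t for d, _ in train for t in d}
    # Smoothed with +1 / +2 as in the formula above: each word is a
    # Bernoulli variable with two outcomes, present or absent.
    cond = {t: (sum(t in d for d in docs) + 1) / (n_c + 2) for t in vocab}
    return prior, cond

prior_yes, cond_yes = bernoulli_estimates("yes")
```

Here "chinese" occurs in all 3 "yes" documents, so P(chinese|yes) = (3 + 1) / (3 + 2) = 4/5, and the prior is 3/4 by document count.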



The two models differ in computational granularity: the multinomial model works at the granularity of words, while the Bernoulli model works at the granularity of documents, so both the prior probability and the class-conditional probability are calculated differently.

