Statistical Learning Methods by Hangyuan Li --- Chapter 4: The Naive Bayes Method

Source: Internet
Author: User

Chapter 4: The Naive Bayes Method

The naive Bayes method is a classification method based on Bayes' theorem and the assumption of conditional independence among features. For a given training data set, it first learns the joint probability distribution of the input and output under the conditional independence assumption; then, for a given input x, it uses Bayes' theorem to compute the posterior probability of each class and outputs the class y with the largest posterior probability.

4.1 Learning and Classification with Naive Bayes

Basic method. Naive Bayes learns the joint probability distribution P(X, Y) from the training data set. Specifically, it learns the prior probability distribution

$$P(Y = c_k), \quad k = 1, 2, \ldots, K,$$

and the conditional probability distribution

$$P(X = x \mid Y = c_k) = P(X^{(1)} = x^{(1)}, \ldots, X^{(n)} = x^{(n)} \mid Y = c_k), \quad k = 1, 2, \ldots, K.$$

The joint distribution then follows from the product rule, $P(X, Y) = P(Y)\,P(X \mid Y)$.
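To make these objects concrete, here is a minimal Python sketch of the joint distribution as prior times class-conditional. All probabilities below are made-up toy numbers, not anything from the book or estimated from data (Section 4.2 covers estimation):

```python
# Toy illustration: the joint distribution P(X, Y) factors as
# P(Y) * P(X | Y). All numbers are made up for illustration.

prior = {"c1": 0.4, "c2": 0.6}                 # P(Y = c_k)
cond = {("x1", "c1"): 0.8, ("x2", "c1"): 0.2,  # P(X = x | Y = c_k)
        ("x1", "c2"): 0.3, ("x2", "c2"): 0.7}

# Product rule: P(X = x, Y = c) = P(Y = c) * P(X = x | Y = c)
joint = {(x, c): prior[c] * p for (x, c), p in cond.items()}

print(joint[("x1", "c1")])   # 0.4 * 0.8 = 0.32 (up to float rounding)
print(sum(joint.values()))   # ~1.0 -- a valid joint distribution
```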
The conditional probability distribution $P(X = x \mid Y = c_k)$ has an exponential number of parameters, so estimating it directly is not practical: if $x^{(j)}$ can take $S_j$ values and $Y$ can take $K$ values, the number of parameters is $K \prod_{j=1}^{n} S_j$. The naive Bayes method therefore imposes the conditional independence assumption on the conditional probability distribution:

$$P(X = x \mid Y = c_k) = \prod_{j=1}^{n} P(X^{(j)} = x^{(j)} \mid Y = c_k).$$

The conditional independence assumption says that the features used for classification are conditionally independent given the class. Because naive Bayes learns the mechanism by which the data are generated, it is a generative model.

Classification is based on the maximum a posteriori (MAP) criterion: by Bayes' theorem,

$$P(Y = c_k \mid X = x) = \frac{P(X = x \mid Y = c_k)\,P(Y = c_k)}{\sum_{k} P(X = x \mid Y = c_k)\,P(Y = c_k)},$$

and since the denominator is the same for every class, the classifier can be written as

$$y = \arg\max_{c_k} P(Y = c_k) \prod_{j=1}^{n} P(X^{(j)} = x^{(j)} \mid Y = c_k).$$

Maximizing the posterior probability in this way is equivalent to minimizing the expected risk under the 0-1 loss function.

4.2 Parameter Estimation for Naive Bayes

Maximum likelihood estimation. The maximum likelihood estimate of the prior probability is

$$P(Y = c_k) = \frac{\sum_{i=1}^{N} I(y_i = c_k)}{N}, \quad k = 1, 2, \ldots, K.$$

Let the set of possible values of the j-th feature $x^{(j)}$ be $\{a_{j1}, a_{j2}, \ldots, a_{jS_j}\}$. The maximum likelihood estimate of the conditional probability is

$$P(X^{(j)} = a_{jl} \mid Y = c_k) = \frac{\sum_{i=1}^{N} I(x_i^{(j)} = a_{jl},\, y_i = c_k)}{\sum_{i=1}^{N} I(y_i = c_k)}.$$

In summary, the naive Bayes algorithm is: (1) estimate the prior and conditional probabilities from the training data; (2) for a given instance x, compute $P(Y = c_k) \prod_j P(X^{(j)} = x^{(j)} \mid Y = c_k)$ for each class; (3) output the class that maximizes this product. A sketch of this algorithm appears below.
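The following is a minimal Python sketch of these estimates and of the classification step. The toy data set, the function names (fit_mle, predict), and the interface are illustrative assumptions, not from the book; only the formulas they implement are:

```python
import math
from collections import Counter

def fit_mle(X, y):
    """Maximum likelihood estimates of the naive Bayes parameters."""
    n = len(y)
    class_count = Counter(y)
    # P(Y = c_k) = sum_i I(y_i = c_k) / N
    prior = {c: cnt / n for c, cnt in class_count.items()}
    # P(X^(j) = a_jl | Y = c_k)
    #   = sum_i I(x_i^(j) = a_jl, y_i = c_k) / sum_i I(y_i = c_k)
    joint_count = Counter((j, v, yi) for xi, yi in zip(X, y)
                          for j, v in enumerate(xi))
    cond = {key: cnt / class_count[key[2]]
            for key, cnt in joint_count.items()}
    return prior, cond

def predict(x, prior, cond):
    """Return argmax_k P(Y = c_k) * prod_j P(X^(j) = x^(j) | Y = c_k)."""
    return max(prior, key=lambda c: prior[c] * math.prod(
        cond.get((j, v, c), 0.0) for j, v in enumerate(x)))

# Toy training data (made up): two discrete features, labels +1 / -1.
X = [(1, "S"), (1, "M"), (2, "M"), (2, "L"), (3, "L"), (3, "S")]
y = [-1, -1, 1, 1, 1, -1]
prior, cond = fit_mle(X, y)
print(predict((2, "L"), prior, cond))  # -> 1
```

Note the `cond.get(..., 0.0)` fallback in predict: a feature value never seen with a class gets probability 0, which motivates the Bayesian estimates that follow.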
Bayesian estimation. Maximum likelihood estimation may produce probability estimates equal to 0, which zero out the whole product in the classifier. The way to avoid this is Bayesian estimation. The Bayesian estimate of the conditional probability is

$$P_\lambda(X^{(j)} = a_{jl} \mid Y = c_k) = \frac{\sum_{i=1}^{N} I(x_i^{(j)} = a_{jl},\, y_i = c_k) + \lambda}{\sum_{i=1}^{N} I(y_i = c_k) + S_j \lambda},$$

where $\lambda \ge 0$. This is equivalent to adding a positive count $\lambda$ to the frequency of each possible value of the random variable. Commonly $\lambda = 1$ is taken, which is called Laplace smoothing. Similarly, the Bayesian estimate of the prior probability is

$$P_\lambda(Y = c_k) = \frac{\sum_{i=1}^{N} I(y_i = c_k) + \lambda}{N + K\lambda}.$$
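Below is a sketch of the smoothed estimates on the same toy data as the MLE sketch above (again, the function and parameter names are illustrative assumptions). Note how a count of zero in the training data no longer yields a zero probability:

```python
from collections import Counter

# Same toy data as in the MLE sketch above.
X = [(1, "S"), (1, "M"), (2, "M"), (2, "L"), (3, "L"), (3, "S")]
y = [-1, -1, 1, 1, 1, -1]

def fit_bayes(X, y, feature_values, lam=1.0):
    """Bayesian estimates of the parameters; lam=1.0 is Laplace smoothing.

    feature_values[j] lists the S_j possible values of feature j.
    """
    n = len(y)
    classes = sorted(set(y))
    class_count = Counter(y)
    # P_lambda(Y = c_k) = (sum_i I(y_i = c_k) + lam) / (N + K * lam)
    prior = {c: (class_count[c] + lam) / (n + len(classes) * lam)
             for c in classes}
    joint_count = Counter((j, v, yi) for xi, yi in zip(X, y)
                          for j, v in enumerate(xi))
    # P_lambda(X^(j) = a_jl | Y = c_k)
    #   = (count(a_jl, c_k) + lam) / (count(c_k) + S_j * lam)
    cond = {(j, v, c): (joint_count[(j, v, c)] + lam)
                       / (class_count[c] + len(values) * lam)
            for j, values in enumerate(feature_values)
            for v in values for c in classes}
    return prior, cond

prior, cond = fit_bayes(X, y, feature_values=[{1, 2, 3}, {"S", "M", "L"}])
# Under MLE, P(X^(1) = 2 | Y = -1) = 0/3 = 0 (feature index j = 0 in code);
# with Laplace smoothing it becomes (0 + 1) / (3 + 3*1) = 1/6:
print(cond[(0, 2, -1)])
```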



