Generative Learning Algorithms


A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to have generated this signal? A discriminative algorithm does not care about how the data was generated; it simply categorizes a given signal.

Discriminative:

Try to find the differences between classes, then learn a decision boundary that best divides the data. This is done by learning $p(y|x)$ directly (e.g. logistic regression) or by learning a direct mapping $x \rightarrow y \in \{0,1,\dots,k\}$ (e.g. the perceptron algorithm).
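
As a minimal sketch of the discriminative route, the snippet below fits logistic regression by gradient ascent on the log-likelihood of $p(y|x)$; the function name, learning rate, and iteration count are illustrative assumptions, not anything prescribed above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_regression(X, y, lr=0.1, n_iters=1000):
    """Model p(y=1|x) = sigmoid(w.x + b) directly, never touching p(x|y)."""
    m, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(n_iters):
        p = sigmoid(X @ w + b)           # current estimate of p(y=1|x)
        w += lr * (X.T @ (y - p)) / m    # gradient of the average log-likelihood
        b += lr * np.mean(y - p)
    return w, b
```

Note that nothing here models how $x$ itself is distributed; the learned parameters only describe the boundary between the classes.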

Generative:

Take another route: first obtain $p(x|y)$ and $p(y)$ from prior knowledge, then apply Bayes' rule $p(y|x) = \frac{p(x|y)p(y)}{p(x)}$ to obtain $p(y|x)$, where $p(x) = p(x|y=1)p(y=1) + p(x|y=0)p(y=0)$. This process can be seen as deriving the posterior distribution from the prior distribution. Of course, when we only need to determine which class has the larger likelihood, the denominator can be ignored: $$\arg\max_y p(y|x) = \arg\max_y \frac{p(x|y)p(y)}{p(x)} = \arg\max_y p(x|y)p(y)$$
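
A toy numeric illustration of that argmax (the density values and priors below are made-up numbers, chosen only to show that the shared denominator $p(x)$ cancels):

```python
p_x_given_y = {0: 0.05, 1: 0.20}   # likelihoods p(x|y) evaluated at one fixed x
p_y = {0: 0.7, 1: 0.3}             # class priors p(y)

p_x = sum(p_x_given_y[c] * p_y[c] for c in (0, 1))               # p(x), class-independent
posterior = {c: p_x_given_y[c] * p_y[c] / p_x for c in (0, 1)}
prediction = max((0, 1), key=lambda c: p_x_given_y[c] * p_y[c])  # p(x) never needed

print(posterior)   # {0: 0.368..., 1: 0.631...}
print(prediction)  # 1
```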

Obtaining $p(x|y)$ and $p(y)$ as prior knowledge is the process of estimating parameters from an existing sample of training data:
1. First assume a model for the sample distribution (e.g. a Bernoulli or Gaussian distribution)
2. Then estimate the parameters by maximizing the likelihood function
3. Finally, derive $p(y|x)$ through the Bayes formula

Example

Dataset: $x = (x_1, x_2)$, $y \in \{0, 1\}$

  1. First we assume that the class-conditional distribution of the data, $p(x|y)$, follows a multivariate normal distribution; the model is then: $$y \sim \mathrm{Bernoulli}(\phi) \\ x|y=0 \sim \mathcal{N}(\mu_0, \Sigma) \\ x|y=1 \sim \mathcal{N}(\mu_1, \Sigma)$$
  2. The parameters are then estimated by maximum likelihood estimation. First write the log-likelihood function: $$\ell(\phi,\mu_0,\mu_1,\Sigma) = \log\prod_{i=1}^{m} p(x^{(i)},y^{(i)};\phi,\mu_0,\mu_1,\Sigma) = \log\prod_{i=1}^{m} p(x^{(i)}|y^{(i)};\mu_0,\mu_1,\Sigma)\, p(y^{(i)};\phi)$$
     Then maximize the likelihood function $\ell$, i.e. solve for the point where its derivatives with respect to the parameters are zero: $$\phi=\frac{1}{m}\sum_{i=1}^{m}1\{y^{(i)}=1\} \\ \mu_0= \frac{\sum_{i=1}^{m}1\{y^{(i)}=0\}x^{(i)}}{\sum_{i=1}^{m}1\{y^{(i)}=0\}} \\ \mu_1= \frac{\sum_{i=1}^{m}1\{y^{(i)}=1\}x^{(i)}}{\sum_{i=1}^{m}1\{y^{(i)}=1\}} \\ \Sigma = \frac{1}{m}\sum_{i=1}^{m}(x^{(i)}-\mu_{y^{(i)}})(x^{(i)}-\mu_{y^{(i)}})^T$$ This gives the parameter estimates $(\phi,\mu_0,\mu_1,\Sigma)$, that is, the distribution $p(x|y)$. As the figure below shows, $\mu_0$ and $\mu_1$ are two-dimensional vectors marking the centers of the two normal distributions, and $\Sigma$ determines the shape of the multivariate normal distribution.
     [Figure: contours of the two class-conditional Gaussian distributions, centered at $\mu_0$ and $\mu_1$]
     From this step we can see that the parameters are obtained by "learning": the model is estimated from a large number of samples, i.e. from prior knowledge, so the logic is very natural. The rigorous basis for this is the law of large numbers (LLN), whose proof is elegant; you can look it up yourself.

  3. Finally, compare $p(y=1|x)$ and $p(y=0|x)$ via the Bayes formula to decide the class label, as in the sketch below.
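
Putting the three steps together, here is a minimal sketch of Gaussian discriminant analysis: the closed-form maximum-likelihood estimates above, followed by prediction via $\arg\max_y p(x|y)p(y)$. The function names are my own, and it assumes NumPy and SciPy arrays as input.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_gda(X, y):
    """Closed-form MLE for (phi, mu_0, mu_1, Sigma) from the formulas above."""
    m = X.shape[0]
    phi = np.mean(y == 1)                 # phi = (1/m) * sum of 1{y=1}
    mu0 = X[y == 0].mean(axis=0)          # mean of the class-0 examples
    mu1 = X[y == 1].mean(axis=0)          # mean of the class-1 examples
    centered = X - np.where((y == 1)[:, None], mu1, mu0)
    Sigma = centered.T @ centered / m     # shared covariance matrix
    return phi, mu0, mu1, Sigma

def predict_gda(X, phi, mu0, mu1, Sigma):
    """argmax_y p(x|y) p(y); the denominator p(x) is the same for both classes."""
    score0 = multivariate_normal.pdf(X, mean=mu0, cov=Sigma) * (1 - phi)
    score1 = multivariate_normal.pdf(X, mean=mu1, cov=Sigma) * phi
    return (score1 > score0).astype(int)
```

On data actually drawn from two Gaussians with a shared covariance, the estimated $\mu_0$ and $\mu_1$ converge to the true centers as $m$ grows, which is exactly the law-of-large-numbers argument made above.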
