The EM algorithm (repost)

Source: Internet
Author: User

Mixture of Gaussians model and the EM algorithm

This discussion uses the expectation-maximization (EM) algorithm for density estimation.

As with K-means, we are given a training set $\{x^{(1)},\dots,x^{(m)}\}$ and use $z^{(i)}$ to denote the implied (latent) category label of each sample. Unlike the hard assignment of K-means, we first assume that $z^{(i)}$ follows some probability distribution; here we take it to be multinomial, $z^{(i)} \sim \mathrm{Multinomial}(\phi)$, where $z^{(i)}$ can take one of the $k$ values $\{1,\dots,k\}$ and $\phi_j = p(z^{(i)}=j)$ with $\phi_j \ge 0$, $\sum_{j=1}^k \phi_j = 1$. We further assume that, given $z^{(i)}=j$, the sample follows a multivariate Gaussian distribution, $x^{(i)} \mid z^{(i)}=j \sim \mathcal{N}(\mu_j, \Sigma_j)$. The joint distribution is then $p(x^{(i)}, z^{(i)}) = p(x^{(i)} \mid z^{(i)})\, p(z^{(i)})$.

The whole model can be described simply: for each sample $x^{(i)}$, we first draw a category $z^{(i)}$ from the multinomial distribution over the $k$ classes, and then draw $x^{(i)}$ from the corresponding one of the $k$ multivariate Gaussians, $\mathcal{N}(\mu_{z^{(i)}}, \Sigma_{z^{(i)}})$. The whole process is called the mixture of Gaussians model. Note that $z^{(i)}$ is still a latent random variable. The model has three sets of parameters, $\phi$, $\mu$ and $\Sigma$. The maximum likelihood objective, in log form, is:

$$\ell(\phi,\mu,\Sigma) = \sum_{i=1}^m \log p(x^{(i)};\phi,\mu,\Sigma) = \sum_{i=1}^m \log \sum_{z^{(i)}=1}^k p(x^{(i)} \mid z^{(i)};\mu,\Sigma)\, p(z^{(i)};\phi).$$
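As a concrete illustration of this generative process, here is a minimal sketch with made-up parameters ($k=3$ components in two dimensions); the names `phi`, `mu`, `Sigma` and `sample` are ours, not the article's.

```python
# Minimal sketch of the mixture-of-Gaussians generative process (toy parameters).
import numpy as np

rng = np.random.default_rng(0)

k, n = 3, 2                                   # number of components, feature dimension
phi = np.array([0.5, 0.3, 0.2])               # multinomial over components, sums to 1
mu = np.array([[0.0, 0.0], [4.0, 4.0], [-4.0, 3.0]])
Sigma = np.stack([np.eye(n) for _ in range(k)])

def sample(m):
    """Draw m samples: first z ~ Multinomial(phi), then x | z=j ~ N(mu_j, Sigma_j)."""
    z = rng.choice(k, size=m, p=phi)          # latent component label for each sample
    x = np.array([rng.multivariate_normal(mu[j], Sigma[j]) for j in z])
    return x, z

X, Z = sample(500)                            # in practice only X is observed; Z is hidden
```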

The maximum of this expression cannot be found by setting the derivatives to zero as before, because the resulting equations have no closed-form solution. But if we assume that each sample's $z^{(i)}$ is known, the log-likelihood simplifies to:

$$\ell(\phi,\mu,\Sigma) = \sum_{i=1}^m \Big( \log p(x^{(i)} \mid z^{(i)};\mu,\Sigma) + \log p(z^{(i)};\phi) \Big).$$

At this point we can go back and set the derivatives to zero, obtaining:

$\phi_j = \frac{1}{m}\sum_{i=1}^m 1\{z^{(i)}=j\}$ is the fraction of samples belonging to category $j$,

$\mu_j = \frac{\sum_{i=1}^m 1\{z^{(i)}=j\}\, x^{(i)}}{\sum_{i=1}^m 1\{z^{(i)}=j\}}$ is the feature mean of the class-$j$ samples, and

$\Sigma_j = \frac{\sum_{i=1}^m 1\{z^{(i)}=j\}\,(x^{(i)}-\mu_j)(x^{(i)}-\mu_j)^T}{\sum_{i=1}^m 1\{z^{(i)}=j\}}$ is the covariance matrix of the features of the class-$j$ samples.

In fact, when $z^{(i)}$ is known, this maximum likelihood estimate is nearly the same as in the Gaussian discriminant analysis (GDA) model. The differences are that in GDA the category label $y$ is Bernoulli-distributed whereas here $z$ is multinomial, and that here each class has its own covariance matrix $\Sigma_j$ whereas GDA assumes a single shared covariance matrix.
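When the labels are known, these three estimates are just per-class frequencies, means and covariances. A hypothetical sketch, reusing the `X`, `Z` arrays from the toy sampler above:

```python
# Closed-form maximum likelihood estimates when the class labels z are known.
import numpy as np

def mle_with_known_labels(X, Z, k):
    m = X.shape[0]
    phi_hat, mu_hat, Sigma_hat = [], [], []
    for j in range(k):
        Xj = X[Z == j]                                # samples with indicator 1{z=j} equal to 1
        phi_hat.append(len(Xj) / m)                   # fraction of samples in class j
        mu_hat.append(Xj.mean(axis=0))                # class-j feature mean
        diff = Xj - Xj.mean(axis=0)
        Sigma_hat.append(diff.T @ diff / len(Xj))     # class-j covariance matrix
    return np.array(phi_hat), np.array(mu_hat), np.array(Sigma_hat)
```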

Earlier we assumed that $z^{(i)}$ was given, but in fact it is not known. So what do we do? Following the idea of EM described earlier, the first step is to guess the implied class variable $z$, and the second step is to update the other parameters by maximum likelihood. For the mixture of Gaussians this gives:

Repeat the following steps until convergence: {

(E-step) For each $i$ and $j$, compute

$$w_j^{(i)} := p(z^{(i)}=j \mid x^{(i)};\phi,\mu,\Sigma).$$

(M-step) Update the parameters:

$$\phi_j := \frac{1}{m}\sum_{i=1}^m w_j^{(i)}, \qquad \mu_j := \frac{\sum_{i=1}^m w_j^{(i)}\, x^{(i)}}{\sum_{i=1}^m w_j^{(i)}}, \qquad \Sigma_j := \frac{\sum_{i=1}^m w_j^{(i)}\,(x^{(i)}-\mu_j)(x^{(i)}-\mu_j)^T}{\sum_{i=1}^m w_j^{(i)}}.$$

}

In the E-step, we treat the parameters as fixed and compute the posterior probabilities $w_j^{(i)}$, that is, estimates of the implied class variables. With these estimates in hand, the formulas above recompute the other parameters by maximum likelihood. But once the parameters change, the previous $w_j^{(i)}$ are no longer correct and must be recomputed, and so on back and forth until convergence.

The specific calculation in the E-step is as follows:

$$w_j^{(i)} = \frac{p(x^{(i)} \mid z^{(i)}=j;\mu,\Sigma)\, p(z^{(i)}=j;\phi)}{\sum_{l=1}^k p(x^{(i)} \mid z^{(i)}=l;\mu,\Sigma)\, p(z^{(i)}=l;\phi)}.$$

This is just Bayes' rule.

Here $w_j^{(i)}$ replaces the earlier indicator $1\{z^{(i)}=j\}$, turning a hard 0/1 value into a probability.

Compared with K-means, the assignment here is "soft": each sample is assigned to every class with some probability. The computation is also heavier, since for each sample $i$ we must compute the probability of each class $j$. As with K-means, the result is only a local optimum, so it is a good idea to rerun the algorithm with several different initial parameter values.
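As a sketch of this "soft" assignment (assuming NumPy/SciPy and reusing the toy `X`, `phi`, `mu`, `Sigma` from above): the posterior of each class for each sample is computed by Bayes' rule, whereas a K-means-style "hard" assignment would keep only the argmax.

```python
# E-step ("soft" assignment): posterior probability of every class for every sample.
import numpy as np
from scipy.stats import multivariate_normal

def e_step(X, phi, mu, Sigma):
    m, k = X.shape[0], phi.shape[0]
    w = np.zeros((m, k))
    for j in range(k):
        # numerator of Bayes' rule: p(x | z=j) * p(z=j)
        w[:, j] = multivariate_normal.pdf(X, mean=mu[j], cov=Sigma[j]) * phi[j]
    w /= w.sum(axis=1, keepdims=True)      # normalize over j (the denominator of Bayes' rule)
    return w                               # w[i, j] = p(z=j | x_i; phi, mu, Sigma)

w = e_step(X, phi, mu, Sigma)
hard = w.argmax(axis=1)                    # the K-means-like hard designation, for comparison
```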

Although the convergence of EM was described qualitatively in the earlier K-means discussion, it has not been established quantitatively, and the derivation of the general EM algorithm has not been given. The rest of this article focuses on these topics.

1. Jensen's inequality

Let us first review some concepts from optimization theory.

Let $f$ be a function whose domain is the real numbers. If $f''(x) \ge 0$ for all real $x$, then $f$ is a convex function.

When $x$ is a vector, $f$ is convex if its Hessian matrix $H$ is positive semi-definite ($H \succeq 0$).

If $f''(x) > 0$ or $H \succ 0$, then $f$ is strictly convex.

Jensen's inequality is stated as follows:

If $f$ is a convex function and $X$ is a random variable, then $E[f(X)] \ge f(E[X])$.

In particular, if $f$ is strictly convex, then equality holds if and only if $X$ is a constant, i.e. $X = E[X]$ with probability 1. Here we abbreviate $E[X]$ as $EX$.

A figure makes this clear:

In the figure, the solid curve is the convex function $f$; $X$ is a random variable that takes the value $a$ with probability 0.5 and the value $b$ with probability 0.5 (like a coin toss). The expected value of $X$ is the midpoint of $a$ and $b$, and from the figure one can see that $f(EX) \le E[f(X)]$.

$f$ is (strictly) concave if and only if $-f$ is (strictly) convex.

When Jensen's inequality is applied to a concave function, the direction of the inequality is reversed: $E[f(X)] \le f(E[X])$.
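A tiny numerical check of the coin-toss picture (our own illustration, not from the article): with the convex $f(x)=x^2$ the inequality holds as stated, and with the concave $\log$ it flips.

```python
# Numerical check of Jensen's inequality for X = a or b, each with probability 0.5.
import math

a, b = 1.0, 9.0
EX = 0.5 * a + 0.5 * b                       # E[X] = 5.0

f = lambda x: x ** 2                         # convex: E[f(X)] >= f(E[X])
assert 0.5 * f(a) + 0.5 * f(b) >= f(EX)      # 41 >= 25

g = math.log                                 # concave: E[g(X)] <= g(E[X])
assert 0.5 * g(a) + 0.5 * g(b) <= g(EX)      # 1.0986... <= 1.6094...
```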

2. EM algorithm

(1) Given a training set $\{x^{(1)},\dots,x^{(m)}\}$ of independent samples, we want to find each sample's implied category $z$ so as to make $p(x,z)$ large. The maximum likelihood objective for $p(x,z)$ is as follows:

$$\ell(\theta) = \sum_{i=1}^m \log p(x^{(i)};\theta) = \sum_{i=1}^m \log \sum_{z^{(i)}} p(x^{(i)}, z^{(i)};\theta).$$

The first step takes the logarithm of the likelihood; the second step sums the joint distribution over every possible category $z^{(i)}$ of each sample. Maximizing $\ell(\theta)$ directly is generally hard because of the hidden variable $z$, but once $z$ is determined the maximization becomes easy.

(2) EM is an effective method for such optimization problems with latent variables. Since we cannot maximize $\ell(\theta)$ directly, we repeatedly construct a lower bound on $\ell$ (E-step) and then optimize that lower bound (M-step). This sentence is rather abstract; the details follow.

For each example $i$, let $Q_i$ be some distribution over the possible values of the latent variable $z^{(i)}$, satisfying

$$\sum_z Q_i(z) = 1, \qquad Q_i(z) \ge 0.$$

(If $z$ is continuous, $Q_i$ is a probability density and the sums below become integrals.) For example, if we want to cluster the students in a class and the hidden variable $z$ is height, then $Q_i$ is a continuous (Gaussian) distribution; if the hidden variable is male/female, then $Q_i$ is a Bernoulli distribution.

From the preceding description we can derive the following:

$$\ell(\theta) = \sum_i \log p(x^{(i)};\theta) = \sum_i \log \sum_{z^{(i)}} p(x^{(i)}, z^{(i)};\theta) \tag{1}$$

$$= \sum_i \log \sum_{z^{(i)}} Q_i(z^{(i)})\, \frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})} \tag{2}$$

$$\ge \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})} \tag{3}$$

Going from (1) to (2) is direct: the numerator and denominator are multiplied by the same function $Q_i(z^{(i)})$. Going from (2) to (3) uses Jensen's inequality, taking into account that $\log$ is a concave function (its second derivative $-1/x^2$ is less than 0) and that

$$\sum_{z^{(i)}} Q_i(z^{(i)})\, \frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})} = E_{z^{(i)} \sim Q_i}\!\left[\frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})}\right]$$

is an expectation. (Recall the "lazy statistician" rule for expectations, quoted below.)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Let $Y = g(X)$ be a function of the random variable $X$ ($g$ a continuous function). Then:

(1) If $X$ is a discrete random variable with distribution $P(X = x_k) = p_k$, $k = 1, 2, \dots$, and $\sum_k g(x_k)\, p_k$ converges absolutely, then

$$E[Y] = E[g(X)] = \sum_{k} g(x_k)\, p_k.$$

(2) If $X$ is a continuous random variable with probability density $f(x)$, and $\int_{-\infty}^{\infty} g(x) f(x)\,dx$ converges absolutely, then

$$E[Y] = E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\,dx.$$

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

In the problem above, $Y$ corresponds to $\frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})}$, $X$ corresponds to $z^{(i)}$, $Q_i(z^{(i)})$ plays the role of $p_k$, and $g$ is the map from $z^{(i)}$ to $\frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})}$. This explains how the expectation appears in formula (2). Applying Jensen's inequality for the concave function $\log$,

$$\log\!\left(E_{z^{(i)} \sim Q_i}\!\left[\frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})}\right]\right) \ge E_{z^{(i)} \sim Q_i}\!\left[\log \frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})}\right],$$

we obtain (3).

(3) The process above can be viewed as computing a lower bound on $\ell(\theta)$. There are many possible choices for $Q_i$; which one is better?

If $\theta$ is given, then the value of $\ell(\theta)$ is determined by $Q_i(z^{(i)})$ and $p(x^{(i)}, z^{(i)})$. We can adjust these two probabilities to raise the lower bound so that it approaches the true value of $\ell(\theta)$. When is the adjustment good enough? When the inequality becomes an equality, which means the lower bound equals $\ell(\theta)$ at this $\theta$. Following this idea, we look for the condition under which equality holds. By Jensen's inequality, equality requires the random variable to be a constant, that is:

$$\frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})} = c,$$

where $c$ is a constant that does not depend on $z^{(i)}$. Deriving further: since $\sum_z Q_i(z^{(i)}) = 1$, summing over $z$ gives $\sum_z p(x^{(i)}, z;\theta) = c$ (when several fractions all equal $c$, the sum of the numerators divided by the sum of the denominators is still $c$), and therefore:

$$Q_i(z^{(i)}) = \frac{p(x^{(i)}, z^{(i)};\theta)}{\sum_z p(x^{(i)}, z;\theta)} = \frac{p(x^{(i)}, z^{(i)};\theta)}{p(x^{(i)};\theta)} = p(z^{(i)} \mid x^{(i)};\theta).$$

At this point, having fixed the other parameters $\theta$, we have derived the formula for $Q_i(z^{(i)})$: it is the posterior probability of $z^{(i)}$, which settles the question of how to choose $Q_i$. This step is the E-step, which establishes the lower bound on $\ell(\theta)$. The next step, the M-step, maximizes that lower bound (with $Q_i$ fixed, the lower bound can still be made larger): given the $Q_i$, adjust $\theta$ to maximize the bound.
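As a quick numerical check of this equality condition (our own illustration, reusing the toy `X`, `phi`, `mu`, `Sigma` from above): when $Q_i$ is chosen as the posterior, the lower bound in (3) coincides exactly with $\ell(\theta)$.

```python
# Verify that the lower bound touches the log-likelihood when Q_i is the posterior.
import numpy as np
from scipy.stats import multivariate_normal

def joint(X, phi, mu, Sigma):
    """p(x_i, z=j) = p(x_i | z=j) p(z=j) for every sample i and class j."""
    m, k = X.shape[0], phi.shape[0]
    p_xz = np.zeros((m, k))
    for j in range(k):
        p_xz[:, j] = multivariate_normal.pdf(X, mean=mu[j], cov=Sigma[j]) * phi[j]
    return p_xz

p_xz = joint(X, phi, mu, Sigma)
ll = np.log(p_xz.sum(axis=1)).sum()            # l(theta) = sum_i log p(x_i; theta)
Q = p_xz / p_xz.sum(axis=1, keepdims=True)     # E-step choice: posterior p(z | x_i; theta)
bound = (Q * np.log(p_xz / Q)).sum()           # sum_i sum_z Q_i(z) log(p(x_i, z) / Q_i(z))
assert np.isclose(bound, ll)                   # equality holds exactly at the posterior
```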

(4) The general EM algorithm is therefore as follows:

Repeat until convergence: {

(E-step) For each $i$, compute

$$Q_i(z^{(i)}) := p(z^{(i)} \mid x^{(i)};\theta).$$

(M-step) Compute

$$\theta := \arg\max_{\theta} \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})}.$$

}
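In code, the loop has roughly the following shape; `e_step`, `m_step` and `log_likelihood` are hypothetical placeholders for the model-specific computations, not a fixed API.

```python
# Generic shape of the EM loop; the three callables are model-specific placeholders.
import numpy as np

def em(X, theta, e_step, m_step, log_likelihood, tol=1e-6, max_iter=1000):
    prev = -np.inf
    for _ in range(max_iter):
        Q = e_step(X, theta)            # Q_i(z) := p(z | x_i; theta)
        theta = m_step(X, Q)            # theta := argmax of the lower bound
        ll = log_likelihood(X, theta)
        if ll - prev < tol:             # ll is non-decreasing, so this detects convergence
            break
        prev = ll
    return theta
```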

(5) So how exactly do we guarantee that EM converges? Suppose $\theta^{(t)}$ and $\theta^{(t+1)}$ are the parameters after the $t$-th and $(t+1)$-th EM iterations. If we can prove that $\ell(\theta^{(t)}) \le \ell(\theta^{(t+1)})$, in other words that the log-likelihood increases monotonically, then EM will eventually reach a (local) maximum of the likelihood. The proof goes as follows. After selecting $\theta^{(t)}$, the E-step gives

$$Q_i^{(t)}(z^{(i)}) := p(z^{(i)} \mid x^{(i)};\theta^{(t)}).$$

This choice guarantees that, given $\theta^{(t)}$, the equality in Jensen's inequality holds, i.e.

$$\ell(\theta^{(t)}) = \sum_i \sum_{z^{(i)}} Q_i^{(t)}(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)};\theta^{(t)})}{Q_i^{(t)}(z^{(i)})}.$$

Then, in the M-step, we fix $Q_i^{(t)}$, treat $\theta$ as the variable, maximize the expression above over $\theta$, and obtain $\theta^{(t+1)}$. After some derivation, the following chain holds:

$$\ell(\theta^{(t+1)}) \ge \sum_i \sum_{z^{(i)}} Q_i^{(t)}(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)};\theta^{(t+1)})}{Q_i^{(t)}(z^{(i)})} \tag{4}$$

$$\ge \sum_i \sum_{z^{(i)}} Q_i^{(t)}(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)};\theta^{(t)})}{Q_i^{(t)}(z^{(i)})} \tag{5}$$

$$= \ell(\theta^{(t)}) \tag{6}$$

To explain (4): when we obtain $\theta^{(t+1)}$ we only maximize the lower bound; we do not require the equality to hold at $\theta^{(t+1)}$. The equality holds only when $\theta$ is fixed and $Q_i$ is chosen by the E-step for that same $\theta$.

Besides, by the formula derived earlier, the lower bound

$$\ell(\theta) \ge \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})}$$

holds for all $Q_i$ and all $\theta$; taking $Q_i = Q_i^{(t)}$ and $\theta = \theta^{(t+1)}$ gives exactly (4).

Step (5) uses the definition of the M-step: $\theta^{(t+1)}$ is chosen precisely to maximize $\sum_i \sum_{z^{(i)}} Q_i^{(t)}(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i^{(t)}(z^{(i)})}$ over $\theta$, so its value at $\theta^{(t+1)}$ is at least its value at $\theta^{(t)}$; hence (5) holds. Step (6) is the equality established earlier at $\theta^{(t)}$.

This proves that $\ell(\theta)$ increases monotonically. A practical convergence test is that $\ell(\theta)$ no longer changes, or changes by less than some small tolerance.

To explain (4), (5), (6) once more. Inequality (4) holds for all parameters; its equality condition holds only when $\theta$ is fixed and $Q$ is adjusted by the E-step for that $\theta$, whereas in step (4) we only fix $Q$ and adjust $\theta$, so the equality cannot be guaranteed there. (4) to (5) follows from the definition of the M-step, and (5) to (6) follows from the equality established by the preceding E-step. Intuitively: the E-step pulls the lower bound up to the same height as $\ell$ at the current $\theta$; at that point the lower bound can still rise, so the M-step pushes it higher, though not up to the height of $\ell$ at the new $\theta$; then the next E-step pulls the bound up to the same height as $\ell$ at that new $\theta$, and the process repeats until the maximum of $\ell$ is reached.

If we define

$$J(Q, \theta) = \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)};\theta)}{Q_i(z^{(i)})},$$

then from the preceding derivation we know that $\ell(\theta) \ge J(Q, \theta)$. EM can be regarded as coordinate ascent on $J$: the E-step fixes $\theta$ and optimizes over $Q$, and the M-step fixes $Q$ and optimizes over $\theta$.

3. Revisiting the mixture of Gaussians model

We now know the essence of EM and its derivation, so let us look again at the mixture of Gaussians model. The parameters of the mixture model and the update formulas given earlier rested on several assumptions, some of which were not explained. For simplicity, here we derive only the $\mu$ and $\phi$ updates of the M-step.

The E-step is simple; from the general EM formula we get:

$$w_j^{(i)} = Q_i(z^{(i)}=j) = p(z^{(i)}=j \mid x^{(i)};\phi,\mu,\Sigma).$$

Simply put, this is the probability that sample $i$ belongs to the implicit class $j$, computed as a posterior probability.

In the M-step, we fix the $Q_i$ and maximize the lower bound of the log-likelihood, i.e.

$$\sum_{i=1}^m \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)};\phi,\mu,\Sigma)}{Q_i(z^{(i)})} = \sum_{i=1}^m \sum_{j=1}^k w_j^{(i)} \log \frac{\dfrac{1}{(2\pi)^{n/2}\,|\Sigma_j|^{1/2}} \exp\!\Big(-\tfrac{1}{2}(x^{(i)}-\mu_j)^T \Sigma_j^{-1} (x^{(i)}-\mu_j)\Big)\, \phi_j}{w_j^{(i)}}.$$

This is just the sum over $z^{(i)}$ expanded into its $k$ possible cases; the unknown parameters are $\phi_j$, $\mu_j$ and $\Sigma_j$.

Fixing $\phi$ and $\Sigma$, take the gradient with respect to $\mu_l$:

$$\nabla_{\mu_l} \sum_{i=1}^m \sum_{j=1}^k w_j^{(i)} \log \frac{p(x^{(i)}, z^{(i)}=j;\phi,\mu,\Sigma)}{w_j^{(i)}} = \sum_{i=1}^m w_l^{(i)} \big(\Sigma_l^{-1} x^{(i)} - \Sigma_l^{-1} \mu_l\big).$$

Setting this equal to zero gives

$$\mu_l := \frac{\sum_{i=1}^m w_l^{(i)}\, x^{(i)}}{\sum_{i=1}^m w_l^{(i)}}.$$

This is exactly the $\mu$ update formula in the earlier model.

Next we derive the update formula for $\phi_j$. Looking again at the objective obtained above: once $\mu$ and $\Sigma$ are determined, everything in the numerator other than $\phi_j$ is a constant with respect to $\phi$, and the expression that actually needs to be optimized is:

$$\sum_{i=1}^m \sum_{j=1}^k w_j^{(i)} \log \phi_j,$$

where the $\phi_j$ must also satisfy the constraint $\sum_{j=1}^k \phi_j = 1$.

We are familiar with this kind of constrained optimization problem; we construct the Lagrangian directly:

$$\mathcal{L}(\phi) = \sum_{i=1}^m \sum_{j=1}^k w_j^{(i)} \log \phi_j + \beta\Big(\sum_{j=1}^k \phi_j - 1\Big).$$

There is also the constraint $\phi_j \ge 0$, but it will be satisfied automatically by the formula we obtain.

Taking the derivative,

$$\frac{\partial \mathcal{L}}{\partial \phi_j} = \sum_{i=1}^m \frac{w_j^{(i)}}{\phi_j} + \beta,$$

and setting it equal to zero gives

$$\phi_j = \frac{\sum_{i=1}^m w_j^{(i)}}{-\beta}.$$

In other words, $\phi_j \propto \sum_{i=1}^m w_j^{(i)}$. Using the constraint $\sum_j \phi_j = 1$ again, we get

$$-\beta = \sum_{i=1}^m \sum_{j=1}^k w_j^{(i)} = \sum_{i=1}^m 1 = m,$$

since $\sum_j w_j^{(i)} = 1$. And so, rather magically, $-\beta = m$.

This yields the update formula in the M-step:

$$\phi_j := \frac{1}{m} \sum_{i=1}^m w_j^{(i)}.$$

The derivation of the $\Sigma_j$ update is similar, just slightly more involved since $\Sigma_j$ is a matrix; the result was given in the earlier mixture of Gaussians model.
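Putting the three M-step formulas into code (a sketch continuing the earlier E-step example; `w` is the $m \times k$ matrix of posteriors $w_j^{(i)}$):

```python
# M-step: update phi, mu and Sigma from the E-step responsibilities w[i, j].
import numpy as np

def m_step(X, w):
    m, n = X.shape
    k = w.shape[1]
    phi = w.sum(axis=0) / m                              # phi_j = (1/m) sum_i w_j^(i)
    mu = (w.T @ X) / w.sum(axis=0)[:, None]              # weighted feature means
    Sigma = np.zeros((k, n, n))
    for j in range(k):
        diff = X - mu[j]
        Sigma[j] = (w[:, j, None] * diff).T @ diff / w[:, j].sum()   # weighted covariances
    return phi, mu, Sigma
```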

4. Summary

If the samples are regarded as observed values and the potential categories as hidden variables, then the clustering problem becomes a parameter estimation problem. The parameters split into the implicit class variables and the other model parameters, which is like looking for the extremum of a curve in an x-y coordinate system when the curve cannot be differentiated directly, so gradient descent does not apply. However, once one variable is fixed, the other can be obtained by differentiation, so we can use coordinate ascent: fix one variable at a time, maximize over the other, and gradually approach the extremum. Corresponding to EM, the E-step estimates the implied variables and the M-step estimates the other parameters, alternating until a (local) maximum is reached. There is also the distinction between "hard" and "soft" assignment: "soft" assignment seems more reasonable but costs more computation, while "hard" assignment is more practical in some cases such as K-means (it would be cumbersome to keep, for every sample, a weight toward every center).

In addition, the proof of EM's convergence is quite impressive: it uses the concavity of $\log$, and also the idea of constructing a lower bound, pressing the bound flat against the function, and then optimizing the bound to gradually approach the maximum, with every iteration guaranteed to be monotone. Most delicate is the mathematical step where multiplying the numerator and denominator by $Q_i(z)$ turns the sum into an expectation so that Jensen's inequality can be applied; one wonders how the originators ever thought of it.

Mitchell's Machine Learning also gives an example of an EM application: the heights of the students in a class are pooled together, and the task is to cluster them into two groups. The heights can be viewed as drawn from a Gaussian distribution of male heights and a Gaussian distribution of female heights. The questions are how to estimate whether each sample is male or female and, given those assignments, how to estimate the mean and variance of each Gaussian; the book gives the formulas, and interested readers can consult it.
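A rough sketch of that kind of example with made-up numbers (not Mitchell's data): two one-dimensional Gaussians for male and female heights, fit by exactly the E-step and M-step updates derived above.

```python
# Toy version of the heights example: a 1-D mixture of two Gaussians fit by EM.
import numpy as np

rng = np.random.default_rng(1)
heights = np.concatenate([rng.normal(175, 7, 300),    # "male" component (cm)
                          rng.normal(162, 6, 300)])   # "female" component (cm)

phi, mu, var = np.array([0.5, 0.5]), np.array([150.0, 190.0]), np.array([100.0, 100.0])
for _ in range(100):
    # E-step: posterior probability that each height came from each component
    pdf = np.exp(-(heights[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    w = phi * pdf
    w /= w.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights, means and variances
    phi = w.mean(axis=0)
    mu = (w * heights[:, None]).sum(axis=0) / w.sum(axis=0)
    var = (w * (heights[:, None] - mu) ** 2).sum(axis=0) / w.sum(axis=0)

print(phi, mu, np.sqrt(var))   # roughly recovers the two height clusters
```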
