Initial purpose
Dividing the samples into K classes really amounts to asking, for each sample, what its hidden class label y is, and then using that label to classify x. Since we do not know the class y beforehand, we can first assume a y for each sample. But how do we know whether the assumption is correct? How do we evaluate whether the assumed y is good or bad?
We measure it with the maximum likelihood of the samples, here the joint distribution p(x, y). If the y we find maximizes p(x, y), then it is the best class label for sample x, and clustering x follows naturally. However, the y we specify at first does not necessarily maximize p(x, y), and p(x, y) also depends on other unknown parameters. Of course, given y, we can adjust those other parameters to maximize p(x, y). But after adjusting the parameters, we may find that a better y can be specified; so we re-specify y, again maximize p(x, y) over the parameters, and iterate until no better y can be specified.
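A rough sketch in Python of the alternation just described; `assign_y` and `fit_theta` are hypothetical placeholders, not part of any library:

```python
# A sketch of the alternation described above, under stated assumptions:
# `assign_y` picks, for each sample, the class y that makes p(x, y) largest
# under the current parameters theta; `fit_theta` re-fits theta to maximize
# p(x, y) with y held fixed. Both are hypothetical placeholders.
def alternate(x, theta0, assign_y, fit_theta, n_iter=100):
    theta = theta0
    y = assign_y(x, theta)            # first, assume a y for each sample
    for _ in range(n_iter):
        theta = fit_theta(x, y)       # adjust parameters: maximize p(x, y) given y
        new_y = assign_y(x, theta)    # re-specify y under the new parameters
        if new_y == y:                # no better y can be specified
            break
        y = new_y
    return y, theta
```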
This process has two difficulties:
First, how do we assume y initially? Is each sample hard-assigned a single y, or does each possible y carry a different probability, and if so, how is that probability measured?
Second, how do we estimate p(x, y)? Since p(x, y) may also depend on many other parameters, how do we adjust those parameters to make p(x, y) largest?
The idea of the EM algorithm:
The E-step estimates the expected value of the hidden class y; the M-step adjusts the other parameters so that, given the current y, the likelihood p(x, y) is maximized. Then, with the other parameters fixed at their new values, y is re-estimated, and the cycle repeats until convergence.
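To make the two steps concrete, here is a minimal, self-contained sketch of EM for a two-component 1-D Gaussian mixture; the synthetic data and initial values are my own assumptions for illustration:

```python
import numpy as np

# Toy data: two overlapping 1-D Gaussian clusters (synthetic, for illustration only).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])

# Initial guesses for the mixture parameters (means, variances, mixing weights).
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(100):
    # E-step: expected value of the hidden class y for each sample,
    # i.e. the posterior probability (responsibility) of each component.
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)

    # M-step: re-estimate the parameters to maximize the expected
    # log-likelihood of p(x, y) under those responsibilities.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / len(x)

print("means:", mu, "variances:", var, "weights:", pi)
```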
From K-means we can see that it is actually an embodiment of EM: the E-step determines the hidden class variable, and the M-step updates the other parameters to minimize the objective J. Here the hidden class variable is specified in a special way, by hard assignment: each sample picks exactly one of the K classes, rather than being given a different probability for each class. The general idea is still an iterative optimization process: there is an objective function, there are parameters, and there are additionally some hidden variables. We fix the other parameters to estimate the hidden variables, then fix the hidden variables to estimate the other parameters, until the objective function is optimal.
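A minimal K-means sketch along these lines, showing the hard E-step assignment and the M-step update of the centers that minimizes J; the function name, defaults, and toy data are illustrative, not from the original post:

```python
import numpy as np

def kmeans(x, k, n_iter=100, seed=0):
    # Illustrative hard-assignment EM on the distortion objective J.
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(n_iter):
        # "E-step": hard-assign each sample to its single nearest center,
        # rather than giving it a probability for every class.
        dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # "M-step": move each center to the mean of its assigned samples,
        # which minimizes J = sum of squared distances for fixed labels.
        new_centers = np.array([
            x[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    J = ((x - centers[labels]) ** 2).sum()
    return centers, labels, J

# Example usage on toy 2-D data (synthetic, for illustration).
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
centers, labels, J = kmeans(pts, k=2)
print("centers:", centers, "J:", J)
```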
The EM algorithm works like this: suppose we want to know two parameters A and B, both unknown at the start, but knowing A lets us obtain information about B, and knowing B in turn lets us obtain A. We first give A some initial value, use it to obtain an estimate of B, then start from the current value of B and re-estimate A, and this process continues until convergence.
EM means "expectation maximization."
Reference: "Machine Learning" K-means clustering algorithm and EM algorithm, http://blog.csdn.net/zouxy09/article/details/8537620