This article is reposted from http://blog.csdn.net/zdy0_2004/article/details/45798223, which is itself a repost of the original at http://www.cnblogs.com/xiaokangzi/p/4492466.html.
In this repost I have added some errata of my own and filled in code that was missing from the original. One thing I still have not understood is the function Binomial(1, mean[i]) used in the source code below: I have not worked out its concrete implementation. I have gone through the theory of Gibbs sampling, but I still cannot see how this function is realized. If anyone understands it, please be generous with an explanation; my thanks in advance.
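For what it is worth, one common reading of Binomial(1, mean[i]) is a per-unit Bernoulli draw: mean[i] is the activation probability P(h_i = 1 | v) computed in the Gibbs step, and the call samples a binary 0/1 state with that probability. Below is a minimal numpy sketch of that reading; the weights, biases and sigmoid used here are illustrative stand-ins, not taken from the source code in this post.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative parameters of a tiny RBM (not from the original post):
# W: weights, c: hidden biases, v: one visible binary vector.
W = rng.normal(size=(3, 4))   # 3 visible units, 4 hidden units
c = np.zeros(4)
v = np.array([1, 0, 1])

# mean[i] = P(h_i = 1 | v): the activation probability of hidden unit i.
mean = sigmoid(v @ W + c)

# "Binomial(1, mean[i])" read as: one Bernoulli trial per unit with
# success probability mean[i], i.e. a binary sample of the hidden layer.
h_sample = rng.binomial(n=1, p=mean)

print("activation probabilities:", mean)
print("sampled hidden states:   ", h_sample)
```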
Energy-Based Models (EBM)

Energy-based models associate a scalar energy with each configuration of the variables of interest. Learning consists of modifying the energy function so that its shape has desirable properties; for example, we would like plausible configurations to have low energy. An energy-based probabilistic model defines a probability distribution through its energy function, as follows:

p(x) = \frac{e^{-E(x)}}{Z} \qquad (1)

The normalizing coefficient Z is called the partition function, by analogy with physical systems:

Z = \sum_x e^{-E(x)}

An energy-based model can be learned by performing (stochastic) gradient descent on the empirical negative log-likelihood of the training data. As with logistic regression, we first define the log-likelihood and then the loss function as the negative log-likelihood:

\mathcal{L}(\theta, \mathcal{D}) = \frac{1}{N} \sum_{x^{(i)} \in \mathcal{D}} \log p(x^{(i)}), \qquad \ell(\theta, \mathcal{D}) = -\mathcal{L}(\theta, \mathcal{D})

and we update the weights using the stochastic gradient -\partial \log p(x^{(i)}) / \partial \theta, where \theta are the parameters of the model.
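To make these definitions concrete, here is a minimal sketch of my own: the quadratic energy E(x) = -x^T A x and the tiny training set are arbitrary choices, not from the original. It enumerates all configurations of a 3-bit vector x, computes p(x) = e^{-E(x)}/Z, and evaluates the negative log-likelihood.

```python
import itertools
import numpy as np

# Illustrative energy function E(x) = -x' A x over binary x (arbitrary choice).
A = np.array([[0.0, 1.0, -0.5],
              [0.0, 0.0,  2.0],
              [0.0, 0.0,  0.0]])

def energy(x):
    return -x @ A @ x

# Enumerate every binary configuration to get the partition function Z.
configs = [np.array(c) for c in itertools.product([0, 1], repeat=3)]
Z = sum(np.exp(-energy(x)) for x in configs)

def p(x):
    # p(x) = exp(-E(x)) / Z, i.e. equation (1).
    return np.exp(-energy(x)) / Z

# Negative log-likelihood of a toy "training set".
data = [np.array([1, 0, 1]), np.array([1, 1, 1])]
nll = -np.mean([np.log(p(x)) for x in data])
print("Z =", Z, " NLL =", nll)
```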
EBMs with Hidden Units
In many situations, we do not observe the example x fully, or we want to introduce some unobserved (hidden) variables to increase the expressive power of the model. So we consider an observed part (still denoted x) and a hidden part h. We can then write:
P(x) = \sum_h P(x, h) = \sum_h \frac{e^{-E(x, h)}}{Z} \qquad (2)
To keep this formula similar in form to (1), we introduce the notation of the free energy F(x), defined as follows:
F(x) = -\log \sum_h e^{-E(x, h)} \qquad (3)
which allows us to write:

P(x) = \frac{e^{-F(x)}}{Z}, \qquad Z = \sum_x e^{-F(x)}
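As a quick numerical check of (2), (3) and the rewriting above, the following sketch (again a toy model of my own, with a bilinear joint energy E(x, h) = -x^T W h) verifies that e^{-F(x)}/Z gives the same number as marginalizing P(x, h) over h directly.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 2))      # toy coupling between x (3 bits) and h (2 bits)

def energy(x, h):
    return -x @ W @ h            # illustrative joint energy E(x, h)

xs = [np.array(c) for c in itertools.product([0, 1], repeat=3)]
hs = [np.array(c) for c in itertools.product([0, 1], repeat=2)]

# Partition function over all (x, h) pairs.
Z = sum(np.exp(-energy(x, h)) for x in xs for h in hs)

def free_energy(x):
    # F(x) = -log sum_h exp(-E(x, h)), i.e. equation (3).
    return -np.log(sum(np.exp(-energy(x, h)) for h in hs))

x0 = np.array([1, 0, 1])
p_marginal = sum(np.exp(-energy(x0, h)) for h in hs) / Z   # equation (2)
p_via_F = np.exp(-free_energy(x0)) / Z                     # P(x) = e^{-F(x)} / Z
print(p_marginal, p_via_F)       # the two numbers agree
```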
The negative log-likelihood gradient of the data can then be written as:
-\frac{\partial \log p(x)}{\partial \theta} = \frac{\partial F(x)}{\partial \theta} - \sum_{\tilde{x}} p(\tilde{x}) \frac{\partial F(\tilde{x})}{\partial \theta} \qquad (4)
Note that the gradient above contains two terms, referred to as the positive phase and the negative phase. Positive and negative do not refer to the sign of each term in the equation, but to their effect on the probability density defined by the model. The first term increases the probability of the training data (by lowering the corresponding free energy), while the second term decreases the probability of samples generated by the model. Determining this gradient analytically is usually difficult, because the second term involves computing an expectation over all possible configurations \tilde{x} (under the distribution P defined by the model)!
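The sketch below is a toy example of my own, with a single parameter vector \theta and energy E(x) = -\theta^T x (so F(x) = E(x), since there are no hidden units). It computes both phases of (4) exactly by enumerating every configuration; it is exactly this enumeration over all \tilde{x} that becomes intractable for realistic models.

```python
import itertools
import numpy as np

theta = np.array([0.5, -1.0, 0.2])        # illustrative parameters

def free_energy(x):
    # With no hidden units, F(x) = E(x); here E(x) = -theta . x (toy choice).
    return -theta @ x

def dF_dtheta(x):
    # Gradient of this free energy w.r.t. theta: simply -x.
    return -x

configs = [np.array(c) for c in itertools.product([0, 1], repeat=3)]
Z = sum(np.exp(-free_energy(x)) for x in configs)
p = {tuple(x): np.exp(-free_energy(x)) / Z for x in configs}

data = [np.array([1, 1, 0]), np.array([1, 0, 0])]

# Positive phase: lowers the free energy of the training points.
positive = np.mean([dF_dtheta(x) for x in data], axis=0)
# Negative phase: expectation of the same gradient under the model distribution.
negative = sum(p[tuple(x)] * dF_dtheta(x) for x in configs)

grad_nll = positive - negative            # equation (4), computed exactly
print(grad_nll)
```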
The first step towards making this computation tractable is to estimate the expectation using a fixed number of samples drawn from the model. The samples used to estimate the negative-phase gradient are referred to as negative particles, denoted \mathcal{N}. The gradient can then be written as:
-\frac{\partial \log p(x)}{\partial \theta} \approx \frac{\partial F(x)}{\partial \theta} - \frac{1}{|\mathcal{N}|} \sum_{\tilde{x} \in \mathcal{N}} \frac{\partial F(\tilde{x})}{\partial \theta} \qquad (5)
where we would ideally like the elements \tilde{x} of \mathcal{N} to be sampled according to P (i.e. we would be doing Monte Carlo sampling). With the formula above, we almost have a practical stochastic algorithm for learning an EBM. The only missing ingredient is how to extract these negative particles \mathcal{N}. The statistical literature offers many sampling methods, and Markov chain Monte Carlo (MCMC) methods are particularly well suited to models such as the restricted Boltzmann machine (RBM), a specific type of EBM.
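Continuing the same toy model (redefined below so the snippet runs on its own), equation (5) replaces the exact negative phase with an average over a few sampled negative particles. Here they are drawn directly from the enumerated model distribution, which is only possible because the model is tiny; in a real RBM this is where Gibbs sampling / MCMC would come in.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
theta = np.array([0.5, -1.0, 0.2])        # same toy energy E(x) = -theta . x

def free_energy(x):
    return -theta @ x

def dF_dtheta(x):
    return -x

configs = [np.array(c) for c in itertools.product([0, 1], repeat=3)]
probs = np.array([np.exp(-free_energy(x)) for x in configs])
probs /= probs.sum()

data = [np.array([1, 1, 0]), np.array([1, 0, 0])]

# Draw a small set N of negative particles from the model distribution.
idx = rng.choice(len(configs), size=10, p=probs)
particles = [configs[i] for i in idx]

positive = np.mean([dF_dtheta(x) for x in data], axis=0)
negative = np.mean([dF_dtheta(x) for x in particles], axis=0)

grad_estimate = positive - negative       # equation (5): stochastic estimate of (4)
print(grad_estimate)
```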