When training a network, its initial weights are usually drawn from some distribution, such as a Gaussian distribution. This initialization has a considerable impact on the network's final performance: appropriate initial weights allow the loss function to converge faster during training and lead to better optimization results. But weights randomly initialized from a distribution carry some uncertainty, and there is no guarantee that every initialization leaves the network in a suitable starting state. Poor initial weights may cause the loss function to fall into a local minimum during training and never reach the global optimum. How to reduce this uncertainty is therefore a problem that must be addressed when training deep networks.
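As a minimal sketch of the Gaussian initialization mentioned above (assuming NumPy; the function name, layer sizes, and standard deviation are illustrative choices, not taken from the text):

import numpy as np

# Draw a weight matrix from a zero-mean Gaussian distribution.
# Real networks often scale the standard deviation by the layer size
# (e.g. Xavier/He initialization); std=0.01 here is only illustrative.
def init_weights(fan_in, fan_out, std=0.01):
    return np.random.normal(loc=0.0, scale=std, size=(fan_in, fan_out))

W1 = init_weights(784, 256)  # e.g. the first layer of a small classifier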
Momentum can solve this problem to some extent. Momentum is inspired by the conversion between kinetic and potential energy in physics: the larger the momentum, the more energy can be converted into potential energy, and the more likely the optimization is to escape a local depression and move into the global basin. Momentum is mainly applied when the weights are updated.
In general, a neural network updates its weights with the following formula:
w = w - learning_rate * dw
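As a small sketch of this plain update (assuming w and dw are NumPy arrays and that the function name and learning rate are illustrative):

# Plain gradient-descent step: move w against the gradient dw,
# scaled by the learning rate.
def sgd_update(w, dw, learning_rate=0.01):
    return w - learning_rate * dw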
After introducing momentum, the following formula is used:
v = mu * v - learning_rate * dw
w = w + v
where v is initialized to 0 and mu is a hyperparameter, most commonly set to 0.9. This can be understood as follows: if the previous momentum (i.e., v) points in the same direction as the current negative gradient, the magnitude of this update increases, thereby accelerating convergence.
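A minimal runnable sketch of this momentum update (assuming NumPy; the toy quadratic loss and the function name are illustrative, not from the original text):

import numpy as np

def momentum_update(w, dw, v, learning_rate=0.01, mu=0.9):
    # v accumulates a decaying sum of past gradients (mu = 0.9 is the
    # common default mentioned above); w then moves along v.
    v = mu * v - learning_rate * dw
    w = w + v
    return w, v

# Toy usage: minimize the quadratic loss 0.5 * ||w||^2, whose gradient is w.
# v starts at zero and is carried across iterations.
w = np.random.normal(0.0, 1.0, size=(10,))
v = np.zeros_like(w)
for step in range(100):
    dw = w  # gradient of 0.5 * ||w||^2 with respect to w
    w, v = momentum_update(w, dw, v)

Because v keeps part of the previous step, consecutive gradients pointing in the same direction build up speed, which is exactly the accelerated convergence described above.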