A summary comparison of the most common optimization methods for deep learning (SGD, Adagrad, Adadelta, Adam, Adamax, Nadam)


Original link: http://blog.csdn.net/u012759136/article/details/52302426

This article gives only an intuitive introduction to, and a simple comparison of, some common optimization methods; for the details and derivations of each method you still have to work through the papers, which I will not repeat here.

1. SGD

Here SGD refers to mini-batch gradient descent; the specific differences between batch gradient descent, stochastic gradient descent, and mini-batch gradient descent will not be elaborated. SGD now generally means mini-batch gradient descent.

SGD is the most common optimization method: at each iteration it computes the gradient on a mini-batch and then updates the parameters. That is,

g_t = \nabla_{\theta_{t-1}} f(\theta_{t-1})
\Delta\theta_t = -\eta * g_t

where η is the learning rate and g_t is the gradient.

SGD depends entirely on the gradient of the current batch, so η can be understood as how much the current batch's gradient is allowed to affect the parameter update.
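As a minimal sketch of this update rule in NumPy (the quadratic toy objective and the helper name sgd_step are my own illustration, not from the original article):

```python
import numpy as np

def sgd_step(theta, grad, lr=0.01):
    """One SGD step: theta_t = theta_{t-1} - lr * g_t."""
    return theta - lr * grad

# Toy usage on f(theta) = ||theta||^2, whose gradient is 2 * theta.
theta = np.array([1.0, -2.0])
for _ in range(100):
    grad = 2 * theta          # stands in for the mini-batch gradient g_t
    theta = sgd_step(theta, grad, lr=0.1)
print(theta)                   # approaches [0, 0]
```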

Cons (it is precisely because of these shortcomings that so many later algorithms have been developed):

1. Choosing the right learning rate is difficult
2. The same learning rate is used for all parameter updates. For sparse data or features, we may sometimes want infrequently occurring features to be updated faster and frequently occurring features to be updated more slowly, which SGD cannot readily accommodate.
3. SGD easily converges to a local optimum and in some cases may get trapped at a saddle point (although with appropriate initialization and learning-rate settings, the impact of saddle points is not that large).

2. Momentum

Momentum simulates the concept of momentum in physics, accumulating the previous momentum to replace the raw gradient. The formula is as follows:

m_t = \mu * m_{t-1} + g_t
\Delta\theta_t = -\eta * m_t

where μ is the momentum factor.
Characteristics:

1. At the beginning of the descent, the update direction is consistent with the previous parameter update, so multiplying by a larger μ gives a good acceleration.
2. In the middle and late stages of the descent, when the parameters oscillate back and forth around a local minimum and the gradient approaches 0, μ keeps the update magnitude large enough to jump out of the trap.
3. When the gradient changes direction, μ can reduce the size of the update.

In summary, momentum can accelerate SGD in the relevant direction and suppress oscillations, thereby speeding up convergence. A minimal sketch is shown below.
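The sketch uses the same toy quadratic objective as before; the helper name momentum_step is my own illustration:

```python
import numpy as np

def momentum_step(theta, m, grad, lr=0.05, mu=0.9):
    """m_t = mu * m_{t-1} + g_t;  theta_t = theta_{t-1} - lr * m_t."""
    m = mu * m + grad
    theta = theta - lr * m
    return theta, m

# Toy usage on f(theta) = ||theta||^2 (gradient 2 * theta).
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
for _ in range(100):
    grad = 2 * theta
    theta, m = momentum_step(theta, m, grad, lr=0.05, mu=0.9)
print(theta)  # approaches [0, 0]
```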

3. Nesterov

The Nesterov term applies a correction to the gradient update, avoiding moving forward too fast while improving responsiveness.
Expanding the formula from the previous section gives:
\Delta\theta_t = -\eta * \mu * m_{t-1} - \eta * g_t

As can be seen, m_{t-1} does not directly change the current gradient g_t, so Nesterov's improvement is to let the previous momentum directly affect the current gradient. That is,
g_t = \nabla_{\theta_{t-1}} f(\theta_{t-1} - \eta * \mu * m_{t-1})
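A minimal sketch under the same toy setup, where the only change from plain momentum is that the gradient is evaluated at the look-ahead point θ_{t-1} − η*μ*m_{t-1} (the helper names are my own illustration):

```python
import numpy as np

def grad_f(theta):
    """Gradient of the toy objective f(theta) = ||theta||^2."""
    return 2 * theta

def nesterov_step(theta, m, lr=0.05, mu=0.9):
    """Nesterov momentum: gradient at the look-ahead point, then the usual momentum update."""
    g = grad_f(theta - lr * mu * m)   # g_t from the formula above
    m = mu * m + g                    # m_t = mu * m_{t-1} + g_t
    theta = theta - lr * m            # theta_t = theta_{t-1} - lr * m_t
    return theta, m

theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
for _ in range(100):
    theta, m = nesterov_step(theta, m)
print(theta)  # approaches [0, 0]
```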
