Understanding the article "Increased rates of convergence through learning rate adaptation"


Original address: http://www.researchgate.net/profile/Robert_Jacobs9/publication/223108796_Increased_rates_of_convergence_Through_learning_rate_adaptation/links/0deec525d8f8dd5ade000000.pdf

I have looked at CNNs, RBMs, SAEs, and other networks and algorithms; every one of them needs a learning rate during training. I had always assumed this quantity could simply be set to a fixed value, but I have now found that it can also be changed and learned.

The article deals with the learning rate of the earliest neural networks, but I find it enlightening. The paper views the error function as a multivariable function in which each parameter corresponds to one variable. This function changes at a different speed along each parameter direction w_i, and if the contours of the error function are not circles, the negative gradient direction does not point at the minimum (the paper illustrates this with an ellipse and its tangent). Different learning rates should therefore be used for different parameters.
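In symbols (my own notation, not copied from the paper), the update this motivates gives every weight its own learning rate instead of one shared constant:

$$w_i(t+1) = w_i(t) - \varepsilon_i(t)\,\frac{\partial E}{\partial w_i}(t)$$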

The author then proposes a heuristic: in a neural network, if the sign of a parameter's derivative stays the same from step to step, the parameter has been moving steadily in one direction, so its learning rate should be increased to reach the minimum faster; if the sign of the derivative changes frequently, the parameter has crossed the minimum and is oscillating around it, so its learning rate should be decreased to let it settle.
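A minimal Python sketch of this sign heuristic; the function name and the growth/shrink factors are my own choices, not from the paper:

```python
import numpy as np

def sign_heuristic_lr(lr, grad, prev_grad, up=1.1, down=0.5):
    """Grow the learning rate where the gradient keeps its sign
    (steady progress); shrink it where the sign flips (oscillation)."""
    product = grad * prev_grad
    lr = np.where(product > 0, lr * up, lr)    # same sign: speed up
    lr = np.where(product < 0, lr * down, lr)  # sign flip: slow down
    return lr
```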

Then come the algorithms. The first is the momentum method, which lets earlier derivatives influence subsequent parameter updates: a parameter that keeps moving in the same direction takes larger and larger steps, while one that keeps reversing direction has its step damped.
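A minimal Python sketch of classic momentum (the hyperparameter values are illustrative, not taken from the paper):

```python
import numpy as np

def momentum_step(w, velocity, grad, lr=0.01, alpha=0.9):
    """Classic momentum: past gradients accumulate in the velocity,
    so consistent directions speed up while oscillations cancel."""
    velocity = alpha * velocity - lr * grad  # blend old direction with new gradient
    w = w + velocity                         # move along the accumulated direction
    return w, velocity
```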

The second is the delta-delta learning rule, which updates each learning rate according to

$$\varepsilon_i(t+1) = \varepsilon_i(t) + \gamma\,\delta_i(t)\,\delta_i(t-1), \qquad \delta_i(t) = \frac{\partial E}{\partial w_i}(t).$$

The second term is what you get by differentiating the error with respect to the learning rate; in other words, the learning rate itself is updated by gradient descent, just as SGD updates the weights. But this has an obvious flaw: the update is the product of two derivatives, which will typically be very small, so this method does not work well, and there is an improved version.
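A minimal Python sketch of the delta-delta update (the value of gamma is illustrative):

```python
import numpy as np

def delta_delta_step(w, lr, grad, prev_grad, gamma=0.001):
    """Delta-delta rule: each weight's learning rate is itself trained by
    gradient descent, which reduces to gamma * grad(t) * grad(t-1)."""
    lr = lr + gamma * grad * prev_grad  # lr update = product of successive gradients
    w = w - lr * grad                   # usual per-parameter gradient step
    return w, lr
```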

The improved rule, delta-bar-delta, combines the two heuristics above: when the current derivative agrees in sign with an exponential average of past derivatives, the learning rate is increased by a small constant; when they disagree, it is decreased by a fraction of its current value. The multiplicative decrease prevents the learning rate from dropping below 0, and the additive (linear) increase keeps it from growing too fast.
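A minimal Python sketch of the delta-bar-delta rule as I understand it from the paper (the values of kappa, phi, and theta are illustrative):

```python
import numpy as np

def delta_bar_delta_step(w, lr, bar, grad, kappa=0.01, phi=0.1, theta=0.7):
    """Delta-bar-delta (Jacobs, 1988): compare the current gradient with an
    exponential average of past gradients; grow the learning rate additively
    on agreement, shrink it multiplicatively on disagreement."""
    agree = bar * grad
    lr = np.where(agree > 0, lr + kappa, lr)        # linear increase: cannot blow up
    lr = np.where(agree < 0, lr * (1.0 - phi), lr)  # exponential decrease: stays > 0
    bar = (1.0 - theta) * grad + theta * bar        # update the smoothed gradient
    w = w - lr * grad                               # per-parameter gradient step
    return w, lr, bar
```

Here lr and bar have the same shape as w; a natural initialization is a small positive constant for lr and zeros for bar.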

I hope this blog post is helpful to others. Thank you.
