Machine Learning Training Algorithms (Optimization Methods): A Summary of Gradient Descent and Its Improved Variants

Source: Internet
Author: User

Introduction

Today I will cover two topics. First, I suggest everyone read the blogs of experts more often — you can learn a great deal from them. For example:

1. Liao Xuefeng's site, which focuses on programming languages and their applications:

https://www.liaoxuefeng.com/

2. Morvan's site, which introduces advanced algorithms and open-source libraries:

https://morvanzhou.github.io/

Second, I want to deepen the understanding of machine learning algorithms.

My personal understanding: classical machine learning algorithms — SVM, logistic regression, decision trees, naive Bayes, neural networks, AdaBoost, and so on — differ most essentially in their classification ideas, that is, in how the prediction of y is expressed. Some are based on probabilistic models, others on dynamic programming. On the surface, the difference shows up in the loss function: some use the hinge loss, some the cross-entropy loss, some the squared loss, some the exponential loss. These loss functions measure empirical risk; for structural risk, a regularization term must be added (L0, L1 (Lasso), or L2 (Ridge)). So-called training is really the process of optimizing the loss function, and different optimization methods can be used here. These methods are not part of the machine learning algorithm itself; they belong to convex optimization or heuristic optimization. Different optimization (training, learning) algorithms behave differently, most notably:

1. Different optimization algorithms suit different scenarios: large-scale data, deeper and more complex networks, sparse data, or the need for a high convergence rate.

2. Different optimization algorithms address specific problems: a fixed learning rate, slow convergence near an extremum, or large fluctuations during convergence.

3. Heuristic optimization algorithms can be used to search for the global optimum and avoid getting stuck at local optima and saddle points, but they converge too slowly.
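The loss functions and regularizers named above can be written out concretely. This is a minimal sketch with NumPy; the function names and the per-example form are my own choices, not taken from any particular library.

```python
import numpy as np

# Each loss is evaluated for a single example: true label y, raw score f.
def hinge_loss(y, f):           # SVM; y in {-1, +1}
    return max(0.0, 1.0 - y * f)

def cross_entropy_loss(y, p):   # logistic regression; y in {0, 1}, p = predicted probability
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def squared_loss(y, f):         # linear regression
    return 0.5 * (y - f) ** 2

def exponential_loss(y, f):     # AdaBoost; y in {-1, +1}
    return np.exp(-y * f)

# Structural risk = empirical risk + a regularization penalty on the weights w.
def l1_penalty(w, lam):         # Lasso: sum of absolute values
    return lam * np.sum(np.abs(w))

def l2_penalty(w, lam):         # Ridge: sum of squares
    return lam * np.sum(w ** 2)
```

Averaging one of the losses over the training set gives the empirical risk; adding an L1 or L2 penalty gives the structural risk that training actually minimizes.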

Today I summarize the gradient descent method and its improved algorithms.
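As a baseline for everything that follows, here is plain gradient descent on a toy convex function, f(x) = (x - 3)². This is a sketch of mine, not taken from the referenced articles; the learning rate is a fixed hyperparameter.

```python
# Plain gradient descent: repeatedly step against the gradient,
# the steepest-descent direction, with a fixed learning rate.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# f(x) = (x - 3)^2 has gradient 2 * (x - 3) and minimum at x = 3.
grad = lambda x: 2.0 * (x - 3.0)
x_min = gradient_descent(grad, x0=0.0)
```

With this fixed learning rate the iterate contracts toward 3 geometrically; too large a rate would diverge, too small a rate converges slowly — exactly the problems the improved algorithms below try to fix.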

The first part covers an earlier doubt of mine: in deriving the gradient descent method, I arrived at a derivation similar to Newton's method, whose feasibility remains to be verified.

The essence is this: the gradient descent method only specifies the direction of descent — the steepest direction — without saying how far to move at each step. Newton's method (like my derivation) gives a specific step size, but in Newton's method the step is a variable depending on the current function value, whereas in my derivation it is a fixed value. See the second reference article.
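The contrast can be made concrete. Gradient descent fixes only the direction; Newton's method divides the gradient by the second derivative, so the step size adapts to the local curvature. A sketch (my own, assuming the same toy quadratic as above, where Newton lands on the minimum in a single step):

```python
# Newton's method for a 1-D function: x_new = x - f'(x) / f''(x).
# The step size is not a fixed hyperparameter; it comes from the curvature.
def newton_step(x, grad, hess):
    return x - grad(x) / hess(x)

# f(x) = (x - 3)^2: gradient 2 * (x - 3), constant second derivative 2.
grad = lambda x: 2.0 * (x - 3.0)
hess = lambda x: 2.0
x1 = newton_step(0.0, grad, hess)  # for a quadratic, one step reaches the minimum
```

For a quadratic the Newton step is exact; for general functions it is only a local approximation, and computing (and inverting) the Hessian is what makes Newton's method expensive in high dimensions.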


The second and third parts introduce the gradient descent method and its improved algorithms. Here I only discuss which scenarios each one suits; for the specific derivations I recommend reading the papers or books rather than blog posts.
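To give a flavor of the improved algorithms, here are single-update sketches of two common ones: momentum, which damps oscillation and accelerates convergence, and Adam, which adapts the learning rate per parameter. These are my own condensed versions of the standard update rules; variable names are mine.

```python
import numpy as np

def momentum_update(w, v, grad, lr=0.01, beta=0.9):
    """Momentum: accumulate a velocity v so consistent gradient directions
    build up speed while oscillating components cancel out."""
    v = beta * v + grad
    return w - lr * v, v

def adam_update(w, m, v, grad, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: keep running estimates of the gradient's first moment (m) and
    second moment (v), correct their startup bias, and scale the step by them."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)       # bias correction (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

On the first step Adam's effective step size is roughly the base learning rate regardless of the gradient's magnitude, which is what makes it robust on sparse or badly scaled data.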



The fourth part compares batch gradient descent with stochastic gradient descent, taking linear regression as an example: the difference lies in the cost function.
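The difference can be shown directly: batch gradient descent (BGD) computes each step from the cost over all m examples, while stochastic gradient descent (SGD) updates from one example at a time. A sketch of mine on noiseless toy data (names and hyperparameters are my own choices):

```python
import numpy as np

# Toy linear regression data: y = X @ true_w exactly, no noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w

def bgd(X, y, lr=0.1, epochs=200):
    """Batch GD: one update per epoch, using the full-data gradient of MSE/2."""
    w = np.zeros(X.shape[1])
    m = len(y)
    for _ in range(epochs):
        w -= lr * (X.T @ (X @ w - y)) / m
    return w

def sgd(X, y, lr=0.05, epochs=200):
    """Stochastic GD: one update per example, using a single example's gradient."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            w -= lr * (X[i] @ w - y[i]) * X[i]
    return w
```

On noiseless data both recover the true weights; with noisy data SGD would keep fluctuating around the optimum, which is the convergence-fluctuation problem mentioned earlier, while BGD moves smoothly but pays the full-dataset cost on every step.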



Part five gives an intuitive feel for the optimization process of the different algorithms.
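The usual way to build this intuition is with animated trajectories; as a numeric stand-in, one can record the trajectories of plain gradient descent and momentum on an ill-conditioned quadratic and compare how close each gets in the same number of steps. This is my own sketch, not the visualization from the referenced articles.

```python
import numpy as np

# Minimize f(w) = 0.5 * (w0^2 + 25 * w1^2), a narrow-valley quadratic
# where plain GD crawls along the shallow direction.
def run(update, w0, steps=100):
    w, state = np.array(w0, dtype=float), np.zeros(2)
    traj = [w.copy()]
    for _ in range(steps):
        g = np.array([w[0], 25.0 * w[1]])   # gradient of f
        w, state = update(w, state, g)
        traj.append(w.copy())
    return traj

def gd(w, state, g, lr=0.03):
    return w - lr * g, state                # state unused: plain GD is memoryless

def momentum(w, v, g, lr=0.03, beta=0.9):
    v = beta * v + g                        # velocity accumulates along the valley
    return w - lr * v, v

traj_gd = run(gd, [5.0, 1.0])
traj_mom = run(momentum, [5.0, 1.0])
```

Plotting both trajectories (e.g. with matplotlib) shows momentum overshooting and curving but reaching the minimum far sooner, while plain GD zig-zags across the steep direction and inches along the shallow one.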





Reference articles:

1. Deep Learning Optimization Algorithm Summary

2. Some of the Most Common Optimization Methods in Machine Learning

3. Summary of Gradient Descent Optimization Algorithms
