An interpretation and comparison of the gradient descent method and Newton's method


1 Gradient descent method

We use the gradient descent method to find the point x at which the objective function f(x) attains its minimum. So how do we find that minimum point x? Note that x is not necessarily one-dimensional; it can be multidimensional, i.e., a vector. Let's start with the first-order Taylor expansion of f(x):

f(x + αd) = f(x) + α gᵀd + O(α²),  where g = ∇f(x) is the gradient of f at x.

Here α is the learning rate, a scalar that controls the magnitude of the change in x; d is a unit step, a vector of length 1 that gives the direction of the change in x. What does this mean? We do not jump straight to the minimizing x in one shot; instead we iterate, approaching the minimizer a little at a time. In each iteration the new point x' is x + αd (note that αd is a vector, so x + αd is vector addition). Taking two dimensions as an example, the candidate points x' lie on a circle of radius α around the current x.

So what do we want from each iteration's x'? We want the new x' to make f as small as possible: among all candidate points x' on that circle, we look for the one that minimizes f(x'), and we keep iterating from there. (If the change in x per iteration feels too small or too large, adjust the learning rate α.) In this way we find an x that approaches the minimizer of f(x). Let's see how to find the new x' in each iteration.

We want to minimize f(x + αd). We treat the O(α²) term (the second-order and higher terms of the expansion) as negligible and drop it. (This decision is exactly what separates the gradient descent method from Newton's method, for reasons given below.) The problem then becomes minimizing

f(x) + α gᵀd

over the unit vector d, and since f(x) is fixed, that means making gᵀd as small as possible.
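As a quick numerical sanity check that the dropped term really is second order, the sketch below compares f(x + αd) with the first-order model f(x) + α gᵀd as α shrinks; the gap shrinks like α². The quadratic objective here is an illustrative assumption, not part of the text above.

```python
import numpy as np

# Illustrative objective (an assumption for this sketch): f(x) = x1^2 + 4*x2^2.
f = lambda x: x[0]**2 + 4 * x[1]**2
grad = lambda x: np.array([2 * x[0], 8 * x[1]])

x = np.array([3.0, -2.0])
g = grad(x)
d = -g / np.linalg.norm(g)            # a unit direction (here: steepest descent)

for alpha in (1e-1, 1e-2, 1e-3):
    exact = f(x + alpha * d)          # true value after the step
    model = f(x) + alpha * (g @ d)    # first-order Taylor model
    print(alpha, abs(exact - model))  # error shrinks roughly like alpha^2
```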

Here α is a fixed coefficient, so what we actually minimize is gᵀd, the inner product of the two vectors, which equals |g|·|d|·cosθ, where θ is the angle between g and d.

Since |g| and |d| are fixed (|d| = 1), this is smallest when cosθ = −1, i.e., when the angle between the gradient vector g and the vector d is 180°, giving the minimum value −|g|·|d|. In other words, d points exactly opposite the gradient: d = −g/|g|.

So αd = −αg (the constant factor 1/|g| is absorbed into the learning rate), and therefore x' = x − αg.

Then repeat the step above and keep iterating until the process converges.
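Putting the update rule x' = x − αg into code, here is a minimal sketch of the full loop. The objective, learning rate, tolerance, and iteration cap are all illustrative choices, not prescribed by the text above.

```python
import numpy as np

def gradient_descent(grad, x0, alpha=0.1, tol=1e-8, max_iter=10_000):
    """Minimize f by repeating x' = x - alpha * g until the gradient vanishes."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)                   # gradient of f at the current point
        if np.linalg.norm(g) < tol:   # converged: gradient is (almost) zero
            break
        x = x - alpha * g             # step against the gradient
    return x

# Same illustrative objective as above: f(x) = x1^2 + 4*x2^2, minimized at (0, 0).
grad_f = lambda x: np.array([2 * x[0], 8 * x[1]])
print(gradient_descent(grad_f, x0=[3.0, -2.0]))  # approaches [0, 0]
```

If α is too large the iterates can overshoot and diverge; if it is too small, convergence is slow. That trade-off is exactly what the learning rate controls.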

2 Newton's method

The steepest descent method above uses only the gradient, i.e., the first-derivative information of the objective function; Newton's method also uses second-derivative information. Let's start with the second-order Taylor expansion of f(x) around the current point xₖ:

f(x) ≈ f(xₖ) + gᵀ(x − xₖ) + ½(x − xₖ)ᵀG(x − xₖ).

Each component of g is the first-order partial derivative of f(x) with respect to one dimension of x (so g is a vector, the gradient); each entry of G is a second-order partial derivative of f(x) with respect to a pair of dimensions of x (so G is a matrix, called the Hessian matrix). This is just the expansion above in a slightly different form; the content is the same. Here xₖ denotes the current point and x denotes the new point (the x' of the previous section). Let d = x − xₖ, the change in x in this iteration. (There is no step size α here, and d is not necessarily of unit length: the step is not set by hand but comes out of the calculation.) Once we find d, we know the new x.

In each iteration we still want to find the x that minimizes f(x), just as before.

This time we find it by setting the derivative of f(x) to zero; the resulting x is the minimizer we want in each iteration. Differentiating the quadratic model with respect to d and setting it to zero gives

g + Gd = 0,  so  d = −G⁻¹g.

Since x = xₖ + d, the new point is x = xₖ − G⁻¹g. Then continue iterating until the process converges.
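Here is a minimal sketch of Newton's method, parallel to the gradient descent sketch above. The test function and its Hessian are illustrative assumptions; the linear system Gd = −g is solved directly rather than forming G⁻¹ explicitly.

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-10, max_iter=100):
    """Minimize f by repeating x = xk - G^{-1} g until the gradient vanishes."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:        # converged: gradient is (almost) zero
            break
        d = np.linalg.solve(hess(x), -g)   # solve G d = -g instead of inverting G
        x = x + d                          # full Newton step; no learning rate
    return x

# Same illustrative objective: f(x) = x1^2 + 4*x2^2.
grad_f = lambda x: np.array([2 * x[0], 8 * x[1]])
hess_f = lambda x: np.array([[2.0, 0.0], [0.0, 8.0]])
print(newton(grad_f, hess_f, x0=[3.0, -2.0]))  # lands on [0, 0] in one step
```

On a quadratic objective Newton's method converges in a single step, which illustrates why using second-derivative information buys much faster convergence near the minimum; the price is computing the Hessian and solving a linear system at every iteration, which gradient descent avoids.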
