Levenberg-Marquardt Iteration (LM Algorithm) - An Improved Newton Method


1. Preface

A. Many engineering problems can be described as follows: estimate the parameters p from some measured values (observations) x, that is, x = f(p),

where x is the vector of measurements and p is the parameter vector to be computed; to keep the model general, p is also treated as a vector.

B. This is a function-solving problem. It can be solved with the Gauss-Newton method, and the LM algorithm is an improvement on that method.

C. If the function f is linear, this problem becomes the least-squares problem (see my other blog post on least squares).

D. The LM method and the Newton method explained in this post are mainly used when the function f is nonlinear.

2. Solving x = f(p) with the Newton method

At the k-th iteration the parameter estimate p_k has been obtained, and the residual is:

ε_k = x − f(p_k)

Expand f(p) around p_k with the first-order Taylor formula, where J is the Jacobian matrix; because the parameter p is a vector, differentiating with respect to p means taking the partial derivatives with respect to each element of p:

f(p_k + Δ) ≈ f(p_k) + JΔ,  with J_ij = ∂f_i/∂p_j

Calculate the residual at iteration k+1:

ε_{k+1} = x − f(p_{k+1}) ≈ x − f(p_k) − JΔ = ε_k − JΔ

So the step from iteration k to iteration k+1 amounts to finding the increment Δ that minimizes ||ε_k − JΔ||.

It can be seen that the nonlinear problem has been transformed into a linear one, and its least-squares solution is given by the normal equations:

J^T J Δ = J^T ε_k,  i.e.  Δ = (J^T J)^{-1} J^T ε_k

The parameter estimate at iteration k+1 is then:

p_{k+1} = p_k + Δ
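To make the step above concrete, here is a minimal sketch of the Gauss-Newton iteration described in this section, assuming NumPy and user-supplied callables for f and its Jacobian (the function name gauss_newton and the exponential fitting example at the end are illustrative, not from the original post):

    import numpy as np

    def gauss_newton(f, jac, x, p0, n_iter=50, tol=1e-10):
        """Minimal Gauss-Newton iteration for x = f(p).

        f   : callable, f(p) -> model prediction (same shape as x)
        jac : callable, jac(p) -> Jacobian of f at p, shape (len(x), len(p))
        x   : measured values (observations)
        p0  : initial parameter guess
        """
        p = np.asarray(p0, dtype=float)
        for _ in range(n_iter):
            eps = x - f(p)                              # residual eps_k = x - f(p_k)
            J = jac(p)
            # normal equations: J^T J delta = J^T eps_k
            delta = np.linalg.solve(J.T @ J, J.T @ eps)
            p = p + delta                               # p_{k+1} = p_k + delta
            if np.linalg.norm(delta) < tol:
                break
        return p

    # Illustrative use: fit x ~ a * exp(b * t) to synthetic data
    t = np.linspace(0.0, 1.0, 20)
    x_obs = 2.0 * np.exp(1.5 * t)
    f = lambda p: p[0] * np.exp(p[1] * t)
    jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
    print(gauss_newton(f, jac, x_obs, p0=[1.0, 1.0]))   # should approach [2.0, 1.5]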

3. Weighted Newton Iteration

In the Newton method above, all components of the measurement vector are weighted equally. Alternatively, the measurements can be weighted with a weight matrix.

For example, when the measurement vector x follows a Gaussian distribution with covariance matrix Σ, we want to minimize the Mahalanobis distance ||x − f(p)||_Σ rather than the plain two-norm.

When this covariance matrix is diagonal, the coordinates of x are independent of each other.

When the covariance matrix Σ is a symmetric positive definite matrix, the normal equations become:

J^T Σ^{-1} J Δ = J^T Σ^{-1} ε_k

Note: the Mahalanobis distance ||x − y||_Σ is defined as sqrt((x − y)^T Σ^{-1} (x − y)).
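For reference, a one-step sketch of the weighted update under the assumption that the covariance matrix Σ is known (the function name weighted_gn_step is illustrative):

    import numpy as np

    def weighted_gn_step(f, jac, x, p, Sigma):
        """One weighted Gauss-Newton step: J^T Sigma^-1 J delta = J^T Sigma^-1 eps."""
        eps = x - f(p)
        J = jac(p)
        W = np.linalg.inv(Sigma)        # weight matrix = inverse covariance
        delta = np.linalg.solve(J.T @ W @ J, J.T @ W @ eps)
        return p + delta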

4. Levenberg-Marquardt Iteration (LM algorithm)

The LM algorithm is an improvement to the Newton iteration.

The normal equations above can be written compactly as:

N Δ = J^T ε_k,  where N = J^T J

The LM algorithm changes this to N' Δ = J^T ε_k, where each diagonal element of N is multiplied by a factor (1 + λ) and the off-diagonal elements are unchanged:

N'_ii = (1 + λ) N_ii

The policy for setting λ is: at initialization, λ is given a small value, typically 10^{-3}.

If the increment obtained by solving the augmented normal equations reduces the error, the increment is accepted and λ is divided by 10 before the next iteration.

Conversely, if the increment leads to an increase in the error, λ is multiplied by 10 and the augmented normal equations are solved again; this process continues until an increment that reduces the error is found.

In other words, within one iteration the augmented normal equations are solved for different values of λ until an acceptable increment is obtained.

Intuitive interpretation of the LM algorithm: when λ is very small, the method is essentially the same as the Newton iteration.

When λ is very large (essentially greater than 1), the off-diagonal elements become unimportant relative to the diagonal elements, and the algorithm behaves like gradient descent.

The LM algorithm therefore moves seamlessly between the Newton iteration and descent methods: near the solution the Newton iteration makes the algorithm converge quickly, while the descent behaviour guarantees that the cost function decreases when progress is difficult.
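Below is a minimal sketch of the λ update loop described above, using the same f/jac conventions as the Gauss-Newton sketch in section 2 (the initial value 1e-3 and the safeguard against very large λ are assumptions for illustration, not from the original post):

    import numpy as np

    def levenberg_marquardt(f, jac, x, p0, n_iter=100, lam0=1e-3, tol=1e-10):
        """Minimal Levenberg-Marquardt iteration for x = f(p)."""
        p = np.asarray(p0, dtype=float)
        lam = lam0
        err = np.sum((x - f(p)) ** 2)                   # current squared error
        for _ in range(n_iter):
            eps = x - f(p)
            J = jac(p)
            N = J.T @ J
            g = J.T @ eps
            while True:
                # augmented normal equations: diagonal of N multiplied by (1 + lambda)
                N_aug = N + lam * np.diag(np.diag(N))
                delta = np.linalg.solve(N_aug, g)
                new_err = np.sum((x - f(p + delta)) ** 2)
                if new_err < err:                       # increment accepted
                    p = p + delta
                    err = new_err
                    lam /= 10.0                         # move toward the Newton iteration
                    break
                lam *= 10.0                             # rejected: move toward gradient descent
                if lam > 1e12:                          # safeguard: stop if lambda explodes
                    return p
            if np.linalg.norm(delta) < tol:
                break
        return p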

5. Converting between the two scenarios where the Newton method (LM method) applies

A. A previous blog post (Newton method) described two scenarios where the Newton method applies: 1) solving an equation; 2) finding the optimal solution of an objective function.

The f(x) of that post corresponds to x − f(p) in this one: there the unknown to be found was x, while here x is known and p is the unknown; it is merely a different way of writing the same thing.

B. These two scenarios can sometimes be converted into one another:

For example, the equation-solving problem f(x) = 0 can also be viewed as solving min ||f(x)||, where ||·|| denotes the two-norm, i.e. min ||f(x)||² = min Σ_i f_i(x)².

Conversely, for the objective-function optimization problem min ||f(x)||, when the theoretical optimal value is 0, the problem can also be converted into solving f(x) = 0.
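To illustrate the conversion, here is a hypothetical usage example that solves a small nonlinear system g(q) = 0 by feeding zero "measurements" into the levenberg_marquardt sketch above (the example system is made up for illustration):

    # Solve g(q) = 0 by minimizing ||0 - g(q)||^2 with the LM sketch above.
    # Made-up system: g1 = q1^2 + q2^2 - 4, g2 = q1 - q2
    g = lambda q: np.array([q[0] ** 2 + q[1] ** 2 - 4.0, q[0] - q[1]])
    g_jac = lambda q: np.array([[2.0 * q[0], 2.0 * q[1]], [1.0, -1.0]])

    # the "measurements" are all zeros, so the residual is simply -g(q)
    root = levenberg_marquardt(g, g_jac, x=np.zeros(2), p0=[1.0, 0.5])
    print(root)   # expected near (sqrt(2), sqrt(2)) ~ (1.414, 1.414)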
