Optimization theory • Nonlinear least squares
tags (space-delimited): Math
The nonlinear least squares problem is one of the optimization problems most commonly encountered in ellipse fitting. This post introduces the basic analysis of nonlinear least squares.

1. What is the least squares problem
The least squares problem is the optimization problem whose objective function is the sum of squares of $m$ functions:

$$\min_x F(x) = \frac{1}{2}\sum_{i=1}^{m} f_i(x)^2$$

where each function $f_i(x)$ is a function of the vector $x$ to be optimized.

2. The nonlinear least squares problem

When $f_i(x)$ is a nonlinear function of $x$, this is a nonlinear optimization problem. In that case we approximate $f_i(x)$ by its first-order Taylor expansion.
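To make this concrete, here is a minimal sketch of such an objective in Python with NumPy. The circle-fitting residual below is a hypothetical example chosen for illustration (it is not from the original text); each $f_i(x)$ is nonlinear in the parameters because of the square root in the distance.

```python
import numpy as np

def residuals(x, points):
    """Hypothetical circle-fitting residuals: x = (cx, cy, r).

    f_i(x) is the radial error of point i from the circle, so
    F(x) = 0.5 * sum_i f_i(x)^2 is a nonlinear least squares problem
    (f_i is nonlinear in x because of the sqrt inside hypot).
    """
    cx, cy, r = x
    d = np.hypot(points[:, 0] - cx, points[:, 1] - cy)
    return d - r

def objective(x, points):
    """F(x) = 0.5 * sum of squared residuals."""
    f = residuals(x, points)
    return 0.5 * np.dot(f, f)

# Sample points exactly on the unit circle.
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
pts = np.column_stack([np.cos(theta), np.sin(theta)])

print(objective(np.array([0.0, 0.0, 1.0]), pts))  # ~0 at the true parameters
print(objective(np.array([0.1, 0.0, 1.0]), pts))  # positive away from them
```

The objective vanishes at the true circle parameters and grows as the parameters move away, which is what a least squares solver exploits.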
2.1 First-order approximation of $f_i(x)$

Let $x^{(k)}$ be the $k$-th approximation of the solution $x$. Expanding $f_i(x)$ to first order around $x^{(k)}$ gives the approximation $\varphi_i(x)$:

$$\varphi_i(x) = f_i(x^{(k)}) + \nabla f_i(x^{(k)})^T (x - x^{(k)})$$

Rearranging the above, we can see that the first-order approximation $\varphi_i(x)$ is a linear function of the vector $x$ to be optimized: $\nabla f_i(x^{(k)})$ is the gradient of $f_i(x)$ (the derivative of $f_i$ with respect to the vector $x$) evaluated at $x^{(k)}$, and $f_i(x^{(k)})$ is the value of $f_i(x)$ at $x^{(k)}$.
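The linearization above can be checked numerically. The residual $f_i$ below is a hypothetical scalar function of a 2-vector (an assumption for illustration); near $x^{(k)}$ the approximation error shrinks quadratically with the step size, as a first-order Taylor expansion should.

```python
import numpy as np

# A hypothetical scalar residual f_i of a 2-vector x.
def f_i(x):
    return x[0] ** 2 + np.sin(x[1])

def grad_f_i(x):
    """Gradient of f_i with respect to x."""
    return np.array([2.0 * x[0], np.cos(x[1])])

def phi_i(x, xk):
    """First-order approximation: phi_i(x) = f_i(xk) + grad^T (x - xk)."""
    return f_i(xk) + grad_f_i(xk) @ (x - xk)

xk = np.array([1.0, 0.5])            # current approximation x^(k)
x = xk + np.array([1e-3, -1e-3])     # a point close to x^(k)

err = abs(f_i(x) - phi_i(x, xk))     # error is O(||x - x^(k)||^2)
print(err)
```

At $x = x^{(k)}$ the approximation is exact; a step of size $10^{-3}$ leaves an error on the order of $10^{-6}$.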
2.2 First-order approximation of $F(x)$

Having obtained the first-order approximation of each $f_i(x)$, we can form the first-order approximation of $F(x)$, denoted $\Phi(x)$:

$$F(x) \approx \Phi(x) = \frac{1}{2}\sum_{i=1}^{m} \varphi_i(x)^2$$
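Stacking the gradients $\nabla f_i(x^{(k)})^T$ as rows of a Jacobian matrix $J$ lets us write all the $\varphi_i$ at once as $f(x^{(k)}) + J(x^{(k)})(x - x^{(k)})$. A sketch under that formulation, with a hypothetical two-residual problem chosen for illustration:

```python
import numpy as np

# Hypothetical residual vector f(x) = (f_1(x), f_2(x)) of a 2-vector x.
def f(x):
    return np.array([x[0] ** 2 - x[1], x[0] + np.exp(x[1])])

def jac(x):
    """Jacobian J: row i is the gradient of f_i at x."""
    return np.array([[2.0 * x[0], -1.0],
                     [1.0, np.exp(x[1])]])

def F(x):
    """F(x) = 0.5 * sum_i f_i(x)^2."""
    r = f(x)
    return 0.5 * r @ r

def Phi(x, xk):
    """Phi(x) = 0.5 * sum_i phi_i(x)^2, with phi(x) = f(xk) + J(xk)(x - xk)."""
    phi = f(xk) + jac(xk) @ (x - xk)
    return 0.5 * phi @ phi

xk = np.array([0.5, -0.2])
x = xk + np.array([1e-3, 1e-3])
print(abs(F(x) - Phi(x, xk)))  # small: Phi matches F to first order near x^(k)
```

Because $\Phi(x)$ is quadratic in $x$, its minimizer has a closed form, which is what makes this approximation useful for iterative solvers.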