In mathematical optimization, the method of Lagrange multipliers (named after the mathematician Joseph-Louis Lagrange) is a technique for finding the extrema of a multivariate function whose variables are subject to one or more equality constraints. It transforms an optimization problem with n variables and k constraints into a system of equations in n + k unknowns. To do this it introduces a new set of unknown scalars, the Lagrange multipliers, which appear in the transformed equations as the coefficients of the constraint gradients in a linear combination with the gradient of the objective function.
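To spell out where the n + k count comes from, here is a sketch of the general system in LaTeX notation; the symbols g_i, c_i and λ_i are my own labels for the k constraints and their multipliers, inferred from the description above rather than taken from the original post.

```latex
% A sketch of the general form assumed from the description above:
% n variables, k equality constraints g_i(x) = c_i, and k multipliers lambda_i.
\[
  \mathcal{L}(x_1,\dots,x_n,\lambda_1,\dots,\lambda_k)
  = f(x_1,\dots,x_n) + \sum_{i=1}^{k} \lambda_i \bigl(g_i(x_1,\dots,x_n) - c_i\bigr)
\]
% Setting all partial derivatives of L to zero yields n + k equations
% in the n + k unknowns x_1, ..., x_n, lambda_1, ..., lambda_k:
\[
  \frac{\partial \mathcal{L}}{\partial x_j} = 0 \quad (j = 1,\dots,n),
  \qquad
  \frac{\partial \mathcal{L}}{\partial \lambda_i} = g_i(x) - c_i = 0 \quad (i = 1,\dots,k).
\]
```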
For example, suppose we want to maximize f(x, y) subject to the constraint g(x, y) = c, where c is a constant. First look at the picture below.
The green line marks the set of points satisfying the constraint g(x, y) = c; it is a contour line of g(x, y). The blue lines are contour lines of f, on which f(x, y) takes the values d1, d2, ..., dn respectively. The arrows represent gradients, which are parallel to the normals of the contour lines.
Suppose the constraint curve g(x, y) = c crosses a contour line f(x, y) = dn. Any intersection point is feasible (it satisfies the equality constraint), but it is generally not the optimum: where the two curves cross, we can slide along the constraint curve to reach a contour of f with a larger or smaller value. Only where the constraint curve is tangent to a contour of f can an extremum occur; at such a point the normal vectors of the two curves are parallel, so the optimum must satisfy ∇f(x, y) = λ∇g(x, y), where λ is a constant expressing that the two gradients point along the same line. This equation is exactly what falls out of taking the partial derivatives of the Lagrange function L(x, y, λ).
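To make the tangency condition concrete, here is a minimal numerical sketch in Python, using a hypothetical example that is not taken from the figure above: maximizing f(x, y) = x + y on the circle g(x, y) = x² + y² = 1. At the known maximizer (1/√2, 1/√2) the two gradients come out parallel, and their component ratio gives λ.

```python
import numpy as np

# Hypothetical example (not from the original post):
# maximize f(x, y) = x + y subject to g(x, y) = x**2 + y**2 = 1.
# The constrained maximum is known to sit at (1/sqrt(2), 1/sqrt(2)).

def grad_f(x, y):
    # Gradient of the objective f(x, y) = x + y
    return np.array([1.0, 1.0])

def grad_g(x, y):
    # Gradient of the constraint function g(x, y) = x**2 + y**2
    return np.array([2.0 * x, 2.0 * y])

x0, y0 = 1.0 / np.sqrt(2.0), 1.0 / np.sqrt(2.0)

gf = grad_f(x0, y0)
gg = grad_g(x0, y0)

# At the tangency point the two gradients are parallel: their 2-D
# "cross product" vanishes and the ratio of components gives lambda.
cross = gf[0] * gg[1] - gf[1] * gg[0]
lam = gf[0] / gg[0]

print("cross product (should be ~0):", cross)  # ~0.0
print("lambda =", lam)                          # ~0.7071
```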
Let's first write down the Lagrange function:
L(x, y, λ) = f(x, y) + λ·(g(x, y) − c)
We then solve for the points where the gradient of L(x, y, λ) is 0. When the gradient of L(x, y, λ) is 0, the partial derivative of L with respect to λ is 0, that is, g(x, y) − c = 0, which is exactly our constraint g(x, y) = c. This is the magic of the Lagrange multiplier λ. In addition, a zero gradient of L(x, y, λ) also requires that
∇f(x, y) = λ∇g(x, y). (With the plus sign used in L above, the stationarity condition literally reads ∇f = −λ∇g, but since λ may take either sign the two statements are equivalent.) If the gradient of L(x, y, λ) is 0 at a point (x0, y0), then (x0, y0) is a stationary point of L, and it is a candidate extreme point of f(x, y) subject to the constraint g(x, y) = c.
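As a worked illustration of the whole procedure, the following sketch uses SymPy on the same hypothetical example as before (f(x, y) = x + y, g(x, y) = x² + y² with c = 1, none of which appear in the original post): it builds L, sets its three partial derivatives to zero, and solves the resulting system for x, y and λ.

```python
import sympy as sp

# Hypothetical example: f(x, y) = x + y with constraint x**2 + y**2 = 1 (c = 1).
x, y, lam = sp.symbols('x y lam', real=True)

f = x + y
g = x**2 + y**2
c = 1

# Lagrange function L(x, y, lam) = f + lam * (g - c)
L = f + lam * (g - c)

# Setting the gradient of L to zero gives three equations in x, y, lam:
# dL/dx = 0, dL/dy = 0 and dL/dlam = 0 (the last one is just g - c = 0).
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)], (x, y, lam), dict=True)

for sol in stationary:
    print(sol, "f =", f.subs(sol))
```

Running it returns two stationary points: (1/√2, 1/√2) with f = √2 (the constrained maximum) and (−1/√2, −1/√2) with f = −√2 (the constrained minimum). Note that with the plus-sign convention in L, the reported λ at the maximum is the negative of the ratio found in the earlier numerical check, which is exactly the sign remark made above.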
Study notes on the Lagrange multiplier method.