(iii) Nonlinear Programming: Mathematics

Chapter 3: Nonlinear Programming

Background of Nonlinear Programming

There is no general algorithm suitable for all nonlinear programming problems; each method has its own specific scope of application.

Identify the options: first, collect the information and data related to the problem and, on the basis of a thorough familiarity with the problem, identify its candidate options.

Propose the goal: put forward the objective to be pursued and, using appropriate scientific and technical principles, express it as a mathematical relationship.

Give a value criterion: after proposing the goal to be pursued, establish a criterion for judging whether the goal is met "well" or "badly", and describe it in some specific quantitative form.

Find the constraints: seek the restrictions on the problem, which are usually expressed as inequalities or equations among the variables.

Difference from linear programming: the optimal solution of a nonlinear program (if an optimal solution exists) may be attained at any point of its feasible region, whereas the optimum of a linear program can only be attained on the boundary of the feasible region. A specific case:

% Write the M-file fun1.m to define the objective function:
function f=fun1(x)
f=sum(x.^2)+8;
% Write the M-file fun2.m to define the nonlinear constraints:
function [g,h]=fun2(x)
g=[-x(1)^2+x(2)-x(3)^2;
   x(1)+x(2)^2+x(3)^3-20];   % nonlinear inequality constraints g(x)<=0
h=[-x(1)-x(2)^2+2;
   x(2)+2*x(3)^2-3];         % nonlinear equality constraints h(x)=0
% Write the main program file example2.m as follows:
options=optimset('LargeScale','off');
[x,y]=fmincon('fun1',rand(3,1),[],[],[],[],zeros(3,1),[],'fun2',options)
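For comparison, the same problem can be sketched with scipy (an assumption for illustration, not part of the original MATLAB text). Note that fmincon uses g(x) <= 0 while scipy's 'ineq' constraints use c(x) >= 0, so the inequality signs are flipped relative to fun2.m.

```python
# A rough scipy sketch of the fmincon example above: minimize sum(x^2)+8
# subject to the same nonlinear constraints and the bound x >= 0.
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.sum(x**2) + 8

cons = [
    {"type": "ineq", "fun": lambda x: x[0]**2 - x[1] + x[2]**2},
    {"type": "ineq", "fun": lambda x: 20 - x[0] - x[1]**2 - x[2]**3},
    {"type": "eq",   "fun": lambda x: -x[0] - x[1]**2 + 2},
    {"type": "eq",   "fun": lambda x: x[1] + 2*x[2]**2 - 3},
]

# (1, 1, 1) happens to satisfy all the constraints, so it is a safe start.
res = minimize(f, x0=np.array([1.0, 1.0, 1.0]), method="SLSQP",
               bounds=[(0, None)] * 3, constraints=cons)
print(res.x, res.fun)
```

From this feasible start, SLSQP should reach a minimum value of about 10.65.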

For the nonlinear programming model (NP), the optimal solution can be obtained by iterative methods.

The basic idea of an iterative method is to generate a sequence of points {xk} from a chosen initial point x0 ∈ R^n according to a particular iteration rule, such that when {xk} is a finite sequence, its last point is the optimal solution of (NP), and when {xk} is an infinite sequence, it has a limit point and that limit point is the optimal solution of (NP).

The key to solving (NP) with an iterative method is how to construct the search direction and determine an appropriate step size in each round.

The feasible region of a convex program is a convex set; every local optimum is a global optimum, and the set of its optimal solutions is itself a convex set. When the objective function f(x) of a convex program is strictly convex, its optimal solution (assuming one exists) is unique. Thus convex programming is a comparatively simple yet important class of nonlinear programming with significant theoretical importance.

The discussion here is only a brief sketch; to be continued (see page 46).

Unconstrained Problems: One-Dimensional Search

When using an iterative method to find the minimum point of a function, a one-dimensional search is often used, that is, finding the minimum of the objective function along a given direction. The following are common:

Heuristic methods: the success-failure method, the Fibonacci method, and the 0.618 (golden section) method, a Fibonacci-type approximation that is easier to implement while still achieving good results.

By using symmetric search, the Fibonacci method shortens the length of the interval under examination; it reaches a prescribed shortening rate with as few function evaluations as possible.
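The symmetric-search idea above can be sketched with the 0.618 method, its limiting form; the test function and interval below are illustrative assumptions.

```python
# A minimal sketch of the 0.618 (golden section) one-dimensional search.
def golden_section(f, a, b, tol=1e-6):
    r = 0.6180339887498949  # (sqrt(5) - 1) / 2
    x1 = b - r * (b - a)
    x2 = a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:              # minimum lies in [a, x2]; reuse x1
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:                    # minimum lies in [x1, b]; reuse x2
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return (a + b) / 2

t = golden_section(lambda t: (t - 2.0)**2, 0.0, 5.0)
print(t)
```

Because the two interior points are placed symmetrically, one of them is reused each round, so each iteration costs only one new function evaluation.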

Interpolation methods (parabolic (quadratic) interpolation, cubic interpolation, etc.)

Root-finding methods from calculus (the tangent (Newton) method, the bisection method, etc.)

Quadratic interpolation method

For a minimization problem in which f(t) is continuous on [a,b], polynomial interpolation can be used for the one-dimensional search. The basic idea: within the search interval, approximate the objective function by a low-degree polynomial (usually of degree no more than three), and take the minimum point of the interpolating polynomial as an approximation to the optimal solution.
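One step of the parabolic (quadratic) interpolation idea can be sketched as follows: fit a parabola through three points and take its vertex as the new estimate. The test function is an illustrative assumption.

```python
# Vertex of the parabola through (t1,f1), (t2,f2), (t3,f3).
def parabola_vertex(t1, t2, t3, f1, f2, f3):
    num = (t2 - t1)**2 * (f2 - f3) - (t2 - t3)**2 * (f2 - f1)
    den = (t2 - t1) * (f2 - f3) - (t2 - t3) * (f2 - f1)
    return t2 - 0.5 * num / den

f = lambda t: (t - 1.5)**2 + 4.0
t = parabola_vertex(0.0, 1.0, 3.0, f(0.0), f(1.0), f(3.0))
print(t)   # a quadratic objective is recovered exactly in one step
```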

Solution of the Unconstrained Extremum Problem (to be continued)

Analytic method

Gradient method (steepest descent method)--in each round, the search direction is the direction in which the objective function decreases fastest at the current point.

Case study: solve min f(x) = x1^2 + 25*x2^2 by the steepest descent method, where x = (x1, x2)^T, with initial point x0 = (2, 2)^T.

Solution: ∇f(x) = (2*x1, 50*x2)^T

% Write the M-file detaf.m, defining the function f(x) and its gradient column vector, as follows:
function [f,df]=detaf(x)
f=x(1)^2+25*x(2)^2;
df=[2*x(1);
    50*x(2)];
% Write the main program file zuisu.m as follows:
clc
x=[2;2];
[f0,g]=detaf(x);
while norm(g)>0.000001
    p=-g/norm(g);          % unit steepest descent direction
    t=1.0; f=detaf(x+t*p);
    while f>f0             % halve the step until the function value decreases
        t=t/2;
        f=detaf(x+t*p);
    end
    x=x+t*p;
    [f0,g]=detaf(x);
end
x,f0
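The same loop can be sketched in stdlib-only Python (an analogue of zuisu.m above, not the document's original code): same function, same step-halving rule.

```python
import math

def f(x):
    return x[0]**2 + 25 * x[1]**2

def grad(x):
    return [2 * x[0], 50 * x[1]]

x = [2.0, 2.0]
f0, g = f(x), grad(x)
while math.hypot(*g) > 1e-6:
    n = math.hypot(*g)
    p = [-g[0] / n, -g[1] / n]       # unit steepest descent direction
    t = 1.0
    while f([x[0] + t*p[0], x[1] + t*p[1]]) > f0:
        t /= 2                        # halve the step until f decreases
    x = [x[0] + t*p[0], x[1] + t*p[1]]
    f0, g = f(x), grad(x)
print(x, f0)
```

Because the level sets of this function are elongated ellipses, the iterates zigzag toward the minimum at the origin, which is the well-known weakness of steepest descent.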

One also needs to become proficient in working with the gradient operator ∇; to be continued ~

Newton method: starting from an initial point, in each round the method moves from the current iterate along the Newton direction with a step size of 1; this is called Newton's method.

Case study: solve min f(x) = x1^4 + 25*x2^4 + x1^2*x2^2 by Newton's method, with initial point x0 = (2, 2)^T.

% Write the M-file nwfun.m as follows:
function [f,df,d2f]=nwfun(x)
f=x(1)^4+25*x(2)^4+x(1)^2*x(2)^2;
df=[4*x(1)^3+2*x(1)*x(2)^2;100*x(2)^3+2*x(1)^2*x(2)];
d2f=[12*x(1)^2+2*x(2)^2,4*x(1)*x(2)    % note d2f(1,1)=12*x1^2+2*x2^2
     4*x(1)*x(2),300*x(2)^2+2*x(1)^2];
% Write the main program file example5.m as follows:
clc
x=[2;2];
[f0,g1,g2]=nwfun(x);
while norm(g1)>0.00001
    p=-g2\g1;        % Newton direction: solve the linear system instead of inverting
    x=x+p;
    [f0,g1,g2]=nwfun(x);
end
x,f0
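The Newton iteration above can be sketched in stdlib-only Python (an illustration, not the document's code): same quartic objective, Newton direction from a hand-coded 2x2 solve by Cramer's rule, step size fixed at 1.

```python
import math

def grad(x1, x2):
    return (4*x1**3 + 2*x1*x2**2, 100*x2**3 + 2*x1**2*x2)

def hess(x1, x2):
    # Hessian entries (a, b; c, d) of f = x1^4 + 25*x2^4 + x1^2*x2^2
    return (12*x1**2 + 2*x2**2, 4*x1*x2,
            4*x1*x2, 300*x2**2 + 2*x1**2)

x1, x2 = 2.0, 2.0
g1, g2 = grad(x1, x2)
while math.hypot(g1, g2) > 1e-5:
    a, b, c, d = hess(x1, x2)
    det = a*d - b*c
    # Newton step: solve H p = -g for the 2x2 Hessian by Cramer's rule
    p1 = (-g1*d + g2*b) / det
    p2 = (-g2*a + g1*c) / det
    x1, x2 = x1 + p1, x2 + p2
    g1, g2 = grad(x1, x2)
print(x1, x2)
```

Since the objective is quartic rather than quadratic, the iterates shrink toward the minimizer (0, 0) geometrically instead of terminating in one step.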

If the objective function is not quadratic, then in general Newton's method is not guaranteed to reach the optimal solution in a finite number of iterations. To improve the accuracy of the computation, the above problem can be computed with a variable step size during the iteration; for details, see page 51.

The advantage is fast convergence; the disadvantages are that it sometimes does not work well and needs to be modified, and that when the dimension is high, the computational workload is very large.

Variable metric method--quasi-Newton (variable metric) algorithms

These are not only very effective algorithms for solving unconstrained extremum problems, but they have also been extended to constrained extremum problems. Because they avoid computing the second-derivative (Hessian) matrix and inverting it, they converge faster than the gradient method, especially on high-dimensional problems, which is why variable metric methods enjoy a high reputation; for details, see page 51.
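The idea can be sketched with one classical variable metric scheme, the DFP update, on the earlier quadratic test function; the step-halving line search below is an illustrative assumption, not the document's algorithm.

```python
import numpy as np

f = lambda x: x[0]**2 + 25 * x[1]**2
grad = lambda x: np.array([2 * x[0], 50 * x[1]])

x = np.array([2.0, 2.0])
H = np.eye(2)                          # approximation of the inverse Hessian
for _ in range(200):
    g = grad(x)
    if np.linalg.norm(g) <= 1e-6:
        break
    p = -H @ g                         # quasi-Newton search direction
    t = 1.0
    while f(x + t * p) >= f(x):        # halve the step until f decreases
        t /= 2
    s = t * p
    y = grad(x + s) - g
    # DFP update: the true Hessian is never formed or inverted
    H += np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ H @ y)
    x = x + s
print(x)
```

The update keeps H symmetric positive definite as long as s'y > 0, which always holds for a convex quadratic, so every search direction remains a descent direction.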

Direct method

Among unconstrained nonlinear programming methods, direct search methods are generally needed when the objective function of the problem is hard to express analytically or its derivatives are difficult to obtain. At the same time, because these methods are generally intuitive and easy to understand, they are often used in practical applications.

Powell's method: basic search, accelerated search, adjustment search.

For the specific steps, see page 54.

Solving Unconstrained Extremum Problems with Matlab

Symbolic Solution:

% The MATLAB program for the computation is as follows:
clc, clear
syms x y
f=x^3-y^3+3*x^2+3*y^2-9*x;
df=jacobian(f);    % first-order partial derivatives
d2f=jacobian(df);  % Hessian matrix
[xx,yy]=solve(df)  % find the stationary points
xx=double(xx); yy=double(yy);
for i=1:length(xx)
    a=subs(d2f,{x,y},{xx(i),yy(i)});
    b=eig(a);      % eigenvalues of the Hessian
    fi=subs(f,{x,y},{xx(i),yy(i)}); fi=double(fi);   % keep the symbolic f intact
    if all(b>0)
        fprintf('(%f,%f) is a minimum point, with minimum %f\n',xx(i),yy(i),fi);
    elseif all(b<0)
        fprintf('(%f,%f) is a maximum point, with maximum %f\n',xx(i),yy(i),fi);
    elseif any(b>0) & any(b<0)
        fprintf('(%f,%f) is not an extreme point\n',xx(i),yy(i));
    else
        fprintf('cannot determine whether (%f,%f) is an extreme point\n',xx(i),yy(i));
    end
end
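The second-order (eigenvalue) test used in the program above can be checked numerically. For f(x,y) = x^3 - y^3 + 3x^2 + 3y^2 - 9x, the gradient (3x^2 + 6x - 9, -3y^2 + 6y) vanishes at x in {1, -3} and y in {0, 2}; these hand-derived stationary points are the only assumption below.

```python
import numpy as np

f = lambda x, y: x**3 - y**3 + 3*x**2 + 3*y**2 - 9*x

def hessian(x, y):
    # fxx = 6x + 6, fyy = 6 - 6y, fxy = 0
    return np.array([[6*x + 6, 0.0],
                     [0.0, 6 - 6*y]])

results = {}
for x, y in [(1, 0), (1, 2), (-3, 0), (-3, 2)]:
    b = np.linalg.eigvalsh(hessian(x, y))   # eigenvalues of the Hessian
    if np.all(b > 0):
        kind = "minimum"
    elif np.all(b < 0):
        kind = "maximum"
    else:
        kind = "saddle"
    results[(x, y)] = (kind, f(x, y))
print(results)
```

This reproduces the MATLAB program's classification: a minimum of -5 at (1, 0), a maximum of 31 at (-3, 2), and saddle points at the other two stationary points.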

Numerical solution: In the Matlab Toolbox, the functions for solving unconstrained extremum problems are fminunc and fminsearch, and the usage is described as follows.

[x,fval]=fminunc(fun,x0,options,P1,P2,...)
% x0 is the initial value of the vector x; options holds the tuning parameters (the defaults may be used); P1,P2,... are extra parameters passed through to fun.

[x,fval,exitflag,output]=fminsearch(fun,x0,options,P1,P2,...)

% Difference between fminunc and fminsearch:
% the former is intended for smooth (continuously differentiable) functions, while the latter, a derivative-free simplex method, can also be used for non-smooth functions.
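A rough scipy analogue of the two solvers (an assumption for comparison, not part of the original MATLAB text): BFGS uses gradient information like fminunc, while Nelder-Mead is derivative-free like fminsearch. The test function is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2

r1 = minimize(f, x0=np.zeros(2), method="BFGS")         # gradient-based
r2 = minimize(f, x0=np.zeros(2), method="Nelder-Mead")  # derivative-free
print(r1.x, r2.x)
```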

The symbolic solution (more precisely, the analytic solution) is exact. In fact, many problems, such as most ordinary differential equations, have no analytic solution and can only be solved by numerical methods.

Zeros of Functions and Solutions of Systems of Equations

% The symbolic solution is obtained as follows:
syms x
x0=solve(x^3-x^2+2*x-3)  % compute the zeros of the function symbolically
x0=vpa(x0,5)             % convert to decimal format, obtaining all the zeros

% The MATLAB program for the numerical solution is as follows:
y=@(x) x^3-x^2+2*x-3;
x=fsolve(y,rand)
% this finds only one zero, the one near the given initial value
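Numerical zero-finding for the same cubic can be sketched with the bisection method mentioned earlier; the bracketing interval [1, 2] is an assumption, checked by a sign change.

```python
def bisect(f, a, b, tol=1e-10):
    assert f(a) * f(b) < 0, "interval must bracket a sign change"
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # the sign change lies in [a, m]
            b = m
        else:
            a = m
    return (a + b) / 2

f = lambda x: x**3 - x**2 + 2*x - 3
root = bisect(f, 1.0, 2.0)
print(root)
```

Like fsolve with a single starting point, this finds only the one real zero inside the given bracket.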

Solution of constrained extremum problem

An extremum problem with constraint conditions is called a constrained extremum problem, also known as a programming problem.

Simplification approaches: transform the constrained problem into an unconstrained problem; transform the nonlinear programming problem into a linear programming problem; and, by other methods, turn a complex problem into a simpler one.

Kuhn-Tucker conditions--one of the most important theoretical results in the field of nonlinear programming. They are necessary conditions for a point to be optimal, but in general they are not sufficient (for convex programming, however, they are both necessary and sufficient conditions for optimality).
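The four parts of the Kuhn-Tucker conditions can be checked on a small illustrative convex program (an assumption, not from the text): min x1^2 + x2^2 s.t. x1 + x2 >= 1, whose optimum x* = (0.5, 0.5) with multiplier lam = 1 is derived by hand.

```python
x = (0.5, 0.5)
lam = 1.0
grad_f = (2 * x[0], 2 * x[1])      # gradient of the objective
grad_g = (1.0, 1.0)                # gradient of g(x) = x1 + x2 - 1 >= 0
g = x[0] + x[1] - 1

# Stationarity: grad f = lam * grad g
stationary = all(abs(grad_f[i] - lam * grad_g[i]) < 1e-12 for i in range(2))
feasible = g >= -1e-12             # primal feasibility
dual = lam >= 0                    # dual feasibility
slack = abs(lam * g) < 1e-12       # complementary slackness
print(stationary, feasible, dual, slack)
```

Since this problem is convex, satisfying all four conditions certifies that (0.5, 0.5) is globally optimal, illustrating the sufficiency claim above.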

Quadratic Programming

Definition: if the objective function of a nonlinear program is a quadratic function of the variable x and all of the constraints are linear, the program is called a quadratic program.

Solving

[x,fval]=quadprog(H,f,A,b,Aeq,beq,lb,ub,x0,options)
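For the equality-constrained case, the quadratic program min (1/2)x'Hx + f'x s.t. Aeq*x = beq reduces to a single linear system (its KKT system), which can be sketched as follows; the example data (min x1^2 + x2^2 s.t. x1 + x2 = 1) are assumptions.

```python
import numpy as np

H = np.array([[2.0, 0.0], [0.0, 2.0]])
f = np.array([0.0, 0.0])
Aeq = np.array([[1.0, 1.0]])
beq = np.array([1.0])

n, m = H.shape[0], Aeq.shape[0]
# KKT system: [[H, Aeq'], [Aeq, 0]] [x; lambda] = [-f; beq]
K = np.block([[H, Aeq.T], [Aeq, np.zeros((m, m))]])
rhs = np.concatenate([-f, beq])
sol = np.linalg.solve(K, rhs)
x = sol[:n]
print(x)
```

Solvers such as quadprog handle the general case with inequality constraints by active-set or interior-point methods built on systems of this form.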

Penalty function method

With the penalty function method, solving a nonlinear programming problem is transformed into solving a sequence of unconstrained extremum problems; it is therefore also called the sequential unconstrained minimization technique, denoted SUMT.

Basic idea: use the constraint functions of the problem to construct a suitable penalty function, build an augmented objective function containing a parameter, and thereby transform the constrained problem into an unconstrained nonlinear program.

It divides into the interior penalty function method and the exterior penalty function method (see page 56).

Case:

function g=test(x)
M=50000;   % penalty factor
f=x(1)^2+x(2)^2+8;
g=f-M*min(min(x),0)-M*min(x(1)^2-x(2),0)+M*(-x(1)-x(2)^2+2)^2;
% Run:
[x,y]=fminunc('test',rand(2,1))
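The SUMT idea for the same problem (min x1^2 + x2^2 + 8 s.t. x >= 0, x1^2 - x2 >= 0, -x1 - x2^2 + 2 = 0) can be sketched in scipy: minimize the penalized objective for increasing penalty factors, warm-starting each round. The schedule of factors and the starting point are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def penalized(x, M):
    f = x[0]**2 + x[1]**2 + 8
    return (f
            - M * min(min(x), 0)               # penalty for x >= 0
            - M * min(x[0]**2 - x[1], 0)       # penalty for x1^2 - x2 >= 0
            + M * (-x[0] - x[1]**2 + 2)**2)    # penalty for the equality

x = np.array([2.0, 2.0])
for M in [1, 10, 100, 1000, 10000]:
    x = minimize(penalized, x, args=(M,), method="Nelder-Mead").x
print(x)   # should approach the constrained optimum near (1, 1)
```

Increasing M gradually, rather than starting with a huge factor, keeps each unconstrained subproblem well conditioned while the iterates are driven toward the feasible region.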
Solving Constrained Extremum Problems with Matlab

For the Matlab Optimization Toolbox graphical user interface, see page 58.

optimtool can be applied to solve all of these optimization problems, and the results can be exported to the MATLAB workspace.

The Flight Management Problem

The optimization objective in this problem can take different forms, such as minimizing the maximum adjustment over all aircraft, or minimizing the total adjustment of all aircraft. Here, taking the minimum of the sum of the absolute values of the adjustments of all aircraft as the objective function, the following mathematical programming model is obtained:
min Σ(i=1..6) |θi|
s.t. |β0ij + (θi + θj)/2| > α0ij,  i,j = 1,2,...,6, i ≠ j
     |θi| ≤ 30°

Model One code:

clc, clear
x0=[150 145 130 0]; y0=[140 0];
q=[243 236 220.5 159 230];
xy0=[x0; y0];
d0=dist(xy0);            % distance between each pair of column vectors of the matrix
d0(find(d0==0))=inf;
a0=asind(8./d0)          % in degrees
xy1=x0+i*y0; xy2=exp(i*q*pi/180);
for m=1:6
    for n=1:6
        if n~=m
            b0(m,n)=angle((xy2(n)-xy2(m))/(xy1(m)-xy1(n)));
        end
    end
end
b0=b0*180/pi;
dlmwrite('txt1.txt',a0,'delimiter','\t','newline','PC');
fid=fopen('txt1.txt','a'); fwrite(fid,'~','char');
dlmwrite('txt1.txt',b0,'delimiter','\t','newline','PC','-append','roffset',1)
% write the data for lingo to a plain text file
