Modeling Algorithm (III.)--Nonlinear programming


I. The difference between nonlinear programming and linear programming

1. The objective function or the constraints are nonlinear.

2. If an optimal solution exists, a linear program attains it only on the boundary of the feasible region (usually at a vertex), while the optimal solution of a nonlinear program may lie at any point of the feasible region.

II. The MATLAB solution of nonlinear programming

1. The mathematical model of nonlinear programming in MATLAB is

    min f(x)
    s.t.  A*x <= b,  Aeq*x = beq,  c(x) <= 0,  ceq(x) = 0,  lb <= x <= ub

where f(x) is a scalar function; A, b, Aeq, beq are matrices and vectors of the corresponding dimensions; and c(x), ceq(x) are nonlinear vector functions.

An example to deepen the impression: minimize f(x) = x1^2 + x2^2 + x3^2 + 8, subject to x1^2 - x2 + x3^2 >= 0, x1 + x2^2 + x3^3 <= 20, -x1 - x2^2 + 2 = 0, x2 + 2*x3^2 - 3 = 0, and x >= 0.

MATLAB implementation:

function f=fun1(x)
f=sum(x.^2)+8;                 % objective function

function [g,h]=fun2(x)
g=[-x(1)^2+x(2)-x(3)^2;
   x(1)+x(2)^2+x(3)^3-20];     % nonlinear inequality constraints g(x)<=0
h=[-x(1)-x(2)^2+2;
   x(2)+2*x(3)^2-3];           % nonlinear equality constraints h(x)=0

options=optimset('largescale','off');
[x,y]=fmincon('fun1',rand(3,1),[],[],[],[],zeros(3,1),[],'fun2',options)   % the initial value is a random number
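The same example can be cross-checked with SciPy's `minimize` (an illustrative sketch, not part of the original post; note that SciPy expects inequality constraints as g(x) >= 0, the opposite sign convention of fmincon's c(x) <= 0):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return np.sum(x**2) + 8           # objective: x1^2 + x2^2 + x3^2 + 8

# SciPy inequality constraints mean g(x) >= 0, so fmincon's g(x) <= 0 is negated
cons = [
    {"type": "ineq", "fun": lambda x: x[0]**2 - x[1] + x[2]**2},
    {"type": "ineq", "fun": lambda x: 20 - x[0] - x[1]**2 - x[2]**3},
    {"type": "eq",   "fun": lambda x: -x[0] - x[1]**2 + 2},
    {"type": "eq",   "fun": lambda x: x[1] + 2 * x[2]**2 - 3},
]

res = minimize(f, x0=np.ones(3), bounds=[(0, None)] * 3, constraints=cons)
print(res.x, res.fun)
```

The minimum value comes out near 10.65, matching what the MATLAB call finds.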
2. The basic iterative scheme for solving nonlinear programming

(1) This part is mainly concepts, and understanding them is necessary before the ideas below make sense. They may feel tedious, but just as learning subtraction requires first fixing the rule for what '+' means, we have to understand these concepts first.

(2) For an NP (nonlinear programming) problem, an iterative method can be used to find the optimal solution. The basic idea is:

Starting from a chosen initial point, generate a sequence of points according to a particular iteration rule. If the sequence is finite, its last point is the optimal solution of the NP problem; if it is infinite, it has a limit point, and that limit point is the optimal solution.

(3) General steps for solving NP problems

Before listing the steps, we need to understand one concept: how to determine the best search direction.

The general steps are:

a. Choose an initial point x0 and set k := 0.

b. Construct a search direction: according to a certain rule, construct a feasible descent direction p(k) of f at the point x(k) as the search direction.

c. Find the next iterate according to the iteration format.

If a certain termination condition is met, stop the iteration.

d. Set k := k + 1 and continue the iteration.
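The steps above can be sketched in Python (an illustrative sketch, not from the original post; the descent direction used here is the negative gradient, and the example function is invented for demonstration):

```python
import numpy as np

def descent(f, grad, x0, tol=1e-6, max_iter=10000):
    x = np.asarray(x0, dtype=float)          # step a: initial point, k := 0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:         # termination condition
            break
        p = -g                               # step b: a feasible descent direction
        t = 1.0
        while f(x + t * p) >= f(x):          # step c: shrink the step until f decreases
            t /= 2
            if t < 1e-16:
                return x
        x = x + t * p                        # step d: k := k + 1, continue
    return x

# Invented example: f(x, y) = (x - 1)^2 + (y + 2)^2, minimum at (1, -2)
x_star = descent(lambda v: (v[0] - 1)**2 + (v[1] + 2)**2,
                 lambda v: np.array([2 * (v[0] - 1), 2 * (v[1] + 2)]),
                 [0.0, 0.0])
print(x_star)
```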

(4) Convex function, convex programming

The defining feature of convex programming is that any local optimal solution is also a global optimal solution. This is a great property, and it shows that this class of NP problems is comparatively easy to solve.

(II) Unconstrained problems

I. One-dimensional search methods

Consider a one-dimensional minimization problem. If f(t) is a unimodal function on the interval [a, b], an approximate optimal solution can be obtained by repeatedly shortening the length of [a, b].

The idea is to pick two points placed symmetrically in the interval and compare the function values at them; the minimizer t* cannot lie in the sub-interval beyond the larger value, so that end is cut off, giving a smaller interval to search. Passing to the limit yields the optimal solution.

1. Fibonacci Sequence Method

This method determines the step size by using ratios of Fibonacci numbers to fix where each interval is divided.

After a series of probes, the distance between the final probe point and the optimal solution should not exceed the required precision, i.e., the final interval length cannot exceed it. Given the precision, we can therefore determine in advance the number of probes n; after n probes the iteration stops, and the final point is taken as the optimal solution.

The overall idea of the algorithm (programming ideas) is as follows:
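A sketch of the Fibonacci search in Python (an illustrative implementation of the idea above; the function name `fib_search` and the test problem are my own):

```python
def fib_search(f, a, b, tol=1e-4):
    """Fibonacci search for the minimum of a unimodal f on [a, b].

    The number of probes n is fixed in advance from the required
    precision: the final interval length is about (b - a) / F_n.
    """
    F = [1, 1]
    while F[-1] < (b - a) / tol or len(F) < 4:
        F.append(F[-1] + F[-2])
    n = len(F) - 1
    # Two probes placed symmetrically at Fibonacci ratios of the interval
    x1 = a + F[n - 2] / F[n] * (b - a)
    x2 = a + F[n - 1] / F[n] * (b - a)
    f1, f2 = f(x1), f(x2)
    for i in range(n, 2, -1):
        if f1 > f2:                      # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + F[i - 2] / F[i - 1] * (b - a)
            f2 = f(x2)
        else:                            # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + F[i - 3] / F[i - 1] * (b - a)
            f1 = f(x1)
    return (a + b) / 2
```

For f(t) = (t - 2)^2 on [0, 5] this returns a point within the tolerance of t* = 2.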

2. The 0.618 method (golden section method)

Simply change the ratio to 0.618. Programming is then even simpler: just replace the Fibonacci ratio in the third part of the previous algorithm with the constant 0.618.
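A Python sketch of the golden section search (illustrative; same interval-shrinking scheme, fixed ratio 0.618 so one probe per step is reused):

```python
def golden(f, a, b, tol=1e-6):
    """0.618 (golden-section) search for the minimum of a unimodal f on [a, b]."""
    r = 0.6180339887498949
    x1 = b - r * (b - a)                 # i.e. a + 0.382*(b - a)
    x2 = a + r * (b - a)                 # i.e. a + 0.618*(b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 > f2:                      # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
        else:                            # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
    return (a + b) / 2
```

Each iteration shrinks the interval by a factor of 0.618 at the cost of a single new function evaluation.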

3. Quadratic interpolation method (omitted for now)

4. Solution of unconstrained extremum problems

(1) The general form is: min f(x), x in R^n.

(2) Analytic Method--gradient method

For the basic iterative scheme, the first thing to determine is the search direction. From calculus we know that the direction of the negative gradient is the direction in which f decreases fastest, so we take it as the search direction.

The feature of this approach is that every search direction is the direction of steepest descent, so the stopping condition is that the gradient becomes small enough: ||grad f(x)|| <= epsilon.

The steps are as follows:

An example of this method:

MATLAB implementation

function [f,df]=detaf(x)
f=x(1)^2+25*x(2)^2;
df=[2*x(1);
    50*x(2)];

x=[2;2];
[f0,g]=detaf(x);
while norm(g)>1e-6
    p=-g/norm(g);
    t=1.0;
    f=detaf(x+t*p);
    while f>f0
        t=t/2;
        f=detaf(x+t*p);
    end
    x=x+t*p;
    [f0,g]=detaf(x);
end
x,f0

The final extremum tends to 0, which is essentially the true minimum.
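An illustrative Python port of the MATLAB script above (the problem f = x1^2 + 25*x2^2 and the halving line search come from that script; the rest is a direct translation, not the post's original code):

```python
import numpy as np

def detaf(x):
    """Objective f = x1^2 + 25*x2^2 and its gradient."""
    f = x[0]**2 + 25 * x[1]**2
    df = np.array([2 * x[0], 50 * x[1]])
    return f, df

x = np.array([2.0, 2.0])
f0, g = detaf(x)
while np.linalg.norm(g) > 1e-6:
    p = -g / np.linalg.norm(g)        # normalized steepest-descent direction
    t = 1.0
    f, _ = detaf(x + t * p)
    while f > f0:                      # halve the step until f decreases
        t /= 2
        f, _ = detaf(x + t * p)
    x = x + t * p
    f0, g = detaf(x)
print(x, f0)
```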

(3) Analytic Method--Newton method

In fact, it uses a quadratic (second-order Taylor) expansion of the function to determine the search direction; the intermediate calculations are omitted here.

The general steps (programming ideas):

An example:

The gradient and the Hessian matrix can be obtained by hand calculation.

Then solve it with MATLAB (C would work just as well):

function [f,df,d2f]=nwfun(x)
f=x(1)^4+25*x(2)^4+x(1)^2*x(2)^2;
df=[4*x(1)^3+2*x(1)*x(2)^2;
    100*x(2)^3+2*x(1)^2*x(2)];
d2f=[12*x(1)^2+2*x(2)^2, 4*x(1)*x(2);
     4*x(1)*x(2), 300*x(2)^2+2*x(1)^2];

x=[2;2];
[f0,g1,g2]=nwfun(x);
while norm(g1)>0.00001
    p=-inv(g2)*g1;
    x=x+p;
    [f0,g1,g2]=nwfun(x);
end
x,f0

Note that if the objective function is not a quadratic function, Newton's method is in general not guaranteed to find the optimal solution.

In order to improve the accuracy of the calculation, a variable step size can still be used during the iteration:

x=[2;2];
[f0,g1,g2]=nwfun(x);
while norm(g1)>0.00001
    p=-inv(g2)*g1;
    p=p/norm(p);
    t=1.0;
    f=nwfun(x+t*p);
    while f>f0
        t=t/2;
        f=nwfun(x+t*p);
    end
    x=x+t*p;
    [f0,g1,g2]=nwfun(x);
end
x,f0
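For comparison, the basic Newton iteration (without the variable step) can be sketched in Python (an illustrative port of the example, not the post's original code; a linear solve replaces the explicit matrix inverse):

```python
import numpy as np

def nwfun(x):
    """f = x1^4 + 25*x2^4 + x1^2*x2^2 with its gradient and Hessian."""
    f = x[0]**4 + 25 * x[1]**4 + x[0]**2 * x[1]**2
    df = np.array([4 * x[0]**3 + 2 * x[0] * x[1]**2,
                   100 * x[1]**3 + 2 * x[0]**2 * x[1]])
    d2f = np.array([[12 * x[0]**2 + 2 * x[1]**2, 4 * x[0] * x[1]],
                    [4 * x[0] * x[1], 300 * x[1]**2 + 2 * x[0]**2]])
    return f, df, d2f

x = np.array([2.0, 2.0])
f0, g1, g2 = nwfun(x)
while np.linalg.norm(g1) > 1e-5:
    p = -np.linalg.solve(g2, g1)      # Newton direction (avoids forming inv(g2))
    x = x + p
    f0, g1, g2 = nwfun(x)
print(x, f0)
```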

(4) Analytic Method--variable metric method

This method addresses the problem that Newton's method spends too much time computing the inverse of the Hessian matrix. The derivation is skipped here.

The general steps are given directly:

(5) Direct Method--Powell's method

5. Solving unconstrained problems in MATLAB

1. The MATLAB commands for unconstrained problems

(1) The fminunc command

An example is as follows.

The MATLAB call to solve the problem:
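Since the original example image is not available, here is a hedged SciPy analogue of what fminunc does: unconstrained minimization via a quasi-Newton method, shown on the standard Rosenbrock test function (my own choice of example):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example problem: minimize the Rosenbrock function,
# a classic unconstrained test case with minimum at (1, 1).
def rosen(x):
    return 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2

res = minimize(rosen, x0=np.array([-1.2, 1.0]), method="BFGS")
print(res.x, res.fun)
```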

(2) The fminsearch command

(III) Constrained extremum problems

I. Quadratic programming

1. Definition: the objective function is a quadratic function of x, and the constraints are all linear.

2. The general mathematical model is: min (1/2)*x'*H*x + f'*x, subject to A*x <= b and Aeq*x = beq.

3. MATLAB's solver function for this model is quadprog.
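A quadratic program of this form can also be sketched with SciPy's `minimize` (illustrative only; H, c, and the constraints are made-up data, and the quadratic objective is passed as a plain function rather than to a dedicated QP solver):

```python
import numpy as np
from scipy.optimize import minimize

# Made-up QP: minimize 0.5*x'Hx + c'x  s.t.  x1 + x2 <= 2 and x >= 0
H = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -5.0])

def obj(x):
    return 0.5 * x @ H @ x + c @ x

cons = [{"type": "ineq", "fun": lambda x: 2 - x[0] - x[1]}]
res = minimize(obj, x0=np.zeros(2), bounds=[(0, None)] * 2, constraints=cons)
print(res.x, res.fun)
```

Here the unconstrained minimizer (1, 2.5) violates x1 + x2 <= 2, so the solution lands on that constraint at (0.25, 1.75).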

II. The exterior penalty function method

The idea is to fold the constraints into the objective as a penalty term that is positive where the constraints are violated and grows with a penalty factor M, so the constrained problem is replaced by a sequence of unconstrained ones.

Examples:

Solution:
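The exterior penalty idea can be sketched in Python (an illustrative toy problem of my own, not the blog's example; a quadratic penalty is minimized for an increasing sequence of penalty factors M, warm-starting each solve from the previous one):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem (made up): min (x1-3)^2 + (x2-2)^2  s.t.  x1 + x2 - 4 <= 0
def f(x):
    return (x[0] - 3)**2 + (x[1] - 2)**2

def g(x):
    return x[0] + x[1] - 4            # constraint g(x) <= 0

x = np.zeros(2)
for M in [1.0, 10.0, 100.0, 1e3, 1e4]:
    # Quadratic exterior penalty: nonzero only where the constraint is violated
    penalized = lambda x, M=M: f(x) + M * max(g(x), 0.0)**2
    x = minimize(penalized, x, method="BFGS").x   # warm-start from previous x
print(x)
```

As M grows, the unconstrained minimizers approach the constrained optimum (2.5, 1.5) from outside the feasible region.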

III. Solving constrained extremum problems in MATLAB

1. The fminbnd function

2. The fseminf function

3. The fminimax function

