Optimization algorithms from "Engineering Optimization": Newton's method, the damped Newton method, and the simplex method


Newton's method. Conditions of use: the objective function must be twice differentiable and its Hessian matrix positive definite. Pros and cons: fast convergence, but each step is computationally expensive and the method depends heavily on the choice of the initial point. The basic steps of the algorithm:
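In brief (summarizing the standard method; the original post presents the steps and the flowchart as figures): starting from an initial point $x_0$, iterate

$$x_{k+1} = x_k - \left[\nabla^2 f(x_k)\right]^{-1} \nabla f(x_k)$$

and stop once the gradient norm $\|\nabla f(x_k)\|$ falls below the prescribed tolerance.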

Algorithm flowchart (figure not reproduced here).
The damped Newton method is essentially the same as Newton's method, with an exact one-dimensional line search added to choose the step size.
Pros and cons: improved convergence compared with the plain Newton step.
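Concretely, the only change relative to the pure Newton iteration is that the full step is replaced by a step length determined by an exact one-dimensional search (this is what the minjt/minhj calls in the code below perform):

$$d_k = -\left[\nabla^2 f(x_k)\right]^{-1} \nabla f(x_k), \qquad \lambda_k = \arg\min_{\lambda \ge 0} f(x_k + \lambda d_k), \qquad x_{k+1} = x_k + \lambda_k d_k.$$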

Suppose we want to find the minimum of f = (x-1)*(x-1) + y*y. The concrete implementation is as follows; it is only necessary to run the nttest.m file, with the other function files placed in the same directory:
1. Script file nttest.m
clear all
clc
syms x y
f = (x-1)*(x-1) + y*y;
var = [x y];
x0 = [1 1];
eps = 0.000001;
disp('Newton method:')
minnt(f, x0, var, eps)
disp('Damped Newton method:')
minmnt(f, x0, var, eps)

2. minnt.m
function [x, minf] = minnt(f, x0, var, eps)
% objective function: f
% initial point: x0
% vector of independent variables: var
% precision: eps
% x: value of the variables where the objective function attains its minimum
% minf: minimum value of the objective function
format long;
if nargin == 3
    eps = 1.0e-6;
end
tol = 1;
syms l
% x0 = transpose(x0);
while tol > eps                      % precision requirement not yet met
    gradf = jacobian(f, var);        % symbolic gradient
    jacf = jacobian(gradf, var);     % Hessian matrix
    v = funval(gradf, var, x0);      % numerical value of the gradient
    tol = norm(v);                   % size of the gradient (first-order optimality measure)
    pv = funval(jacf, var, x0);      % numerical value of the Hessian
    p = -inv(pv)*transpose(v);       % search direction
    x1 = x0 + p';                    % Newton iteration step
    x0 = x1;
end
x = x1;
minf = funval(f, var, x);
format short;
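As a quick illustration of the calling convention (my own example, not from the original post), minnt can also be called directly from the command window; because the test function is quadratic, Newton's method reaches the minimizer in a single step from any starting point:

syms x y
[xopt, fmin] = minnt((x-1)*(x-1) + y*y, [3 -2], [x y], 1e-6)
% expected result: xopt = [1 0], fmin = 0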

3. minmnt.m
function [x, minf] = minmnt(f, x0, var, eps)
% objective function: f
% initial point: x0
% vector of independent variables: var
% precision: eps
% x: value of the variables where the objective function attains its minimum
% minf: minimum value of the objective function
format long;
if nargin == 3
    eps = 1.0e-6;
end
tol = 1;
syms l
% x0 = transpose(x0);
while tol > eps                      % precision requirement not yet met
    gradf = jacobian(f, var);        % symbolic gradient
    jacf = jacobian(gradf, var);     % Hessian matrix
    v = funval(gradf, var, x0);      % numerical value of the gradient
    tol = norm(v);                   % size of the gradient (first-order optimality measure)
    pv = funval(jacf, var, x0);      % numerical value of the Hessian
    p = -inv(pv)*transpose(v);       % search direction
    %%% find the best step size along the search direction %%%
    y = x0 + l*p';
    yf = funval(f, var, y);
    [a, b] = minjt(yf, 0, 0.1);      % bracket the minimum of the one-dimensional function
    xm = minhj(yf, a, b);            % golden-section one-dimensional search for the best step
    x1 = x0 + xm*p';                 % damped Newton iteration step
    x0 = x1;
end
x = double(x1);
minf = double(funval(f, var, x));
format short;

4. minhj.m

function [x, minf] = minhj(f, a, b, eps)
% objective function: f
% left endpoint of the interval containing the minimum: a
% right endpoint of the interval containing the minimum: b
% precision: eps
% x: value of the variable where the objective function attains its minimum
% minf: minimum value of the objective function
format long;
if nargin == 3
    eps = 1.0e-6;
end
l = a + 0.382*(b-a);                 % trial point
u = a + 0.618*(b-a);                 % trial point
k = 1;
tol = b - a;
while tol > eps && k < 100000
    fl = subs(f, findsym(f), l);     % function value at the trial point
    fu = subs(f, findsym(f), u);     % function value at the trial point
    if fl > fu
        a = l;                       % move the left endpoint of the interval
        l = u;
        u = a + 0.618*(b-a);         % shorten the search interval
    else
        b = u;                       % move the right endpoint of the interval
        u = l;
        l = a + 0.382*(b-a);         % shorten the search interval
    end
    k = k + 1;
    tol = abs(b-a);
end
if k == 100000
    disp('minimum value not found!');
    x = nan;
    minf = nan;
    return;
end
x = (a+b)/2;
minf = subs(f, findsym(f), x);
format short;
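For a standalone check of the golden-section routine (a made-up example, not from the original post), one can minimize a simple one-variable function over a given interval:

syms t
[xmin, fmin] = minhj((t-2)^2 + 1, 0, 5)
% expected result: xmin close to 2, fmin close to 1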

5. minjt.m

function [minx, maxx] = minjt(f, x0, h0, eps)
% objective function: f
% initial point: x0
% initial step: h0
% precision: eps
% minx: left endpoint of the interval containing the minimum
% maxx: right endpoint of the interval containing the minimum
format long;
if nargin == 3
    eps = 1.0e-6;
end
x1 = x0;
k = 0;
h = h0;
while 1
    x4 = x1 + h;                     % trial step
    k = k + 1;
    f4 = subs(f, findsym(f), x4);
    f1 = subs(f, findsym(f), x1);
    if f4 < f1
        x2 = x1;
        x1 = x4;
        f2 = f1;
        f1 = f4;
        h = 2*h;                     % enlarge the step
    else
        if k == 1
            h = -h;                  % search in the opposite direction
            x2 = x4;
            f2 = f4;
        else
            x3 = x2;
            x2 = x1;
            x1 = x4;
            break;
        end
    end
end
minx = min(x1, x3);
maxx = x1 + x3 - minx;
format short;
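The bracketing routine minjt (the advance-and-retreat method) can likewise be tried on its own (again my own example); it returns an interval that contains a minimizer of the one-variable function:

syms t
[a, b] = minjt((t-2)^2 + 1, 0, 0.1)
% expected result: an interval with a < 2 < b (roughly [0.7, 3.1] for this starting point and step)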


6. funval.m
function fv = funval(f, varvec, varval)
% evaluate the symbolic expression f at the numerical point varval,
% where varvec lists the symbolic variables and varval the corresponding values
var = findsym(f);
varc = findsym(varvec);
s1 = length(var);
s2 = length(varc);
m = floor((s1-1)/3 + 1);
varv = zeros(1, m);
if s1 ~= s2                          % f contains only a subset of the variables in varvec
    for i = 0:((s1-1)/3)
        k = findstr(varc, var(3*i+1));   % position of the i-th variable of f within varvec
        index = (k-1)/3;
        varv(i+1) = varval(index+1);
    end
    fv = subs(f, var, varv);
else
    fv = subs(f, varvec, varval);
end
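funval is a small helper that substitutes numerical values into a symbolic expression; the index arithmetic handles the case where f involves only some of the variables in varvec (for example, a gradient component that no longer contains y). A minimal usage example (mine, not from the post):

syms x y
funval((x-1)*(x-1) + y*y, [x y], [3 -2])
% returns 8, i.e. the function evaluated at x = 3, y = -2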

Running results (the screenshots from the original post are not reproduced here): both methods converge to the exact minimizer x = [1 0] with minimum value 0.

The theory behind the simplex method is somewhat involved, and this article focuses on the basic implementation of the algorithm, so the theory is skipped here; the details can be found in the standard references. Here is a concrete implementation.
We illustrate it with a specific example:
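The example problem is given in the original post as an image, which is not reproduced here, but it can be read off from the data used in the script below (my reconstruction): with slack variables x4, x5, x6 forming the initial basis, the standard-form problem is

min  x1 - 2*x2 + x3
s.t. x1 + x2 - 2*x3 + x4 = 12
     2*x1 - x2 + 4*x3 + x5 = 8
     -x1 + 2*x2 - 4*x3 + x6 = 4
     x1, ..., x6 >= 0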

The specific MATLAB implementation is as follows:

1. Script file:
clear all
clc
% A = [2 2 1 0 0 0
%      1 2 0 1 0 0
%      4 0 0 0 1 0
%      0 4 0 0 0 1];
% c = [-2 -3 0 0 0 0];
% b = [12 8 16 12]';
% basevector = [3 4 5 6];
A = [1 1 -2 1 0 0
     2 -1 4 0 1 0
     -1 2 -4 0 0 1];
c = [1 -2 1 0 0 0];
b = [12 8 4]';
basevector = [4 5 6];
[x, y] = modifsimplemthd(A, c, b, basevector)

2. modifsimplemthd.m file
function [x, minf] = modifsimplemthd(A, c, b, basevector)
% constraint matrix: A
% objective function coefficient vector: c
% right-hand-side vector of the constraints: b
% initial basis (indices of the basic variables): basevector
% x: value of the variables where the objective function attains its minimum
% minf: minimum value of the objective function
sz = size(A);
nvia = sz(2);                        % number of variables
n = sz(1);                           % number of constraints
xx = 1:nvia;
nobase = zeros(1, nvia-n);           % indices of the non-basic variables
m = 1;

if c >= 0                            % all cost coefficients nonnegative: x = 0 is optimal if feasible
    vr = find(c ~= 0, 1, 'last');
    rgv = inv(A(:, (nvia-n+1):nvia))*b;
    if rgv >= 0
        x = zeros(1, vr);
        minf = 0;
        return;
    else
        disp('No optimal solution exists');
        x = nan;
        minf = nan;
        return;
    end
end

for i = 1:nvia                       % collect the indices of the non-basic variables
    if isempty(find(basevector == xx(i), 1))
        nobase(m) = i;
        m = m + 1;
    end
end

bcon = 1;
m = 0;
B = A(:, basevector);
invb = inv(B);

while bcon
    nb = A(:, nobase);               % matrix of the non-basic columns
    ncb = c(nobase);                 % coefficients of the non-basic variables
    B = A(:, basevector);            % matrix of the basic columns
    cb = c(basevector);              % coefficients of the basic variables
    xb = invb*b;                     % current basic solution
    f = cb*xb;                       % current objective value
    w = cb*invb;                     % simplex multipliers

    for i = 1:length(nobase)         % reduced costs (discriminants)
        sigma(i) = w*nb(:, i) - ncb(i);
    end
    [maxs, ind] = max(sigma);        % ind is the index of the entering variable

    if maxs <= 0                     % all reduced costs <= 0: the current solution is optimal
        minf = cb*xb;
        vr = find(c ~= 0, 1, 'last');
        for l = 1:vr
            ele = find(basevector == l, 1);
            if isempty(ele)
                x(l) = 0;
            else
                x(l) = xb(ele);
            end
        end
        bcon = 0;
    else
        y = inv(B)*A(:, nobase(ind));
        if y <= 0                    % unbounded: no optimal solution exists
            disp('There is no optimal solution!');
            x = nan;
            minf = nan;
            return;
        else
            minb = inf;
            chagb = 0;
            for j = 1:length(y)      % minimum-ratio test to pick the leaving variable
                if y(j) > 0
                    bz = xb(j)/y(j);
                    if bz < minb
                        minb = bz;
                        chagb = j;
                    end
                end
            end                      % chagb is the position of the leaving basic variable
            tmp = basevector(chagb); % update the basic and non-basic index sets
            basevector(chagb) = nobase(ind);
            nobase(ind) = tmp;

            for j = 1:chagb-1        % update the inverse of the basis matrix
                if y(j) ~= 0
                    invb(j, :) = invb(j, :) - invb(chagb, :)*y(j)/y(chagb);
                end
            end
            for j = chagb+1:length(y)
                if y(j) ~= 0
                    invb(j, :) = invb(j, :) - invb(chagb, :)*y(j)/y(chagb);
                end
            end
            invb(chagb, :) = invb(chagb, :)/y(chagb);
        end
    end
    m = m + 1;
    if m == 1000000                  % iteration limit
        disp('Cannot find the optimal solution!');
        x = nan;
        minf = nan;
        return;
    end
end
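For readers tracing the code above: the quantity sigma(i) is the usual reduced cost of the revised simplex method. In the code's notation it is sigma_i = cb*invb*N_i - c_i, where cb is the basic cost vector, invb the current basis inverse, and N_i a non-basic column; the iteration stops when every sigma_i <= 0, which is the optimality condition for a minimization problem under this sign convention.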

Running results (the screenshot from the original post is not reproduced here). For reference, the optimum of this linear program is x1 = 0, x2 = 12, x3 = 5, with minimum objective value -19.



For more implementations of optimization algorithms, please visit http://download.csdn.net/detail/tengweitw/8434549, which has an index describing each algorithm and, of course, includes the algorithms above.
Original: http://blog.csdn.net/tengweitw/article/details/43669185

















