[Engineering Optimization] Optimization algorithms: Newton's method, the damped Newton method, and the simplex method


Newton's method. Usage conditions: the objective function must be twice differentiable and its Hessian matrix must be positive definite. Advantages and disadvantages: fast convergence, but a large amount of computation per step, and the result depends on the choice of the initial point. Basic steps of the algorithm:
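In brief, the standard Newton iteration (which the minNT code below implements) is: compute the gradient g_k = grad f(x_k) and the Hessian H_k of f at x_k, take the Newton direction d_k = -inv(H_k)*g_k, update x_{k+1} = x_k + d_k, and stop once the gradient norm ||g_k|| falls below the tolerance eps.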

Algorithm flowchart:
The damped Newton method is basically the same as Newton's method; the only difference is that an exact one-dimensional (line) search is added along the Newton direction:
Advantages and disadvantages: improves the convergence behavior of the plain Newton method, making it less sensitive to the choice of the initial point.
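Concretely, the damped Newton method keeps the same direction d_k = -inv(H_k)*g_k but chooses the step length by an exact one-dimensional search, lambda_k = argmin over lambda of f(x_k + lambda*d_k), and sets x_{k+1} = x_k + lambda_k*d_k. In the code below this search is done by minJT (bracketing the minimum) followed by minHJ (golden-section refinement).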

Assume we want to minimize f = (x-1)*(x-1) + y*y. The script below is the main .m file; put the other function files in the same directory:
1. Script file NTTest.m

clear all
clc
syms x y
f = (x-1)*(x-1) + y*y;
var = [x y];
x0 = [1 1];
eps = 0.000001;
disp('Newton''s method:')
minNT(f, x0, var, eps)
disp('Damped Newton''s method:')
minMNT(f, x0, var, eps)
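For this particular objective the exact minimizer is (x, y) = (1, 0) with a minimum value of 0, so both routines should return approximately [1 0] and 0 (to within the tolerance eps).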

2. minNT.m
function [x, minf] = minNT(f, x0, var, eps)
% Objective function: f
% Initial point: x0
% Vector of independent variables: var
% Precision: eps
% x: value of the variables at the minimum; minf: minimum value of the objective
format long;
if nargin == 3
    eps = 1.0e-6;
end
tol = 1;
syms L
% x0 = transpose(x0);
while tol > eps                      % accuracy requirement not yet met
    gradf = jacobian(f, var);        % gradient direction
    jacf = jacobian(gradf, var);     % Hessian (Jacobian of the gradient)
    v = Funval(gradf, var, x0);      % numerical gradient (first derivative)
    tol = norm(v);
    pv = Funval(jacf, var, x0);      % numerical second derivative
    p = -inv(pv) * transpose(v);     % search direction
    x1 = x0 + p';                    % iterate
    x0 = x1;
end
x = x1;
minf = Funval(f, var, x);
format short;
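One small side note, offered as a hedged alternative rather than as part of the original code: forming inv(pv) explicitly is not necessary, since the Newton direction can equally be obtained by solving the linear system H*p = -g directly, for example:

p = -(pv \ transpose(v));   % solve the Newton system instead of inverting the Hessian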

3. minMNT.m
function [x, minf] = minMNT(f, x0, var, eps)
% Objective function: f
% Initial point: x0
% Vector of independent variables: var
% Precision: eps
% x: value of the variables at the minimum; minf: minimum value of the objective
format long;
if nargin == 3
    eps = 1.0e-6;
end
tol = 1;
syms L
% x0 = transpose(x0);
while tol > eps                      % accuracy requirement not yet met
    gradf = jacobian(f, var);        % gradient direction
    jacf = jacobian(gradf, var);     % Hessian (Jacobian of the gradient)
    v = Funval(gradf, var, x0);      % numerical gradient (first derivative)
    tol = norm(v);
    pv = Funval(jacf, var, x0);      % numerical second derivative
    p = -inv(pv) * transpose(v);     % search direction
    % find the best step length along the search direction
    y = x0 + L * p';
    yf = Funval(f, var, y);
    [a, b] = minJT(yf, 0, 0.1);      % bracket the minimum
    xm = minHJ(yf, a, b);            % golden-section one-dimensional search for the optimal step
    x1 = x0 + xm * p';               % iterate
    x0 = x1;
end
x = double(x1);
minf = double(Funval(f, var, x));
format short;
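To make the line search concrete, consider the test function f = (x-1)*(x-1) + y*y at x0 = [1 1]: the gradient there is [0 2] and the Hessian is [2 0; 0 2], so the Newton direction is p' = [0 -1]. The one-dimensional function then becomes yf(L) = (1-L)^2; minJT brackets its minimizer and minHJ returns xm close to 1, giving x1 approximately [1 0], which is exactly the minimum of f.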

4. minHJ.m

function [x, minf] = minHJ(f, a, b, eps)
% Objective function: f
% Left endpoint of the interval containing the extremum: a
% Right endpoint of the interval containing the extremum: b
% Precision: eps
% x: value of the variable at the minimum; minf: minimum value of the objective
format long;
if nargin == 3
    eps = 1.0e-6;
end
l = a + 0.382 * (b - a);             % trial point
u = a + 0.618 * (b - a);             % trial point
k = 1;
tol = b - a;
while tol > eps && k < 100000
    fl = subs(f, findsym(f), l);     % function value at trial point l
    fu = subs(f, findsym(f), u);     % function value at trial point u
    if fl > fu
        a = l;                       % move the left endpoint
        l = u;
        u = a + 0.618 * (b - a);     % shrink the search interval
    else
        b = u;                       % move the right endpoint
        u = l;
        l = a + 0.382 * (b - a);     % shrink the search interval
    end
    k = k + 1;
    tol = abs(b - a);
end
if k == 100000
    disp('The minimum value cannot be found!');
    x = NaN;
    minf = NaN;
    return;
end
x = (a + b) / 2;
minf = subs(f, findsym(f), x);
format short;
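As a quick check on the constants above: 0.618 is approximately (sqrt(5) - 1)/2 and 0.382 = 1 - 0.618, so each golden-section iteration keeps about 61.8% of the current interval. After k iterations the interval length is roughly 0.618^k * (b - a), which is why the loop terminates once the interval width falls below eps.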

5. minJT.m

function [minx, maxx] = minJT(f, x0, h0, eps)
% Objective function: f
% Initial point: x0
% Initial step: h0
% Precision: eps
% minx: left endpoint of the interval containing the extremum
% maxx: right endpoint of the interval containing the extremum
format long
if nargin == 3
    eps = 1.0e-6;
end
x1 = x0;
k = 0;
h = h0;
while 1
    x4 = x1 + h;                     % trial step
    k = k + 1;
    f4 = subs(f, findsym(f), x4);
    f1 = subs(f, findsym(f), x1);
    if f4 < f1
        x2 = x1; x1 = x4;
        f2 = f1; f1 = f4;
        h = 2 * h;                   % enlarge the step
    else
        if k == 1
            h = -h;                  % reverse the search direction
            x2 = x4; f2 = f4;
        else
            x3 = x2; x2 = x1; x1 = x4;
            break;
        end
    end
end
minx = min(x1, x3);
maxx = x1 + x3 - minx;
format short;
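minJT is the bracketing ("advance and retreat") step: starting from x0 with step h0 it keeps doubling the step while the function decreases, reverses direction once if the very first trial step increases the function, and stops as soon as an increase is seen, returning an interval [minx, maxx] that contains a minimizer. In minMNT above it is called as [a, b] = minJT(yf, 0, 0.1), and the resulting interval is then refined by minHJ.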


6. Funval.m
function fv = Funval(f, varvec, varval)
% Evaluate the symbolic expression f at the point varval of the variable vector varvec,
% even when f does not actually contain every variable in varvec.
var = findsym(f);
varc = findsym(varvec);
s1 = length(var);
s2 = length(varc);
m = floor((s1 - 1) / 3 + 1);         % number of variables actually appearing in f
varv = zeros(1, m);
if s1 ~= s2                          % f uses only a subset of the variables
    for i = 0:((s1 - 1) / 3)
        k = findstr(varc, var(3 * i + 1));
        index = (k - 1) / 3;
        varv(i + 1) = varval(index + 1);   % pick out the matching values
    end
    fv = subs(f, var, varv);
else
    fv = subs(f, varvec, varval);
end
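A short usage illustration (an assumed example of mine, not from the original article; note that Funval's index arithmetic relies on single-letter symbolic variable names, which is what the rest of the code uses):

syms x y
Funval(x^2 + y, [x y], [2 3])   % returns 7: f depends on both variables
Funval(x^2, [x y], [2 3])       % returns 4: f depends on x only, so the value of y is ignored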

The running result is as follows:

The theory of the simplex method is somewhat involved. This article focuses on the basic implementation of the algorithm, so the theoretical part is omitted; for details, refer to the relevant references online. The specific implementation is given below.
A concrete example follows:
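Spelling the example out (this is my reading of the uncommented data in the script below, since the original problem statement is not reproduced here): minimize x1 - 2*x2 + x3 subject to x1 + x2 - 2*x3 <= 12, 2*x1 - x2 + 4*x3 <= 8, -x1 + 2*x2 - 4*x3 <= 4, with x1, x2, x3 >= 0. Adding slack variables x4, x5, x6 puts the problem in standard form, and those slacks form the initial basis baseVector = [4 5 6].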

The specific MATLAB implementation is as follows:

1. Script file:
clear all
clc
% A = [2 2 1 0 0 0
%      1 2 0 1 0 0
%      4 0 0 0 1 0
%      0 4 0 0 0 1];
% c = [-2 -3 0 0 0 0];
% b = [12 8 16 12]';
% baseVector = [3 4 5 6];
A = [1 1 -2 1 0 0
     2 -1 4 0 1 0
     -1 2 -4 0 0 1];
c = [1 -2 1 0 0 0];
b = [12 8 4]';
baseVector = [4 5 6];
[x y] = ModifSimpleMthd(A, c, b, baseVector)

2. ModifSimpleMthd.m
function [x, minf] = ModifSimpleMthd(A, c, b, baseVector)
% Constraint matrix: A
% Objective function coefficient vector: c
% Right-hand-side vector of the constraints: b
% Initial basis (indices of the basic variables): baseVector
% x: value of the variables at the minimum; minf: minimum value of the objective
sz = size(A);
nA = sz(2);                          % number of columns (variables)
n = sz(1);                           % number of rows (constraints)
xx = 1:nA;
nobase = zeros(1, 1);
m = 1;
if c >= 0
    vr = find(c ~= 0, 1, 'last');
    rgv = inv(A(:, (nA-n+1):nA)) * b;
    if rgv >= 0
        x = zeros(1, vr);
        minf = 0;
    else
        disp('No optimal solution exists');
        x = NaN; minf = NaN;
        return;
    end
end
for i = 1:nA                         % collect the indices of the non-basic variables
    if (isempty(find(baseVector == xx(i), 1)))
        nobase(m) = i;
        m = m + 1;
    end
end
bCon = 1;
M = 0;
B = A(:, baseVector);
invB = inv(B);
while bCon
    nB = A(:, nobase);               % non-basic columns
    ncb = c(nobase);                 % non-basic coefficients
    B = A(:, baseVector);            % basic columns
    cb = c(baseVector);              % basic coefficients
    xb = invB * b;
    f = cb * xb;
    w = cb * invB;
    for i = 1:length(nobase)         % reduced costs (entering criterion)
        sigma(i) = w * nB(:, i) - ncb(i);
    end
    [maxs, ind] = max(sigma);        % ind: index of the entering variable
    if maxs <= 0                     % all reduced costs <= 0: the current solution is optimal
        minf = cb * xb;
        vr = find(c ~= 0, 1, 'last');
        for l = 1:vr
            ele = find(baseVector == l, 1);
            if (isempty(ele))
                x(l) = 0;
            else
                x(l) = xb(ele);
            end
        end
        bCon = 0;
    else
        y = inv(B) * A(:, nobase(ind));
        if y <= 0                    % unbounded: no optimal solution
            disp('There is no optimal solution!');
            x = NaN; minf = NaN;
            return;
        else
            minb = inf; chagB = 0;
            for j = 1:length(y)      % minimum-ratio test
                if y(j) > 0
                    bz = xb(j) / y(j);
                    if bz < minb
                        minb = bz;
                        chagB = j;   % chagB: index of the leaving variable
                    end
                end
            end
            tmp = baseVector(chagB); % swap the entering and leaving variables
            baseVector(chagB) = nobase(ind);
            nobase(ind) = tmp;
            for j = 1:chagB-1        % update the inverse of the basis matrix
                if y(j) ~= 0
                    invB(j, :) = invB(j, :) - invB(chagB, :) * y(j) / y(chagB);
                end
            end
            for j = chagB+1:length(y)
                if y(j) ~= 0
                    invB(j, :) = invB(j, :) - invB(chagB, :) * y(j) / y(chagB);
                end
            end
            invB(chagB, :) = invB(chagB, :) / y(chagB);
        end
    end
    M = M + 1;
    if (M == 1000000)                % iteration limit
        disp('The optimal solution cannot be found!');
        x = NaN; minf = NaN;
        return;
    end
end
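As a brief note on the pivoting rules used above (standard revised-simplex facts that match the code, not material from the omitted theory section): with basis matrix B and basic costs cb, the reduced cost of a non-basic column a_i is sigma_i = cb*inv(B)*a_i - c_i. The non-basic variable with the largest positive sigma enters the basis, and the leaving variable is chosen by the minimum-ratio test, the smallest xb(j)/y(j) over y(j) > 0 with y = inv(B)*a_entering. If every sigma_i <= 0 the current basic feasible solution is optimal, and if y <= 0 the objective is unbounded below.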

The running result is as follows:



For more optimization algorithm implementations, see the author's blog.
Original article: http://blog.csdn.net/tengweitw/article/details/43669185

Author: nineheadedbird
















