Python and MATLAB implementations of the steepest descent method and Newton's method


Algorithm source: "Numerical Optimization Methods", Gao Li

Algorithm objective: locally optimize a function by iterative search. Using a bivariate function as the example, demonstrate the workflow of the steepest descent method and of Newton's method.
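For orientation, the two update rules demonstrated below are the standard textbook forms (written here in generic notation, not necessarily the book's):

    x_{k+1} = x_k - \alpha_k \nabla f(x_k),   \alpha_k = \arg\min_{\alpha \ge 0} f(x_k - \alpha \nabla f(x_k))    (steepest descent, exact line search)
    x_{k+1} = x_k - [\nabla^2 f(x_k)]^{-1} \nabla f(x_k)    (Newton's method, fixed step length 1)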

Main Python modules: numpy, sympy, matplotlib

(1) Python implementation

(2) MATLAB implementation

(3) Comparison

(1) Python implementation

A. Steepest descent method

# -*- coding: utf-8 -*-
"""
Created on Sat Oct 15:01:54 2016
@author: zhangweiguo
"""
import sympy, numpy
import math
import matplotlib.pyplot as pl
from mpl_toolkits.mplot3d import Axes3D as ax3

# Steepest descent method, two-dimensional experiment
def SD(x0, G, b, c, N, E):
    f = lambda x: 0.5 * numpy.dot(numpy.dot(x.T, G), x) + numpy.dot(b.T, x) + c
    f_d = lambda x: numpy.dot(G, x) + b              # gradient
    X = x0; Y = []; Y_d = []
    n = 1
    ee = f_d(x0)
    e = (ee[0, 0]**2 + ee[1, 0]**2)**0.5             # gradient norm
    Y.append(f(x0)[0, 0]); Y_d.append(e)
    a = sympy.Symbol('a', real=True)                 # symbolic step length
    print '%d iteration: e=%s' % (n, e)
    while n < N and e > E:
        n = n + 1
        yy = f(x0 - a*f_d(x0))                       # f along the descent ray
        yy_d = sympy.diff(yy[0, 0], a, 1)
        a0 = sympy.solve(yy_d)[0]                    # exact line search: set d f/d a = 0
        x0 = x0 - a0*f_d(x0)
        X = numpy.c_[X, x0]
        Y.append(f(x0)[0, 0])
        ee = f_d(x0)
        e = math.pow(math.pow(ee[0, 0], 2) + math.pow(ee[1, 0], 2), 0.5)
        Y_d.append(e)
        print '%d iteration: e=%s' % (n, e)
    return X, Y, Y_d

if __name__ == '__main__':
    G = numpy.array([[21.0, 4.0], [4.0, 15.0]])
    b = numpy.array([[2.0], [3.0]])
    c = 10.0
    f = lambda x: 0.5 * numpy.dot(numpy.dot(x.T, G), x) + numpy.dot(b.T, x) + c
    f_d = lambda x: numpy.dot(G, x) + b
    x0 = numpy.array([[-30.0], [100.0]])
    N = 40; E = 10**(-6)
    X, Y, Y_d = SD(x0, G, b, c, N, E)
    X = numpy.array(X); Y = numpy.array(Y); Y_d = numpy.array(Y_d)
    figure1 = pl.figure('trend')
    n = len(Y)
    x = numpy.arange(1, n + 1)
    pl.subplot(2, 1, 1)
    pl.semilogy(x, Y, 'r*')
    pl.xlabel('n'); pl.ylabel('f(x)')
    pl.subplot(2, 1, 2)
    pl.semilogy(x, Y_d, 'g*')
    pl.xlabel('n'); pl.ylabel("|f'(x)|")
    figure2 = pl.figure('behave')
    x = numpy.arange(-110, 110, 1)
    y = x
    xx, yy = numpy.meshgrid(x, y)
    zz = numpy.zeros(xx.shape)
    n = xx.shape[0]
    for i in xrange(n):
        for j in xrange(n):
            xxx = numpy.array([[xx[i, j]], [yy[i, j]]])
            zz[i, j] = f(xxx)
    ax = ax3(figure2)
    ax.contour3D(xx, yy, zz)
    ax.plot3D(X[0, :], X[1, :], Y, 'ro--')
    pl.show()
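Because f here is quadratic, the exact line search actually has a closed form: setting d/da f(x - a*g) = 0 with g = Gx + b gives a = (g'g)/(g'Gg). A minimal sketch of the same iteration without the symbolic solve (same G, b and starting point as the listing; this is an aside, not part of the original program):

import numpy

G = numpy.array([[21.0, 4.0], [4.0, 15.0]])
b = numpy.array([[2.0], [3.0]])
x = numpy.array([[-30.0], [100.0]])
for k in range(40):
    g = numpy.dot(G, x) + b                                   # gradient at x
    e = numpy.linalg.norm(g)
    if e < 1e-6:
        break
    a = numpy.dot(g.T, g) / numpy.dot(g.T, numpy.dot(G, g))   # exact step length
    x = x - a * g
print x.ravel(), e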

Results:

1st iteration: e=1401
2nd iteration: e=285.423850209
3rd iteration: e=36.4480186834
4th iteration: e=7.42196747556
5th iteration: e=0.947769462918
6th iteration: e=0.192995789132
7th iteration: e=0.0246451518433
8th iteration: e=0.00501853110317
9th iteration: e=0.000640855749362
10th iteration: e=0.000130498466038
11th iteration: e=1.66643765921e-05
12th iteration: e=3.39339326369e-06
13th iteration: e=4.33329103766e-07
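As a hedged aside (not from the source): for steepest descent with exact line search on a quadratic, the classical bound says the objective error contracts by a factor of at most ((k-1)/(k+1))^2 per iteration, where k = cond(G). Here the eigenvalues of G are 13 and 23, so k is about 1.77 and rapid decay like the above is expected:

import numpy

G = numpy.array([[21.0, 4.0], [4.0, 15.0]])
lam = numpy.linalg.eigvalsh(G)          # ascending: [13., 23.]
kappa = lam[-1] / lam[0]                # condition number, about 1.77
rate = ((kappa - 1) / (kappa + 1))**2   # worst-case contraction, about 0.077
print lam, kappa, rate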




B. Newton's method

# -*- coding: utf-8 -*-
"""
Created on Sat Oct 15:01:54 2016
@author: zhangweiguo
"""
import numpy
import math
import matplotlib.pyplot as pl
from mpl_toolkits.mplot3d import Axes3D as ax3

# Newton's method search, two-dimensional experiment
def NE(x0, y0, N, E):
    X1 = []; X2 = []; Y = []; Y_d = []
    n = 1
    ee = g(x0, y0)
    e = (ee[0, 0]**2 + ee[1, 0]**2)**0.5             # gradient norm
    X1.append(x0); X2.append(y0)
    Y.append(f(x0, y0)); Y_d.append(e)
    print '%d iteration: e=%s' % (n, e)
    while n < N and e > E:
        n = n + 1
        d = -numpy.dot(numpy.linalg.inv(G(x0, y0)), g(x0, y0))    # Newton step
        #d = -numpy.dot(numpy.linalg.pinv(G(x0, y0)), g(x0, y0))  # fallback if G is singular
        x0 = x0 + d[0, 0]
        y0 = y0 + d[1, 0]
        ee = g(x0, y0)
        e = (ee[0, 0]**2 + ee[1, 0]**2)**0.5
        X1.append(x0); X2.append(y0)
        Y.append(f(x0, y0)); Y_d.append(e)
        print '%d iteration: e=%s' % (n, e)
    return X1, X2, Y, Y_d

if __name__ == '__main__':
    f = lambda x, y: 3*x**2 + 3*y**2 - x**2*y                     # objective function
    g = lambda x, y: numpy.array([[6*x - 2*x*y], [6*y - x**2]])   # gradient vector
    G = lambda x, y: numpy.array([[6 - 2*y, -2*x], [-2*x, 6]])    # Hessian matrix
    x0 = -2; y0 = 4
    N = 10; E = 10**(-6)
    X1, X2, Y, Y_d = NE(x0, y0, N, E)
    X1 = numpy.array(X1); X2 = numpy.array(X2)
    Y = numpy.array(Y); Y_d = numpy.array(Y_d)
    figure1 = pl.figure('trend')
    n = len(Y)
    x = numpy.arange(1, n + 1)
    pl.subplot(2, 1, 1)
    pl.semilogy(x, Y, 'r*')
    pl.xlabel('n'); pl.ylabel('f(x)')
    pl.subplot(2, 1, 2)
    pl.semilogy(x, Y_d, 'g*')
    pl.xlabel('n'); pl.ylabel("|f'(x)|")
    figure2 = pl.figure('behave')
    x = numpy.arange(-6, 6, 0.1)
    y = x
    xx, yy = numpy.meshgrid(x, y)
    zz = numpy.zeros(xx.shape)
    n = xx.shape[0]
    for i in xrange(n):
        for j in xrange(n):
            zz[i, j] = f(xx[i, j], yy[i, j])
    ax = ax3(figure2)
    ax.contour3D(xx, yy, zz, 15)
    ax.plot3D(X1, X2, Y, 'ro--')
    pl.show()
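Note that Newton's method converges to whichever stationary point is nearby, not necessarily a minimum. A quick sympy check (an aside, not in the original listing) of the stationary points of this f:

import sympy

x, y = sympy.symbols('x y', real=True)
f = 3*x**2 + 3*y**2 - x**2*y
grad = [sympy.diff(f, v) for v in (x, y)]
H = sympy.hessian(f, (x, y))
for pt in sympy.solve(grad, [x, y], dict=True):
    print pt, H.subs(pt).eigenvals()
# (0, 0) has Hessian diag(6, 6), so it is a local minimum with f = 0;
# (3*sqrt(2), 3) and (-3*sqrt(2), 3) have indefinite Hessians (saddle points).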


(2) MATLAB implementation

A. Steepest descent method

First create the function to be called; different coefficient matrices G and different initial points give noticeably different behavior.

It can then be invoked from the call script below.

function [X, Y, Y_d] = sd(x0)
%%% steepest descent method
%%% with exact line search
tic
G = [21 4; 4 15];
%G = [21 4; 4 1];
b = [2 3]';
c = 10;
% x = [x1; x2]
f = @(x) 0.5*(x'*G*x) + b'*x + c;   % objective function
f_d = @(x) G*x + b;                 % first-order derivative (gradient)
% termination conditions
N = 100; E = 10^(-6);

n = 1;
e = norm(abs(f_d(x0)));
X = x0; Y = f(x0); Y_d = e;
syms a real
while n < N && e > E
    n = n + 1;
    f1 = f(x0 - a*f_d(x0));
    a0 = solve(diff(f1, 'a', 1));   % exact line search along the gradient
    a0 = double(a0);
    x0 = x0 - a0*f_d(x0);
    X(:, n) = x0; Y(n) = f(x0);
    e = norm(abs(f_d(x0)));
    Y_d(n) = e;
end
toc
figure(1)
subplot(2, 1, 1)
semilogy(Y, 'r*')
title(['Optimal function value: ' num2str(Y(n))])
ylabel('f(x)')
xlabel('iteration number')
subplot(2, 1, 2)
semilogy(Y_d, 'g<')
ylabel('|f''(x)|')
xlabel('iteration number')
title(['Final gradient norm: ' num2str(Y_d(n))])
figure(2)
x1 = -110:1:110; y1 = x1;
[X1, Y1] = meshgrid(x1, y1);
nn = length(x1);
Z1 = zeros(nn, nn);
for i = 1:nn
    for j = 1:nn
        Z1(i, j) = f([X1(i, j); Y1(i, j)]);
    end
end
hold on
contour(X1, Y1, Z1)
plot(X(1, :), X(2, :), 'o', 'LineWidth', 1)
end

Call:

x0 is the initial point and can be changed:

x0 = [-30; 100];
[X, Y, Y_d] = sd(x0);
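Since G is positive definite, the quadratic has a unique minimizer solving G*x = -b, which the iterates should approach. A quick cross-check (Python here, mirroring the MATLAB coefficients):

import numpy

G = numpy.array([[21.0, 4.0], [4.0, 15.0]])
b = numpy.array([2.0, 3.0])
print numpy.linalg.solve(G, -b)   # about [-0.0602, -0.1839]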


Results:




B. Newton's method


function [X, Y, Y_d] = nt(x0)
%%% Newton's method with fixed step length 1
%%% set termination conditions
tic
N = 100; E = 10^(-6);
%%% f is the objective function, g the gradient, G the Hessian
f = @(x) 3*x(1).^2 + 3*x(2).^2 - x(1).^2*x(2);
g = @(x) [6*x(1) - 2*x(1)*x(2); 6*x(2) - x(1)^2];
G = @(x) [6 - 2*x(2), -2*x(1); -2*x(1), 6];
e = norm(g(x0));
X = x0; Y = f(x0); Y_d = e;
n = 1;
while n < N && e > E
    n = n + 1;
    d = -inv(G(x0))*g(x0);      % Newton step
    %d = -pinv(G(x0))*g(x0);    % fallback when G is singular
    x0 = x0 + d;
    X(:, n) = x0; Y(n) = f(x0);
    e = norm(g(x0));
    Y_d(n) = e;
end
toc
figure(1)
subplot(2, 1, 1)
semilogy(Y, 'r*')
title(['Optimal function value: ' num2str(Y(n))])
ylabel('f(x)')
xlabel('iteration number')
subplot(2, 1, 2)
semilogy(Y_d, 'g<')
ylabel('|f''(x)|')
xlabel('iteration number')
title(['Final gradient norm: ' num2str(Y_d(n))])
figure(2)
x1 = -6:0.05:6; y1 = x1;
[X1, Y1] = meshgrid(x1, y1);
nn = length(x1);
Z1 = zeros(nn, nn);
for i = 1:nn
    for j = 1:nn
        Z1(i, j) = f([X1(i, j); Y1(i, j)]);
    end
end
hold on
contour(X1, Y1, Z1, 20)
plot(X(1, :), X(2, :), 'go-', 'LineWidth', 1)
end

%x0 = [1.5; 1.5];
%x0 = [-2; 4];
x0 = [0; 3];
% At x0 = [0; 3] the Hessian G is singular. When G is ill-conditioned or
% singular, pinv can improve the result somewhat; in harder cases a
% regularization method should be used.
[X, Y, Y_d] = nt(x0);
disp('Optimal value found:')
disp(min(Y))
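A sketch of the regularization fallback mentioned in the comments (a Levenberg-style damping; the shift lam is an illustrative choice, not from the source):

import numpy

def damped_newton_step(H, g, lam=1e-3):
    # Solve (H + lam*I) d = -g instead of H d = -g, so a step exists
    # even when the Hessian H is singular, as it is at x0 = [0; 3].
    return numpy.linalg.solve(H + lam*numpy.eye(H.shape[0]), -g)

# Hessian and gradient of f(x,y) = 3x^2 + 3y^2 - x^2*y at (0, 3):
H0 = numpy.array([[0.0, 0.0], [0.0, 6.0]])   # [[6-2y, -2x], [-2x, 6]] is singular here
g0 = numpy.array([0.0, 18.0])                # (6x-2xy, 6y-x^2)
print damped_newton_step(H0, g0)             # roughly [0, -3]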


Results:



(3) Comparison

The same algorithm behaves somewhat differently in the two environments because their truncation and rounding parameters differ. MATLAB is easier to use; Python is more rigorous but noticeably less convenient for symbolic computation. Each has its weak points, and combining the two gives the best results.


