I. Introduction to the BFGS Algorithm

The BFGS algorithm is one of the most widely used quasi-Newton methods. It was proposed independently by Broyden, Fletcher, Goldfarb, and Shanno, which is why it is called the BFGS correction. Its derivation follows the same pattern as that of the DFP correction, which is described in the post "Optimization Algorithms: the DFP Algorithm of the Quasi-Newton Methods". Start from the quasi-Newton equation:

B_{k+1} (x_{k+1} - x_k) = g_{k+1} - g_k

With s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k, this simplifies to:

B_{k+1} s_k = y_k

Letting B_{k+1} = B_k + E_k, we obtain:

E_k s_k = y_k - B_k s_k

In the BFGS correction method, the update E_k is assumed to be a rank-two matrix of the form:

E_k = α u u^T + β v v^T
II. Derivation of the BFGS Correction Formula

Here u and v are vectors and α, β are real numbers. Substituting E_k = α u u^T + β v v^T into E_k s_k = y_k - B_k s_k, the quasi-Newton equation becomes:

α (u^T s_k) u + β (v^T s_k) v = y_k - B_k s_k

Since α u^T s_k and β v^T s_k are real numbers and u, v are vectors, this equation has many possible solutions. We take a special case and assume:

α u^T s_k = 1,  β v^T s_k = -1

so that:

u - v = y_k - B_k s_k

Letting u = y_k and v = B_k s_k, the two assumptions above give:

α = 1 / (y_k^T s_k),  β = -1 / (s_k^T B_k s_k)

Substituting back into B_{k+1} = B_k + α u u^T + β v v^T, the final BFGS correction formula is:

B_{k+1} = B_k - (B_k s_k s_k^T B_k) / (s_k^T B_k s_k) + (y_k y_k^T) / (y_k^T s_k)
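As a quick sanity check (not part of the original post), the following NumPy snippet verifies numerically that the update formula satisfies the quasi-Newton equation B_{k+1} s_k = y_k for a randomly generated symmetric positive definite B_k:

```python
import numpy as np

# Illustrative check: the BFGS update
#   B_{k+1} = B_k - (B_k s s^T B_k)/(s^T B_k s) + (y y^T)/(y^T s)
# satisfies the quasi-Newton equation B_{k+1} s = y.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
Bk = A @ A.T + n * np.eye(n)       # a symmetric positive definite B_k
s = rng.standard_normal((n, 1))    # step s_k
y = rng.standard_normal((n, 1))    # gradient difference y_k
if float(y.T @ s) <= 0:            # enforce the curvature condition y^T s > 0
    y = -y

Bk1 = (Bk
       - (Bk @ s @ s.T @ Bk) / float(s.T @ Bk @ s)
       + (y @ y.T) / float(y.T @ s))

print(np.allclose(Bk1 @ s, y))     # True: the quasi-Newton equation holds
```

Expanding Bk1 @ s by hand shows why: the first correction term contributes -B_k s_k and the second contributes +y_k, so the result is exactly y_k.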
III. Algorithm Flow of the BFGS Correction

A necessary and sufficient condition for the B_{k+1} produced by the BFGS correction formula above to remain symmetric positive definite is the curvature condition y_k^T s_k > 0. The post "Optimization Algorithm: Newton's Method" describes the inexact line search criterion used here, the Armijo criterion; the purpose of a line search criterion is to determine the step length, and other choices exist as well, such as the Wolfe criteria and exact line search. When the Armijo criterion is used, the condition y_k^T s_k > 0 is not guaranteed to hold, so the BFGS correction formula is modified slightly:

B_{k+1} = B_k - (B_k s_k s_k^T B_k) / (s_k^T B_k s_k) + (y_k y_k^T) / (y_k^T s_k),  if y_k^T s_k > 0
B_{k+1} = B_k,  otherwise
Flow of the BFGS quasi-Newton method:

1. Choose an initial point x_0 and a maximum number of iterations maxk; set B_0 = I and k = 0.
2. Compute the gradient g_k = ∇f(x_k).
3. Solve B_k d_k = -g_k for the search direction d_k.
4. Armijo line search: find the smallest non-negative integer m_k such that f(x_k + ρ^{m_k} d_k) ≤ f(x_k) + σ ρ^{m_k} g_k^T d_k, and set x_{k+1} = x_k + ρ^{m_k} d_k.
5. Compute s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k, and update B_k with the modified BFGS correction formula.
6. Set k = k + 1 and return to step 2 until k reaches maxk.
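The Armijo backtracking search in step 4 can be sketched on its own. This is a minimal illustration; the helper `armijo_step` and the quadratic test function below are not from the original post, and the default constants mirror the program in the next section:

```python
import numpy as np

def armijo_step(f, x, g, d, rho=0.55, sigma=0.4, max_m=20):
    """Return the smallest m with f(x + rho^m d) <= f(x) + sigma * rho^m * g^T d."""
    fx = f(x)
    gd = float(g.T @ d)            # directional derivative g^T d (negative for descent)
    for m in range(max_m):
        if f(x + rho ** m * d) <= fx + sigma * rho ** m * gd:
            return m
    return max_m - 1               # fall back if no m satisfied the criterion

# Usage on the simple quadratic f(x) = ||x||^2 with the steepest-descent direction:
f = lambda x: float(x.T @ x)
x = np.array([[1.0], [2.0]])
g = 2 * x                          # gradient of f
d = -g                             # descent direction
m = armijo_step(f, x, g, d)
print(m)                           # prints 1: the full step is rejected, 0.55*d is accepted
```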
IV. Solving a Specific Optimization Problem

Solve the unconstrained optimization problem:

min f(x) = 100 (x_1^2 - x_2)^2 + (x_1 - 1)^2

where x = (x_1, x_2)^T. Python implementation:
- function.py
#coding: UTF-8 "Created on May 19, 2015 @author:zhaozhiyong" from numpy Import * #fundef fun (x): return * (x[0,0] * * 2-x[1,0]) * * 2 + (x[0,0]-1) * * 2#gfundef gfun (x): result = Zeros ((2, 1)) result[0, 0] = x * x[0,0] * (x[0, 0] * * 2-x[1,0]) + 2 * (x[0,0]-1) result[1, 0] = -200 * (x[0,0] * * 2-x[1,0]) return result
- bfgs.py
```python
# coding: UTF-8
from numpy import *
from function import *

def bfgs(fun, gfun, x0):
    result = []
    maxk = 500       # maximum number of iterations
    rho = 0.55       # step-shrink factor of the Armijo search
    sigma = 0.4
    n = shape(x0)[0]
    Bk = eye(n)
    k = 0
    while k < maxk:
        gk = mat(gfun(x0))                # compute the gradient
        dk = mat(-linalg.solve(Bk, gk))   # search direction: Bk * dk = -gk
        m = 0
        mk = 0
        while m < 20:                     # Armijo line search
            newf = fun(x0 + rho ** m * dk)
            oldf = fun(x0)
            if newf < oldf + sigma * (rho ** m) * (gk.T * dk)[0, 0]:
                mk = m
                break
            m = m + 1
        # BFGS correction
        x = x0 + rho ** mk * dk
        sk = x - x0
        yk = gfun(x) - gk
        if yk.T * sk > 0:                 # curvature condition; otherwise keep Bk
            Bk = Bk - (Bk * sk * sk.T * Bk) / (sk.T * Bk * sk) + (yk * yk.T) / (yk.T * sk)
        k = k + 1
        x0 = x
        result.append(fun(x0))
    return result
```
- testbfgs.py
#coding: UTF-8 "Created on May 19, 2015 @author:zhaozhiyong" from BFGS import *import matplotlib.pyplot as plt x0 = ma t ([[[ -1.2], [1]]) result = BFGS (fun, Gfun, x0) n = len (result) ax = Plt.figure (). Add_subplot (111) x = arange (0, N, 1) y = result Ax.plot (x, y) plt.show ()
V. Experimental Results

(The original post shows a plot, produced by testbfgs.py, of the objective function value against the iteration number.)
Optimization algorithm--BFGS algorithm of quasi-Newton method