1. Summary of Powell's optimization method

The Powell method, also known as the direction acceleration method, was proposed by Powell in 1964. It is a search method that accelerates convergence by exploiting conjugate directions. It needs no derivatives of the objective function and can be applied even when those derivatives are discontinuous, which makes it a very effective direct search method. The Powell method can be used to solve general unconstrained optimization problems, and it gives quite satisfactory results for objective functions of dimension n < 20. Unlike other direct methods, the Powell method rests on a complete theoretical framework, so its computational efficiency is higher than that of other direct methods. The method uses one-dimensional searches instead of jumping probe steps. Note, however, that the search directions of the Powell method are not necessarily descent directions.

2. The concept of conjugate directions and the properties of conjugate vectors

2.1 Conjugate directions

Let A be an n-by-n real symmetric positive definite matrix. If two n-dimensional vectors s1 and s2 satisfy

    s1' A s2 = 0,

then s1 and s2 are said to be conjugate with respect to A, and the directions of conjugate vectors are called conjugate directions.

2.2 Properties of conjugate vectors

Let A be an n-by-n symmetric positive definite matrix, and let s(i) (i = 1, 2, ..., n) be n nonzero vectors that are mutually conjugate with respect to A. For the minimization of a positive definite quadratic function f(x), starting from any initial point and searching successively along the directions s(i), the iteration converges to the minimum x* = x(n) within n one-dimensional searches. In other words, the minimum of an n-variable positive definite quadratic function can be reached by n one-dimensional searches along its n conjugate directions.

3. Basic idea of the original Powell method and its drawbacks

3.1 Basic idea and calculation steps of the original Powell method

Calculation steps:

STEP 1: Select the initial data. Choose an initial point x(0) and n linearly independent search directions d(0), d(1), ..., d(n-1); set the allowable error err > 0 and let k = 0.

STEP 2: Perform the basic search. Let y(0) = x(k) and carry out one-dimensional searches along d(0), d(1), ..., d(n-1) in turn: for every j = 1, 2, ..., n, find the step λ(j-1) such that

    f(y(j-1) + λ(j-1) d(j-1)) = min over λ of f(y(j-1) + λ d(j-1)),

and set y(j) = y(j-1) + λ(j-1) d(j-1).

STEP 3: Perform the accelerated search. Take the acceleration direction d(n) = y(n) - y(0). If ||d(n)|| < err, terminate the iteration and take y(n) as the approximate optimal solution of the problem. Otherwise, starting from y(n), perform a one-dimensional search along d(n) to obtain λ(n) such that

    f(y(n) + λ(n) d(n)) = min over λ of f(y(n) + λ d(n)),

set x(k+1) = y(n) + λ(n) d(n), and go to STEP 4.

STEP 4: Adjust the search directions. From the original n directions d(0), d(1), ..., d(n-1), remove d(0) and append d(n), forming the new direction group; let k = k + 1 and return to STEP 2.
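To make STEP 1 through STEP 4 concrete, here is a minimal Python sketch of the original Powell iteration. It is a sketch only: the names powell_original and line_min are mine, and scipy's minimize_scalar stands in for the one-dimensional search.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def line_min(f, x, d):
    """One-dimensional search: minimize f(x + lam*d) over lam, return the new point."""
    lam = minimize_scalar(lambda a: f(x + a * d)).x
    return x + lam * d

def powell_original(f, x0, err=1e-6, max_iter=100):
    n = len(x0)
    dirs = [np.eye(n)[i] for i in range(n)]      # STEP 1: start with coordinate directions
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = x.copy()
        for d in dirs:                           # STEP 2: basic search along each direction
            y = line_min(f, y, d)
        dn = y - x                               # STEP 3: acceleration direction d(n)
        if np.linalg.norm(dn) < err:
            return y                             # approximate optimal solution
        x = line_min(f, y, dn)                   # one-dimensional search along d(n)
        dirs = dirs[1:] + [dn]                   # STEP 4: drop d(0), append d(n)
    return x
```

Applied to the quadratic used later in section 4.3, f(x) = x1^2 + 2 x2^2 - 4 x1 - 2 x1 x2 with x0 = [1, 1]', this sketch converges to approximately [4, 2]'.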
3.2 Iterative format of the Powell algorithm

The original Powell method performs one-dimensional searches along gradually generated conjugate directions. Take a two-dimensional quadratic objective function as an example.

As shown in the figure, select the initial point x0(1) and the initial directions s1(1) = e1 = [1, 0]' and s2(1) = e2 = [0, 1]'.

First round of cycles: initial point x0(1) --> search along (e1, e2) --> end point x2(1) --> generate the new direction s(1) = x2(1) - x0(1).

Second round of cycles: initial point x0(2) --> search along (e2, s(1)) --> end point x2(2) --> generate the new direction s(2) = x2(2) - x0(2).

Notice that x0(2) and x2(2) are both minima obtained by one-dimensional searches along the s(1) direction. From this, conjugacy follows: the vector s(2) connecting x0(2) and x2(2) is conjugate to s(1) with respect to the Hessian H. In theory, a two-dimensional positive definite quadratic function is therefore minimized by one-dimensional searches along this pair of conjugate directions, the iterates reaching the minimum point x* of the function. This construction generalizes to an n-dimensional positive definite quadratic function: successive one-dimensional searches along the n conjugate directions s(1), s(2), ..., s(n) reach the minimum point.
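The underlying fact, that two line minima along the same direction define a conjugate direction, is easy to verify numerically. A minimal sketch, assuming an illustrative positive definite quadratic f(x) = (1/2) x'Hx + b'x (the particular H, b, and starting points below are my own choices):

```python
import numpy as np

# Illustrative positive definite quadratic: f(x) = 0.5*x'Hx + b'x
H = np.array([[2.0, -2.0], [-2.0, 4.0]])
b = np.array([-4.0, 0.0])

def line_min(x, d):
    """Exact line minimum of the quadratic along d: lam = -(Hx + b)'d / (d'Hd)."""
    lam = -float((H @ x + b) @ d) / float(d @ H @ d)
    return x + lam * d

s1 = np.array([1.0, 0.0])                    # a common search direction
xa = line_min(np.array([0.0, 0.0]), s1)      # line minimum from one starting point
xb = line_min(np.array([3.0, -1.0]), s1)     # line minimum from another starting point
s2 = xb - xa                                 # vector connecting the two minima
print(s1 @ H @ s2)                           # ~0: s1 and s2 are conjugate w.r.t. H
```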
3.3 Characteristics of the original Powell algorithm

1. The original Powell algorithm is a conjugate direction method, but it only needs to evaluate the objective function; no derivative values are required. It is therefore more practical than conjugate direction methods that need gradients (such as the conjugate gradient method).

2. The original Powell algorithm can be used to solve general unconstrained optimization problems.

3. However, the original Powell algorithm requires the n search directions of each iteration to remain linearly independent; otherwise the optimal solution of the problem may never be reached.

3.4 Defect of the Powell method

When the vectors in a cycle's direction group become linearly dependent (degenerate, ill-conditioned), the search proceeds in a lower-dimensional subspace, and the computation fails to converge to the true minimum.
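The degeneration is easy to see concretely: if a cycle makes no progress along some direction (its step length is zero), the new composite direction y(n) - y(0) lies in the span of the remaining directions, and the updated direction group loses rank. A small made-up illustration:

```python
import numpy as np

# Direction group of one cycle in R^3: e1, e2, e3
dirs = [np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([0.0, 0.0, 1.0])]

# Suppose the step along e1 happens to be zero in this cycle
steps = [0.0, 0.7, -1.3]
d_new = sum(lam * d for lam, d in zip(steps, dirs))      # y(n) - y(0)

# Original Powell update: drop the first direction, append the new one
new_dirs = dirs[1:] + [d_new]
print(np.linalg.matrix_rank(np.column_stack(new_dirs)))  # 2, not 3: degenerate
```

From then on, every search stays in a two-dimensional subspace and the minimum can be missed.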
To avoid this degeneration, Powell proposed a corresponding corrected algorithm.

4. Improved Powell algorithm

4.1 Algorithm principle
The main differences between the modified and the original Powell algorithm are:

1. When forming the direction group of cycle k+1, the first direction s1(k) of the previous cycle is not automatically eliminated. Instead, several function values are computed, and the direction group is updated only when a replacement condition is satisfied.

2. The direction m along which the function value dropped the most in the previous cycle, and the corresponding largest descent Δm, are determined:

    Δm = max{Δj : j = 1, 2, ..., n},  where Δj = f(y(j-1)) - f(y(j)).

Let f1 = f(y(0)) be the function value at the starting point of the cycle, f2 = f(y(n)) the value at its end point, and f3 = f(2y(n) - y(0)) the value at the reflection (mapping) point. It can be proved that if the two conditions

    f3 < f1

and

    (f1 - 2 f2 + f3)(f1 - f2 - Δm)^2 < (1/2) Δm (f1 - f3)^2

both hold, then the new direction s(k) = y(n) - y(0) is linearly independent of the remaining directions and can be used to replace the direction m of largest descent. Otherwise, the (k+1)-th round of searches is still performed with the original direction group.
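As a quick reference, this replacement test can be packaged as a tiny helper (a sketch; the name powell_condition is mine):

```python
def powell_condition(f1, f2, f3, dm):
    """Replacement test of the modified Powell method.

    f1, f2: function values at the start and end of the cycle;
    f3: value at the reflection point 2*y(n) - y(0);
    dm: largest single-direction descent in the cycle.
    Returns True when the new direction should replace direction m.
    """
    return f3 < f1 and (f1 - 2*f2 + f3) * (f1 - f2 - dm)**2 < 0.5 * dm * (f1 - f3)**2
```

With the first-round numbers from the example in section 4.3 (f1 = -3, f2 = -7.5, f3 = -7, Δm = 4), it returns True, matching the hand computation below.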
4.2 Calculation steps of the improved Powell algorithm

STEP 1: Select the initial data. Choose an initial point x(0) and n linearly independent initial search directions d(0), d(1), ..., d(n-1); set the allowable error err > 0 and let k = 0.

STEP 2: Perform the basic search. Let y(0) = x(k) and carry out one-dimensional searches along d(0), d(1), ..., d(n-1) in turn: for every j = 1, 2, ..., n, find λ(j-1) such that f(y(j-1) + λ(j-1) d(j-1)) = min over λ of f(y(j-1) + λ d(j-1)), and set y(j) = y(j-1) + λ(j-1) d(j-1).

STEP 3: Check the termination condition. Take the acceleration direction d(n) = y(n) - y(0). If ||d(n)|| < err, the iteration terminates and y(n) is the optimal solution of the problem; otherwise go to STEP 4.

STEP 4: Determine the search directions. Determine m and Δm by the formula above and verify the two replacement conditions. If both hold, go to STEP 5; otherwise go to STEP 6.

STEP 5: Adjust the search directions. Starting from y(n), perform a one-dimensional search along d(n) to obtain λ(n) such that f(y(n) + λ(n) d(n)) = min over λ of f(y(n) + λ d(n)), and set x(k+1) = y(n) + λ(n) d(n). Then remove direction m and shift the rest, d(j) = d(j+1) for j = m, m+1, ..., n-1, so that d(n) joins the group as the last direction. Let k = k + 1 and return to STEP 2.

STEP 6: Do not adjust the search directions; set x(k+1) = y(n), let k = k + 1, and return to STEP 2.

4.3 Typical example

Find the optimal solution of

    f(x) = x1^2 + 2 x2^2 - 4 x1 - 2 x1 x2

by the modified Powell algorithm, with initial point x0 = [1, 1]' and convergence accuracy err = 0.001.
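Before the hand computation, here is a compact Python sketch of the improved algorithm applied to this example. The structure follows STEP 1 to STEP 6; scipy's minimize_scalar stands in for the exact one-dimensional search, and all names are my own:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    """Objective of section 4.3."""
    return x[0]**2 + 2*x[1]**2 - 4*x[0] - 2*x[0]*x[1]

def line_min(x, d):
    """One-dimensional search along d (Brent's method via scipy)."""
    lam = minimize_scalar(lambda a: f(x + a * d)).x
    return x + lam * d

def powell_modified(x0, err=1e-3, max_iter=50):
    n = len(x0)
    dirs = [np.eye(n)[i] for i in range(n)]        # STEP 1: coordinate directions
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = [x]
        for d in dirs:                             # STEP 2: basic search
            y.append(line_min(y[-1], d))
        dn = y[-1] - y[0]                          # acceleration direction d(n)
        if np.linalg.norm(dn) < err:               # STEP 3: termination check
            return y[-1]
        drops = [f(y[j]) - f(y[j + 1]) for j in range(n)]
        m, dm = int(np.argmax(drops)), max(drops)  # STEP 4: largest descent
        f1, f2, f3 = f(y[0]), f(y[-1]), f(2*y[-1] - y[0])
        if f3 < f1 and (f1 - 2*f2 + f3)*(f1 - f2 - dm)**2 < 0.5*dm*(f1 - f3)**2:
            x = line_min(y[-1], dn)                # STEP 5: replace direction m
            dirs = dirs[:m] + dirs[m + 1:] + [dn]
        else:
            x = y[-1]                              # STEP 6: keep the directions
    return x

print(powell_modified([1.0, 1.0]))                 # -> approximately [4. 2.]
```

The run converges to the optimum x* = [4, 2]' with f(x*) = -8, the same solution reached by the hand computation below.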
Solution: x0(1) = x0 = [1, 1]'; f1 = f(x0(1)) = -3.
First round of cycles.

One-dimensional search along the axis direction s1 = [1, 0]':

    x1(1) = x0(1) + a1(1) s1(1) = [1, 1]' + a1(1) [1, 0]' = [1 + a1(1), 1]'.

Substituting x1(1) into f and setting the derivative to zero gives a1(1) = 2, at which the minimum along this line is attained. Thus x1(1) = [3, 1]'; f(x1(1)) = -7.

One-dimensional search along the axis direction s2 = [0, 1]':

    x2(1) = x1(1) + a2(1) s2(1) = [3, 1]' + [0, a2(1)]' = [3, 1 + a2(1)]'.

Substituting x2(1) into f and setting the derivative to zero gives a2(1) = 1/2. Thus x2(1) = [3, 1.5]'; f(x2(1)) = -7.5.

Check the termination condition: ||x2(1) - x0(1)|| = 2.06 > err.

Compute the function descent along each direction: Δ1 = f(x0(1)) - f(x1(1)) = 4; Δ2 = f(x1(1)) - f(x2(1)) = 0.5; Δm = max{4, 0.5} = 4, attained along direction m = 1.

Reflection (mapping) point: x(1) = 2 x2(1) - x0(1) = 2 [3, 1.5]' - [1, 1]' = [5, 2]'.
Condition check: f3 = f(x(1)) = -7 < f1, and

    (f1 - 2 f2 + f3)(f1 - f2 - Δm)^2 = 1.25 < (1/2) Δm (f1 - f3)^2 = 32.

Both replacement conditions are satisfied.
So a new search direction is obtained: s(1) = x2(1) - x0(1) = [3, 1.5]' - [1, 1]' = [2, 0.5]'.
One-dimensional search along the s(1) direction:

    x3(1) = x2(1) + a3(1) s(1) = [3, 1.5]' + a3(1) [2, 0.5]' = [3 + 2 a3(1), 1.5 + 0.5 a3(1)]'.

The minimum is attained at a3(1) = 0.4, so x3(1) = [19/5, 17/10]'; f(x3(1)) = -7.9.
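Each of these step lengths can be double-checked with the closed-form exact line-search step for a quadratic, a = -grad f(x)'d / (d'Hd), where H is the Hessian of f. A quick check of a3(1) (numpy only; H and the gradient are written out for this particular f):

```python
import numpy as np

H = np.array([[2.0, -2.0], [-2.0, 4.0]])             # Hessian of f
grad = lambda x: np.array([2*x[0] - 4 - 2*x[1], 4*x[1] - 2*x[0]])

x2, s1 = np.array([3.0, 1.5]), np.array([2.0, 0.5])
a3 = -grad(x2) @ s1 / (s1 @ H @ s1)                  # exact step along s(1)
print(a3)                                            # 0.4, as found above
```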
Second round of cycles. Take x0(2) = x3(1) = [19/5, 17/10]' as the new starting point (f(x0(2)) = -7.9) and search along the direction group (e2, s(1)).

One-dimensional search along e2 = [0, 1]': x1(2) = [19/5, 19/10]'; f(x1(2)) = -7.98.

One-dimensional search along s(1) = [2, 0.5]': x2(2) = [99/25, 97/50]'; f(x2(2)) = -7.996.

Check: ||x2(2) - x0(2)|| = 0.288 > err, so the iteration continues. Function descents: Δ1 = 0.08; Δ2 = 0.016; Δm = 0.08, attained along direction m = 1.
Reflection (mapping) point: x(2) = 2 x2(2) - x0(2) = [103/25, 109/50]'; f3 = f(x(2)) = -7.964. The replacement conditions are not satisfied here, so the original direction group (e2, s(1)) is retained for the next round.
The acceleration direction of this round is s(2) = x2(2) - x0(2) = [99/25, 97/50]' - [19/5, 17/10]' = [4/25, 12/50]' (= [0.16, 0.24]').
To continue the iteration, take x0(3) = x2(2) and perform the third round of one-dimensional searches along (e2, s(1)):

    x1(3) = [99/25, 99/50]'; f(x1(3)) = -7.9992;
    x2(3) = [3.992, 1.988]'; f(x2(3)) = -7.99984.

Check: ||x2(3) - x0(3)|| = 0.0577 > err. Reflection point: x(3) = 2 x2(3) - x0(3) = [4.024, 2.036]'; f3 = f(x(3)) = -7.99856. Since f3 is larger than the function values already found in this round, the replacement conditions again fail.
In fact, since s(1) and s(2) are conjugate directions and the objective function is quadratic, a single one-dimensional search along s(2) starting from x2(2) reaches x* = [4, 2]' with f(x*) = -8, which is the optimal solution of the objective function.
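This closing claim is easy to verify numerically: s(1)' H s(2) = 0 for the Hessian H of f, and the exact line-search step from x2(2) along s(2) lands exactly on [4, 2]'. A short check (numpy only; the exact-step formula is the same one used above):

```python
import numpy as np

H = np.array([[2.0, -2.0], [-2.0, 4.0]])         # Hessian of f
grad = lambda x: np.array([2*x[0] - 4 - 2*x[1], 4*x[1] - 2*x[0]])

s1, s2 = np.array([2.0, 0.5]), np.array([0.16, 0.24])
print(s1 @ H @ s2)                               # 0.0: s(1) and s(2) are H-conjugate

x22 = np.array([3.96, 1.94])                     # x2(2) from the second round
a = -grad(x22) @ s2 / (s2 @ H @ s2)              # exact step along s(2): 0.25
print(x22 + a * s2)                              # [4. 2.]: the optimal solution
```

This is precisely the n-step convergence property of conjugate directions from section 2.2 at work.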