An overview of particle swarm optimization

Particle swarm optimization (PSO) is a swarm intelligence algorithm designed by simulating the foraging behavior of a flock of birds. Suppose there is only one piece of food in an area (the optimal solution of the optimization problem); the task of the flock is to find that food source. Throughout the search, the birds pass information to each other so that every bird knows the others' positions. Through this collaboration they determine whether the optimal solution has been found and pass that information on to the whole flock, so that eventually the entire flock gathers around the food source. That is, the optimal solution is found and the problem converges.
Flow of the particle swarm optimization algorithm

PSO models each bird in the flock as a massless particle with only two properties: velocity and position. Velocity represents how fast the particle moves, and position represents where it currently is in the search space. Each particle searches the space independently for the optimal solution and records the best one it has found as its current individual best (personal best). The individual bests are shared across the whole swarm, and the best of them becomes the current global best of the entire particle swarm. All particles then adjust their velocities and positions according to their own individual best and the swarm's current global best. The idea of particle swarm optimization is quite simple, and the algorithm is mainly divided into five steps: 1. initialize the particle swarm; 2. evaluate each particle, i.e. compute its fitness value; 3. update the individual bests; 4. update the global best; 5. update the velocity and position of each particle. Here is the flowchart of the program:
(PSO process)
Below we explain each step of the flowchart in detail:
1. Initialization

First, we set the maximum velocity range to prevent particles from exceeding the allowed interval. The position range is the entire search space. We then randomly initialize the velocities within the velocity range and the positions within the search space, and set the swarm size.
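As an illustration, the initialization step can be sketched in Python with NumPy. The particle count, dimension, and bounds below are illustrative assumptions, not values from the original program (the bounds do match the Griewank search space used later):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative settings: 5 particles in 3 dimensions (assumed, not from the text).
n_particles, dim = 5, 3
x_min, x_max = -600.0, 600.0   # search-space bounds (Griewank uses [-600, 600])
v_max = 1200.0                 # cap on velocity magnitude

# Random positions inside the search space, random velocities inside [-v_max, v_max].
position = rng.uniform(x_min, x_max, size=(n_particles, dim))
velocity = rng.uniform(-v_max, v_max, size=(n_particles, dim))
```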
2. Individual best and global best

The individual best is the best position each particle has found in its own history. A candidate global best is found among these individual historical bests, compared with the historical global best, and the better of the two is kept as the current global best.
3. Velocity and position update

The update formulas are:

    v_id = w * v_id + c1 * r1 * (pbest_id - x_id) + c2 * r2 * (gbest_d - x_id)
    x_id = x_id + v_id

where w is the inertia weight, c1 and c2 are acceleration constants (generally taken as c1 = c2 = 2), r1 and r2 are random numbers drawn from the range [0, 1], pbest_id denotes the d-th dimension of the individual best of particle i, and gbest_d denotes the d-th dimension of the global best.
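A single application of the velocity and position update v_id = w*v_id + c1*r1*(pbest_id - x_id) + c2*r2*(gbest_d - x_id), x_id = x_id + v_id can be traced in Python. The particle state below is hypothetical, and r1 and r2 are fixed at 0.5 for reproducibility (normally they are drawn uniformly from [0, 1]):

```python
import numpy as np

w, c1, c2 = 1.0, 2.0, 2.0        # inertia weight and acceleration constants, as in the text

# Hypothetical 2-D state for one particle.
x = np.array([1.0, 2.0])         # current position
v = np.array([0.1, -0.2])        # current velocity
pbest = np.array([0.5, 1.5])     # this particle's best position so far
gbest = np.array([0.0, 0.0])     # the swarm's best position so far

r1, r2 = 0.5, 0.5                # fixed here for reproducibility

v_new = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
x_new = x + v_new
```

With these numbers the new velocity is (-1.4, -2.7) and the new position is (-0.4, -0.7): both terms pull the particle toward the personal and global bests near the origin.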
4. Termination conditions

Two termination conditions can be used: one is reaching a maximum number of iterations; the other is stopping once the deviation between two adjacent generations falls within a specified range. We chose the first in the experiment.
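Putting the five steps together, a minimal self-contained PSO can be sketched in Python with NumPy. The test function (Sphere), swarm parameters, and the inertia weight w = 0.7 and c1 = c2 = 1.5 are illustrative choices made here for quick convergence; they differ from the w = 1, c1 = c2 = 2 used in the MATLAB experiment below:

```python
import numpy as np

def sphere(x):
    """Sphere test function: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return float(np.sum(x * x))

def pso(f, dim=2, n_particles=20, t_max=200, w=0.7, c1=1.5, c2=1.5,
        x_bound=5.0, v_max=1.0, seed=0):
    """Minimal PSO following the five steps above (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    # 1. Initialize positions and velocities within their bounds.
    x = rng.uniform(-x_bound, x_bound, (n_particles, dim))
    v = rng.uniform(-v_max, v_max, (n_particles, dim))
    # 2-4. Evaluate fitness, record personal bests and the global best.
    fz = np.array([f(p) for p in x])
    pbest, pbest_f = x.copy(), fz.copy()
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(t_max):               # terminate after a fixed number of iterations
        r1 = rng.random((n_particles, 1))
        r2 = rng.random((n_particles, 1))
        # 5. Update velocities and positions, clamping both to their bounds.
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        v = np.clip(v, -v_max, v_max)
        x = np.clip(x + v, -x_bound, x_bound)
        fz = np.array([f(p) for p in x])
        improved = fz < pbest_f          # particles that beat their personal best
        pbest[improved], pbest_f[improved] = x[improved], fz[improved]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f

best, best_f = pso(sphere)
```

With these settings, `pso(sphere)` returns a point close to the origin with a fitness near zero.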
The experiment

The test function we selected is Griewank. Its basic form is:

    f(x) = sum_{i=1}^{n} x_i^2 / 4000 - prod_{i=1}^{n} cos(x_i / sqrt(i)) + 1

Its global minimum value is 0, attained at the origin.
The image is:
(Griewank function image)

In the experiment we selected dimension 20. The MATLAB code is as follows.

Main program:
c1 = 2;                            % learning factor (cognitive)
c2 = 2;                            % learning factor (social)
dimension = 20;
size = 30;                         % swarm size (note: shadows MATLAB's built-in size function)
tmax = 500;                        % maximum number of iterations
velocity_max = 1200;               % maximum particle velocity
f_n = 2;                           % test function index
fun_ub = 600;                      % upper bound of the function's arguments
fun_lb = -600;                     % lower bound of the function's arguments
position = zeros(dimension, size); % particle positions
velocity = zeros(dimension, size); % particle velocities
vmax(1:dimension) = velocity_max;  % particle velocity upper bound
vmin(1:dimension) = -velocity_max; % particle velocity lower bound
xmax(1:dimension) = fun_ub;        % particle position upper bound
xmin(1:dimension) = fun_lb;        % particle position lower bound

[position, velocity] = initial_position_velocity(dimension, size, xmax, xmin, vmax, vmin);
pbest_position = position;            % historical best position of each particle; initially the starting position
gbest_position = zeros(dimension, 1); % global best position

for j = 1:size
    pos = position(:, j);                          % position of the j-th particle (column j)
    fz(j) = fitness_function(pos, f_n, dimension); % fitness of the j-th particle
end
[gbest_fitness, i] = min(fz);    % smallest fitness among all particles, and its index
gbest_position = position(:, i); % position of the particle with the smallest fitness (column i)

for itrtn = 1:tmax
    time(itrtn) = itrtn;
    weight = 1;                  % inertia weight
    r1 = rand(1);
    r2 = rand(1);
    for i = 1:size
        velocity(:, i) = weight*velocity(:, i) + c1*r1*(pbest_position(:, i) - position(:, i)) + c2*r2*(gbest_position - position(:, i));
    end
    % limit velocities to the allowed range
    for i = 1:size
        for row = 1:dimension
            if velocity(row, i) > vmax(row)
                velocity(row, i) = vmax(row);
            elseif velocity(row, i) < vmin(row)
                velocity(row, i) = vmin(row);
            end
        end
    end
    position = position + velocity;
    % limit positions to the search space
    for i = 1:size
        for row = 1:dimension
            if position(row, i) > xmax(row)
                position(row, i) = xmax(row);
            elseif position(row, i) < xmin(row)
                position(row, i) = xmin(row);
            end
        end
    end
    for j = 1:size
        p_position = position(:, j);   % position of particle j
        fitness_p(j) = fitness_function(p_position, f_n, dimension);
        if fitness_p(j) < fz(j)        % better than the fitness before the move: update the personal best
            pbest_position(:, j) = position(:, j);
            fz(j) = fitness_p(j);
        end
        if fitness_p(j) < gbest_fitness
            gbest_fitness = fitness_p(j);
        end
    end
    [gbest_fitness_new, i] = min(fz);        % best fitness among all particles after the update, and its index
    best_fitness(itrtn) = gbest_fitness_new; % record the best fitness of each generation
    gbest_position = pbest_position(:, i);   % personal-best position corresponding to that best fitness
end
plot(time, best_fitness);
xlabel('Number of iterations');
ylabel('Fitness value p_g');
Initialization:
function [position, velocity] = initial_position_velocity(dimension, size, xmax, xmin, vmax, vmin)
for i = 1:dimension
    position(i,:) = xmin(i) + (xmax(i) - xmin(i))*rand(1, size); % random positions within bounds; rand(1,size) produces a row of size random numbers
    velocity(i,:) = vmin(i) + (vmax(i) - vmin(i))*rand(1, size); % random velocities within bounds
end
end
Fitness calculation:
function fitness = fitness_function(pos, f_n, dimension)
switch f_n
    case 1   % Sphere
        func_sphere = pos(:)'*pos(:);
        fitness = func_sphere;
    case 2   % Griewank
        res1 = pos(:)'*pos(:)/4000;
        res2 = 1;
        for row = 1:dimension
            res2 = res2*cos(pos(row)/sqrt(row));
        end
        func_griewank = res1 - res2 + 1;
        fitness = func_griewank;
end
end
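As a sanity check, the Griewank branch of fitness_function can be mirrored in Python (a hypothetical helper written for this article, not part of the MATLAB code):

```python
import math

def griewank(pos):
    """Griewank function, mirroring case 2 of the MATLAB fitness_function above:
    f(x) = sum(x_i^2)/4000 - prod(cos(x_i/sqrt(i))) + 1,
    with global minimum 0 at the origin.
    """
    res1 = sum(x*x for x in pos) / 4000.0
    res2 = 1.0
    for i, x in enumerate(pos, start=1):   # i matches MATLAB's 1-based row index
        res2 *= math.cos(x / math.sqrt(i))
    return res1 - res2 + 1.0
```

At the origin the function evaluates to 0, and it is positive everywhere else, which is what the convergence curve should approach.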
The final convergence curve:
(Convergence curve)