Particle Swarm Algorithm (1) ---- Introduction to the Particle Swarm Algorithm


First, a history of particle swarm optimization

Particle swarm optimization grew out of the theory of complex adaptive systems (Complex Adaptive System, CAS). CAS theory was formally introduced in 1994, and the members of a CAS are called agents. Take a bird flock as an example: each bird in the flock is an agent. An agent is adaptive: it can interact with the environment and with other agents, and it changes its own structure and behavior according to what it "learns" or the "experience it accumulates" during those interactions. The development or evolution of the whole system includes the emergence of new levels (the birth of a bird), the appearance of differentiation and diversity (the flock splitting into many small groups), and the emergence of new themes (the birds constantly discovering new food in the course of searching for food).

Accordingly, an agent in a CAS has four basic characteristics (these characteristics are the foundation on which the particle swarm algorithm was developed):

First, the agent is active and proactive.

The agent interacts with the environment and with other agents, and this mutual influence is the main driving force behind the development and change of the system.

The influence of the environment is macroscopic, the influence between agents is microscopic, and the macroscopic and the microscopic are organically combined.

Finally, the entire system may be affected by a number of random factors.

Particle swarm optimization (PSO) arose from the study of one particular CAS: the social system of bird flocks.

Particle swarm optimization (Particle Swarm Optimization, PSO) was first proposed by Eberhart and Kennedy in 1995, and its basic idea stems from the study of the foraging behavior of bird flocks. Imagine a scene in which a flock of birds searches randomly for food. There is only one piece of food in the area; none of the birds knows where the food is, but each bird knows how far away it is. What, then, is the best strategy for finding the food? The simplest and most effective one is to search the area around the bird that is currently closest to the food.

The PSO algorithm is inspired by this behavior of biological populations and is used to solve optimization problems. In PSO, each potential solution of the optimization problem can be imagined as a point in a D-dimensional search space, which we call a "particle". All particles have a fitness value determined by the objective function, and each particle has a velocity that determines the direction and distance of its flight; the particles then follow the current optimal particle and search the solution space. Reynolds's study of bird flight found that each bird tracks only a limited number of its neighbors, yet the overall result is that the whole flock appears to be under the control of a single center: the complex global behavior arises from the interaction of simple rules.

Second, a concrete description of the particle swarm algorithm

The above is rather long-winded and written in the tone of a research paper, but that is the history of PSO. What follows is a more informal explanation of the PSO algorithm.

The PSO algorithm simulates a flock of birds looking for food. Each bird is a particle in PSO, that is, a possible solution to the problem we need to solve. While searching for food, these birds constantly change their position and velocity in flight. We can also observe that the birds start out scattered, gradually converge into a group, and that this group darts up and down, left and right, until at last the food is found. We translate this process into a mathematical problem: find the maximum of the function y = 1 - cos(3*x)*exp(-x) on [0,4]. The graph of the function is as follows:

The maximum value y ≈ 1.3706 is reached at about x = 0.9350 to 0.9450. To find this maximum, we randomly scatter some points in [0,4]; for the demonstration we place two points, compute the function value at each of them, and also give each point a velocity within [0,4]. The points then change their positions according to a certain formula; after reaching their new positions, the function values are computed again, and the positions are updated once more according to the same formula, until in the end the points stop updating at the place where y = 1.3706. The correspondence between this process and the particle swarm algorithm is as follows:

These two points are particles in the particle swarm algorithm.

The maximum value of the function is the food in the flock.

Computing the function value at the two points corresponds to the fitness value in particle swarm optimization, and the function being evaluated is the fitness function.

The formula by which the two points update their positions is the position-velocity update formula of the particle swarm algorithm.

The following figures (not reproduced here) show the approximate process of running the algorithm: the initial placement of the points, the positions after the first update, after the second update, after the 21st update, and the final result after 30 iterations.

Finally, all points are centered at the maximum value.
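As a small illustration of this setup, here is a minimal MATLAB sketch of the initialization step described above (the handle name AdaptFunc and the use of exactly two points are just demonstration choices):

    % fitness function of the toy problem: y = 1 - cos(3*x)*exp(-x) on [0,4]
    AdaptFunc = @(x) 1 - cos(3*x).*exp(-x);

    x = 4*rand(2,1);      % two randomly scattered points ("particles") in [0,4]
    v = 4*rand(2,1);      % a velocity for each point, also drawn from [0,4]
    f = AdaptFunc(x);     % fitness value of each point

The formula that actually moves these points is introduced in the next section.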

That is the rough idea of particle swarm optimization. The next section describes the standard particle swarm algorithm.

Particle Swarm Optimization (2) ---- The Standard Particle Swarm Algorithm

In the previous section, the one thing that was not explained is how these random points (particles) actually move; we only said that they are updated according to a certain formula. That formula is the position-velocity update formula of particle swarm optimization, and this section describes it. In the previous section we looked for the maximum of y = 1 - cos(3*x)*exp(-x) on [0,4] and placed two random points in [0,4]; suppose their coordinates are x1 = 1.5 and x2 = 2.5. Here each point is a scalar, but the problems we meet are often more general, with x a vector, for example the two-dimensional case z = 2*x1 + 3*x2^2. In that case each particle is two-dimensional: p1 = (x11, x12), p2 = (x21, x22), p3 = (x31, x32), ..., pn = (xn1, xn2). Here n is the size of the particle swarm, i.e. the number of particles in the group, and the dimension of each particle is 2. More generally, if the dimension of each particle is q, then the population contains n particles and each particle is q-dimensional.

A group of n particles searches a q-dimensional space, where q is the dimension of each particle. Each particle is represented as xi = (xi1, xi2, xi3, ..., xiq), and the corresponding velocity as vi = (vi1, vi2, vi3, ..., viq). During the search, each particle takes two factors into account (a small sketch of this data layout follows the list):

1. Its own historical best value pi, pi = (pi1, pi2, ..., piq), i = 1, 2, 3, ..., n.

2. The best value found by all particles, pg, pg = (pg1, pg2, ..., pgq); note that there is only one pg.
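Before writing down the update formula, here is a minimal sketch of how such a swarm could be laid out in MATLAB (the names n, q, X, V, Pi, Pg and the fitness function are purely illustrative):

    n = 20;  q = 2;                        % n particles, each of dimension q
    AdaptFunc = @(x) -sum(x.^2);           % an illustrative fitness function (to be maximized)
    X = rand(n, q);                        % positions:  row i is particle xi
    V = rand(n, q);                        % velocities: row i is vi
    Pi = X;                                % each particle's own historical best (initially its start)
    f = zeros(n, 1);
    for i = 1:n
        f(i) = AdaptFunc(X(i, :));         % fitness of each particle
    end
    [fmax, best] = max(f);
    Pg = X(best, :);                       % the single group best pg shared by all particles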

Here is the position-velocity update formula of the particle swarm algorithm. For particle i and dimension d it reads:

    vid = w * vid + c1 * r1 * (pid - xid) + c2 * r2 * (pgd - xid)
    xid = xid + r * vid

There are a few important parameters here that you should remember, because they will come up often in later sections.

They are:

w is the coefficient that preserves the previous velocity, and is therefore called the inertia weight.

c1 is the weight with which the particle tracks its own historical best value; it represents the particle's knowledge of itself and is therefore called the "cognitive" coefficient. It is typically set to 2.

c2 is the weight with which the particle tracks the group's best value; it represents the particle's knowledge of the group as a whole and is therefore called the "social" coefficient. It is typically set to 2.

r1 and r2 are random numbers uniformly distributed in the interval [0,1].

r is a factor applied to the velocity in the position update, which we call the constraint factor. It is typically set to 1.
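Expressed in code, one iteration of this update might look like the following sketch (the swarm setup and all parameter values are illustrative, not prescribed):

    n = 20;  q = 2;
    AdaptFunc = @(x) -sum(x.^2);              % illustrative fitness function (maximize)
    X = rand(n, q);  V = rand(n, q);          % positions and velocities
    Pi = X;  Pg = X(1, :);                    % personal bests and an arbitrary initial group best
    w = 0.8;  c1 = 2;  c2 = 2;  r = 1;        % inertia, cognition, society, constraint factor
    for i = 1:n
        r1 = rand(1, q);  r2 = rand(1, q);    % fresh U[0,1] numbers for each dimension
        V(i,:) = w*V(i,:) + c1*r1.*(Pi(i,:) - X(i,:)) + c2*r2.*(Pg - X(i,:));
        X(i,:) = X(i,:) + r*V(i,:);           % position update
        if AdaptFunc(X(i,:)) > AdaptFunc(Pi(i,:)), Pi(i,:) = X(i,:); end   % personal best
        if AdaptFunc(X(i,:)) > AdaptFunc(Pg),      Pg      = X(i,:); end   % group best
    end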

This is the end of a standard particle swarm algorithm.

The entire basic particle swarm process can be summarized simply: initialize the swarm, compute each particle's fitness, update each particle's historical best and the group best, update velocities and positions, and repeat until the termination condition is met.

The termination condition is either that the fitness reaches a preset value or that a preset number of iterations has been completed.

Note: the particles here track their own historical best value and the global (group) best value to change their positions and velocities, so this standard particle swarm algorithm is called the global version.

Particle Swarm Algorithm (3) ---- The Standard Particle Swarm Algorithm (Local Version)

In the global version of the standard particle swarm algorithm, the velocity of each particle is updated according to two factors: 1. the particle's own historical best value pi; 2. the global best value pg of the whole swarm. If we change the velocity update formula so that each particle's velocity is instead updated according to the following two factors: A. the particle's own historical best value pi; B. the best value pnk among the particles in its neighborhood, and keep everything else the same as in the global version, the algorithm becomes the local version of the particle swarm algorithm.

In general, the neighborhood of particle i grows gradually as the number of iterations increases. At the first iteration the neighborhood size is 0 (the particle itself); as the iterations proceed the neighborhood grows linearly, until finally it covers the whole swarm, at which point the algorithm becomes the global version again. Practice has shown that the global version of PSO converges quickly but easily falls into local optima, while the local version converges more slowly but does not fall into local optima as easily. Most current work on particle swarm optimization concentrates on convergence speed and on escaping local optima; these two goals are in fact contradictory, and the question is how to strike a good compromise between them.

Depending on how the neighborhood is chosen, the local version of the particle swarm algorithm has many different implementations.

The first approach determines a particle's neighborhood by particle number (index). There are four variants: 1. the ring topology; 2. the random ring; 3. the wheel topology; 4. the random wheel.

(Figures: 1. ring; 2. random ring; 3. wheel; 4. random wheel.)

Since one of the implementations below uses the ring topology, here is a brief explanation of the ring, taking particle 1 as an example: when the neighborhood size is 0, the neighborhood is the particle itself; when it is 1, the neighborhood is particles 2 and 8; when it is 2, the neighborhood is particles 2, 3, 7, 8; and so on, until the neighborhood size reaches 4, at which point the neighborhood covers the whole swarm. According to the (foreign) literature, PSO with the wheel topology performs very well.
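As a sketch of how a ring neighborhood and its local best pnk might be computed (the swarm size, radius, positions and fitness values are illustrative):

    n = 8;  k = 2;                        % 8 particles on a ring, neighborhood radius 2
    X = rand(n, 2);                       % illustrative 2-D positions
    f = sum(X, 2);                        % illustrative fitness values
    i = 1;                                % look at particle 1, as in the text
    idx = mod((i-k:i+k) - 1, n) + 1;      % ring neighbors 7 8 1 2 3 (particle itself included)
    [fbest, j] = max(f(idx));
    pnk = X(idx(j), :);                   % best position within the neighborhood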

The second approach determines a particle's neighborhood by the Euclidean distance between particles.

In the first approach, the neighborhood of a particle is determined by particle number, but particles with adjacent numbers may not actually be near each other in the search space. Suganthan therefore proposed a partition scheme based on spatial distance: at every iteration, the distance from each particle to every other particle in the swarm is computed, and the largest distance between any two particles is recorded as dm. For each pair of particles a ratio ||xa - xb|| / dm is computed, where ||xa - xb|| is the distance from the current particle a to particle b. A selection threshold frac is chosen, and it changes with the number of iterations. When another particle b satisfies ||xa - xb|| / dm < frac, b is considered to belong to the neighborhood of the current particle.

Tests show that this approach gives good results, but because the distances between all particles have to be computed, the computational cost is high and a lot of storage space is required, so it is not used very often.
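A rough sketch of this distance-based neighborhood rule, with all values illustrative (the threshold frac would normally grow with the iteration count):

    n = 10;  q = 2;
    X = rand(n, q);                          % illustrative positions
    frac = 0.6;                              % selection threshold for this iteration
    D = zeros(n);                            % pairwise Euclidean distances
    for a = 1:n
        for b = 1:n
            D(a,b) = norm(X(a,:) - X(b,:));
        end
    end
    dm = max(D(:));                          % largest distance between any two particles
    a = 1;                                   % current particle
    neighbors = find(D(a,:)/dm < frac & (1:n) ~= a);   % particles in a's neighborhood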

Particle Swarm Optimization (4) ---- Classification of Particle Swarm Algorithms

Research on particle swarm optimization can be divided into four major branches:

(1) Variations of the standard particle swarm algorithm

In this branch, the inertia weight, the convergence (constriction) factor, the "cognitive" coefficient c1 and the "social" coefficient c2 of the standard particle swarm algorithm are modified and tuned in the hope of obtaining better results.

In the original version the inertia weight is kept constant; later it was suggested that the inertia weight should be decreased gradually as the algorithm iterates. At the start of the algorithm a large inertia weight keeps it from falling into local optima too easily, while in the later stages a small inertia weight speeds up convergence and makes it more stable, without oscillation. In my own tests, dynamically decreasing the inertia weight w does make the algorithm more stable and gives better results. But how should the inertia weight be decreased? The first idea that comes to mind is a linear decrease, and this strategy is indeed good, but is it the best? People have therefore studied other decreasing strategies, and the results indicate that a linear decrease is better than a convex decreasing strategy, while a concave decreasing strategy is better than a linear one. My own tests basically agree with this conclusion, although the differences are not very pronounced.
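A small sketch of what such decreasing schedules can look like over K iterations (the bounds wmax, wmin and the quadratic shapes are illustrative choices):

    K = 100;  k = 1:K;                               % total and current iteration number
    wmax = 0.9;  wmin = 0.4;                         % illustrative bounds for the inertia weight
    w_lin     = wmax - (wmax - wmin) .* (k/K);       % linear decrease
    w_convex  = wmin + (wmax - wmin) .* (1 - k/K).^2; % convex decrease (drops quickly at first)
    w_concave = wmax - (wmax - wmin) .* (k/K).^2;    % concave decrease (drops slowly at first)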

For the convergence (constriction) factor, it has been shown that taking it as 0.729 can guarantee that the algorithm converges, though not that it converges to the global optimum; in my tests, a constriction factor of 0.729 does work better. For the cognitive and social coefficients c1 and c2, it has also been proposed to let c1 start large and then decrease, while c2 starts small and then increases. The reasoning is that in the early stage of the algorithm each bird should rely mostly on its own cognition and only a little on the social part, which is much like how a group of people searches for something: at the beginning we mostly rely on our own knowledge, and later, as experience accumulates, we gradually reach a consensus (social knowledge) and begin to rely on it instead.
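A minimal sketch of such time-varying coefficients (the start and end values and the linear interpolation are illustrative):

    K = 100;  k = 1:K;
    c1 = 2.5 - (2.5 - 0.5) .* (k/K);   % "cognition": starts large, decreases
    c2 = 0.5 + (2.5 - 0.5) .* (k/K);   % "society":   starts small, increases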

In 2007, two Greek scholars proposed combining the global version of the particle swarm algorithm, which converges quickly, with the local version, which does not fall into local optima as easily. The formulas they used are

V = n*V(global version) + (1-n)*V(local version)   (velocity update formula, where V is the velocity)

W(k+1) = W(k) + V   (position update formula)

In their paper they discuss various values of the coefficient n and run the algorithm 20,000 times for each setting to analyze the results.

(2) Hybrids of the particle swarm algorithm

This branch mainly combines the particle swarm algorithm with various other algorithms. Some researchers combine it with simulated annealing, others with the simplex method, but most combine it with the genetic algorithm. Corresponding to the three operators of the genetic algorithm, three different hybrid algorithms can be produced.

Combining particle swarm optimization with the selection operator. The idea of the hybrid is this: in the original particle swarm algorithm we select the best value of the whole population as pg, but in the combined version each particle is assigned a selection probability based on its fitness, and pg is then chosen from the particles according to these probabilities; everything else stays the same. Such an algorithm can maintain the diversity of the swarm while it runs, but its fatal drawback is slow convergence.
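A minimal sketch of this fitness-proportional (roulette-wheel) choice of pg (the positions and fitness values are illustrative, and fitness is assumed positive):

    n = 20;  X = rand(n, 2);             % illustrative swarm positions
    f = rand(n, 1) + 0.1;                % illustrative positive fitness values
    p = f / sum(f);                      % selection probability of each particle
    i = find(rand <= cumsum(p), 1);      % roulette-wheel draw
    Pg = X(i, :);                        % the selected particle plays the role of pg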

Combining the particle swarm algorithm with a crossover (hybridization) operator, borrowing the idea from the genetic algorithm. The basic procedure is the same; during the run of the algorithm, particles are paired off and crossed over according to their fitness, for example with a very simple formula

W(new) = n*W1 + (1-n)*W2;

where W1 and W2 are the parent particles of the new particle. This kind of algorithm introduces new particles during the run, but once it falls into a local optimum, the PSO algorithm still has difficulty escaping from it.
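A minimal sketch of this crossover step (the parent indices and the mixing coefficient are illustrative):

    X = rand(20, 2);                 % illustrative swarm positions
    W1 = X(3, :);  W2 = X(7, :);     % two parent particles (chosen arbitrarily here)
    n = rand;                        % mixing coefficient in [0,1]
    Wnew = n*W1 + (1-n)*W2;          % child particle produced by crossover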

Combining the particle swarm algorithm with a mutation operator. The idea: test the distance of every particle from the current optimum, and when the distance falls below a certain value, take a certain percentage of all particles (for example 10%) and reinitialize them randomly, letting these particles search for the optimum anew.
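A minimal sketch of this reinitialization step (the distance threshold, the 10% fraction and the search range [0,4] are illustrative):

    n = 20;  X = 4*rand(n, 1);  Pg = 0.94;        % illustrative 1-D swarm and current optimum
    if mean(abs(X - Pg)) < 0.05                   % the swarm has collapsed onto the optimum
        m = round(0.1*n);                         % take 10% of the particles
        idx = randperm(n, m);                     % pick them at random
        X(idx) = 4*rand(m, 1);                    % reinitialize them within [0,4]
    end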

(3) Binary particle swarm optimization

The original PSO was developed for continuous optimization problems. Eberhart et al. later proposed a discrete binary version of PSO to solve combinatorial optimization problems in engineering practice. In the proposed model, each dimension of a particle is restricted to 1 or 0, while the velocity is not restricted. When the position is updated from the velocity, a threshold is used: when the velocity exceeds the threshold, the corresponding position takes the value 1, otherwise it takes 0. Binary PSO is similar in form to the genetic algorithm, but experimental results show that on most test functions binary PSO is faster than the genetic algorithm, especially as the dimension of the problem grows.
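One common way to realize this thresholding, used in Kennedy and Eberhart's discrete binary PSO, is to map the velocity through a sigmoid and compare it with a uniform random number; a minimal sketch (all values are illustrative):

    V = randn(1, 10);                    % illustrative velocities of one 10-dimensional particle
    s = 1 ./ (1 + exp(-V));              % sigmoid maps each velocity into (0,1)
    X = double(rand(1, 10) < s);         % each bit becomes 1 with probability s, otherwise 0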

(4) Cooperative particle swarm optimization (PSO)

In cooperative PSO, the D dimensions of a particle are split among D particle swarms, each swarm optimizing a one-dimensional vector; when the fitness is evaluated these components are combined into a complete vector. For example, for the i-th particle swarm, all components except the i-th are set to the current optimal values, and the particles of that swarm keep replacing the i-th component until the optimum of that dimension is found; the other dimensions are handled in the same way. To put correlated components into the same group, the D-dimensional vector can also be distributed over m particle swarms: the first (D mod m) swarms then have dimension D/m rounded up, and the remaining m - (D mod m) swarms have dimension D/m rounded down. Cooperative PSO converges faster on some problems, but the algorithm is easily deceived.
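A small sketch of this dimension partition (the values of D and m are illustrative):

    D = 10;  m = 3;                                  % 10 dimensions split across 3 swarms
    sizes = [repmat(ceil(D/m), 1, mod(D, m)), ...    % first (D mod m) swarms get ceil(D/m) dims
             repmat(floor(D/m), 1, m - mod(D, m))];  % the rest get floor(D/m) dims
    % here sizes = [4 3 3], and sum(sizes) equals D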

These are the four branches of research on the basic particle swarm algorithm. Most current work revolves around these four branches, modifying the particle swarm algorithm in one way or another; fundamentally, hardly any really new ideas have been proposed.

Particle Swarm Optimization (5) ---- Implementation of the Standard Particle Swarm Algorithm

The implementation of the standard particle swarm algorithm is based on Particle Swarm Optimization (2) ---- The Standard Particle Swarm Algorithm. It is divided into three main functions. The first function is the particle swarm initialization function

InitSwarm(SwarmSize, ..., AdaptFunc). Its main task is to initialize the particles of the swarm and to confine their velocities and positions to a specified range. The data structures used by this function are as follows:

The table ParSwarm records each particle's position, velocity and current fitness value. We use w to denote position, v to denote velocity, and f to denote the current fitness value. Assume the number of particles is n and the dimension of each particle is d.

    w1,1    w1,2    ...   w1,d    | v1,1    v1,2    ...   v1,d    | f1      (1st particle)
    w2,1    w2,2    ...   w2,d    | v2,1    v2,2    ...   v2,d    | f2      (2nd particle)
    ...
    wn-1,1  wn-1,2  ...   wn-1,d  | vn-1,1  vn-1,2  ...   vn-1,d  | fn-1    ((n-1)th particle)
    wn,1    wn,2    ...   wn,d    | vn,1    vn,2    ...   vn,d    | fn      (nth particle)

The table OptSwarm records each particle's historical best solution (the particle's personal best) and the global best solution found by all particles so far. wg denotes the global best solution, and the rows wj, wk, ..., wm denote the historical best of each particle. At the initialization stage, the first n rows of OptSwarm are identical to the corresponding rows of ParSwarm, and the wg row is the row of ParSwarm with the largest fitness value.

    wj,1   wj,2   ...   wj,d-1   wj,d     (historical best of the 1st particle)
    wk,1   wk,2   ...   wk,d-1   wk,d     (historical best of the 2nd particle)
    ...
    wl,1   wl,2   ...   wl,d-1   wl,d     (historical best of the (n-1)th particle)
    wm,1   wm,2   ...   wm,d-1   wm,d     (historical best of the nth particle)
    wg,1   wg,2   ...   wg,d-1   wg,d     (global best of the whole swarm)

Following this idea, the MATLAB code is as follows:

function [ParSwarm, OptSwarm] = InitSwarm(SwarmSize, ParticleSize, ParticleScope, AdaptFunc)
% Function description: initializes the particle swarm and restricts the positions
% and velocities of the particles to the specified range.
% [ParSwarm, OptSwarm] = InitSwarm(SwarmSize, ParticleSize, ParticleScope, AdaptFunc)
%
% Input parameter: SwarmSize     - population size (number of particles)
% Input parameter: ParticleSize  - dimension of a particle
% Input parameter: ParticleScope - range of each dimension of a particle during the run;
%                  ParticleScope format for a 3-dimensional particle:
%                  [x1Min, x1Max
%                   x2Min, x2Max
%                   x3Min, x3Max]
% Input parameter: AdaptFunc     - fitness function
%
% Output: ParSwarm - the initialized particle swarm
% Output: OptSwarm - the current best solution of each particle and the global best solution
%
% Usage: [ParSwarm, OptSwarm] = InitSwarm(SwarmSize, ParticleSize, ParticleScope, AdaptFunc);
%
% Exceptions: first make sure this file is on the MATLAB search path, then check the
% relevant error messages.
%
% Author: XXX
% Date: 2007.3.26
% References: none

% Fault-tolerance checks
if nargin ~= 4
    error('The number of input parameters is incorrect.')
end
if nargout < 2
    error('The number of output parameters is too small to allow the algorithm to continue.');
end
[row, colum] = size(ParticleSize);
if row > 1 || colum > 1
    error('The particle dimension is wrong: it must be a scalar (1 row, 1 column).');
end
[row, colum] = size(ParticleScope);
if row ~= ParticleSize || colum ~= 2
    error('The range of the particle dimensions is wrong.');
end

% Initialize the particle swarm matrix
% Initialize the particle swarm matrix with random numbers in [0,1]
% rand('state', 0);
ParSwarm = rand(SwarmSize, 2*ParticleSize + 1);

% Adjust the range of the positions and velocities in the particle swarm
for k = 1:ParticleSize
    ParSwarm(:, k) = ParSwarm(:, k) * (ParticleScope(k,2) - ParticleScope(k,1)) + ParticleScope(k,1);
    % Adjust the velocity so that it matches the range of the position
    ParSwarm(:, ParticleSize+k) = ParSwarm(:, ParticleSize+k) * (ParticleScope(k,2) - ParticleScope(k,1)) + ParticleScope(k,1);
end

% Compute the fitness value of every particle
for k = 1:SwarmSize
    ParSwarm(k, 2*ParticleSize + 1) = AdaptFunc(ParSwarm(k, 1:ParticleSize));
end

% Initialize the OptSwarm matrix as described above: the first SwarmSize rows are each
% particle's historical best (initially its starting position), the last row is the global best.
OptSwarm = zeros(SwarmSize + 1, ParticleSize);
[maxValue, row] = max(ParSwarm(:, 2*ParticleSize + 1));
OptSwarm(1:SwarmSize, :) = ParSwarm(:, 1:ParticleSize);
OptSwarm(SwarmSize + 1, :) = ParSwarm(row, 1:ParticleSize);
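A hypothetical call of this initialization function, reusing the toy fitness function from section (1), might look like this:

    % Illustrative usage (the fitness function and all settings are just examples)
    AdaptFunc = @(x) 1 - cos(3*x).*exp(-x);
    ParticleScope = [0 4];                           % one-dimensional search range [0,4]
    [ParSwarm, OptSwarm] = InitSwarm(10, 1, ParticleScope, AdaptFunc);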
