Mathematical Modeling Methods: Particle Swarm Optimization


1. Introduction

Hello everyone. It has been a while since my last post; I have not been feeling well lately, but from today I will get back to updating regularly. To get to the point, this post is about particle swarm optimization (PSO). The algorithm was proposed in 1995 by two scientists, Kennedy and Eberhart, inspired by research on the foraging behavior of bird flocks. OK, let's get started.

2. Bird Foraging Behavior

A mother bird has seven chicks. One day, the mother bird tells the chicks to go and find worms to eat on their own, so the chicks set out to forage over a wide area. At first none of them knows where to find worms, so each chick searches in a different direction.

But in order to find the worms faster, the chicks agree in advance that whoever finds worms will tell the others.

After searching for a while, one chick (call it Little Blue) finally notices what looks like a trail of insects not far away. It signals the other chicks, and as soon as they receive the message they change their flight paths and head toward Little Blue. As Little Blue gets closer and closer to the worms, the other chicks gather around it, and in the end they all get to eat.

3. The Particle Swarm Algorithm

3.1 Story Analysis

The story of the foraging chicks is exactly the idea behind the particle swarm algorithm. So, to understand particle swarm optimization better, let us analyze the story, starting with how the chicks move, that is, how each chick decides its own flight speed and position.

(1) First, we know that objects have inertia. No matter where a chick wants to fly next, it must adjust starting from its current speed and direction. So "inertia", the current velocity $v_{current}$, is the first factor.

(2) Second, because the chick has foraged before, it has experience. It does not know exactly where the worms are, but it has a rough idea of where insects tend to be found. For example, when it flies over barren ground it knows there will be nothing there, so on every flight it draws on its own experience, such as where it has found insects in the past, and searches those places first. So "cognition", its own experience, is the second factor.

(3) Finally, whenever a chick finds itself getting closer to the worms, it signals its companions. When the chicks that are still searching receive such a signal, they think: my companion's position is closer to the worms, so I should go over there and take a look. We call this "social sharing", and it is the third factor a chick considers when it moves.

To sum up, each time a chick decides the speed and direction of its next move, all three factors weigh on its mind. If we could describe the effect of these three factors with a formula, we could write down the direction and speed of every chick's movement. However, each chick weighs the three factors differently: a chick with little foraging experience will rely more on the information shared by its companions, while an experienced chick will rely more on its own experience. Therefore, in our formula, the "cognition" and "social sharing" parts carry random weights.

3.2 The Formulas

Our particle swarm algorithm is set up as follows:

(1) Each chick is a candidate solution, called a "particle", and the worm is the optimal solution of our problem. The birds' search for food is exactly the particles searching the solution space for the optimal solution.

(2) Each particle has a fitness value, computed by a fitness function, which measures how good or bad the particle's position is. (This mimics the "experience" part of the chick: the fitness function is what tells us whether a position is barren or likely to contain worms.)

(3) Each particle has a memory: it remembers the best position it has found so far.

(4) Each particle has a velocity that determines the direction and distance of its flight, and this velocity is adjusted according to the particle's own flight experience and the information shared by its companions.
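
Here is a small sketch of how these quantities can be held in MATLAB; the variable names (fun, pop, V, pbest, gbest, fitnesspbest, fitnessgbest) are chosen to match the scripts in Section 4, and the fitness function is made up purely for illustration:

n = 10;                                 % number of particles
D = 2;                                  % dimension of the search space
fun = @(x) -sum(x.^2);                  % example fitness function to maximize (peak at the origin)

pop   = -5 + 10*rand(n, D);             % positions, drawn uniformly from [-5, 5]^D
V     = -1 + 2*rand(n, D);              % velocities, drawn uniformly from [-1, 1]^D
pbest = pop;                            % each particle's best position so far (its memory)
fitnesspbest = zeros(n, 1);
for i = 1:n
    fitnesspbest(i) = fun(pop(i,:));    % fitness of each particle's current (and so far best) position
end
[fitnessgbest, idx] = max(fitnesspbest);
gbest = pbest(idx,:);                   % best position found by the whole swarm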

Now suppose there are n particles in a D-dimensional search space, where:

Position of particle i:

$X_{i} = (x_{i1}, x_{i2}, \cdots, x_{iD})$

Velocity of particle i:

$V_{i} = (v_{i1}, v_{i2}, \cdots, v_{iD})$

Best position particle i has found so far (personal best):

$Pbest_{i} = (pbest_{i1}, pbest_{i2}, \cdots, pbest_{iD})$

Best position the whole population has found so far (global best):

$Gbest = (gbest_{1}, gbest_{2}, \cdots, gbest_{D})$

In addition, we impose limits on the speed and position of our particles; after all, we do not want the chicks to fly too fast or to fly out of the region we want to search. So for each particle i:

$X_{i} \in [X_{min}, X_{max}]$

$V_{i} \in [V_{min}, V_{max}]$
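
In code, these limits are usually enforced by clamping each component back into range after every update. A minimal sketch (the clamp helper and the numbers below are made up for illustration, and clamping is only one of several ways to handle boundary violations):

% clamp a vector into [lo, hi] componentwise
clamp = @(x, lo, hi) min(max(x, lo), hi);

% example: a velocity that has left the allowed range [-0.5, 0.5]
v = [0.8, -0.9, 0.2];
v = clamp(v, -0.5, 0.5)     % -> 0.5000   -0.5000    0.2000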

Based on the previous analysis, we can write the following two formulas:

    • In D-dimensional space, the velocity update formula for particle i is:

$V_{i}^{k} = w V_{i}^{k-1} + c_{1} \, rand() \left( Pbest_{i} - X_{i}^{k-1} \right) + c_{2} \, rand() \left( Gbest - X_{i}^{k-1} \right)$

    • In D-dimensional space, the position update formula for particle i is:

$X_{i}^{k} = X_{i}^{k-1} + V_{i}^{k}$

In the formulas above, the superscripts k-1 and k denote the particle's state before and after the k-th flight (that is, the k-th iteration).

(1) Look at the velocity update formula first. It has three parts, corresponding exactly to the three factors we analyzed: "inertia", "cognition", and "social sharing". The two rand() calls provide the randomness of the "cognition" and "social sharing" parts, so each particle weighs these two parts differently at every step. The coefficients $c_{1}$ and $c_{2}$ are constants that we choose ourselves, giving us control over how strongly the two parts contribute. Finally, $w$ is the inertia weight, which is used to adjust how widely the solution space is searched. There is some history behind the introduction of $w$; if you are interested, look it up in the original papers, as we will not go into the details here.

(2) Next is the position update formula. This part is very simple; we all know $x = x_{0} + vt$. So why is there no $t$ here? We can simply take the step from the (k-1)-th flight to the k-th flight to cost one unit of time, so $t = 1$ and it does not need to appear.

Well, the above two formulas are the core of particle swarm optimization.
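
As a worked illustration of the two formulas, here is one update step for a single two-dimensional particle in MATLAB. The positions, velocities, and bests are made-up numbers; c1 and c2 follow the values used in the Section 4 scripts, and w is simply a representative value between the ws and we used there:

% one velocity/position update for a single particle (illustrative numbers only)
w  = 0.7;                 % inertia weight
c1 = 1.49445;             % cognitive ("experience") coefficient
c2 = 1.49445;             % social ("sharing") coefficient

x     = [ 0.3, -1.2];     % current position  X^{k-1}
v     = [ 0.1,  0.4];     % current velocity  V^{k-1}
pbest = [ 0.5, -1.0];     % this particle's best position so far
gbest = [-0.2,  0.1];     % best position found by the swarm

% velocity update: inertia + cognition + social sharing
v = w*v + c1*rand*(pbest - x) + c2*rand*(gbest - x);

% position update: one unit time step, so X^k = X^{k-1} + V^k
x = x + v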

3.3 Algorithmic Flow

(1) Initialization:

Initialize the particle population (of size n); each particle gets a random position and a random velocity.

(2) Evaluation:

The fitness value of each particle is calculated according to the fitness function.

(3) Find the Pbest:

For each particle, compare its current fitness value with the fitness value of the best position it has visited so far (Pbest). If the current fitness value is higher, update Pbest to the current position.

(4) Find the Gbest:

For each particle, compare its current fitness value with the fitness value of the best position the whole population has visited so far (Gbest). If the current fitness value is higher, update Gbest to the current position.

(5) Update the velocity and position:

Update the velocity and position of each particle according to the two formulas above.

(6) If the stopping condition is not yet met, return to step (2). (The algorithm stops when the maximum number of iterations is reached or when the improvement in the best fitness value falls below a given threshold.)
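
Putting the six steps together, a compact, self-contained sketch of the whole loop might look like the following. The fitness function, bounds, and parameter values here are made up for illustration, and the inertia weight is kept constant for simplicity; the full, commented scripts follow in Section 4:

fun = @(x) -sum((x - 1).^2);        % made-up fitness function to maximize (peak at x = [1, 1])
n = 20; D = 2; maxgen = 100;        % swarm size, dimension, iteration count
w = 0.7; c1 = 1.49445; c2 = 1.49445;
popmin = -5; popmax = 5; Vmin = -1; Vmax = 1;

% (1) initialize random positions and velocities
pop = popmin + (popmax - popmin) * rand(n, D);
V   = Vmin + (Vmax - Vmin) * rand(n, D);

% (2) evaluate; (3)-(4) initialize the personal and global bests
fitness = zeros(n, 1);
for i = 1:n, fitness(i) = fun(pop(i,:)); end
pbest = pop;  fitnesspbest = fitness;
[fitnessgbest, idx] = max(fitness);  gbest = pop(idx,:);

for gen = 1:maxgen                  % (6) repeat until the iteration limit is reached
    for i = 1:n
        % (5) velocity and position updates, clamped to the bounds
        V(i,:)   = w*V(i,:) + c1*rand*(pbest(i,:) - pop(i,:)) + c2*rand*(gbest - pop(i,:));
        V(i,:)   = min(max(V(i,:), Vmin), Vmax);
        pop(i,:) = min(max(pop(i,:) + V(i,:), popmin), popmax);

        % (2) re-evaluate; (3)-(4) update the bests
        fitness(i) = fun(pop(i,:));
        if fitness(i) > fitnesspbest(i)
            pbest(i,:) = pop(i,:);  fitnesspbest(i) = fitness(i);
        end
        if fitness(i) > fitnessgbest
            gbest = pop(i,:);  fitnessgbest = fitness(i);
        end
    end
end
[fitnessgbest, gbest]               % best value found and the position where it was found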

4. Particle Swarm Optimization MATLAB Code Examples

4.1 Using particle swarm optimization to find the maximum of a univariate function

% Problem 1: use the particle swarm algorithm to find the maximum of a univariate function
%% I. Clear the environment
clc
clear all

%% II. Plot the target function
x = 1:0.01:2;
y = sin(10*pi*x) ./ x;
figure
plot(x, y)
hold on

%% III. Parameter initialization
c1 = 1.49445;                          % acceleration coefficient (cognition)
c2 = 1.49445;                          % acceleration coefficient (social sharing)
maxgen = 50;                           % number of iterations (generations)
sizepop = 10;                          % population size (number of particles)
dimension = 1;                         % a univariate function, so the solution is one-dimensional

% velocity bounds
Vmax = 0.5;
Vmin = -0.5;
% position bounds
popmax = 2;
popmin = 1;
% used to compute the inertia weight (empirical values)
ws = 0.9;
we = 0.4;

% pre-allocate matrices
pop = zeros(sizepop, dimension);
V = zeros(sizepop, dimension);
fitness = zeros(sizepop, 1);
yy = zeros(1, maxgen);
w = zeros(1, maxgen);

%% IV. Generate the initial particles and velocities
for i = 1:sizepop
    % randomly generate a population
    pop(i,:) = (rands(1) + 1) / 2 + 1;      % initial positions in [1, 2]
    V(i,:) = 0.5 * rands(1);                % initial velocities
    % compute the fitness
    fitness(i) = fun(pop(i,:));
end

%% V. Initialize the personal best and the global best
[bestfitness, bestindex] = max(fitness);
gbest = pop(bestindex,:);              % global best position
pbest = pop;                           % personal best positions
fitnesspbest = fitness;                % personal best fitness values
fitnessgbest = bestfitness;            % global best fitness value

%% VI. Iterative optimization
for i = 1:maxgen
    % let the inertia weight decrease with the iterations, to control the search precision
    w(i) = ws - (ws - we) * (i / maxgen);
    for j = 1:sizepop
        % velocity update
        V(j,:) = w(i)*V(j,:) + c1*rand*(pbest(j,:) - pop(j,:)) + c2*rand*(gbest - pop(j,:));
        if V(j,:) > Vmax
            V(j,:) = Vmax;
        end
        if V(j,:) < Vmin
            V(j,:) = Vmin;
        end
        % population update (position update)
        pop(j,:) = pop(j,:) + V(j,:);
        if pop(j,:) > popmax
            pop(j,:) = popmax;
        end
        if pop(j,:) < popmin
            pop(j,:) = popmin;
        end
        % fitness update
        fitness(j) = fun(pop(j,:));
    end
    for j = 1:sizepop
        % personal best update
        if fitness(j) > fitnesspbest(j)
            pbest(j,:) = pop(j,:);
            fitnesspbest(j) = fitness(j);
        end
        % global best update
        if fitness(j) > fitnessgbest
            gbest = pop(j,:);
            fitnessgbest = fitness(j);
        end
    end
    yy(i) = fitnessgbest;              % record the global best of each iteration
end

%% VII. Output and plot
[fitnessgbest gbest]                               % best fitness value and the position where it occurs
plot(gbest, fitnessgbest, 'ro', 'linewidth', 1.5)  % mark the optimum on the function curve (the red dot in the figure)
figure
plot(yy)
title('Optimal individual fitness', 'fontsize', 12);
xlabel('Generation', 'fontsize', 12);
ylabel('Fitness', 'fontsize', 12);

The fitness function is encapsulated in fun.m, as follows:

function y = fun(x)
% This function computes a particle's fitness value
% x    input:  particle position
% y    output: particle fitness value
y = sin(10 * pi * x) ./ x;

Running the code above produces the following output and figures:

ans =

    0.9528    1.0490

As you can see, the red dot is at the maximum value of our function.

4.2 Using particle swarm optimization to find the maximum of a bivariate function

% Problem 2: use the particle swarm algorithm to find the maximum of a bivariate function
%% I. Clear the environment
clc
clear

%% II. Plot the target function surface
figure
[x, y] = meshgrid(-5:0.1:5, -5:0.1:5);
z = x.^2 + y.^2 - 10*cos(2*pi*x) - 10*cos(2*pi*y) + 20;
mesh(x, y, z)
hold on

%% III. Parameter initialization
c1 = 1.49445;                          % acceleration coefficient (cognition)
c2 = 1.49445;                          % acceleration coefficient (social sharing)
maxgen = 1000;                         % number of iterations (generations)
sizepop = 100;                         % population size (number of particles)
dimension = 2;                         % a bivariate function, so the solution is two-dimensional

% velocity bounds
Vmax = 1;
Vmin = -1;
% position bounds
popmax = 5;
popmin = -5;
% used to compute the inertia weight (empirical values)
ws = 0.9;
we = 0.4;

% pre-allocate matrices
pop = zeros(sizepop, dimension);
V = zeros(sizepop, dimension);
fitness = zeros(sizepop, 1);
yy = zeros(1, maxgen);
w = zeros(1, maxgen);

%% IV. Generate the initial particles and velocities
for i = 1:sizepop
    % randomly generate a population
    pop(i,:) = 5 * rands(1, 2);        % initial positions in [-5, 5]
    V(i,:) = rands(1, 2);              % initial velocities
    % compute the fitness
    fitness(i) = fun(pop(i,:));
end

%% V. Initialize the personal best and the global best
[bestfitness, bestindex] = max(fitness);
gbest = pop(bestindex,:);              % global best position
pbest = pop;                           % personal best positions
fitnesspbest = fitness;                % personal best fitness values
fitnessgbest = bestfitness;            % global best fitness value

%% VI. Iterative optimization
for i = 1:maxgen
    % let the inertia weight decrease with the iterations, to control the search precision
    w(i) = ws - (ws - we) * (i / maxgen);
    for j = 1:sizepop
        % velocity update
        V(j,:) = w(i)*V(j,:) + c1*rand*(pbest(j,:) - pop(j,:)) + c2*rand*(gbest - pop(j,:));
        for k = 1:dimension
            if V(j,k) > Vmax
                V(j,k) = Vmax;
            end
            if V(j,k) < Vmin
                V(j,k) = Vmin;
            end
        end
        % population update (position update)
        pop(j,:) = pop(j,:) + V(j,:);
        for k = 1:dimension
            if pop(j,k) > popmax
                pop(j,k) = popmax;
            end
            if pop(j,k) < popmin
                pop(j,k) = popmin;
            end
        end
        % fitness update
        fitness(j) = fun(pop(j,:));
    end
    for j = 1:sizepop
        % personal best update
        if fitness(j) > fitnesspbest(j)
            pbest(j,:) = pop(j,:);
            fitnesspbest(j) = fitness(j);
        end
        % global best update
        if fitness(j) > fitnessgbest
            gbest = pop(j,:);
            fitnessgbest = fitness(j);
        end
    end
    yy(i) = fitnessgbest;              % record the global best of each iteration
end

%% VII. Output results
[fitnessgbest, gbest]                                            % best fitness value and the position where it occurs
plot3(gbest(1), gbest(2), fitnessgbest, 'bo', 'linewidth', 1.5)  % mark the optimum on the surface
figure
plot(yy)
title('Optimal individual fitness', 'fontsize', 12);
xlabel('Generation', 'fontsize', 12);
ylabel('Fitness', 'fontsize', 12);

The fitness function is encapsulated in fun.m, as follows:

function y = fun(x)
% This function computes a particle's fitness value
% x    input:  particle position
% y    output: particle fitness value
y = x(1)^2 + x(2)^2 - 10*cos(2*pi*x(1)) - 10*cos(2*pi*x(2)) + 20;

Running the code above produces the following output and figures:

ans =

   80.7066    4.5230    4.5230

As you can see, the marked point in the figure is the maximum we obtained. In fact, we know that because this function is symmetric, four points share the same maximum value, yet the algorithm returns only one of them; this reflects a drawback of PSO: it can easily get stuck in a local optimum. Our particle swarm algorithm never knows what the true optimal solution is; it simply keeps treating the best position the particles have found so far as the answer. So particle swarm optimization has its limitations, and you should consider whether it suits the occasion before using it.

