Implementing Particle Swarm Optimization (PSO) in C (Part Two)


Last time we discussed the implementation of the basic particle swarm algorithm and gave the C code. This article focuses on an important parameter that affects particle swarm optimization: the inertia weight w. As already stated, the core of the particle swarm algorithm is two formulas:

v_id(k+1) = w*v_id(k) + c1*r1*(p_id(k) - x_id(k)) + c2*r2*(p_gd(k) - x_id(k))
x_id(k+1) = x_id(k) + v_id(k+1)

The w in the first formula is the parameter we discuss this time. Until now w was held constant (1 by default); here it is allowed to vary. w is called the inertia weight, and it reflects how much of its previous velocity a particle inherits. Experience shows that a larger inertia weight favors global search, while a smaller one favors local search. To better balance the global and local search abilities of the algorithm, Y. Shi proposed the linearly decreasing inertia weight (LDIW):

(2) w(k) = w_end + (w_start - w_end) * (Tmax - k) / Tmax

where w_start is the initial inertia weight, w_end is the inertia weight at the last iteration, k is the current iteration number, and Tmax is the maximum number of iterations. In general the algorithm performs best with w_start = 0.9 and w_end = 0.4. As the iterations progress, the inertia weight decreases from 0.9 to 0.4: the large inertia weight in the early iterations keeps the algorithm's global search ability strong, while the small inertia weight in the later iterations allows a more accurate local search. The linear schedule is only one empirical choice; besides the constant weight (1) w(k) = 1, the commonly used inertia weights also include the following:

(3) w(k) = w_start - (w_start - w_end) * (k/Tmax)^2
(4) w(k) = w_start + (w_start - w_end) * (2*k/Tmax - (k/Tmax)^2)
(5) w(k) = w_end * (w_start/w_end)^(1/(1 + c*k/Tmax)), where c is a constant, e.g. 10
The purpose of this example is to compare the effect of these 5 choices of w on the PSO search. The comparison method is: for each choice of w, repeat the experiment many times (for example, 100 times) and compare the average best solution, the number of runs stuck in a suboptimal solution, and the number of runs that come close to the optimal solution. This gives an intuitive comparison of the strengths and weaknesses of the 5 methods.

The code is as follows:

/*
 * Particle swarm optimization (PSO) in C -- improved version
 * Adapted from "MATLAB Intelligent Algorithms: 30 Case Studies"
 * Updated: 2016-12-03
 * The main improvement is the choice of w, as discussed above.
 * The function to maximize is
 *   f(x,y) = sin(sqrt(x^2+y^2))/sqrt(x^2+y^2)
 *            + exp((cos(2*pi*x)+cos(2*pi*y))/2) - 2.71289
 * It has many local maxima; the maximum is reached near the limit point (0,0).
 */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

#define c1 1.49445      // acceleration factors, values generally chosen from experiments
#define c2 1.49445
#define maxgen 300      // number of iterations
#define repeat 100      // number of repeated experiments
#define sizepop 20      // population size
#define popmax 2        // maximum of an individual's coordinate
#define popmin -2       // minimum of an individual's coordinate
#define Vmax 0.5        // maximum velocity
#define Vmin -0.5       // minimum velocity
#define dim 2           // particle dimension
#define w_start 0.9
#define w_end 0.4
#define PI 3.1415926    // pi

double pop[sizepop][dim];       // population position array
double V[sizepop][dim];         // population velocity array
double fitness[sizepop];        // population fitness array
double result[maxgen];          // best value of each iteration
double pbest[sizepop][dim];     // positions of the individual bests
double gbest[dim];              // position of the global best
double fitnesspbest[sizepop];   // fitness of the individual bests
double fitnessgbest;            // fitness of the global best
double genbest[maxgen][dim];    // best particle position of each generation

// fitness function
double func(double *arr)
{
    double x = *arr;        // x value
    double y = *(arr+1);    // y value
    double fitness = sin(sqrt(x*x+y*y))/(sqrt(x*x+y*y))
                   + exp((cos(2*PI*x)+cos(2*PI*y))/2) - 2.71289;
    return fitness;
}

// population initialization
void pop_init(void)
{
    for(int i=0; i<sizepop; i++)
    {
        for(int j=0; j<dim; j++)
        {
            pop[i][j] = (((double)rand())/RAND_MAX - 0.5)*4; // random number in [-2,2]
            V[i][j] = ((double)rand())/RAND_MAX - 0.5;       // random number in [-0.5,0.5]
        }
        fitness[i] = func(pop[i]); // fitness value
    }
}

// max(): returns the maximum of an array and its index
double * max(double *fit, int size)
{
    int index = 0;        // initialize the index
    double max = *fit;    // initialize the maximum to the first element
    static double best_fit_index[2];
    for(int i=1; i<size; i++)
    {
        if(*(fit+i) > max)
        {
            max = *(fit+i);
            index = i;
        }
    }
    best_fit_index[0] = index;
    best_fit_index[1] = max;
    return best_fit_index;
}

// iterative optimization; n is 1..5 and selects one of the 5 ways of computing w
void pso_func(int n)
{
    pop_init();
    double *best_fit_index = max(fitness,sizepop); // find the global best and its index
    int index = (int)(*best_fit_index);
    // position of the global best
    for(int i=0; i<dim; i++)
        gbest[i] = pop[index][i];
    // positions of the individual bests
    for(int i=0; i<sizepop; i++)
        for(int j=0; j<dim; j++)
            pbest[i][j] = pop[i][j];
    // fitness of the individual bests
    for(int i=0; i<sizepop; i++)
        fitnesspbest[i] = fitness[i];
    // fitness of the global best
    fitnessgbest = *(best_fit_index+1);

    // iterative optimization
    for(int i=0; i<maxgen; i++)
    {
        for(int j=0; j<sizepop; j++)
        {
            // velocity update and particle update
            for(int k=0; k<dim; k++)
            {
                double rand1 = (double)rand()/RAND_MAX; // random number in [0,1]
                double rand2 = (double)rand()/RAND_MAX;
                double w;
                double tmax = (double)maxgen;
                switch(n)
                {
                    case 1:
                        w = 1;
                        break;
                    case 2:
                        w = w_end + (w_start-w_end)*(tmax-i)/tmax;
                        break;
                    case 3:
                        w = w_start - (w_start-w_end)*(i/tmax)*(i/tmax);
                        break;
                    case 4:
                        w = w_start + (w_start-w_end)*(2*i/tmax-(i/tmax)*(i/tmax));
                        break;
                    case 5:
                        w = w_end*pow(w_start/w_end, 1.0/(1.0+10.0*i/tmax));
                        break;
                    default:
                        w = 1;
                        break;
                }
                // velocity update
                V[j][k] = w*V[j][k] + c1*rand1*(pbest[j][k]-pop[j][k])
                                    + c2*rand2*(gbest[k]-pop[j][k]);
                if(V[j][k] > Vmax)
                    V[j][k] = Vmax;
                if(V[j][k] < Vmin)
                    V[j][k] = Vmin;
                // particle update
                pop[j][k] = pop[j][k] + V[j][k];
                if(pop[j][k] > popmax)
                    pop[j][k] = popmax;
                if(pop[j][k] < popmin)
                    pop[j][k] = popmin;
            }
            fitness[j] = func(pop[j]); // fitness of the new particle
        }
        for(int j=0; j<sizepop; j++)
        {
            // individual best update
            if(fitness[j] > fitnesspbest[j])
            {
                for(int k=0; k<dim; k++)
                    pbest[j][k] = pop[j][k];
                fitnesspbest[j] = fitness[j];
            }
            // global best update
            if(fitness[j] > fitnessgbest)
            {
                for(int k=0; k<dim; k++)
                    gbest[k] = pop[j][k];
                fitnessgbest = fitness[j];
            }
        }
        for(int k=0; k<dim; k++)
            genbest[i][k] = gbest[k]; // record the best particle position of this generation
        result[i] = fitnessgbest;     // record the best value of this generation
    }
}

// main function
int main(void)
{
    clock_t start, finish;       // program start and end times
    start = clock();             // start timing
    srand((unsigned)time(NULL)); // seed the random number generator
    for(int i=1; i<=5; i++)      // try each of the 5 ways of computing w
    {
        int near_best = 0;       // number of runs that came close to the optimum
        double best = 0;         // best solution over the repeated experiments
        double best_sum = 0;     // sum of the best solutions, for the average
        for(int j=0; j<repeat; j++)
        {
            pso_func(i); // run PSO with the i-th w strategy
            double *best_fit_index = max(result,maxgen);
            double best_result = *(best_fit_index+1); // best solution of this run
            if(best_result > 0.95)
                near_best++;
            if(best_result > best)
                best = best_result;
            best_sum += best_result;
        }
        double average_best = best_sum/repeat; // average best value over the repeats
        printf("Method %d for the w parameter:\n", i);
        printf("%d experiments, %d iterations each, %d runs close to the optimal solution, "
               "best value: %lf, average best value: %lf\n",
               repeat, maxgen, near_best, best, average_best);
    }
    finish = clock(); // end time
    double duration = (double)(finish-start)/CLOCKS_PER_SEC; // program run time
    printf("Program run time: %lf\n", duration);
    return 0;
}

For each of the five w strategies, the program reports the number of runs that came close to the optimal solution, the best value found, and the average best value over the repeated experiments.

Judging from the experimental results, the 3rd choice of w is the best of the five, whether measured by the number of runs close to the optimal solution, the size of the best value, or the average best value. The explanation is as follows: from the expression for w one can see that in the early iterations w changes slowly and stays large, preserving the algorithm's global search ability, while in the later iterations w changes quickly, which greatly improves the algorithm's local search ability and yields good solutions.

In general, whichever of the 5 methods is used for w, the results in most cases come very close to the true optimal solution, which shows that the search ability of the PSO algorithm is very strong.

