Introduction to the Particle Swarm Optimization Algorithm

Source: Internet
Author: User
Tags: flock

Learn maths well.

I. Source of the problem

A friend introduced me to a job doing PSO and its optimizations. My tutor happens to study the same topic, and I had already come into contact with it at school, so I picked it up .... It earns me some living expenses.

You are welcome to contact me about algorithm class projects: QQ 791909235, tel 13137910179.

II. Background Information

2.1 Artificial Life

Artificial life: the study of man-made systems that exhibit certain basic characteristics of life. It covers two directions:
1. How to use computational techniques to study biological phenomena;
2. How to use biological techniques to study computational problems.
Our focus here is the second. Many computational techniques have been derived from biological phenomena, such as neural networks and genetic algorithms. We now discuss another kind of biological system, the social system: the interactions between a community of simple individuals, their environment, and one another.

2.2 Swarm Intelligence

Swarm intelligence: a simulated system in which agents use only local information, and from which unpredictable group behavior emerges.

While developing models of artificial life (1994), Millonas put forward the concept of swarm intelligence together with five principles:
1. Proximity: the group should be able to carry out simple space and time computations;
2. Quality: the group should be able to respond to quality factors in the environment;
3. Diverse response: the group should not commit its activity along excessively narrow channels;
4. Stability: the group should not change its mode of behavior every time the environment changes;
5. Adaptability: the group should be able to change its behavior mode when it is worth the computational price.

2.3 Simulating Flocking Behavior

Simulating bird flocking: Reynolds, Heppner, and Grenander proposed models to simulate the behavior of bird flocks. They observed that a flock may suddenly change direction, scatter, or regroup in flight, so there must be some underlying ability or rule that guarantees such synchronized behavior. They believed this synchrony emerges from the group dynamics of the birds' otherwise unpredictable social behavior. In these early models, synchronization depended only on inter-individual distance; that is, synchrony is the result of each bird's effort to maintain an optimal distance from its neighbors in the flock.

A study of fish schooling: the biologist E. O. Wilson studied fish schools and proposed: "At least in theory, individual members of a school can benefit from the discoveries and previous experience of all other members during the search for food, and this advantage can outweigh the disadvantages of competition between individuals, whenever the food resource is unpredictably distributed." This shows that social sharing of information among members of the same species confers an advantage. This insight is the foundation of PSO.

III. Algorithm Introduction

The basic idea of the particle swarm optimization algorithm is to find the optimal solution through collaboration and information sharing among the individuals of a group.
The advantages of PSO are that it is simple, easy to implement, and has few parameters to tune. It has been widely applied to function optimization, neural network training, fuzzy system control, and other areas where genetic algorithms are also used.

3.1 Posing the Problem

Imagine a scene in which a flock of birds searches randomly for food. There is only one piece of food in the area, and none of the birds knows where it is, but each bird knows how far it is from the food. What, then, is the best strategy for finding the food? The simplest and most effective one is to search the area surrounding the bird that is currently closest to the food.

3.2 Problem Abstraction

Birds are abstracted as particles without mass or volume (points) extended to n-dimensional space. Particle i is represented by a position vector xi = (xi1, xi2, ..., xin) in n-dimensional space, and its flight velocity by a vector vi = (vi1, vi2, ..., vin). Each particle has a fitness value determined by the objective function, and it knows both the best position it has found so far (pbest) and its current position xi. This can be regarded as the particle's own flight experience. In addition, each particle knows the best position found so far by any particle in the entire population (gbest, which is the best among all pbest values). This can be regarded as the experience of the particle's companions. Each particle decides its next move from its own experience and the best experience of its companions, as sketched below.
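As a sketch, the abstraction above can be represented in code along these lines (a minimal Python illustration; the class and field names are our own, not part of any standard library):

```python
import numpy as np

class Particle:
    """A massless, volumeless point in n-dimensional search space."""
    def __init__(self, n_dims, low, high):
        self.x = np.random.uniform(low, high, n_dims)  # current position x_i
        self.v = np.random.uniform(-1.0, 1.0, n_dims)  # flight velocity v_i
        self.pbest = self.x.copy()                     # best position this particle has found
        self.pbest_fitness = float("inf")              # fitness at pbest (minimization)
```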

3.3 Algorithm Description

PSO is initialized with a group of random particles (random solutions), and the optimal solution is then found by iteration. In each iteration, every particle updates itself by tracking two "extremes": pbest and gbest.

After finding these two best values, each particle updates its velocity and position with the following formulas:

    v_i = v_i + c1 * rand() * (pbest_i - x_i) + c2 * Rand() * (gbest - x_i)    (1)

    x_i = x_i + v_i    (2)

(In the standard PSO introduced in 3.4 below, v_i in equation (1) is additionally multiplied by an inertia weight w.)

Here i = 1, 2, ..., m, and m is the total number of particles in the swarm; v_i is the velocity of particle i; pbest and gbest are as defined above; rand() and Rand() are independent random numbers uniformly distributed on (0, 1); x_i is the current position of the particle; c1 and c2 are learning factors, usually c1 = c2 = 2. In each dimension the speed is bounded by a maximum Vmax (Vmax > 0): if the speed in some dimension exceeds Vmax, it is clamped to Vmax. These two formulas, with the inertia weight added later, form the standard PSO.
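As a concrete sketch of equations (1) and (2), here is a minimal Python update step (it assumes numpy and the illustrative Particle class from section 3.2; the function name and parameter defaults are our own, not from the original courseware):

```python
import numpy as np

def step(p, gbest, c1=2.0, c2=2.0, vmax=4.0):
    """One basic PSO update for particle p, per equations (1) and (2)."""
    n = len(p.x)
    p.v = (p.v
           + c1 * np.random.rand(n) * (p.pbest - p.x)  # cognitive term: pull toward own best
           + c2 * np.random.rand(n) * (gbest - p.x))   # social term: pull toward swarm best
    p.v = np.clip(p.v, -vmax, vmax)                    # clamp each dimension to +/- Vmax
    p.x = p.x + p.v                                    # move the particle
```

Note that a fresh random number is drawn per dimension, which is the usual reading of rand() and Rand() in the formulas.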

3.4 Algorithm Optimization

In 1998, Shi et al. published the paper "A Modified Particle Swarm Optimizer" at the International Conference on Evolutionary Computation, which revised equation (1) by introducing an inertia weight w:

    v_i = w * v_i + c1 * rand() * (pbest_i - x_i) + c2 * Rand() * (gbest - x_i)    (3)

When w is large, the global search ability is strong and the local search ability is weak; when w is small, the opposite holds.

Initially Shi took w as a constant, but later experiments found that a dynamic w obtains better results than a fixed value. The dynamic w can change linearly over the course of the PSO search, or change according to some measure function of PSO performance. At present, the linearly decreasing weight (LDW) strategy proposed by Shi is the most widely adopted, as sketched below.
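A minimal sketch of the LDW strategy (the bounds w_max = 0.9 and w_min = 0.4 are typical values from the literature, not values stated in this article):

```python
def ldw(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: w_max at t = 0 down to w_min at t = t_max."""
    return w_max - (w_max - w_min) * t / t_max
```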

3.5 Standard PSO Algorithm Flow

The flow of the standard PSO algorithm:

STEP1: Initialize a group of particles (group size m), including random positions and velocities;

STEP2: Evaluate the fitness of each particle;

STEP3: For each particle, compare its fitness value with the fitness of its best position pbest; if the current value is better, take the current position as the new pbest;

STEP4: For each particle, compare its fitness value with the fitness of the global best position gbest; if the current value is better, take the current position as the new gbest;

STEP5: Adjust each particle's velocity and position according to equations (2) and (3);

STEP6: If the termination condition is not met, go back to STEP2.

The iteration termination condition is generally chosen, depending on the specific problem, as a maximum number of iterations Gk and/or the requirement that the best position found by the swarm so far satisfy a predetermined minimum fitness threshold. A complete sketch of this flow follows.
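Putting STEP1 through STEP6 together, here is a self-contained sketch of the standard PSO in Python, minimizing the sphere function as a stand-in objective (all parameter values are illustrative assumptions; a fixed inertia weight is used for simplicity, though the LDW strategy from 3.4 could be substituted):

```python
import numpy as np

def sphere(x):
    """Illustrative objective: f(x) = sum(x^2), minimum 0 at the origin."""
    return np.sum(x ** 2)

def pso(f, n_dims=10, m=30, g_max=200, w=0.729, c1=2.0, c2=2.0,
        vmax=4.0, low=-10.0, high=10.0):
    # STEP1: initialize m particles with random positions and velocities
    x = np.random.uniform(low, high, (m, n_dims))
    v = np.random.uniform(-vmax, vmax, (m, n_dims))
    pbest = x.copy()
    pbest_f = np.array([f(xi) for xi in x])
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]

    for _ in range(g_max):                   # STEP6: repeat until max iterations Gk
        for i in range(m):
            fi = f(x[i])                     # STEP2: evaluate fitness
            if fi < pbest_f[i]:              # STEP3: update personal best
                pbest_f[i], pbest[i] = fi, x[i].copy()
                if fi < gbest_f:             # STEP4: update global best
                    gbest_f, gbest = fi, x[i].copy()
        # STEP5: update velocities and positions per equations (2) and (3)
        r1 = np.random.rand(m, n_dims)
        r2 = np.random.rand(m, n_dims)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -vmax, vmax)          # enforce the Vmax limit per dimension
        x = x + v
    return gbest, gbest_f

best_x, best_f = pso(sphere)
print(best_f)  # should be close to 0
```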

3.6 Parameter Analysis

The pbest and gbest in the equations represent the local and global optimal positions of the particle swarm, respectively. When c1 = 0, the particle has no cognitive ability and the model becomes a social-only model:

    v_i = w * v_i + c2 * Rand() * (gbest - x_i)

This is called the global PSO algorithm. The particles have the ability to expand the search space and converge faster, but because local search is lacking, they are more prone than the standard PSO to falling into local optima on complex problems.

When c2 = 0, there is no social information exchange between particles and the model becomes a cognition-only model:

    v_i = w * v_i + c1 * rand() * (pbest_i - x_i)

This is called the local PSO algorithm. Because the individuals do not communicate, the whole swarm amounts to a blind random search by multiple independent particles; convergence is slow, so the probability of obtaining the optimal solution is small.

The swarm size m is generally taken as 20 to 40; for difficult or special classes of problems it can be increased to 100 to 200.

The maximum speed Vmax determines the resolution (or precision) with which the region between a particle's current position and the best position is searched. If Vmax is too large, particles are likely to fly past good minima; if it is too small, particles cannot explore far enough beyond local minima and may be trapped in local extremum regions. The limit also prevents numerical overflow and sets the granularity of the search over the problem space.

The weight factors include the inertia weight w and the learning factors c1 and c2. The inertia term preserves a particle's momentum, giving it the tendency to expand the search space and the ability to explore new areas. c1 and c2 are the weights of the stochastic acceleration terms that pull each particle toward its pbest and the gbest positions. Low values let particles wander outside the target region before being pulled back; high values make particles rush toward, or overshoot, the target region.

IV. Optimizing PSO

4.1 Introducing a Convergence Factor Instead of the Inertia Weight

Usually c1 = c2 = 2 is set. Suganthan's experiments showed that better solutions can be obtained when c1 and c2 are constants, though not necessarily equal to 2. Clerc introduced a convergence factor (constriction factor) K to guarantee convergence:

    v_i = K * [v_i + c1 * rand() * (pbest_i - x_i) + c2 * Rand() * (gbest - x_i)]

    K = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|,  where phi = c1 + c2 and phi > 4

Usually phi is taken as 4.1, which gives K ≈ 0.729. Experiments show that PSO with a convergence factor converges faster than PSO with an inertia weight. In fact, with appropriate choices of w, c1, and c2, the two algorithms are equivalent, so the constriction-factor PSO can be regarded as a special case of the inertia-weight PSO. Proper selection of the parameter values can improve the performance of the algorithm.
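As a worked check of Clerc's formula (our own illustration; the helper name is hypothetical):

```python
import math

def constriction(c1=2.05, c2=2.05):
    """Clerc's constriction factor K for phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi ** 2 - 4.0 * phi))

print(f"{constriction():.4f}")  # phi = 4.1 -> 0.7298, i.e. K ≈ 0.729 as quoted above
```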

4.2 Discrete binary particle swarm

The basic PSO operates on real-valued continuous spaces, but many practical problems are combinatorial optimization problems, so a discrete form of PSO was proposed. In the standard binary version, the velocity is updated as before, while the position update becomes probabilistic: each component x_ij is set to 1 with probability sigmoid(v_ij) = 1 / (1 + exp(-v_ij)), and to 0 otherwise.
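A minimal sketch of this binary update, following Kennedy and Eberhart's discrete binary PSO (the function name and defaults are our own):

```python
import numpy as np

def binary_step(x, v, pbest, gbest, c1=2.0, c2=2.0, vmax=4.0):
    """One discrete binary PSO step: velocity as in the continuous case,
    position re-sampled bitwise with probability sigmoid(v)."""
    n = len(x)
    v = (v + c1 * np.random.rand(n) * (pbest - x)
           + c2 * np.random.rand(n) * (gbest - x))
    v = np.clip(v, -vmax, vmax)
    s = 1.0 / (1.0 + np.exp(-v))               # sigmoid maps velocity to a probability
    x = (np.random.rand(n) < s).astype(int)    # each bit becomes 1 with probability s
    return x, v
```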

4.3 PSO and GA comparison

Commonalities: (1) Both are bio-inspired algorithms. (2) Both are global optimization methods. (3) Both are stochastic search algorithms. (4) Both are implicitly parallel. (5) Both search using only the individuals' fitness information, so they are not restricted by constraints on the objective function such as continuity or differentiability. (6) On high-dimensional complex problems, both often suffer from premature convergence and poor convergence performance, and neither can guarantee convergence to the optimum.

Differences: (1) PSO has memory: good solutions are retained by all particles, whereas in GA previous knowledge is destroyed as the population changes. (2) Particles in PSO share information only through the current best found by the swarm, so it is largely a one-way information-sharing mechanism, while in GA chromosomes share information with each other, moving the whole population toward the optimal region. (3) The encoding technique and genetic operations of GA are relatively simple; PSO, compared with GA, has no crossover or mutation operations at all, and particles are updated only through their internal velocities, so its principle is even simpler, it has fewer parameters, and it is easier to implement.

GA can be used to study three aspects of neural networks: connection weights, network structure, and learning algorithms. Its advantage is that it can handle problems traditional methods cannot, such as non-differentiable node transfer functions or the absence of gradient information. Its disadvantages: performance is not particularly good on some problems, and the encoding of network weights and the choice of genetic operators can be troublesome. PSO has also been used to train neural networks, and research shows it is a promising neural network training algorithm: it is faster, obtains better results, and does not suffer from the problems that genetic algorithms encounter.

V. PSO Implementation

CSDN link (contains the basic PSO and 12 optimized PSO algorithms; all of them are usable).

Reference: West Shong Teacher Courseware

CSDN link.
