Particle Swarm Optimization (PSO) -- Introduction

Source: Internet
Author: User

Explanation of terms

Particle Swarm Optimization: particle swarm optimization theory (PSO)

Stochastic optimization technique: an optimization technique based on randomness

Evolutionary computation techniques: computational techniques that simulate the biological process of evolution

Genetic Algorithms: genetic algorithms (GA)

The problem space: the search space in which the problem is solved

Fitness: the fitness value

Pbest: personal best, the best value found by an individual particle

Lbest: local best, the best value found within a particle's neighborhood

Gbest: global best, the best value found by the entire swarm

Text

Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique developed by Dr. Eberhart and Dr. Kennedy in 1995, inspired by the social behavior of bird flocking or fish schooling.

Particle swarm optimization (PSO) was first published by Dr. Eberhart and Dr. Kennedy in an IEEE publication in 1995. It is a popular stochastic optimization technique that imitates the social behavior of bird flocks or fish schools (for example, during foraging).

PSO shares many similarities with evolutionary computation techniques such as genetic algorithms (GA). The system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles.

To some extent, particle swarm optimization is very similar to evolutionary computation techniques such as genetic algorithms. The swarm is initialized as a group of random particles (random solutions) and then searches for the optimal solution by iterating. In each iteration, every particle updates itself by tracking two "extreme values" (pbest and lbest; these are the optima referred to above). However, unlike genetic algorithms, particle swarms have no crossover or mutation operators. In the swarm, each "particle" is itself a potential solution, and it searches the problem space by following the current best particles (several kinds of "best" particle appear below; pay attention to which one is meant).
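To make the initialization step concrete, here is a minimal Python sketch (the dimensionality, swarm size, and bounds below are illustrative assumptions, not values taken from the article):

```python
import random

# Assumed illustrative settings (not taken from the article)
DIM = 2           # dimensionality D of the problem space
SWARM_SIZE = 30   # number of particles n
X_MAX = 10.0      # position bound
V_MAX = 1.0       # velocity bound

def init_swarm():
    """Random positions (candidate solutions) and random velocities for each particle."""
    positions = [[random.uniform(-X_MAX, X_MAX) for _ in range(DIM)]
                 for _ in range(SWARM_SIZE)]
    velocities = [[random.uniform(-V_MAX, V_MAX) for _ in range(DIM)]
                  for _ in range(SWARM_SIZE)]
    return positions, velocities
```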

Each particle keeps track of its coordinates in the problem space which are associated with the best solution (fitness) it has achieved so far. (The fitness value is also stored.) This value is called pbest. Another "best" value that is tracked by the particle swarm optimizer is the best value obtained so far by any particle in the neighborhood of the particle. This location is called lbest. When a particle takes all the population as its topological neighbors, the best value is a global best and is called gbest.

Each particle (randomly initialized as described above) tracks the best solution it has reached so far in the solution space, measured by its fitness value and replaced whenever a better one is found; in doing so it traces out its own trajectory, and this per-particle best is called pbest. Another "best" is obtained by comparing a particle with its neighbors: the best position found within that neighborhood is called lbest. (A "position" here is itself a solution, and also a particle; as will be shown later, a particle has two attributes, a position, which is its candidate solution, and a velocity, without which its position would never change and it could not evolve.) Using only part of the population as a particle's neighborhood is optional. When the entire population is taken as the neighborhood (the population here is the initial one; of course, a population that is too small may fall into a local optimum), the best value obtained over all particles, that is over all the lbest values, is the true global optimum and is called gbest.
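The pbest/gbest bookkeeping described here can be sketched as follows (the objective function f and the minimization convention are assumptions for illustration; they are not specified in the article):

```python
def update_bests(positions, f, pbest_pos, pbest_val, gbest_pos, gbest_val):
    """Refresh each particle's personal best (pbest) and the swarm's global best (gbest).

    f is an assumed objective function to be minimized (lower fitness is better).
    """
    for i, x in enumerate(positions):
        fx = f(x)
        if fx < pbest_val[i]:          # particle i improved on its own best so far
            pbest_val[i] = fx
            pbest_pos[i] = list(x)
        if fx < gbest_val:             # ...and possibly on the best found by any particle
            gbest_val = fx
            gbest_pos = list(x)
    return pbest_pos, pbest_val, gbest_pos, gbest_val
```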

The particle swarm optimization concept consists of, at each time step, changing the velocity of (accelerating) each particle toward its pbest and lbest locations (local version of PSO). Acceleration is weighted by a random term, with separate random numbers being generated for acceleration toward the pbest and lbest locations.

In other words, in each iteration particle swarm optimization changes each particle's velocity so that it accelerates toward its pbest and lbest locations. The two acceleration terms are weighted by random numbers that are generated independently of each other. (This is easy to see in the formula; the formula itself is explained below.)

In the past several years, PSO has been successfully applied in many research and application areas. It has been demonstrated that PSO gets better results in a faster, cheaper way compared with other methods.

In the past few years, particle swarm optimization has been successfully applied to many research and application fields. These studies show that it obtains good results faster and at lower cost than other optimization techniques.

Another reason that PSO is attractive is that there are few parameters to adjust. One version, with slight variations, works well in a wide variety of applications. Particle swarm optimization has been used both for approaches that apply across a wide range of applications and for specific applications focused on a specific requirement.

Another factor that makes particle swarm optimization so attractive is that few parameters need to be adjusted. One model formula can be used across a wide range of fields with only small parameter changes, and it also works for special requirements in specific fields. It is simply economical and practical. Want to try it? :)

The standard model formula is given below.

In each iteration, each particle updates its velocity and position using its individual best value (pbest) and the population's best value (gbest), that is:

v_id^(k+1) = w * v_id^k + c1 * r1 * (pbest_id^k - x_id^k) + c2 * r2 * (gbest_d^k - x_id^k)

x_id^(k+1) = x_id^k + v_id^(k+1)

Here w is the inertia weight; d = 1, 2, ..., D; i = 1, 2, ..., n; k is the current iteration number; v_id is the particle's velocity; c1 and c2 are non-negative constants called acceleration factors, usually set to c1 = c2 = 2; r1 and r2 are random numbers uniformly distributed in [0, 1]. To prevent particles from searching blindly, the position and velocity are generally limited to the ranges [-Xmax, Xmax] and [-Vmax, Vmax].
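Putting the formula into code, here is a minimal, self-contained sketch of the whole algorithm in Python. The sphere objective in the usage line, the swarm size, the bounds, and w = 0.7 are illustrative choices of mine, not values from the article; c1 = c2 = 2 and the clamping to [-Xmax, Xmax] and [-Vmax, Vmax] follow the description above.

```python
import random

def pso(f, dim=2, n_particles=30, iters=100,
        w=0.7, c1=2.0, c2=2.0, x_max=10.0, v_max=1.0):
    """Global-best PSO implementing the velocity/position update described above."""
    # Random initial positions and velocities
    x = [[random.uniform(-x_max, x_max) for _ in range(dim)] for _ in range(n_particles)]
    v = [[random.uniform(-v_max, v_max) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [list(p) for p in x]                      # each particle's best position so far
    pbest_val = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]   # best position found by the swarm

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()   # independent random weights
                # velocity update: inertia + pull toward pbest + pull toward gbest
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                v[i][d] = max(-v_max, min(v_max, v[i][d]))  # clamp velocity
                x[i][d] += v[i][d]                          # position update
                x[i][d] = max(-x_max, min(x_max, x[i][d]))  # clamp position
            fx = f(x[i])
            if fx < pbest_val[i]:                           # update personal best
                pbest_val[i] = fx
                pbest[i] = list(x[i])
                if fx < gbest_val:                          # update global best
                    gbest_val = fx
                    gbest = list(x[i])
    return gbest, gbest_val

# Usage: minimize the sphere function (an assumed test objective)
best_x, best_f = pso(lambda p: sum(xi * xi for xi in p))
```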

http://www.swarmintelligence.org/

All rights reserved. Xiaohui Hu 2006
