Particle Swarm Optimization (PSO) Algorithm

Particle Swarm Optimization (PSO) is an evolutionary computation technique developed by J. Kennedy and R. C. Eberhart in 1995. It originated from the simulation of a simplified social model. The word "swarm" comes from the five basic principles of swarm intelligence proposed by M. M. Millonas when developing models for artificial life. "Particle" is a compromise: the members of the swarm need to be described as having no mass and no volume, while still having velocity and acceleration.

The PSO algorithm was originally intended to graphically simulate the graceful and unpredictable choreography of a flock of birds. Observation of the social behavior of animals showed that the social sharing of information within a group provides an evolutionary advantage, and this insight became the basis for developing the algorithm. The first version of PSO was formed by adding nearest-neighbor velocity matching and then taking multi-dimensional search and acceleration by distance into account. Later, the inertia weight w was introduced to better balance exploitation and exploration, forming the standard version.

Principle:

The PSO algorithm is population-based: according to their fitness to the environment, the individuals of the population are moved toward good regions. However, it does not apply evolutionary operators to the individuals. Instead, each individual is regarded as a volume-less particle (a point) in the D-dimensional search space, flying through the search space at a certain velocity that is dynamically adjusted according to its own flight experience and the flight experience of its companions. The i-th particle is written as xi = (xi1, xi2, ..., xiD). The best position it has experienced so far (the one with the best fitness value) is recorded as pi = (pi1, pi2, ..., piD), also called pbest. The index of the best position experienced by all particles in the swarm is denoted by g, so that position is pg, also called gbest. The velocity of particle i is vi = (vi1, vi2, ..., viD). In each generation, its d-th dimension (1 ≤ d ≤ D) is updated according to the following equations:

vid = w * vid + c1 * rand() * (pid - xid) + c2 * Rand() * (pgd - xid)    (1a)

xid = xid + vid    (1b)

where w is the inertia weight, c1 and c2 are acceleration constants, and rand() and Rand() are two independent random numbers uniformly distributed in the range [0, 1].

In addition, the velocity vi of a particle is limited by a maximum velocity vmax. If the current update would make the velocity of a particle in dimension d, vid, exceed the maximum velocity vmax,d of that dimension, then the velocity in that dimension is clamped to vmax,d.
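As a minimal illustration of equations (1a) and (1b) together with the velocity clamp, the Python sketch below updates one particle in two dimensions. The variable names follow the notation above; the concrete numeric values are arbitrary examples and are not taken from this article.

    import random

    # Example settings (arbitrary illustration values).
    w, c1, c2 = 0.7, 2.0, 2.0      # inertia weight and acceleration constants
    vmax = [1.0, 1.0]              # maximum velocity per dimension

    x = [0.5, -1.2]                # current position xi
    v = [0.1, 0.3]                 # current velocity vi
    pbest = [0.4, -1.0]            # best position found by this particle
    gbest = [0.0, 0.0]             # best position found by the whole swarm

    for d in range(len(x)):
        # Equation (1a): inertia term + "cognition" term + "social" term.
        v[d] = (w * v[d]
                + c1 * random.random() * (pbest[d] - x[d])
                + c2 * random.random() * (gbest[d] - x[d]))
        # Clamp the velocity to the maximum velocity of this dimension.
        v[d] = max(-vmax[d], min(vmax[d], v[d]))
        # Equation (1b): move the particle.
        x[d] = x[d] + v[d]

    print(x, v)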

In equation (1a), the first term represents the particle's inertia, its tendency to continue its previous motion. The second term is the "cognition" part, which reflects the particle's own experience. The third term is the "social" part, which reflects information sharing and cooperation among particles.

The "Cognition" part can be explained by Thorndike's law of effect (effect), that is, a reinforced random behavior is more likely to appear in the future. The behavior here is "Cognition", and it is assumed that the correct knowledge is enhanced, such a model assumes that particles are motivated to reduce the error.

The "Society" section can be explained by vicarous reinforcement of Bandura. According to the expectation of this theory, when the observer observes that a model is reinforcing a certain line, it will increase its probability of implementing this behavior. That is, the cognition of the particles themselves will be imitated by other particles.

The PSO algorithm rests on the following psychological hypothesis: in the process of seeking consensus, individuals tend to remember their own beliefs while also considering the beliefs of their colleagues; when they perceive that a colleague holds a stronger belief, they adjust their own adaptively.

The algorithm flow of the standard PSO is as follows:

A) Initialize a swarm of particles (swarm size m) with random positions and velocities;

B) Evaluate the fitness of each particle;

C) Compare each particle's fitness value with that of its pbest, the best position it has experienced; if the current value is better, take the current position as the new pbest;

D) Compare each particle's fitness value with that of gbest, the best position experienced by the whole swarm; if the current value is better, reset the index of gbest;

E) Update the velocity and position of each particle according to equations (1a) and (1b);

F) Return to step B) if the termination condition is not met (usually a sufficiently good fitness value or a preset maximum number of generations gmax).
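To make this flow concrete, the following is a minimal Python sketch of the standard PSO loop, applied here to the sphere function as an example objective. The function names, the objective, and all parameter values are illustrative assumptions rather than part of the original description.

    import random

    def sphere(x):
        # Example objective to minimize; any fitness function can be substituted.
        return sum(xi * xi for xi in x)

    def pso(f, dim=2, m=20, w=0.7, c1=2.0, c2=2.0, vmax=1.0, gmax=100):
        # A) Initialize m particles with random positions and velocities.
        X = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(m)]
        V = [[random.uniform(-vmax, vmax) for _ in range(dim)] for _ in range(m)]
        pbest = [x[:] for x in X]                        # best position of each particle
        pbest_val = [f(x) for x in X]                    # B) evaluate initial fitness
        g = min(range(m), key=lambda i: pbest_val[i])    # index of gbest
        for _ in range(gmax):                            # F) stop after gmax generations
            for i in range(m):
                for d in range(dim):
                    # E) Velocity update, equation (1a), clamped to vmax.
                    V[i][d] = (w * V[i][d]
                               + c1 * random.random() * (pbest[i][d] - X[i][d])
                               + c2 * random.random() * (pbest[g][d] - X[i][d]))
                    V[i][d] = max(-vmax, min(vmax, V[i][d]))
                    X[i][d] += V[i][d]                   # position update, equation (1b)
                val = f(X[i])                            # B) evaluate fitness
                if val < pbest_val[i]:                   # C) update pbest if better
                    pbest[i], pbest_val[i] = X[i][:], val
                    if val < pbest_val[g]:               # D) reset the index of gbest
                        g = i
        return pbest[g], pbest_val[g]

    best_position, best_value = pso(sphere)
    print(best_position, best_value)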

Algorithm parameters

The parameters of PSO include: the swarm size m, the inertia weight w, the acceleration constants c1 and c2, the maximum velocity vmax, and the maximum number of generations gmax.
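For orientation, the short configuration below lists default values often reported in the PSO literature; these typical ranges are assumptions drawn from common practice, not values prescribed by this article.

    # Typical default settings reported in the PSO literature (illustrative
    # assumptions, not values prescribed by this article).
    pso_params = {
        "m": 30,       # swarm size, commonly 20 to 40
        "w": 0.7,      # inertia weight, often decreased from about 0.9 to 0.4
        "c1": 2.0,     # cognitive acceleration constant
        "c2": 2.0,     # social acceleration constant
        "vmax": 1.0,   # maximum velocity, often a fraction of the search range
        "gmax": 100,   # maximum number of generations
    }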

Vmax determines the resolution (or precision) of the search in the region between the current position and the best position. If vmax is too large, particles may fly past good solutions; if it is too small, particles cannot explore sufficiently and may become trapped in local optima. This restriction serves three purposes: preventing computational overflow, simulating incremental human learning and attitude change, and determining the granularity of the search of the problem space.

The inertia weight w lets particles retain the inertia of their motion, giving them a tendency to expand the search space and the ability to explore new regions.

The acceleration constants c1 and c2 represent the weights of the stochastic acceleration terms that pull each particle toward the pbest and gbest positions. Low values allow particles to wander far from the target regions before being pulled back, while high values cause particles to rush abruptly toward, or past, the target regions.

If the last two parts are absent, that is, c1 = c2 = 0, the particles keep flying at their current velocity until they reach the boundary of the search space. Since such a swarm can only search a limited region, it is difficult for it to find a good solution.

If there is no first part, that is, w = 0, the velocity depends only on the particle's current position and its historical best positions pbest and gbest, and the velocity itself has no memory. Suppose a particle is located at the global best position; it will then remain static, while the other particles fly toward the weighted center of their own pbest and the swarm's gbest. Under this condition, the particle swarm statistically contracts toward the current global best position, and the algorithm behaves more like a local search.

With the first part included, the particles have a tendency to expand the search space, that is, the first part provides global search capability. This also allows w to adjust the balance between the global and local search abilities of the algorithm for different search problems.

If there is no second part, that is, c1 = 0, the particles have no cognitive ability; this is the "social-only" model. Through the interaction between particles, the swarm can still reach new search regions. It converges faster than the standard version, but for complex problems it is more likely to fall into local optima than the standard version.

If there is no third part, that is, c2 = 0, there is no social information sharing between particles; this is the "cognition-only" model. Because there is no interaction between individuals, a swarm of size m is equivalent to m independently running single particles, so the probability of finding a solution is very small.
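These special cases can be reproduced directly in the update rule: w = 0 removes the memory of the previous velocity, c1 = 0 yields the "social-only" model, and c2 = 0 yields the "cognition-only" model. The short sketch below only shows which terms vanish; the helper function and its numeric values are illustrative assumptions.

    import random

    def velocity_update(v, x, pbest_d, gbest_d, w, c1, c2):
        # One-dimensional form of equation (1a).
        return (w * v
                + c1 * random.random() * (pbest_d - x)
                + c2 * random.random() * (gbest_d - x))

    v, x, pbest_d, gbest_d = 0.3, 0.5, 0.4, 0.0
    standard       = velocity_update(v, x, pbest_d, gbest_d, w=0.7, c1=2.0, c2=2.0)
    no_memory      = velocity_update(v, x, pbest_d, gbest_d, w=0.0, c1=2.0, c2=2.0)  # w = 0
    social_only    = velocity_update(v, x, pbest_d, gbest_d, w=0.7, c1=0.0, c2=2.0)  # c1 = 0
    cognition_only = velocity_update(v, x, pbest_d, gbest_d, w=0.7, c1=2.0, c2=0.0)  # c2 = 0
    print(standard, no_memory, social_only, cognition_only)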
