Multi-objective particle swarm optimization (MOPSO)

The multi-objective particle swarm optimization (MOPSO) algorithm was proposed by Carlos A. Coello Coello et al. in 2004; see reference 1 for details. The aim is to extend particle swarm optimization (PSO), which in its basic form handles only a single objective, to multiple objectives. The original single-objective PSO procedure is simple:

--Initialize particle positions (usually generated uniformly at random)

--Compute fitness values (typically the values of the objective function being optimized)

--Initialize each particle's personal best pbest and find the global best gbest

--Update velocity and position according to the update formulas

--Recompute fitness

--Update the personal best pbest and global best gbest according to the new fitness

--Exit when the algorithm converges or the maximum number of iterations is reached

The velocity update formula is as follows:

    v_i(t+1) = w * v_i(t) + c1 * r1 * (pbest_i - x_i(t)) + c2 * r2 * (gbest - x_i(t))

where w is the inertia weight, c1 and c2 are acceleration coefficients, and r1, r2 are random numbers drawn uniformly from [0, 1]. The right-hand side consists of three parts: the first is the inertia term, which continues the particle's previous motion; the second is the cognitive term, which moves the particle toward its own historical best position; the third is the social term, which moves the particle toward the global best position.

With the velocity in hand, the position update follows naturally:

    x_i(t+1) = x_i(t) + v_i(t+1)
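As a concrete illustration, here is a minimal single-objective PSO sketch in Python following the steps and formulas above. The objective function, swarm size, bounds, and coefficient values are illustrative assumptions, not part of the original text.

    import numpy as np

    def pso(objective, dim, n_particles=30, iters=100,
            w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
        lo, hi = bounds
        rng = np.random.default_rng(0)
        # Initialize positions uniformly at random; velocities start at zero
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros((n_particles, dim))
        # Personal bests start at the initial positions
        pbest = x.copy()
        pbest_f = np.array([objective(p) for p in x])
        # Global best is the best of the personal bests
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1 = rng.random((n_particles, dim))
            r2 = rng.random((n_particles, dim))
            # Velocity update: inertia + cognitive + social terms
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            # Position update, clipped to the search bounds
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            # Update personal bests where the new position improved
            improved = f < pbest_f
            pbest[improved] = x[improved]
            pbest_f[improved] = f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # Example: minimize the sphere function in 3 dimensions
    best_x, best_f = pso(lambda p: float(np.sum(p**2)), dim=3)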

That is the single-objective PSO algorithm. When extending it to multiple objectives, the following problems arise:

    1. How to choose pbest. For a single objective, choosing pbest only requires comparing two fitness values to see which is better. For multiple objectives, two particles cannot always be compared: if one particle is better on every objective, it is clearly the better one; but if it is better on some objectives and worse on others, neither can strictly be called better or worse.
    2. How to choose gbest. For a single objective there is only one best individual in the population. For multiple objectives there are many non-dominated individuals, yet in PSO each particle can follow only one of them as its leader. How should it be chosen?

For the first problem, when two candidates cannot be strictly compared, MOPSO randomly selects one of them as the historical best. For the second problem, MOPSO chooses a leader from the optimal set (the archive) according to crowding, preferring particles in less dense regions (a grid method is used here).
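A minimal sketch of the dominance test and the pbest update rule just described, assuming minimization of all objectives (the helper names are mine, not from the paper):

    import random

    def dominates(a, b):
        # a Pareto-dominates b (minimization): no worse on every objective
        # and strictly better on at least one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def update_pbest(pbest, pbest_f, x, f):
        # Keep whichever of the old pbest and the new position dominates;
        # if neither dominates the other, pick one at random
        if dominates(f, pbest_f):
            return x, f
        if dominates(pbest_f, f):
            return pbest, pbest_f
        return (x, f) if random.random() < 0.5 else (pbest, pbest_f)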

When choosing a leader and when updating the archive (that is, the current approximation of the Pareto-optimal set), MOPSO uses an adaptive grid method; see reference 2 for details.
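As a rough sketch of what such an adaptive grid can look like: each objective dimension is split into a fixed number of divisions over the archive's current objective ranges, and a particle's cell is the tuple of its bin indices. The divisions parameter and the re-meshing policy here are assumptions, not taken from the paper.

    def make_cell_of(archive_f, divisions=10):
        # Build a cell_of function over the archive's current objective ranges;
        # call again (re-mesh) whenever those ranges change
        n_obj = len(archive_f[0])
        mins = [min(f[k] for f in archive_f) for k in range(n_obj)]
        maxs = [max(f[k] for f in archive_f) for k in range(n_obj)]
        def cell_of(f):
            idx = []
            for k in range(n_obj):
                width = (maxs[k] - mins[k]) or 1.0  # avoid division by zero
                b = int((f[k] - mins[k]) / width * divisions)
                idx.append(max(0, min(b, divisions - 1)))  # clamp to valid bins
            return tuple(idx)
        return cell_of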

How is the leader chosen?

MOPSO selects a particle from the archive for each particle to follow. How is it chosen? The choice is based on the grid: let n_i denote the number of particles in grid cell i. Each occupied cell is assigned a selection probability inversely proportional to n_i, and a cell is chosen by roulette-wheel selection, so the denser the particles in a cell, the lower its probability of being chosen. This is to ensure that sparsely populated (unexplored) regions can still be explored. A sketch follows below.
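A sketch of this roulette-wheel selection over grid cells, reusing a cell_of function like the one above; the data layout (parallel lists of archive members and their objective vectors) is an assumption:

    import random
    from collections import defaultdict

    def select_leader(archive, archive_f, cell_of):
        # Group archive members by grid cell
        cells = defaultdict(list)
        for a, f in zip(archive, archive_f):
            cells[cell_of(f)].append(a)
        # Cell fitness is inversely proportional to its particle count,
        # so crowded cells are less likely to be chosen
        fitness = {c: 1.0 / len(members) for c, members in cells.items()}
        total = sum(fitness.values())
        r = random.uniform(0.0, total)
        acc = 0.0
        for c, fit in fitness.items():
            acc += fit
            if acc >= r:
                return random.choice(cells[c])  # random member of the chosen cell
        return random.choice(archive)  # floating-point fallback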

How is the archive updated?

After the population update is complete, how are particles added to the archive? MOPSO screens candidates in three rounds.

First, the new population is screened by the dominance relation: dominated solutions are removed, and the non-dominated remainder is added to the archive.

Second, within the archive a second round of dominance screening is performed: dominated members are removed, and the grid-cell location of each archived particle is computed.

Finally, if the archive size exceeds the archive threshold, particles are removed according to the adaptive grid (from the most crowded cells first) until the threshold is met, and the grid is then re-meshed.
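A condensed sketch of this three-round archive update, reusing dominates() and a cell_of grid function from the sketches above; the capacity handling and eviction-from-the-densest-cell policy follow the text, everything else is an assumption:

    import random

    def update_archive(archive, candidates, capacity, cell_of):
        # archive, candidates: lists of objective vectors (tuples);
        # cell_of maps an objective vector to its grid-cell index
        # Round 1: drop candidates dominated within the new population
        pool = [c for c in candidates
                if not any(dominates(o, c) for o in candidates if o is not c)]
        merged = archive + pool
        # Round 2: keep only mutually non-dominated members of the archive
        merged = [a for a in merged
                  if not any(dominates(o, a) for o in merged if o is not a)]
        # Round 3: while over capacity, evict a random member of the densest cell
        while len(merged) > capacity:
            counts = {}
            for a in merged:
                counts[cell_of(a)] = counts.get(cell_of(a), 0) + 1
            densest = max(counts, key=counts.get)
            merged.remove(random.choice([a for a in merged if cell_of(a) == densest]))
        return merged  # the grid is re-meshed after truncation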

References

    1. C. A. Coello Coello, G. T. Pulido, M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, 2004.
    2. J. D. Knowles, D. W. Corne, "Approximating the nondominated front using the Pareto archived evolution strategy," Evolutionary Computation, 2000.
    3. http://blog.csdn.net/ture_2010/article/details/18180183

 
