MOPSO: Multi-Objective Particle Swarm Optimization Algorithm

In recent years, heuristic-based multi-objective optimization techniques have developed rapidly, and research shows that they are more practical and efficient than the classical methods. Representative multi-objective optimization algorithms include NSGA, NSGA-II, SPEA, SPEA2, PAES, and PESA. Particle swarm optimization (PSO) is an evolutionary technique based on swarm intelligence that simulates social behavior; with its distinctive search mechanism, good convergence, and ease of implementation, it has been widely used in engineering optimization, and multi-objective PSO (MOPSO) algorithms have been applied in many optimization fields [9~11], although they still have shortcomings such as high computational complexity, limited generality, and poor convergence.

The multi-objective particle swarm optimization (MOPSO) algorithm was proposed by Carlos A. Coello Coello in 2004 (see reference 1 for details). Its aim is to extend particle swarm optimization (PSO), which originally handles only a single objective, to problems with multiple objectives. Recall that the original single-objective PSO procedure is simple:

--Initialize particle positions (usually generated uniformly at random)

--Compute fitness values (typically the values of the objective function being optimized)

--Initialize each particle's personal best pbest and find the global best gbest

--Update velocity and position using the velocity and position update formulas

--Recompute fitness

--Update the personal best pbest and the global best gbest according to the new fitness

--Exit the algorithm upon convergence or when the maximum number of iterations is reached

The velocity update formula is as follows:

    v_i^{t+1} = w \cdot v_i^t + c_1 r_1 (pbest_i - x_i^t) + c_2 r_2 (gbest - x_i^t)

where w is the inertia weight, c_1 and c_2 are acceleration coefficients, and r_1, r_2 are random numbers drawn uniformly from [0, 1].

The right-hand side consists of three parts: the first is the inertia term, a vector continuing the particle's previous motion; the second is the cognitive term, the movement toward the particle's own historical best position; and the third is the social term, the movement toward the global best position.

With the velocity updated, the position update follows naturally:

    x_i^{t+1} = x_i^t + v_i^{t+1}
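To make the loop concrete, here is a minimal Python sketch of one iteration of single-objective PSO for a minimization problem. The parameter values and the NumPy vectorization are illustrative choices, not part of the original description.

    import numpy as np

    def pso_step(x, v, pbest, gbest, f, w=0.4, c1=2.0, c2=2.0):
        """One iteration of single-objective PSO (minimization).
        x, v   : (n, d) particle positions and velocities
        pbest  : (n, d) personal best positions
        gbest  : (d,)   global best position
        f      : objective function taking a (d,) position, returning a scalar
        """
        n, d = x.shape
        r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
        # inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        # keep the better of the old pbest and the new position for each particle
        fx = np.array([f(xi) for xi in x])
        fp = np.array([f(pi) for pi in pbest])
        improved = fx < fp
        pbest = np.where(improved[:, None], x, pbest)
        # the best personal best becomes the new global best
        gbest = pbest[np.argmin(np.where(improved, fx, fp))]
        return x, v, pbest, gbest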

The above is an introduction to the single-objective PSO algorithm. When applying it to multiple objectives, the following problems arise:

    1. How to choose pbest. For single-objective optimization, choosing pbest only requires comparing two values and keeping the better one. But with multiple objectives, two particles cannot always be ranked: if one particle is better on every objective, it is better overall; if it is better on some objectives and worse on others, neither can strictly be called better or worse (a small dominance check making this explicit is sketched after this list).
    2. How to choose gbest. For a single objective there is exactly one best individual in the population, but with multiple objectives there are many best (non-dominated) individuals. In PSO each particle can follow only one of them as its leader. Which one should it choose?
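The comparison described in point 1 is Pareto dominance. The following helper (assuming, as a convention for this illustration, that all objectives are minimized) makes the possible outcomes of a two-particle comparison explicit:

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (all objectives
        minimized): a is no worse in every objective and strictly better
        in at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

Comparing particles a and b then has three outcomes: dominates(a, b) means a is better, dominates(b, a) means b is better, and if neither holds the two are mutually non-dominated and cannot strictly be ranked.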

MOPSO's answer to the first problem is to pick one of the two candidates at random as the historical best whenever neither strictly dominates the other. For the second problem, MOPSO chooses a leader from the set of current non-dominated solutions (the archive) according to crowding, preferring particles in less dense regions (a grid method is used for this). MOPSO applies an adaptive grid both when choosing a leader and when updating the archive (the running approximation of the Pareto-optimal front); see reference 4 for details.

Here are the steps for the MOPSO algorithm:

(1) Initialize the population and the archive set

The initial population P1 is generated by assigning initial values to the particles, and the non-inferior (non-dominated) solutions in P1 are copied into the archive set to obtain A1. Let the current generation be t; steps (2)~(4) are repeated while t is less than the total number of generations.
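A minimal sketch of this initialization step, reusing the dominates helper above; the uniform-random initialization and the bounds lo and hi are assumptions made for illustration:

    import numpy as np

    def initialize(n, d, lo, hi, f, dominates):
        """Step (1): random initial population P1 and the initial archive A1,
        i.e. the non-dominated members of P1.  f maps a position to a tuple
        of objective values."""
        P = lo + np.random.rand(n, d) * (hi - lo)
        F = [f(x) for x in P]
        archive = [(P[i].copy(), F[i]) for i in range(n)
                   if not any(dominates(F[j], F[i]) for j in range(n) if j != i)]
        return P, F, archive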
(2) Evolution produces the next-generation population
For each particle j in the current population, while j is less than the population size, perform steps 1) to 3) below.

1) Compute density information for the particles in the archive set
The objective space is divided into small grid regions, and the number of particles contained in each region serves as the density information of those particles: the more particles share a grid cell, the larger their density value, and the fewer, the smaller. Taking a two-dimensional minimization problem as an example, the density estimation proceeds as follows:

where G = M × M is the number of grid cells into which the objective space is divided, Int is the rounding function, and f_i^1 and f_i^2 are the two objective function values of particle i.
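A sketch of this density estimate in Python; the equal-width binning over the observed objective ranges and the default M are assumptions, since the original formula is only described in outline:

    import numpy as np

    def grid_density(F, M=10):
        """Divide the 2-D objective space into an M x M grid and count how
        many archive members fall into each cell.
        F : (n, 2) array of objective values (f1, f2) of the archive members.
        Returns (cells, density): each member's grid cell as a tuple of
        integer coordinates, and the occupancy count of its cell."""
        lo, hi = F.min(axis=0), F.max(axis=0)
        width = (hi - lo) / M
        width[width == 0] = 1.0                      # guard against a flat objective
        idx = np.minimum(np.floor((F - lo) / width).astype(int), M - 1)
        cells = [tuple(c) for c in idx]
        counts = {}
        for cell in cells:
            counts[cell] = counts.get(cell, 0) + 1   # occupancy per grid cell
        density = np.array([counts[cell] for cell in cells])
        return cells, density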

2) For particle p_{j,t} in the population, select its gbest g_{j,t}. The quality of the gbest particle determines both the convergence of the algorithm and the diversity of the resulting non-inferior solution set, and the selection is based on the density information of the archive particles: the lower the density value of an archive particle, the greater its probability of being selected, and the higher the density, the smaller; particles in sparse regions of the archive are considered to have stronger search potential, and those in crowded regions weaker. The specific implementation of the algorithm is as follows:

where |A_t| denotes the number of particles in the archive A_t; A_j stores the archive members that are superior to (dominate) particle p_{j,t}; the lowest-density particles of A_j are stored in G_{j,t}; Density(a_k) computes the density estimate of particle a_k; and Rand{G_{j,t}} selects a member of G_{j,t} at random.
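A sketch of this leader selection; the fallback to the whole archive when no member dominates the particle is an assumption, since the text does not spell out that case:

    import random

    def select_gbest(p_obj, archive_obj, density, dominates):
        """Pick a leader index for a particle with objective vector p_obj:
        among archive members that dominate the particle, keep those with
        the lowest grid density and choose one of them at random."""
        candidates = [i for i, a in enumerate(archive_obj) if dominates(a, p_obj)]
        if not candidates:
            candidates = list(range(len(archive_obj)))  # assumption: fall back to all
        d_min = min(density[i] for i in candidates)
        sparsest = [i for i in candidates if density[i] == d_min]
        return random.choice(sparsest)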
3) Update the positions and velocities of the particles in the population to search for the optimal solutions under the guidance of gbest and pbest; the implementation of the algorithm is as follows:

where p_{k,t+1} denotes the k-th particle in P_{t+1}, and the incomparability symbol in the formula indicates that there is no dominance relationship between the two vectors.

(3) Update the archive set

When the archive set is empty, the non-inferior solutions in P_{t+1} are copied directly into it; when the archive set is not empty, a particle of P_{t+1} is inserted into the archive as long as it dominates, or has no dominance relationship with, the particles already in the archive set.
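A sketch of this archive update; each archive entry is assumed to be a (position, objectives) pair:

    def update_archive(archive, population, dominates):
        """Merge new particles into the archive: a particle enters if no
        archive member dominates it, and any archive members it dominates
        are removed."""
        for x, fx in population:
            if any(dominates(fa, fx) for _, fa in archive):
                continue                              # dominated by the archive: rejected
            archive[:] = [(a, fa) for a, fa in archive if not dominates(fx, fa)]
            archive.append((x, fx))
        return archive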
(4) Truncate the archive set
When the number of particles in the archive exceeds the specified size, redundant individuals must be removed to keep the archive size stable. For each grid cell k containing more than one particle, the number of particles to delete, PN, is computed according to Eq. (1), and then PN particles are deleted from grid cell k at random.

where Grid[k] denotes the number of particles contained in grid cell k.
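Since the exact proportional rule of Eq. (1) is not reproduced here, the sketch below approximates it by repeatedly removing a random member from the currently most crowded cell until the archive fits:

    import random

    def truncate_archive(archive, cells, max_size):
        """Shrink the archive to max_size.  cells[i] is the grid cell
        (a tuple) of archive member i, as returned by grid_density."""
        idx = list(range(len(archive)))
        while len(idx) > max_size:
            counts = {}
            for i in idx:
                counts[cells[i]] = counts.get(cells[i], 0) + 1
            fullest = max(counts, key=counts.get)     # most crowded grid cell
            victims = [i for i in idx if cells[i] == fullest]
            idx.remove(random.choice(victims))
        return [archive[i] for i in idx]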
(5) Output the particle information in the archive set
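For completeness, here is one way the helpers sketched above might be wired into the full loop of steps (1)~(5). Every parameter default is an illustrative assumption, and the random pbest tie-break follows the rule described earlier for mutually non-dominated positions:

    import numpy as np

    def mopso(f, n=50, d=2, lo=0.0, hi=1.0, T=100, max_size=100,
              w=0.4, c=2.0, M=10):
        """End-to-end MOPSO sketch built from the helpers above."""
        X, FX, archive = initialize(n, d, lo, hi, f, dominates)
        V = np.zeros((n, d))
        pbest, pbest_f = X.copy(), list(FX)
        for _ in range(T):
            A_f = [fa for _, fa in archive]
            _, density = grid_density(np.array(A_f), M)
            for j in range(n):
                # leader from the sparsest qualifying archive members
                g = archive[select_gbest(FX[j], A_f, density, dominates)][0]
                r1, r2 = np.random.rand(d), np.random.rand(d)
                V[j] = w * V[j] + c * r1 * (pbest[j] - X[j]) + c * r2 * (g - X[j])
                X[j] = np.clip(X[j] + V[j], lo, hi)
                FX[j] = f(X[j])
                # keep the dominating position as pbest; break ties at random
                if dominates(FX[j], pbest_f[j]) or (
                        not dominates(pbest_f[j], FX[j])
                        and np.random.rand() < 0.5):
                    pbest[j], pbest_f[j] = X[j].copy(), FX[j]
            archive = update_archive(
                archive, list(zip(X.copy(), list(FX))), dominates)
            if len(archive) > max_size:
                cells, _ = grid_density(np.array([fa for _, fa in archive]), M)
                archive = truncate_archive(archive, cells, max_size)
        return archive

    # usage: approximate the Pareto front of a toy bi-objective problem
    front = mopso(lambda x: (x[0], (1.0 - x[0]) ** 2), n=30, d=1, T=50)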

References:

1. MOPSO algorithm and its application in reservoir optimal dispatching. Yang Junjie, Zhou Jianzhong, Fang Rengcun, Zhong Jianwei. Computer Engineering.
2. Understanding the multi-objective particle swarm optimization algorithm MOPSO. http://blog.csdn.net/ture_2010/article/details/18180183
3. Handling multiple objectives with particle swarm optimization.
4. Approximating the nondominated front using the Pareto archived evolution strategy.
