Details of Implementing the Particle Swarm Optimization (PSO) Algorithm in Python


1 Principle

The particle swarm algorithm is a swarm-intelligence method based on research into, and simulation of, the foraging behavior of bird flocks. Suppose a flock of birds is searching for food and the food exists in only one place. None of the birds can see the food (they do not know its exact location), but they can all smell it (they can tell roughly how far away it is). The best strategy is for each bird to search the area around the bird currently closest to the food, while also drawing on its own experience.

Using the particle swarm algorithm to solve a practical problem essentially means using it to find the maximum value of a function. Therefore, we first need to abstract the actual problem into a mathematical function, called the fitness function (a minimal example is sketched after the following list). In the particle swarm algorithm, every bird can be seen as a candidate solution to the problem. We usually call a bird a particle, and every particle has:

Position: can be understood as the value of the function's independent variable;
Experience: the position closest to the food that the particle has visited so far;
Velocity: can be understood as the change in the independent variable;
Fitness: the distance to the food, that is, the function value.
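
To make the idea of a fitness function concrete, here is a minimal, hypothetical one-dimensional example (the function and its name are illustrative, not from the original article): the particle's position is the independent variable x, and the fitness is simply the function value, which is largest at x = 3.

def fitFunc(x):
    # Hypothetical fitness function: larger values are better,
    # and the maximum of -(x - 3)**2 + 5 is reached at x = 3.
    return -(x - 3) ** 2 + 5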

Particle swarm algorithm process

PSO Flowchart

Initialization. Create the particles according to the given population size, setting each particle's initial values:

Position: a random value in the solution space;
Experience: equal to the initial position;
Velocity: 0;
Fitness: substitute the position into the fitness function to obtain the fitness value.
Update. This consists of two parts:
Particle information: update the particle's velocity and position according to the formulas below, recompute its fitness with the fitness function, and then compare the new fitness with the particle's own experience; if the new fitness is better than the experience, use the current position to update the experience.

Velocity update formula:

v_i(t+1) = w * v_i(t) + c1 * r1 * (Xpbest_i - x_i(t)) + c2 * r2 * (Xgbest - x_i(t))

Position update formula:

x_i(t+1) = x_i(t) + v_i(t+1)

In the formulas above: i is the particle index; t is the time step, reflected in the iteration count; w is the inertia weight, generally set to around 0.4; c1 and c2 are the individual and social learning factors, generally set to 2; Xpbest_i is the experience of particle i, that is, the best position particle i has found so far; Xgbest is the position of the globally best particle; r1 and r2 are random values between 0 and 1. A worked single-step sketch of these two updates is given after this process description.

Population information: compare the current particle's fitness with the fitness at the global best position. If the current particle's fitness is better, the current particle replaces the global best.

Determine the end condition. Typical end conditions are reaching the maximum number of iterations or reaching a fitness threshold.
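
To make the two update formulas concrete, here is a minimal single-step sketch for one particle. The numeric values, w = 0.4, and c1 = c2 = 2 are illustrative assumptions, not taken from the original article.

import random

w, c1, c2 = 0.4, 2, 2        # inertia weight and learning factors (assumed values)
position, speed = 1.0, 0.0   # current position and velocity of one particle
lBestPosition = 1.0          # this particle's best position so far (its experience)
gBestPosition = 2.5          # the swarm's best position so far

# Velocity update: inertia term + pull toward own best + pull toward global best
speed = (w * speed
         + c1 * random.random() * (lBestPosition - position)
         + c2 * random.random() * (gBestPosition - position))
# Position update: move by the new velocity
position = position + speed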

2 Code

The lab environment is Python 2.7.11.

This code was originally written to solve one-dimensional maximum-entropy image segmentation, so it finds the maximum of a function. If you need to find a minimum instead, change all of the greater-than signs in the code to less-than signs.
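
Alternatively (this is not part of the original code), you can leave the maximization code untouched and negate the objective inside the fitness function, since maximizing the negated value is equivalent to minimizing the original one. A hypothetical example:

def fitFunc(x):
    # To minimize cost(x) = (x - 3)**2, maximize its negation.
    return -((x - 3) ** 2)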

First, we need to decide how to store a particle. My first thought was to use a struct, but Python has no corresponding data structure, so I chose a class to represent a particle: each object of the class is one particle. The code is as follows:

class bird:
    """
    speed: velocity
    position: position
    fit: fitness
    lBestPosition: best position experienced
    lBestFit: best fitness value experienced
    """
    def __init__(self, speed, position, fit, lBestPosition, lBestFit):
        self.speed = speed
        self.position = position
        self.fit = fit
        self.lBestFit = lBestFit
        self.lBestPosition = lBestPosition

The following is the main part of the particle swarm algorithm, also encapsulated in a class. The code is as follows:

import random

class PSO:
    """
    fitFunc: fitness function
    birdNum: population size
    w: inertia weight
    c1, c2: individual learning factor, social learning factor
    solutionSpace: solution space, a list of the form [minimum value, maximum value]
    """
    def __init__(self, fitFunc, birdNum, w, c1, c2, solutionSpace):
        self.fitFunc = fitFunc
        self.w = w
        self.c1 = c1
        self.c2 = c2
        self.birds, self.best = self.initbirds(birdNum, solutionSpace)

    def initbirds(self, size, solutionSpace):
        birds = []
        for i in range(size):
            position = random.uniform(solutionSpace[0], solutionSpace[1])
            speed = 0
            fit = self.fitFunc(position)
            birds.append(bird(speed, position, fit, position, fit))
        best = birds[0]
        for b in birds:
            if b.fit > best.fit:
                best = b
        return birds, best

    def updateBirds(self):
        for b in self.birds:
            # update speed
            b.speed = (self.w * b.speed
                       + self.c1 * random.random() * (b.lBestPosition - b.position)
                       + self.c2 * random.random() * (self.best.position - b.position))
            # update position
            b.position = b.position + b.speed
            # compute the new fitness
            b.fit = self.fitFunc(b.position)
            # check whether the experience needs to be updated
            if b.fit > b.lBestFit:
                b.lBestFit = b.fit
                b.lBestPosition = b.position

    def solve(self, maxIter):
        # Only the maximum number of iterations is considered here.
        # If you also need a fitness threshold, add the corresponding check.
        for i in range(maxIter):
            # update the particles
            self.updateBirds()
            for b in self.birds:
                # check whether the global best needs to be updated
                if b.fit > self.best.fit:
                    self.best = b
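
The comment in solve() notes that only the maximum number of iterations is handled. A minimal sketch of how one might add a fitness threshold as an extra stopping condition is shown below; the parameter name fitThreshold is an illustrative assumption, not part of the original code.

    def solve(self, maxIter, fitThreshold=None):
        # Stop after maxIter iterations, or earlier if the best fitness
        # reaches the optional (hypothetical) fitThreshold.
        for i in range(maxIter):
            self.updateBirds()
            for b in self.birds:
                if b.fit > self.best.fit:
                    self.best = b
            if fitThreshold is not None and self.best.fit >= fitThreshold:
                break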

With the code above, you only need to define your own fitness function fitFunc to solve a problem; note, however, that it is only applicable to one-dimensional problems. A short usage sketch follows.
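
For illustration only, here is a usage sketch. The fitness function, population size, parameter values, and solution space are hypothetical assumptions, not from the original article; it assumes the bird and PSO classes above have been defined.

def fitFunc(x):
    # Hypothetical one-dimensional objective; its maximum is at x = 3.
    return -(x - 3) ** 2 + 5

# 30 particles, w = 0.4, c1 = c2 = 2, searching the interval [-10, 10]
pso = PSO(fitFunc, 30, 0.4, 2, 2, [-10, 10])
pso.solve(100)
print(pso.best.position)
print(pso.best.fit)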

Summary

The above covers the details of implementing the particle swarm algorithm (PSO) in Python. I hope it is helpful to you.
