Simulated annealing algorithm


The well-known simulated annealing algorithm is a Monte Carlo-based method for finding approximate solutions to optimization problems.

A little history (skip this if you're not interested)

In 1953, the American physicist N. Metropolis and his colleagues published a paper that studied complex systems with Monte Carlo simulation, using it to calculate the energy distribution of molecules in a multi-molecule system. This can be regarded as the beginning of the problem discussed in this article; indeed, a term commonly encountered in simulated annealing, the Metropolis criterion, which we will introduce later, is named after him.


In 1983, the IBM physicists S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi published an influential article in Science: "Optimization by Simulated Annealing". They borrowed the method Metropolis and others had used to explore spin glass systems, and discovered that the energy of such a physical system is quite similar to the cost function of certain combinatorial optimization problems (the famous traveling salesman problem, TSP, is a representative example): seeking the lowest cost is like seeking the lowest energy. They therefore developed a set of algorithms based on the Metropolis method and used them to solve combinatorial problems and seek optimal solutions.

At almost the same time, the European physicist V. Černý independently published nearly identical results. But Černý had bad luck: at the time no one noticed his work. Perhaps one could say that Science is marketed worldwide, with very high exposure and a good reputation, whereas Černý published his results in J. Opt. Theory Appl., a specialized academic journal with a very small circulation, and thus failed to attract the attention he deserved.

Kirkpatrick and his colleagues, inspired by the Monte Carlo simulation of Metropolis, coined the term "simulated annealing" because the process resembles the annealing of physical objects. Finding the optimal solution (the extreme value) of a problem is similar to finding the lowest energy of a system: as the system cools, its energy decreases, and in the same sense the solution of the problem "descends" toward the extreme value.

First, what is annealing: the origin in physics

In thermodynamics, annealing refers to the physical phenomenon of an object cooling gradually. The lower the temperature, the lower the energy state of the object; the liquid eventually begins to condense and crystallize, and in the crystalline state the system reaches its lowest energy state. When cooled slowly (i.e., annealed), nature can "find" the lowest energy state: crystallization. However, if the process is too fast, rapid cooling (also called "quenching") produces an amorphous form that is not the lowest energy state.

As shown in the figure, at first (left) the object is in an amorphous state. We heat the solid to a sufficiently high temperature (middle) and then cool it slowly, i.e., anneal it (right). On heating, the solid's particles become disordered as the temperature rises and the internal energy increases; on slow cooling the particles gradually become ordered, reaching equilibrium at each temperature, and finally reach the ground state at room temperature, where the internal energy is minimized (at this point the object appears in crystalline form).

It seems that nature knows to work slowly and deliberately: by cooling slowly, the object's molecules have enough time at each temperature to find their settling positions, so that step by step the lowest energy state is reached in the end, and the system is most stable.

Second, simulated annealing (simulated annealing)

If you are still dizzy from the physical meaning of annealing, it does not matter: there is a much simpler way to understand it. Imagine we have a function like the one in the figure, and we want to find its (global) optimal solution. If a greedy strategy is used, the search starts from point a and continues as long as the function value keeps decreasing. When it reaches point b, the search is clearly over (because moving in any direction will only make the result larger). In the end we can only find a local optimal solution, b.
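To make the greedy strategy concrete, here is a minimal C++ sketch (not from the original article) of greedy descent on a one-variable function f; the starting point and step size are hypothetical parameters. The search stops at the first point where neither neighboring step improves f, i.e., at a local minimum such as point b:

#include <functional>

// Greedy descent: keep stepping while the function value decreases,
// and stop as soon as both directions make it larger (a local minimum).
double GreedyDescend(const std::function<double(double)> &f, double x, double step = 0.01)
{
    while (true)
    {
        if (f(x + step) < f(x))
            x += step;       // moving right still improves
        else if (f(x - step) < f(x))
            x -= step;       // moving left still improves
        else
            return x;        // stuck: any direction only gets bigger
    }
}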

Simulated annealing is in essence a greedy algorithm, but its search process introduces random factors. The simulated annealing algorithm accepts, with a certain probability, a solution that is worse than the current one, so it is possible to jump out of a local optimum and reach the global optimum. For example, after finding the local optimal solution b, the simulated annealing algorithm will continue to move to the right with a certain probability. Perhaps, after a few such non-locally-optimal moves, it will reach the peak between b and c, thereby jumping out of the local minimum b.

According to the Metropolis criterion, a particle at temperature T tends toward equilibrium with probability exp(-ΔE/(kT)), where ΔE is the change in internal energy and k is the Boltzmann constant. The Metropolis criterion is often expressed as:

P(ΔE) = 1,               if ΔE ≤ 0
P(ΔE) = exp(-ΔE/(kT)),   if ΔE > 0
The Metropolis criterion thus states that at temperature T, a move whose energy difference is dE > 0 is accepted with probability P(dE) = exp(-dE/(kT)), where k is a constant and exp denotes the natural exponential function. Since dE > 0 for a worse move, -dE/(kT) < 0, and the value of P(dE) lies in the range (0, 1). P and T are positively correlated: the higher the temperature, the greater the probability of accepting a move with a given energy difference dE; the lower the temperature, the smaller that probability. As the temperature T decreases, P(dE) gradually decreases.
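For instance, taking k = 1 (as the code below effectively does) and a move that worsens the objective by dE = 10: at T = 1000 the acceptance probability is exp(-10/1000) ≈ 0.99; at T = 10 it is exp(-1) ≈ 0.37; and at T = 0.1 it is exp(-100) ≈ 0. So early in the annealing almost every worse move is accepted, while near the end essentially none are.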

We treat a move to a worse solution as a temperature-dependent jump, and we accept such moves with probability P(dE). In other words, when solid annealing is used to simulate a combinatorial optimization problem, the internal energy E is modeled as the objective function value f, and the physical temperature T becomes a control parameter t. This yields the simulated annealing algorithm for combinatorial optimization: starting from an initial solution i and an initial control parameter value t, repeat the iteration "generate a new solution → compute the difference in the objective function → accept or discard" while gradually decaying the t value; the current solution when the algorithm terminates is an approximate optimal solution. This is a heuristic random search process based on the Monte Carlo iterative method. The annealing process is controlled by the cooling schedule, including the initial value of the control parameter t and its decay factor Δt, the number of iterations per t value, and the stop condition s.
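As a minimal sketch of this iteration (not the full program, which follows below), the loop might look like this in C++; neighbor and cost are hypothetical problem-specific callables, and the parameter values are illustrative only:

#include <cmath>
#include <cstdlib>

// A minimal sketch of the simulated annealing iteration:
// "generate new solution -> compute objective difference -> accept or discard",
// with the control parameter t decayed after each batch of iterations.
template <typename Solution, typename Neighbor, typename Cost>
Solution Anneal(Solution cur, Neighbor neighbor, Cost cost,
                double t = 3000.0,    // initial value of the control parameter
                double delta = 0.98,  // decay factor of t
                double eps = 1e-8,    // stop condition: minimum temperature
                int stepsPerT = 100)  // number of iterations per t value
{
    while (t > eps)
    {
        for (int i = 0; i < stepsPerT; i++)
        {
            Solution next = neighbor(cur);        // generate a new solution
            double dE = cost(next) - cost(cur);   // difference of the objective
            // accept if better, or with probability exp(-dE/t) if worse
            if (dE < 0 || std::exp(-dE / t) > rand() / (RAND_MAX + 1.0))
                cur = next;
        }
        t *= delta;  // attenuate the t value
    }
    return cur;
}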

To sum up:

    • If f(y(i+1)) <= f(y(i)) (i.e., the solution after the move is better), the move is always accepted;
    • If f(y(i+1)) > f(y(i)) (i.e., the solution after the move is worse than the current one), the move is accepted with a certain probability, and this probability gradually decreases over time (tending toward stability). This is equivalent to saying that, while moving from b toward the small crest between b and c, each move to the right (i.e., each acceptance of a worse value) happens with a gradually decreasing probability. If the slope is particularly long, we will most likely fail to climb over it; if it is not too long, we will most likely get over it. It all depends on how the decay of the t value is set.

There is an interesting metaphor comparing the ordinary greedy algorithm with simulated annealing:

      • Ordinary greedy algorithm: the rabbit jumps toward any place lower than where it is now. It finds the lowest valley not far away. But that valley is not necessarily the lowest of all. This is the ordinary greedy algorithm: it cannot guarantee that a local optimum is the global optimum.
      • Simulated annealing: the rabbit is drunk. It jumps randomly for a long time. During this period it may move downhill, but it may also step onto flat ground. However, it gradually sobers up and jumps toward the lowest direction. This is simulated annealing.

A programmed example will demonstrate the execution of simulated annealing. In particular, the example we adopt here is the famous traveling salesman problem (TSP, Traveling Salesman Problem), which is a concrete instance of the Hamiltonian circuit problem and one of the earliest proposed NP-hard problems.

The TSP is one of the problems most commonly used to illustrate simulated annealing. Since the problem itself is well known, we will not restate it here; the C++ implementation is given directly below:

#include <iostream>
#include <cstring>
#include <cstdlib>
#include <cstdio>
#include <ctime>
#include <cmath>
#include <algorithm>

#define N     30      // (maximum) number of cities
#define T     3000    // initial temperature
#define EPS   1e-8    // termination temperature
#define DELTA 0.98    // temperature decay rate
#define LIMIT 1000    // upper limit of consecutive non-improving moves
#define OLOOP 20      // number of outer loops
#define ILOOP 100     // number of inner loops

using namespace std;

// a route (tour) through the cities
struct Path
{
    int citys[N];
    double len;
};

// city point coordinates
struct Point
{
    double x, y;
};

Path bestPath;   // records the best path found so far
Point p[N];      // coordinates of each city
double w[N][N];  // pairwise path lengths between cities
int nCase;       // number of tests

double dist(Point a, Point b)
{
    return sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

void GetDist(Point p[], int n)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            w[i][j] = w[j][i] = dist(p[i], p[j]);
}

void Input(Point p[], int &n)
{
    scanf("%d", &n);
    for (int i = 0; i < n; i++)
        scanf("%lf %lf", &p[i].x, &p[i].y);
}

void Init(int n)
{
    nCase = 0;
    bestPath.len = 0;
    for (int i = 0; i < n; i++)
    {
        bestPath.citys[i] = i;
        if (i != n - 1)
        {
            printf("%d--->", i);
            bestPath.len += w[i][i + 1];
        }
        else
            printf("%d\n", i);
    }
    printf("\nInit path length is : %.3lf\n", bestPath.len);
    printf("-----------------------------------\n");
}

void Print(Path t, int n)
{
    printf("Path is : ");
    for (int i = 0; i < n; i++)
    {
        if (i != n - 1)
            printf("%d-->", t.citys[i]);
        else
            printf("%d\n", t.citys[i]);
    }
    printf("\nThe path length is : %.3lf\n", t.len);
    printf("-----------------------------------\n");
}

// generate a neighbouring path by swapping two randomly chosen cities
Path GetNext(Path p, int n)
{
    Path ans = p;
    int x = (int)(n * (rand() / (RAND_MAX + 1.0)));
    int y = (int)(n * (rand() / (RAND_MAX + 1.0)));
    while (x == y)
    {
        x = (int)(n * (rand() / (RAND_MAX + 1.0)));
        y = (int)(n * (rand() / (RAND_MAX + 1.0)));
    }
    swap(ans.citys[x], ans.citys[y]);
    ans.len = 0;
    for (int i = 0; i < n - 1; i++)
        ans.len += w[ans.citys[i]][ans.citys[i + 1]];
    cout << "nCase = " << nCase << endl;
    Print(ans, n);
    nCase++;
    return ans;
}

void SA(int n)
{
    double t = T;
    srand((unsigned)time(NULL));
    Path curPath = bestPath;
    Path newPath = bestPath;
    int P_L = 0;  // consecutive moves without improvement
    int P_F = 0;  // inner loops that ended by hitting LIMIT
    while (1)     // outer loop: mainly updates the parameter t, i.e. the annealing process
    {
        for (int i = 0; i < ILOOP; i++)  // inner loop: search for an optimum at a fixed temperature
        {
            newPath = GetNext(curPath, n);
            double dE = newPath.len - curPath.len;
            if (dE < 0)  // a better solution was found: accept it directly
            {
                curPath = newPath;
                P_L = 0;
                P_F = 0;
            }
            else         // a worse solution: accept it with probability exp(-dE/t),
            {            // which shrinks as the temperature drops
                double rd = rand() / (RAND_MAX + 1.0);
                if (exp(-dE / t) > rd)
                    curPath = newPath;
                P_L++;
            }
            if (P_L > LIMIT)
            {
                P_F++;
                break;
            }
        }
        if (curPath.len < bestPath.len)
            bestPath = curPath;
        if (P_F > OLOOP || t < EPS)
            break;
        t *= DELTA;  // decay the temperature
    }
}

int main(int argc, const char *argv[])
{
    freopen("tsp.data", "r", stdin);
    int n;
    Input(p, n);
    GetDist(p, n);
    Init(n);
    SA(n);
    Print(bestPath, n);
    printf("Total test times is : %d\n", nCase);
    return 0;
}

Note that this is a Monte Carlo-based method, so the code above will not produce exactly the same result each time. You can usually get a better result by increasing the number of iterations.

What needs to be explained here is that earlier we used a minimization example to explain the execution of simulated annealing: if the result of the new round is smaller than that of the previous round, we accept it unconditionally; otherwise we accept or reject it with a certain probability. The probability of rejection increases as the temperature decreases (i.e., as the number of iterations increases); in other words, the probability of acceptance becomes smaller.

But now we face a TSP problem: how do we define, or rather obtain, the next candidate Hamiltonian path to examine? In the minimization example for a one-variable function, the next round simply moves a small distance to the left or right. In the TSP problem there are in fact many possible strategies. The GetNext() function in the code above randomly exchanges the order of two cities in a path. For example, if the current path is a->b->c->d->a, then the next path might be a->d->c->b->a, i.e., b and d are exchanged. In reference [3], the author's sample code is as follows (we excerpt a fragment; for the complete code please refer to the original text):

public class Tour {
    // ......

    // Creates a random individual
    public void generateIndividual() {
        // Loop through all our destination cities and add them to our tour
        for (int cityIndex = 0; cityIndex < TourManager.numberOfCities(); cityIndex++) {
            setCity(cityIndex, TourManager.getCity(cityIndex));
        }
        // Randomly reorder the tour
        Collections.shuffle(tour);
    }

    // ......
}

The author's approach is to randomly rearrange the previous path (which is, obviously, another valid strategy).
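For comparison, a rough C++ analogue of this whole-tour shuffle strategy (a hypothetical helper, assuming the path is stored as a std::vector<int> of city indices) could be:

#include <algorithm>
#include <random>
#include <vector>

// Candidate generation by random rearrangement: instead of swapping two
// cities, return a completely random permutation of the current tour.
std::vector<int> RandomTour(std::vector<int> tour, std::mt19937 &rng)
{
    std::shuffle(tour.begin(), tour.end(), rng);
    return tour;
}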

The data format of tsp.data is as follows: the number in the first row indicates the number of cities; from the 2nd row to the last, each row contains two numbers, the coordinates of one city (plane Cartesian coordinates). For example:
6
20 80
16 84
23 66
62 90
11 9
35 28
Finally, readers can compile and run the program and analyze the output for themselves.

References and recommended reading materials

"1" for Hamilton and TSP please refer to the following two information to learn More:

    • "On NP Problems, Starting from the Hamiltonian Path"
    • William J. Cook, In Pursuit of the Traveling Salesman (Chinese edition: "A Lost Traveling Salesman: A Ubiquitous Computer Algorithm Problem"), Posts & Telecom Press, 2013

"2" above the C + + code in the following two posts are given, the original author can not verify

    • http://www.henufz.cn/bencandy.php?fid=151&id=1894 (astonishingly, this comes from a high-school informatics competition site!)

    • http://blog.csdn.net/acdreamers/article/details/10019849

"3" for the TSP problem of a Java language implementation of the source code, please refer to

      • http://www.theprojectspot.com/tutorial-post/simulated-annealing-algorithm-for-beginners/6
