Thanks to the original author. Reposted from: http://www.cnblogs.com/heaad/archive/2010/12/20/1911614.html (original title: "A plain-language explanation of the simulated annealing algorithm").
One. The hill climbing algorithm
Before introducing simulated annealing, this article first introduces the hill climbing algorithm. Hill climbing is a simple greedy search: at each step it picks the best solution from the neighborhood of the current solution and makes it the new current solution, stopping once a local optimum is reached.
Hill climbing is very simple to implement. Its main drawback is that it gets stuck in a local optimum, which is not necessarily the global optimum. As shown in Figure 1: suppose point C is the current solution. Hill climbing stops searching once it reaches the local optimum at point A, because no small move in any direction from A yields a better solution.
Figure 1
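To make the greedy behaviour concrete, here is a minimal hill climbing sketch in Python (my own illustration, not from the original article); the function name hill_climb, the objective f and the fixed step size are assumptions chosen for demonstration.

import random

def hill_climb(f, x0, step=0.1, max_iter=1000):
    """Greedy search: repeatedly move to the best neighbour until no neighbour improves f."""
    x = x0
    for _ in range(max_iter):
        neighbours = [x + step, x - step]   # the adjacent solution space of x
        best = max(neighbours, key=f)
        if f(best) <= f(x):                 # no better neighbour: a local optimum is reached
            break
        x = best                            # greedily accept the best neighbour
    return x

# Example: the search stops at whichever local maximum is closest to the start point
f = lambda x: -(x ** 4) + 4 * x ** 2 + x
print(hill_climb(f, x0=random.uniform(-2.0, 2.0)))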
Two. The idea of simulated annealing (SA)
Hill climbing is a purely greedy method: at every step it short-sightedly chooses the currently best move, so it can only reach a local optimum. Simulated annealing is, in essence, also a greedy algorithm, but its search process introduces a random factor: with a certain probability it accepts a solution that is worse than the current one, so it may escape a local optimum and reach the global optimum. Taking Figure 1 as an example, after simulated annealing finds the local optimum A, it will still accept a move toward E with some probability. After a few such non-improving moves it may reach point D, thereby jumping out of the local maximum at A.
The simulated annealing algorithm can be described as follows:
If J(Y(i+1)) >= J(Y(i)), i.e. the move produces a better (or equal) solution, the move is always accepted.
If J(Y(i+1)) < J(Y(i)), i.e. the move produces a worse solution, the move is accepted with a certain probability, and this probability decreases gradually over time (so the search gradually stabilizes).
The "certain probability" here is computed by analogy with the annealing process in metallurgy, which is where the name of the simulated annealing algorithm comes from.
According to thermodynamics, at temperature T the probability that a cooling step with energy difference dE occurs is P(dE), given by:
P(dE) = exp(dE/(kT))
where k is a constant (Boltzmann's constant), exp is the natural exponential, and dE < 0. In plain words: the higher the temperature, the greater the probability that a cooling step with energy difference dE occurs; the lower the temperature, the smaller that probability. And since dE is always less than 0 (otherwise it would not be called annealing), dE/(kT) < 0, so the value of P(dE) lies in the range (0, 1).
As the temperature T decreases, P(dE) gradually decreases.
We regard a single move to a worse solution as one temperature-jump process, and we accept such a move with probability P(dE).
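As a quick numerical illustration (not part of the original article), the snippet below evaluates exp(dE/T) for one fixed worsening move at several temperatures; the values of dE and T are arbitrary assumptions, and the constant k is folded into T, as in the pseudocode later.

import math

dE = -1.0                          # energy difference of a worsening move (dE < 0)
for T in (10.0, 1.0, 0.1):
    p = math.exp(dE / T)           # P(dE) = exp(dE/(kT)), with k folded into T
    print(f"T = {T}: accept a worse move with probability {p:.6f}")
# Prints roughly 0.905 at T = 10, 0.368 at T = 1 and 0.000045 at T = 0.1:
# the colder the system, the less likely a worse move is to be accepted.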
There is an interesting metaphor for hill climbing and simulated annealing:
Hill climbing: the rabbit always jumps toward a spot higher than where it is now, and it soon finds the highest hill nearby. But that hill is not necessarily Mount Everest. This is hill climbing: it cannot guarantee that the local optimum it finds is the global optimum.
Simulated annealing: the rabbit gets drunk and jumps around at random for a long time. During this period it may move toward higher ground, or it may step onto level ground. As it gradually sobers up, however, it jumps more and more toward the highest direction. This is simulated annealing.
Three. Pseudocode for simulated annealing
A pseudo-code representation of simulated annealing is given below.
/*
* J(Y):   value of the evaluation function at state Y
* Y(i):   the current state
* Y(i+1): the new candidate state
* r:      controls the speed of cooling
* T:      system temperature; the system should start in a high-temperature state
* T_min:  lower bound on the temperature; stop searching once T reaches T_min
*/
while (T > T_min)
{
    dE = J(Y(i+1)) - J(Y(i));
    if (dE >= 0)                       // the move yields a better (or equal) solution: always accept it
        accept the move from Y(i) to Y(i+1);
    else
    {
        // exp(dE/T) lies in (0,1); the larger dE/T is, the larger exp(dE/T) is
        if (exp(dE/T) > random(0, 1))
            accept the move from Y(i) to Y(i+1);   // accept the worse move with a certain probability
        else
            Y(i+1) = Y(i);                          // reject the move: stay at the current state
    }
    T = r * T;   // cooling, 0 < r < 1: the larger r, the slower the cooling; the smaller r, the faster the cooling
    /*
    * If r is too large, the chance of finding the global optimum is higher, but the search takes longer;
    * if r is too small, the search is fast, but it may end at a local optimum.
    */
    i++;
}
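For readers who want something runnable, here is a minimal Python sketch of the same loop (my own illustration, not from the original post); the objective f, the neighbour step, and the parameter values T, T_min and r are assumptions chosen for demonstration.

import math
import random

def simulated_annealing(f, x0, T=100.0, T_min=1e-3, r=0.98):
    """Maximize f by simulated annealing, mirroring the pseudocode above."""
    x = x0
    while T > T_min:
        x_new = x + random.uniform(-1.0, 1.0)     # generate a neighbouring candidate state
        dE = f(x_new) - f(x)
        if dE >= 0:                               # better (or equal) solution: always accept it
            x = x_new
        elif math.exp(dE / T) > random.random():  # worse solution: accept with probability exp(dE/T)
            x = x_new
        T = r * T                                 # cooling, 0 < r < 1
    return x

# Example: a bumpy one-dimensional function with several local maxima
f = lambda x: math.sin(x) + math.sin(2 * x) - 0.01 * x * x
print(simulated_annealing(f, x0=0.0))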
Four. Solving the traveling salesman problem with simulated annealing
Traveling salesman problem (TSP): given N cities, start from one of them, visit every city exactly once, return to the starting city, and find the shortest such route.
The traveling salesman problem is one of the so-called NP-complete problems; an exact solution to TSP can only be obtained by exhaustively enumerating all path combinations, which has time complexity O(N!).
With simulated annealing, an approximately optimal TSP path can be found quickly. (A genetic algorithm also works; I will introduce it in the next article.) The idea of solving TSP with simulated annealing is:
1. Generate a new tour P(i+1) and compute its length L(P(i+1)).
2. If L(P(i+1)) < L(P(i)), accept P(i+1) as the new tour; otherwise accept P(i+1) with the simulated annealing probability, and then cool down.
3. Repeat the two steps above until the exit condition is met (a code sketch of this loop follows).
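As a sketch of these three steps (my own illustration, not code from the original post), the loop below assumes the cities are given as a list of (x, y) coordinates and that the caller supplies a new_path function that returns a neighbouring tour; concrete ways to implement such a function are listed next. The names tour_length and solve_tsp are mine, and the parameter values are arbitrary.

import math
import random

def tour_length(cities, path):
    """Length of the closed tour visiting the (x, y) cities in the order given by path."""
    return sum(math.dist(cities[path[k]], cities[path[(k + 1) % len(path)]])
               for k in range(len(path)))

def solve_tsp(cities, new_path, T=1000.0, T_min=1e-2, r=0.99):
    """Simulated annealing for TSP; new_path(path) returns a neighbouring tour."""
    path = list(range(len(cities)))
    random.shuffle(path)                           # start from a random tour
    while T > T_min:
        candidate = new_path(path)                 # step 1: generate a new tour
        dL = tour_length(cities, candidate) - tour_length(cities, path)
        # Since we minimize length, dE = -dL; accept shorter tours always,
        # longer ones with probability exp(-dL/T) (step 2)
        if dL < 0 or math.exp(-dL / T) > random.random():
            path = candidate
        T = r * T                                  # cool down and repeat until T reaches T_min (step 3)
    return path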
There are many ways to generate a new tour; three of them are listed below (a code sketch of each move follows the list):
1. Randomly pick 2 nodes and swap their positions in the path.
2. Randomly pick 2 nodes and reverse the order of the nodes between them in the path.
3. Randomly pick 3 nodes m, n and k, and move the nodes between m and n to the position right after node k.
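The three moves above could be implemented, for example, as follows (again an illustrative sketch; the function names are my own). Any of them can serve as the new_path argument of the solve_tsp loop above, e.g. solve_tsp(cities, reverse_move); segment reversal is the classic 2-opt-style move and usually works well for TSP.

import random

def swap_move(path):
    """Move 1: randomly pick 2 positions and swap the cities at them."""
    p = path[:]
    a, b = random.sample(range(len(p)), 2)
    p[a], p[b] = p[b], p[a]
    return p

def reverse_move(path):
    """Move 2: randomly pick 2 positions and reverse the segment between them (inclusive)."""
    p = path[:]
    a, b = sorted(random.sample(range(len(p)), 2))
    p[a:b + 1] = reversed(p[a:b + 1])
    return p

def shift_move(path):
    """Move 3: pick positions m < n < k and move the segment [m, n] to just after position k."""
    p = path[:]
    m, n, k = sorted(random.sample(range(len(p)), 3))
    segment = p[m:n + 1]
    del p[m:n + 1]
    insert_at = k - len(segment) + 1               # index of the slot right after node k once the segment is removed
    p[insert_at:insert_at] = segment
    return p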
Five. Algorithm evaluation
Simulated annealing is a randomized algorithm: it is not guaranteed to find the global optimum, but it can find an approximately optimal solution quickly. With properly chosen parameters, simulated annealing searches far more efficiently than brute-force enumeration.