Optimization Algorithm Series Directory (updating):
1. Simulated Annealing Algorithm
2. Genetic Algorithm
I. Hill Climbing
Before introducing simulated annealing, this section first describes the hill-climbing algorithm. Hill climbing is a simple greedy search: at each step it picks the best solution from the neighborhood of the current solution as the new current solution, and it stops once it reaches a local optimum.
The hill-climbing algorithm is very easy to implement. Its main disadvantage is that it gets trapped in a local optimum, which is not necessarily the global optimum. As shown in Figure 1, if point C is the current solution, hill climbing stops after reaching the local optimum at point A, because no small move from A in either direction produces a better solution.
Figure 1
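The greedy loop described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the article: the two-peak objective f and the step size are invented examples, chosen so that a search starting at x = -2 climbs to the lower peak and stops there, missing the higher peak near x = 1.45.

```python
def hill_climb(f, x, step=0.1, max_iters=1000):
    """Greedy local search: repeatedly move to the best neighboring
    solution until no neighbor improves on the current one."""
    for _ in range(max_iters):
        neighbors = [x - step, x + step]   # the nearby candidate solutions
        best = max(neighbors, key=f)
        if f(best) <= f(x):                # no neighbor is better: local optimum
            return x
        x = best
    return x

# A function with two peaks: a lower one near x = -1.4 and a higher
# one near x = 1.45. Starting at x = -2, the greedy search climbs to
# the lower peak and stops, never reaching the global maximum.
f = lambda x: -x**4 + 4 * x**2 + x
local_opt = hill_climb(f, -2.0)
```

Here the search halts as soon as both neighbors are worse, which is exactly the behavior at point A in Figure 1.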
II. The Idea of Simulated Annealing
Hill climbing is a completely greedy method: every step chooses the current best move, so it can only find a local optimum. Simulated annealing is also a greedy algorithm at heart, but its search process introduces random factors: it accepts, with a certain probability, a solution that is worse than the current one, and may therefore jump out of a local optimum and reach the global optimum. Taking Figure 1 as an example, after simulated annealing finds the local optimum at A, it still moves toward E with a certain probability. After several such non-greedy moves it may reach point D, thereby escaping the local maximum.
Description of the simulated annealing algorithm:
If J(Y(i+1)) >= J(Y(i)) (i.e., the move yields a better solution), always accept the move.
If J(Y(i+1)) < J(Y(i)) (i.e., the move yields a worse solution), accept the move with a certain probability, and this probability decreases gradually over time (so that the search eventually stabilizes).
The "certain probability" here is computed by analogy with the annealing process in metal smelting, which is also the origin of the name "simulated annealing".
According to thermodynamic principles, at temperature T the probability of an energy drop of magnitude dE is P(dE), expressed as:
P(dE) = exp(dE / (kT))
where k is a constant and exp is the exponential function, with dE < 0. Plainly speaking: the higher the temperature, the more likely an energy drop of magnitude dE is to occur; the lower the temperature, the less likely. Since dE is always less than 0 (otherwise the process would not be called annealing), dE/(kT) < 0, so P(dE) takes values in (0, 1).
As the temperature T decreases, P(dE) gradually decreases as well.
We regard a move to a worse solution as one such energy jump, and we accept the move with probability P(dE).
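In code, this acceptance rule (often called the Metropolis criterion) is a one-liner. The sketch below assumes a maximization problem and folds the constant k into the temperature T, which is the usual convention in implementations:

```python
import math
import random

def accept(dE, T):
    """Metropolis rule: always accept an improving move (dE >= 0);
    accept a worsening move (dE < 0) with probability exp(dE / T)."""
    if dE >= 0:
        return True
    return math.exp(dE / T) > random.random()

# The same worsening move (dE = -1) is likely at high temperature and
# very unlikely at low temperature:
#   exp(-1 / 10.0) is about 0.905, while exp(-1 / 0.1) is about 0.000045.
```

As T falls, exp(dE/T) shrinks toward 0 for any fixed dE < 0, matching the statement above that P(dE) decreases with temperature.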
There is an interesting analogy between hill climbing and simulated annealing:
Hill climbing: a rabbit jumps toward wherever is higher and finds the highest mountain nearby. But that mountain is not necessarily Mount Everest. This is hill climbing: it cannot guarantee that the local optimum is the global optimum.
Simulated annealing: the rabbit is drunk. It jumps about randomly for a long time, during which it may climb higher or wander onto lower ground. Gradually it sobers up and jumps only toward higher ground. This is simulated annealing.
The pseudocode of the simulated annealing algorithm is given below.
III. Pseudocode of the Simulated Annealing Algorithm
/*
 * J(Y): the value of the evaluation function in state Y
 * Y(i): the current state
 * Y(i+1): the new candidate state
 * r: controls the cooling speed
 * T: system temperature; the system starts in a high-temperature state
 * T_min: minimum temperature; stop searching once T reaches T_min
 */
while (T > T_min)
{
    dE = J(Y(i+1)) - J(Y(i));
    if (dE >= 0)                     // the move yields a better solution: always accept it
        Y(i) = Y(i+1);               // accept the move from Y(i) to Y(i+1)
    else
    {
        // exp(dE/T) lies in (0, 1); the larger dE/T is, the larger exp(dE/T) is
        if (exp(dE / T) > random(0, 1))
            Y(i) = Y(i+1);           // accept the worse move with probability exp(dE/T)
    }
    T = r * T;  // anneal: 0 < r < 1; the larger r, the slower the cooling; the smaller r, the faster
    /*
     * If r is too large, the global optimum is more likely to be found, but the search takes long;
     * if r is too small, the search is fast but may end at a local optimum.
     */
    i++;
}
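A runnable version of this pseudocode, sketched in Python under the same maximization convention. The objective J, the neighbor function, and the parameter values are invented for illustration, not part of the original article:

```python
import math
import random

def simulated_annealing(J, y, neighbor, T=100.0, T_min=1e-3, r=0.98):
    """Maximize J following the pseudocode: always accept an improving
    move, accept a worsening move with probability exp(dE / T), then cool."""
    while T > T_min:
        y_new = neighbor(y)                 # candidate state Y(i+1)
        dE = J(y_new) - J(y)
        if dE >= 0 or math.exp(dE / T) > random.random():
            y = y_new                       # accept the move
        T = r * T                           # anneal: 0 < r < 1
    return y

# Example: a two-peak function. A purely greedy search starting at
# x = -2 would stop at the lower peak; annealing can cross the valley
# between the peaks while the temperature is still high.
J = lambda x: -x**4 + 4 * x**2 + x
random.seed(0)
best = simulated_annealing(J, -2.0, lambda x: x + random.uniform(-0.5, 0.5))
```

Note that the acceptance test short-circuits: exp(dE/T) is only evaluated for worsening moves, so no overflow can occur for large positive dE.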
IV. Solving the Traveling Salesman Problem with Simulated Annealing
Traveling salesman problem (TSP): given N cities, start from one of them, visit every city exactly once, return to the starting city, and find the shortest such route.
The traveling salesman problem is a well-known NP-complete problem. To solve it exactly, one can only enumerate all possible routes, which takes O(n!) time.
The simulated annealing algorithm can quickly find an approximately optimal route for the TSP. (Genetic algorithms can also be used; I will introduce them in the next article.) The idea of solving the TSP with simulated annealing:
1. Generate a new tour P(i+1) and compute its length L(P(i+1)).
2. If L(P(i+1)) < L(P(i)), accept P(i+1) as the new tour; otherwise accept P(i+1) with the simulated-annealing probability, then cool down.
3. Repeat steps 1 and 2 until the stopping condition is met.
There are many ways to generate a new tour; three of them are listed below:
1. Randomly choose two nodes and swap their positions in the tour.
2. Randomly choose two nodes and reverse the segment of the tour between them.
3. Randomly choose three nodes m, n, and k, and move the segment between m and n to just after k.
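Putting steps 1 to 3 together with neighbor move 2 (segment reversal) gives the following Python sketch. The city coordinates, parameter values, and function names are invented for illustration:

```python
import math
import random

def tour_length(cities, path):
    """Total length of the closed tour that visits cities in 'path' order."""
    return sum(math.dist(cities[path[i]], cities[path[(i + 1) % len(path)]])
               for i in range(len(path)))

def tsp_anneal(cities, T=100.0, T_min=1e-4, r=0.99):
    """Approximate the shortest tour by simulated annealing, generating
    neighbors with move 2: reverse the segment between two random positions."""
    path = list(range(len(cities)))
    L = tour_length(cities, path)
    while T > T_min:
        i, j = sorted(random.sample(range(len(path)), 2))
        cand = path[:i] + path[i:j + 1][::-1] + path[j + 1:]   # reversal move
        dL = tour_length(cities, cand) - L
        # Minimization: always accept a shorter tour; accept a longer
        # one with probability exp(-dL / T), then cool down.
        if dL <= 0 or math.exp(-dL / T) > random.random():
            path, L = cand, L + dL
        T *= r
    return path, L

# Four cities at the corners of the unit square; the optimal tour is
# the perimeter, with length 4.
random.seed(1)
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
path, L = tsp_anneal(cities)
```

Move 2 is the classic 2-opt reversal; it tends to work well for the TSP because reversing a segment can remove a pair of crossing edges in one step.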
V. Algorithm Evaluation
The simulated annealing algorithm is a randomized algorithm: it is not guaranteed to find the global optimum, but it can quickly find a near-optimal solution to the problem. With properly chosen parameters, its search efficiency is higher than that of exhaustive enumeration.
Source: http://www.cnblogs.com/heaad/ (please credit when reprinting).