Reposted from: http://blog.csdn.net/zouxy09/article/details/8537872
Author: zouxy09
First, some boilerplate (from Baidu Encyclopedia): the iterative method, also known as successive approximation, is the process of repeatedly deriving a new value of a variable from its old value. It stands in contrast to the direct method (the one-shot solution), which solves a problem in a single pass. Iterative algorithms are a basic way of solving problems on a computer. They exploit what computers are good at, speed and repetitive operation, by having the machine execute a set of instructions (or a sequence of steps) over and over; each time the instructions are executed, a new value of the variable is derived from its old value.
Solving a problem with an iterative algorithm requires doing three things.
First, determine the iteration variable.
In any problem that can be solved iteratively, there is at least one variable whose new value is derived, directly or indirectly, from its old value; this is the iteration variable.
Second, establish the iteration relation.
The iteration relation is the formula (or rule) that derives the next value of the variable from its previous value. Establishing this relation is the key step in solving a problem iteratively, and it can usually be done by forward or backward recurrence.
Third, control the iteration process.
When should the iteration stop? This question must be answered when writing any iterative program; the process cannot be allowed to repeat indefinitely. Control of the iteration usually falls into one of two cases: either the required number of iterations is a definite value that can be computed in advance, or it cannot be determined beforehand. In the former case a loop with a fixed count controls the iteration; in the latter, the condition for ending the iteration needs further analysis (a small sketch of both cases follows below).
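To make these three elements concrete, here is a minimal sketch (my own illustration, not from the original post) that computes a square root by the Babylonian update: the estimate x is the iteration variable, x ← (x + a/x)/2 is the iteration relation, and the two functions show the two ways of controlling termination, a fixed loop count versus a convergence condition.

```python
import math


def sqrt_fixed_count(a, iterations=20):
    """Iteration controlled by a fixed, known loop count."""
    x = a if a > 1 else 1.0          # iteration variable: the current estimate
    for _ in range(iterations):
        x = 0.5 * (x + a / x)        # iteration relation: Babylonian update
    return x


def sqrt_tolerance(a, tol=1e-12):
    """Iteration controlled by a termination condition."""
    x = a if a > 1 else 1.0
    while abs(x * x - a) > tol:      # stop once the estimate is good enough
        x = 0.5 * (x + a / x)
    return x


print(sqrt_fixed_count(2.0), math.sqrt(2.0))
print(sqrt_tolerance(2.0), math.sqrt(2.0))
```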
I don't know whether you have noticed, but since we entered the field of machine learning, iteration has haunted us everywhere: no matter which paper or which algorithm you look at, you can see its shadow. Have you ever wondered why? Why is iteration so powerful, and why is it everywhere? Many conventional techniques fail to solve a problem, and iteration steps in and solves it. Is iteration really that magical?
Why is it used so often? My personal understanding is that, whatever the machine learning algorithm, in the end it has to be solved on a computer, and that amounts to searching a particular function space for a solution according to some optimization objective. What does that mean? Our goal is to let the machine learn and understand the physical world, and this physical world is random, so we have to guess. We have many candidate guesses about something; which one is the most reliable? We need a criterion to measure that. What criterion? The smallest error, or the best performance index. And how do we make the error smallest or the performance largest? You say we have derivatives, we have Lagrange multipliers. Yes, but what is the essential condition for using them? That the error function or performance function has an analytic expression. Yet many real-world signals are non-stationary, or their statistical characteristics are hard to know, so an exact analytic expression is out of reach. How, then, do we find the maximum or minimum? Iteration. So what is iteration?
An optimization problem asks either for a maximum or for a minimum: either you climb from the foot of the hill to the peak, or you descend from the peak to the foot. Take climbing, that is, maximizing, as the example. Standing at the foot of the hill, you cannot see the summit and you do not know where the peak is. All you know is that if every step you take goes upward, you will surely reach the top. So at each step you stop, look around to see which direction rises fastest, that is, the direction of steepest slope (the gradient of the performance surface), crawl a bit in that direction, then stop and look again, until you reach the summit. This is the method of steepest ascent (hehe, when minimizing, the standard name is the method of steepest descent).
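As a toy sketch of this idea (my own example, not from the post), here is steepest descent on a simple quadratic bowl: at every step we evaluate the gradient of the performance surface and take a small step against it.

```python
import numpy as np


def steepest_descent(grad, x0, step=0.1, iterations=100):
    """Walk against the gradient with a fixed step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        x = x - step * grad(x)       # move in the steepest-descent direction
    return x


# Toy performance surface: f(x) = (x0 - 3)^2 + (x1 + 1)^2, minimum at (3, -1).
grad_f = lambda x: 2.0 * (x - np.array([3.0, -1.0]))

print(steepest_descent(grad_f, x0=[0.0, 0.0]))   # approaches [3, -1]
```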
If, from the foot of the mountain, you can already see the summit, you do not think twice: you charge straight toward the peak and do not stop until you reach it. This is Newton's iterative method.
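A minimal sketch of that "charge straight at the peak" behavior, again my own illustration: the one-dimensional Newton update x ← x − f'(x)/f''(x) jumps to the stationary point of the local quadratic model, so on a truly quadratic surface it reaches the top in a single step.

```python
def newton_optimize(df, d2f, x0, iterations=10):
    """1-D Newton iteration: jump to the stationary point of the local quadratic model."""
    x = float(x0)
    for _ in range(iterations):
        x = x - df(x) / d2f(x)
    return x


# Toy example: f(x) = -(x - 2)^2 + 5, maximum at x = 2.
df = lambda x: -2.0 * (x - 2.0)     # first derivative
d2f = lambda x: -2.0                # second derivative (constant)

print(newton_optimize(df, d2f, x0=10.0))   # lands on 2.0 after the first step
```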
If you start at the foot of the hill, crawl a little at each step, and pick the next direction at random, resigning yourself to fate because you believe God will carry you to the top (God being the hand that manipulates the probabilities), that is a random search algorithm.
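Here is a sketch of one common flavor of random search (my own example, and a variant that keeps only improving moves rather than a pure random walk): propose a random step, and accept it only if it climbs higher.

```python
import random


def random_search(f, x0, step=0.5, iterations=10000):
    """Naive random search: try a random perturbation, keep it if it is better."""
    x, best = x0, f(x0)
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)   # random direction and distance
        value = f(candidate)
        if value > best:                              # climbing: keep only improvements
            x, best = candidate, value
    return x


# Toy hill: f(x) = -(x - 2)^2 + 5, peak at x = 2.
f = lambda x: -(x - 2.0) ** 2 + 5.0
print(random_search(f, x0=-10.0))   # wanders toward x ≈ 2
```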
Now suppose you have struggled up to a peak and discover there is a higher one, but there is no way across: to reach it you would have to go downhill first and then climb again. At this point some people are satisfied; they have seen the scenery they wanted to see, they are happy with their local maximum, and they do not want any more trouble. Others are not satisfied; they want the feeling of "standing on the summit and seeing all other mountains look small," so they jump and roll back down to the foot of the mountain, and if they are lucky they land at the base of the higher peak, climb up, and reach the highest summit. These are the algorithms that try to escape local maxima and find the global maximum, such as simulated annealing, momentum, and so on.
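A small sketch of simulated annealing for maximization, again my own illustration: worse moves are accepted with probability exp(Δ/T), and the temperature T is gradually cooled, so early on the search can roll back down into the valley and escape a local peak.

```python
import math
import random


def simulated_annealing(f, x0, step=1.0, t0=5.0, cooling=0.999, iterations=20000):
    """Accept downhill moves with a probability that shrinks as the temperature cools."""
    x, value, t = x0, f(x0), t0
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        delta = f(candidate) - value
        # Always accept improvements; accept worse moves with probability exp(delta / t).
        if delta > 0 or random.random() < math.exp(delta / t):
            x, value = candidate, f(candidate)
        t *= cooling                 # cool down: downhill jumps become rarer over time
    return x


# Two peaks: a local one near x = -2 and a higher, global one near x = 3.
f = lambda x: 2.0 * math.exp(-(x + 2.0) ** 2) + 3.0 * math.exp(-(x - 3.0) ** 2)
print(simulated_annealing(f, x0=-2.0))   # usually escapes to the higher peak near 3
```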
So, in my personal understanding, climbing the mountain, that is, finding the optimum, depends on three factors:
1) Which direction you take at each step;
2) How far you move at each step;
3) How determined you are to jump out of a local maximum in order to reach the highest peak.
My feeling is that different combinations of these three factors give rise to the different optimization, or iterative, algorithms.
In fact, life itself is an iterative process: every day you are in a new state, constantly renewing yourself, slowly drawing closer to your beautiful ideal. Sometimes you find life exhausting and feel that success is still far away, because your pace is too slow; sometimes you move too fast, overshoot your goal, and accidentally roll over to the other side of the hill. So the direction of your life and the choice of your pace are both very important. Oh.
In addition, the iterative method is very helpful for certain computations, for example finding the inverse of a matrix. Hehe, in the image-processing field the matrices are large, and the textbook way of inverting a matrix, taking the ratio of its adjugate matrix to its determinant, is astonishingly expensive; even with faster algorithms and faster hardware, that cost is hard to accept in programs with real-time requirements. But iteration will make you very happy, because each computation reuses the previous result: for example, the inverse at step n+1 is obtained from the inverse at step n, so there is no need to recompute the (n+1)-dimensional inverse from scratch (hehe, you get the idea). It is like building a 100-story building: adding one floor on top of an existing 99-story building is much faster than starting over and raising 100 stories on an empty lot. This is the idea behind sequential (recursive) iterative methods, hehe.
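The post does not give the formula, but one standard way to realize this "grow the inverse from the previous one" idea is the bordering (blockwise) update: if the inverse of the n-by-n block A is already known, the inverse of the (n+1)-by-(n+1) matrix [[A, b], [c^T, d]] can be assembled from A^-1 and the scalar Schur complement s = d - c^T A^-1 b, instead of being recomputed from scratch. A small NumPy sketch, my own illustration of that idea:

```python
import numpy as np


def bordered_inverse(A_inv, b, c, d):
    """Inverse of [[A, b], [c^T, d]] built from the already-known inverse of A."""
    b = b.reshape(-1, 1)
    c = c.reshape(1, -1)
    s = d - (c @ A_inv @ b).item()                     # scalar Schur complement
    top_left = A_inv + (A_inv @ b @ c @ A_inv) / s
    top_right = -(A_inv @ b) / s
    bottom_left = -(c @ A_inv) / s
    return np.block([[top_left, top_right], [bottom_left, np.array([[1.0 / s]])]])


# Check against a direct inverse of the bordered matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)
b = rng.standard_normal(4)
c = rng.standard_normal(4)
d = 5.0
M = np.block([[A, b.reshape(-1, 1)], [c.reshape(1, -1), np.array([[d]])]])
print(np.allclose(bordered_inverse(np.linalg.inv(A), b, c, d), np.linalg.inv(M)))  # True
```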
The iterative method is a thing of deep substance; my shallow, pale language can hardly convey its breadth and open-mindedness. One can only keep meeting it, getting acquainted with it, coming to know it, and falling in love with it in the vast sea of knowledge.