1. First, look at how Wikipedia defines dynamic programming
Quoted from Wikipedia: dynamic programming
In mathematics, management science, economics, computer science, and bioinformatics, dynamic programming (also known as dynamic optimization) is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions, ideally using a memory-based data structure. The next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a (hopefully) modest expenditure in storage space. (Each of the subproblem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate its lookup.) The technique of storing solutions to subproblems instead of recomputing them is called "memoization".
Dynamic programming algorithms are often used for optimization. A dynamic programming algorithm examines the previously solved subproblems and combines their solutions to give the best solution for the given problem. In comparison, a greedy algorithm treats the solution as some sequence of steps and picks the locally optimal choice at each step. Using a greedy algorithm does not guarantee an optimal solution, because picking locally optimal choices could result in a bad global solution, but it is often faster to compute. Fortunately, some greedy algorithms (such as Kruskal's or Prim's for minimum spanning trees) are proven to lead to the optimal solution.
For example, in the coin change problem of finding the minimum number of coins of given denominations needed to make a given amount, a dynamic programming algorithm would find an optimal solution for each amount by first finding an optimal solution for each smaller amount and then using these solutions to construct an optimal solution for the larger amount. In contrast, a greedy algorithm might treat the solution as a sequence of coins, starting from the given amount and at each step subtracting the largest possible coin denomination that does not exceed the current remaining amount. If the coin denominations are 1, 4, 5, 15, 20 and the given amount is 23, this greedy algorithm gives a non-optimal solution of 20+1+1+1, while the optimal solution is 15+4+4.
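The contrast can be sketched in a few lines of Python (function names are mine; a minimal illustration, not an optimized implementation):

```python
def min_coins(amount, denominations):
    """Minimum number of coins to make `amount`, bottom-up DP."""
    INF = float("inf")
    best = [0] + [INF] * amount  # best[a] = fewest coins to make amount a
    for a in range(1, amount + 1):
        for c in denominations:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

def greedy_coins(amount, denominations):
    """Greedy: repeatedly subtract the largest denomination that fits."""
    count = 0
    for c in sorted(denominations, reverse=True):
        count += amount // c
        amount %= c
    return count

print(min_coins(23, [1, 4, 5, 15, 20]))     # 3  (15 + 4 + 4)
print(greedy_coins(23, [1, 4, 5, 15, 20]))  # 4  (20 + 1 + 1 + 1)
```

The DP builds optimal answers for every smaller amount first, exactly as described above, while the greedy commits to 20 and is stuck with three 1s.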
In addition to finding optimal solutions to some problems, dynamic programming can also be used for counting the number of solutions, for example counting the number of ways a certain amount of change can be made from a given collection of coins, or counting the number of optimal solutions to the coin change problem described above.
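The counting variant follows the same pattern; only the combination rule changes from "take the minimum" to "add up the ways" (a standard counting DP; the function name and example amount are my own):

```python
def count_ways(amount, denominations):
    """Number of ways to make `amount`; coin order does not matter."""
    ways = [1] + [0] * amount  # ways[0] = 1: one way to make zero (no coins)
    for c in denominations:    # iterate coins in the outer loop to ignore order
        for a in range(c, amount + 1):
            ways[a] += ways[a - c]
    return ways[amount]

print(count_ways(10, [1, 4, 5]))  # 6
```

Putting the coin loop outermost counts each multiset of coins exactly once; swapping the loops would count ordered sequences instead.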
Sometimes, applying memoization to the naive recursive algorithm (namely, the one obtained by a direct translation of the problem into recursive form) already results in a dynamic programming algorithm with asymptotically optimal time complexity. For optimization problems in general, however, the optimal algorithm might require something more sophisticated. Some of these are recursive (and hence can be memoized) but parametrized differently from the naive algorithm. For other problems the optimal algorithm is not even a memoized recursive algorithm in any reasonably natural sense. An example of such a problem is the egg dropping puzzle described below.
Of the quoted paragraphs above, the second illustrates the difference from the greedy algorithm, and the third gives an example where, on the same problem, the greedy algorithm may fail to reach the global optimum (23 = 20+1+1+1) while dynamic programming finds it (23 = 15+4+4).
There is also a picture in the Wikipedia article on greedy algorithms showing that a greedy algorithm may not reach the global optimum, but fortunately Kruskal's and Prim's algorithms for minimum spanning trees do achieve the global optimum.
2. Then I looked at a Tsinghua University graduate textbook---mathematics series---Optimization Theory and Algorithms (second edition), by Chen Baolin
In the final chapter, Chapter 16, dynamic programming is introduced through the shortest-route problem, and several common terms of dynamic programming are defined:
1. Stage
2. State
3. Decision
4. Strategy
5. State transition equation
6. Indicator function
7. Optimal strategy and optimal trajectory
In the second section, R. Bellman's principle of optimality is presented: every sub-strategy of an optimal strategy is itself optimal.
3. Dynamic programming in Introduction to Algorithms
Two elements of optimization problems amenable to dynamic programming are mentioned: optimal substructure and overlapping subproblems.
For details, see Introduction to Algorithms, p. 202
4. Then look at an expert's answer:
Xu Kaijiang
Links: https://www.zhihu.com/question/23995189/answer/35324479
The recursive formula in dynamic programming is not the essence of dynamic programming.
I competed in NOI as a member of my provincial team, and after graduating I returned to my school several times to talk about dynamic programming with students preparing for NOIP. I will try to explain dynamic programming as I understand it, in plain language. I hope that after reading my answer you can enjoy dynamic programming.
0. The essence of dynamic programming is the definition of the state and the definition of the state transition equation.
Quoted from Wikipedia
Dynamic Programming is a method for solving a complex problem by
breaking it down into a collection of simpler subproblems.
Dynamic programming splits the problem and defines the relationships between the states of the problem, so that the problem can be solved recursively (or by divide and conquer).
Most of the other answers dwell on how to evaluate the recurrence, but how to split the problem is the core of dynamic programming.
And how to split the problem depends on the definition of the state and the definition of the state transition equation.
1. What is the definition of state?
First of all, do not be intimidated by the mathematical style below; only basic knowledge of functions is involved.
Let's look at a classic dynamic programming teaching problem:
Given a sequence of length N, find the length of the longest increasing subsequence (LIS) of this sequence.
Take 1 7 2 8 3 4 as an example.
The longest increasing subsequence of this sequence is 1 2 3 4, with length 4;
the second-longest increasing subsequences have length 3, including 1 7 8 and 1 2 3, and so on.
To solve this problem, we must first define the problem and its subproblems.
Some may ask: the problem statement is already there, so why do we still need to define it? We do, because in its literal form the problem exhibits no subproblems, and without subproblems it cannot be solved.
So let's redefine the problem:
Given a sequence A of length N,
let F(k) be the length of the longest increasing subsequence ending with the k-th term of the sequence.
Find the maximum of F(1), F(2), ..., F(N).
Obviously, this new problem is equivalent to the original problem.
And each F(i) with i < k is a subproblem of F(k): the longest increasing subsequence ending with the k-th term (hereafter called the LIS ending at k) contains within it an LIS ending at some earlier term.
The new problem F(k) can also be called a state, and the sentence "F(k) is the length of the LIS ending with the k-th term of the sequence" is called the definition of the state.
The reason for saying "state" rather than "problem" is, first, to avoid confusion with the "problem" in the original problem, and second, because the new problem is defined mathematically.
Is there only one definition of state?
Of course not.
We can even define the state in two dimensions, from a completely different perspective:
Given a sequence A of length N,
let F(i, k) be the minimum value of the last element among all increasing subsequences of length k within the first i terms;
if the first i terms contain no increasing subsequence of length k, then F(i, k) = +∞.
Find the maximum k such that F(N, k) is not +∞.
The equivalence of this new definition with the original problem is not hard to prove; this is left to the reader.
The above F(i, k) is the state, and the sentence "F(i, k) is the minimum value of the last element among the increasing subsequences of length k within the first i terms" is the definition of the state.
2. What is the state transition equation?
Once the states above are defined, the relationships between states are called the state transition equation.
For example, for the LIS problem, our first definition:
let F(k) be the length of the longest increasing subsequence ending with the k-th term of the sequence.
With A as the sequence of the problem, the state transition equation is:
F(1) = 1
F(k) = max{ F(i) + 1 : i < k and A(i) < A(k) }   for k > 1 (taken as 1 if no such i exists)
(the boundary condition is derived from the state definition)
Explained in words:
The length of the LIS ending at term k equals one plus the maximum, over all i < k such that A(i) < A(k), of the length of the LIS ending at term i.
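A minimal sketch of this first recurrence (names are my own):

```python
def lis_length(a):
    """Length of the longest increasing subsequence, O(n^2) DP.

    f[k] = length of the LIS ending exactly at index k.
    """
    n = len(a)
    f = [1] * n  # boundary: every single element is an LIS of length 1
    for k in range(n):
        for i in range(k):
            if a[i] < a[k]:
                f[k] = max(f[k], f[i] + 1)
    return max(f)  # the answer is the best over all ending positions

print(lis_length([1, 7, 2, 8, 3, 4]))  # 4  (1 2 3 4)
```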
The second definition:
let F(i, k) be the minimum value of the last element among the increasing subsequences of length k within the first i terms.
With A as the sequence of the problem, the state transition equation is:
if A(i) > F(i-1, k-1):  F(i, k) = min( F(i-1, k), A(i) )
otherwise:              F(i, k) = F(i-1, k)
(The boundary conditions require more case analysis and are not listed here; they, too, are derived from the state definition.)
Substitute the definitions into the formula and read it through; it should not be difficult to understand, just a little convoluted.
As can be seen here, the state transition equation defines the relationship between the problem and its subproblems.
It can also be seen that the state transition equation is a recurrence with conditional branches.
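The second recurrence can also be sketched directly. The row F(i, ·) depends only on F(i-1, ·), so a single array of minimal tails suffices; keeping exactly this array is what the well-known O(n log n) LIS algorithm refines further. A sketch (names mine):

```python
import math

def lis_length_by_tails(a):
    """LIS length via the second state definition.

    f[k] = min value of the last element among increasing subsequences
    of length k within the terms processed so far (math.inf if none).
    """
    n = len(a)
    f = [math.inf] * (n + 1)
    f[0] = -math.inf  # boundary: the empty subsequence has length 0
    for x in a:  # process the i-th term; f currently holds F(i-1, .)
        for k in range(n, 0, -1):  # descend so F(i-1, k-1) is still intact
            if x > f[k - 1]:
                f[k] = min(f[k], x)
    # answer: the largest k for which f[k] is finite
    return max(k for k in range(n + 1) if f[k] < math.inf)

print(lis_length_by_tails([1, 7, 2, 8, 3, 4]))  # 4
```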
3. Misconceptions about dynamic programming
The other answers under this question are more or less related to dynamic programming; let me comment on those connections as well.
A. "Cache", "overlapping subproblems", "memoization":
These three terms all describe techniques for evaluating the recurrence. Taking the Fibonacci sequence as an example, computing the 100th term requires the 99th and the 98th; computing the 101st term requires the 100th and the 99th again. Do we need to recompute the 99th term? No, we just write it down the first time we compute it.
The "99th term" that would otherwise be recomputed is an "overlapping subproblem". If it has not yet been computed, compute it recursively; if it has, use the stored value directly, like a "cache". This method is called "memoization", and it is a technique for evaluating the recurrence, popularly described as "spending space to save time".
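In Python this memoization technique is one decorator away (a sketch, assuming the convention fib(1) = fib(2) = 1):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # memoization: each fib(n) is computed once
def fib(n):
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # instant; the naive recursion would take ~2^100 steps
```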
None of these is the essence of dynamic programming, nor its core.
B. "Recursion":
Recursion is merely one way of evaluating the recurrence; it does not even count as a technique.
C. "No aftereffect", "optimal substructure":
In the state transition equation, the right-hand side never uses values whose subscripts are greater than the i or k on the left-hand side; this is the plain-language version of the mathematical definition of "no aftereffect". For a state definition that conforms to this, we can say it has the "optimal substructure" property, and what we must do in dynamic programming is to find this "optimal substructure".
In the process of defining the state and the state transition equation, satisfying "optimal substructure" is an implicit condition (otherwise the definition would not be valid at all). For a further explanation of the relationship between state and "optimal substructure", Wang Meng's answer to "What is dynamic programming? What is the meaning of dynamic programming?" is very good and worth reading.
It is important to note that a problem may admit many different definitions of state and state transition equation; the existence of one definition with aftereffect does not mean the problem is unsuitable for dynamic programming. This is a logical misunderstanding in several of the other answers:
The dynamic programming method is to find a definition of state and state transition equation that satisfies "optimal substructure"; once found, the problem can be solved by "memoized evaluation of the recurrence". Finding that definition is the essence of dynamic programming.
One answer says:
In divide and conquer, every subproblem must be computed from scratch;
dynamic programming stores the results of subproblems, and a table lookup takes constant time.
This is like saying that dishes with more chili are Sichuan cuisine and dishes with more soy sauce are Shandong cuisine; it is a misunderstanding.
To put it artfully: dynamic programming is about finding a perspective on the problem from which it can be solved recursively (or by divide and conquer). That angle of view is the most dazzling gem in dynamic programming! (Just kidding.)
Another answer:
The essence of dynamic programming is neither recursion nor iteration, and there is no need to agonize over whether it is "trading memory for time". Understanding dynamic programming does not require mathematical formulas, but a plain explanation does need a bit of space... First, you need to understand which problems cannot be solved without dynamic programming, in order to see what dynamic programming is actually for. The bonus is that along the way it becomes clear how recursion, greed, and search relate to dynamic programming, which helps those students who always write a DP problem as a search. Of course, once familiar with all this, you can get the idea directly from the problem description and fill in details as needed.
Dynamic programming is a method for solving a certain class of problems!! The focus is on how to recognize that "a certain class of problems" is solvable by dynamic programming, not on agonizing over whether the solution is recursive or iterative!
To recognize the class of DP-solvable problems, we need to start from how a computer works... A computer is essentially a state machine: all the data stored in memory constitutes the current state, and the CPU can only use the current state to compute the next state (do not quibble about hard disks and other external storage; they merely expand the storage capacity for state and do not change the iron law that the next state can only be computed from the current state).
When you use a computer to solve a problem, you are really thinking about how to express the problem as states (which variables store which data) and how to transition between those states (how to compute some variables from others). The so-called space complexity is the number of states you must store to support the computation, and the so-called time complexity is how many steps it takes to get from the initial state to the final state!
Too abstract? Let's take an example:
Say I want to compute the 100th Fibonacci number. Each Fibonacci number is a state of this problem, and computing each new number requires only the previous two states. So at any moment only two states need to be kept; the space complexity is constant. The time to compute each new state is constant and the number of states grows linearly, so the time complexity is linear.
The state computation above is straightforward: each new state is computed from the old states by a fixed pattern (a[i] = a[i-1] + a[i-2]); we neither need to consider whether more states are required nor choose which old states to compute the new one from. A solution of this kind is called recurrence.
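The two-state recurrence can be written down directly (a sketch, indexing from fib(1) = 1):

```python
def fib(n):
    """n-th Fibonacci number by simple recurrence.

    Only the two most recent states are kept, so space is O(1)
    and time is O(n).
    """
    prev, curr = 0, 1  # the states a[i-2] and a[i-1]
    for _ in range(n - 1):
        prev, curr = curr, prev + curr  # a[i] = a[i-1] + a[i-2]
    return curr

print(fib(100))
```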
The Fibonacci example is too simple and glosses over the concept of a stage. A stage is the set of different states that may be reached at the same step of solving the problem. In the Fibonacci sequence, each step computes one new number, so each stage has only one state. Imagine a different scenario: you are placed on some square of a chessboard and can move only one square at a time; since you can move in several directions, after four moves you may be in many different positions. The number of moves taken so far identifies the stage; each position you might occupy after n moves is a state; and the set of all positions reachable after n moves is the set of all states of that stage.
Now the problem comes: once stages appear, computing new states can run into all sorts of strange situations, and different situations call for different algorithms. Let me explain point by point:
Suppose the problem has n stages, each stage has multiple states (the number of states at different stages need not be equal), and each state of one stage can transition to several states of the next stage. Then to compute the states of the final stage, we naturally pass through certain states of each preceding stage.
The good news is that sometimes we do not actually have to compute all the states. Take a trivially easy chessboard problem: how many moves does it take to get from the upper-left corner to the lower-right corner of the board? The answer is obvious; a problem this easy exists precisely to help us understand stages and states. A stage can indeed contain multiple states, as in this problem: after n moves one can be in many positions. But among all positions after n moves, which ones let us go farthest at step n+1? Right, the positions already farthest along at step n. In a familiar phrase, "the next optimum is obtained from the current optimum". Therefore, to compute the final optimal value, we only need to store the optimal value of each step; an algorithm that solves problems of this nature is called greedy. And if the computation between states follows a fixed pattern, as with the Fibonacci sequence, the method is recurrence.
Since the problem can be divided into stages and states, we have solved a big problem in one stroke: the optimum of one stage can be obtained from the optimum of the previous stage.
But what if the optimum of a stage cannot be obtained from the optimum of the previous stage?
What, you say you only need the two previous stages to get the current optimum? That is no essential difference from needing one. The really troublesome case is needing everything you have done before.
Take a maze as another example. When computing the shortest route from start to finish, you cannot just store the states of the current stage: since the problem demands the shortest route, you must know every place you have passed through before. Even if your current position is unchanged, the route you took to get there affects where you can go next. In this case you must save every state passed through at every stage and compute the next states from all of that information!
The number of states at each stage may not be large, but each state can transition to multiple states of the next stage, so the number of solution paths grows exponentially, and the time complexity is exponential too. Oh, and the situation just mentioned, where the earlier route affects the later choices, is the unpleasant property called aftereffect.
The situation just described is very general and the solution is brute force; is there any situation where such brute force can be avoided?
The opportunity lies in the aftereffect, or rather in its absence.
There is a kind of problem that seems to require all previous states but actually does not. Let us again use the longest increasing subsequence to illustrate why it needs no brute-force search, which leads to the idea of dynamic programming.
Pretend we are young and naive and search for the longest increasing subsequence. How do we search? We enumerate from start to finish whether to choose the current number, and each time a number is chosen we check whether the "increasing" property still holds. Stage i here is deciding whether to choose the i-th number, and each stage has two states: chosen and not chosen. Ha, the shadow of the maze problem faintly appears! But wait: every time I decide whether to choose the current number, I only need to compare it with the last number already chosen. This is where the nature of the problem differs from the maze! It means we do not need to record all the previous states! Since our choice is not affected by the particular combination of earlier states, the time complexity is naturally not exponential! Although we do not care what the earlier elements of a subsequence are, we still need its length. So we only need to record the length of the LIS ending at each element! Thus the optimal solution of stage i is obtained only from the optimal solutions of the first i-1 stages, and we arrive at the DP recurrence (thanks @Help).
So whether a problem is solved by recurrence, greed, search, or dynamic programming is entirely determined by how states transition between the stages of the problem itself!
Each stage has only one state---recurrence;
the optimal state of each stage is obtained from the optimal state of the previous stage---greedy;
the optimal state of each stage is obtained from combinations of the states of all previous stages---search;
the optimal state of each stage can be obtained directly from some state or states of a previous stage, regardless of how that state was obtained---dynamic programming.
"The optimal state of each stage can be obtained directly from some state or states of a previous stage"---this property is called optimal substructure;
"regardless of how that state was obtained"---this property is called no aftereffect.
One more note: the phrase "the optimal state" in dynamic programming can be misleading, suggesting that only optimal states need to be computed. For the LIS problem that happens to be true: the transition uses only the "chosen" state at each stage. But in fact, many problems require computing an optimal value for every state of every stage, and only then finding the overall optimal state among these optimal values. For example, the knapsack problem computes the maximum value for "first i items (the stage) with capacity j (the state)", and the final answer is then found among all states of the last stage.
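A sketch of that knapsack recurrence (the item data are hypothetical; best[j] plays the role of the per-stage states):

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack: best[j] = max value achievable with the items
    considered so far and total weight at most j (j is the state)."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):         # stage i: consider item i
        for j in range(capacity, w - 1, -1):  # descend so each item is used once
            best[j] = max(best[j], best[j - w] + v)
    return max(best)  # scan all states of the final stage

print(knapsack([2, 3, 4], [3, 4, 6], 5))  # 7: take the items of weight 2 and 3
```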
Algorithm: Dynamic programming