When designing algorithms, we often have this experience: once you know that a problem can be solved with dynamic programming, it is easy to find and implement the corresponding algorithm. The difficulty of a dynamic programming algorithm lies not in implementation but in analysis and design: you must first recognize that the problem calls for dynamic programming at all. In this article we analyze how dynamic programming is applied in algorithm analysis, design, and implementation, and explain its principles. In many cases a dynamic programming algorithm suggests itself intuitively, but in others it is well hidden and hard to spot. This article aims to answer the central questions: for what types of problems can I use a dynamic programming algorithm, and how should I design one?
Dynamic Programming, Lecture 1: Caching and Dynamic Programming
I. Caching and Dynamic Programming
Example 1: There is a staircase with 10 steps. Each stride can climb only one or two steps. How many different ways are there to climb all 10 steps?
Analysis: Clearly, the problem satisfies the recurrence F(n) = F(n-1) + F(n-2), with F(1) = 1 and F(2) = 2. The natural first attempt is a recursive function:
int solution(int n) {
    if (n >= 1 && n <= 2) return n;  // base cases F(1) = 1, F(2) = 2
    return solution(n - 1) + solution(n - 2);
}
To compute F(10) we must first compute F(9) and F(8); but computing F(9) itself requires F(8), so F(8) is clearly computed more than once. Smaller subproblems such as F(3) are recomputed even more often. The core of algorithm analysis and design is to exploit the structure of the problem to reduce this repeated computation. Without changing the structure of the algorithm, we can make the following improvement:
int dp[11];  // dp[n] == 0 means "not yet computed"
int solution(int n) {
    if (n >= 1 && n <= 2) return n;
    if (dp[n] != 0) return dp[n];
    dp[n] = solution(n - 1) + solution(n - 2);
    return dp[n];
}
This is still recursive. Going further, we can remove the recursion entirely:
int solution(int n) {
    int dp[n + 1];
    dp[1] = 1;
    dp[2] = 2;
    for (int i = 3; i <= n; ++i) {
        dp[i] = dp[i - 1] + dp[i - 2];
    }
    return dp[n];
}
Of course, we can streamline the algorithm further: only the previous two results are ever needed, so two variables suffice in place of the array. This version is left to the reader to implement.
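For reference, a minimal sketch of that two-variable version might look as follows. It uses the same recurrence as above, with `prev` and `cur` rolling over the last two results; the function name `solution` simply follows the earlier snippets:

```cpp
// Climbing n steps in O(1) space: keep only the previous two results
// instead of the whole dp array.
int solution(int n) {
    if (n >= 1 && n <= 2) return n;  // base cases F(1) = 1, F(2) = 2
    int prev = 1, cur = 2;           // F(i-2) and F(i-1)
    for (int i = 3; i <= n; ++i) {
        int next = prev + cur;       // F(i) = F(i-1) + F(i-2)
        prev = cur;
        cur = next;
    }
    return cur;
}
```

For n = 10 this gives 89, the same answer as the array version, but with constant memory.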
Example 2: The 0/1 Knapsack Problem
There are n items whose weights and values are given as vector<int> weight and vector<int> value respectively. The backpack can carry at most weight W. What is the maximum total value of items that can be packed into the backpack?
Input: N = 4
Weight = 2, 1, 3, 2
Value = 3, 2, 4, 2
W = 5
Output = 7
Thought 1: We can use exhaustive search to list all combinations of the n items and select the maximum value among those that fit:
When we use exhaustive search, we must be able to enumerate every state, leaving nothing out. There is more than one way to enumerate. One is counting upward (to enumerate all the numbers from 1 to 100, simply count from 1 to 100). Another is enumeration by case splitting: for example, the subsets of a set of n elements fall into two groups, those that contain a given element a and those that do not. Our task here is to enumerate all subsets of the n items, so the second approach gives us our first algorithm for the knapsack problem: exhaustive search by recursion and case splitting.
// From item i onward, select the maximum value subject to remaining capacity j.
// Globals: N items, weights w[], values v[].
int Rec(int i, int j) {
    int res;
    if (i == N) {
        res = 0;  // no items left
    } else if (j < w[i]) {
        res = Rec(i + 1, j);  // item i does not fit
    } else {
        res = max(Rec(i + 1, j), Rec(i + 1, j - w[i]) + v[i]);  // skip or take item i
    }
    return res;
}
Calling Rec(0, W) yields the result, with time complexity O(2^n). Let us analyze the pattern of recursive calls.
To save space the full recursion tree is not drawn, but note the repeated branch: the subproblem (3, 2) is computed twice. Clearly, when the problem is large enough and the data varied enough, this kind of repeated computation wastes a great deal of time.
Improvement: caching the recursion (memoization)
With caching, the time complexity becomes O(nW); the code is omitted here.
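Although the code is omitted in the original, the memoized version might look like the sketch below. The globals, the example data, and the -1 sentinel are assumptions of this sketch; otherwise it is the same Rec as above, with each (i, j) pair computed only once:

```cpp
#include <algorithm>
#include <cstring>

const int N = 4;             // number of items (from the example)
const int MAXW = 5;          // knapsack capacity (W = 5 in the example)
int w[N] = {2, 1, 3, 2};     // weights
int v[N] = {3, 2, 4, 2};     // values
int cache[N + 1][MAXW + 1];  // -1 marks "not yet computed"

// Maximum value choosable from items i..N-1 with remaining capacity j.
int Rec(int i, int j) {
    if (cache[i][j] != -1) return cache[i][j];  // already computed: reuse
    int res;
    if (i == N)        res = 0;                 // no items left
    else if (j < w[i]) res = Rec(i + 1, j);     // item i does not fit
    else               res = std::max(Rec(i + 1, j),
                                      Rec(i + 1, j - w[i]) + v[i]);
    return cache[i][j] = res;                   // cache before returning
}

int solve() {
    std::memset(cache, -1, sizeof(cache));      // clear the cache
    return Rec(0, MAXW);
}
```

On the example data `solve()` returns 7, matching the expected output; each of the O(NW) states is evaluated at most once, which is where the O(NW) bound comes from.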
Thought 2: If we convert the recursion in the memoized search above into a loop, we arrive at dynamic programming. The corresponding recurrence is:
dp[i][j] = max(dp[i + 1][j], dp[i + 1][j - w[i]] + v[i]); // the corresponding table and program are as follows:

int dp[N + 1][W + 1];  // N items, weights w[], values v[], capacity W
int solution() {
    fill(dp[N], dp[N] + W + 1, 0);  // base row: no items left, value 0
    for (int i = N - 1; i >= 0; --i) {
        for (int j = 0; j <= W; ++j) {
            if (j < w[i]) dp[i][j] = dp[i + 1][j];
            else dp[i][j] = max(dp[i + 1][j], dp[i + 1][j - w[i]] + v[i]);
        }
    }
    return dp[0][W];
}
Thought 3: The diversity of recurrence formulations
The recurrence we just computed runs backward in the dimension i. We can also adopt a forward DP: specify that dp[i][j] denotes the maximum value obtainable by choosing among the first i items with total weight at most j, which gives the recurrence:
dp[i][j] = max(dp[i - 1][j], dp[i - 1][j - w[i]] + v[i]);
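A sketch of this forward formulation follows, with indices shifted so that dp[0][j] = 0 is the empty-selection base case and item i corresponds to weight[i - 1]. The vector-based signature is an assumption of this sketch, chosen to match the problem statement above:

```cpp
#include <algorithm>
#include <vector>

// Forward 0/1 knapsack: dp[i][j] = best value using the first i items
// with total weight at most j.
int knapsack(const std::vector<int>& weight,
             const std::vector<int>& value, int W) {
    int n = weight.size();
    std::vector<std::vector<int>> dp(n + 1, std::vector<int>(W + 1, 0));
    for (int i = 1; i <= n; ++i) {
        for (int j = 0; j <= W; ++j) {
            dp[i][j] = dp[i - 1][j];        // skip item i-1
            if (j >= weight[i - 1])         // or take it, if it fits
                dp[i][j] = std::max(dp[i][j],
                    dp[i - 1][j - weight[i - 1]] + value[i - 1]);
        }
    }
    return dp[n][W];
}
```

With the example data, `knapsack({2, 1, 3, 2}, {3, 2, 4, 2}, 5)` returns 7, the same answer as the backward DP; only the direction of iteration differs.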
Thought 4: How do we come to think of a DP algorithm?
Perhaps the real difficulty of a DP algorithm is not implementing it after being told to use DP; it is realizing, unprompted, that the problem calls for this kind of recursive decomposition in the first place. Here we summarize the typical features of DP algorithms by reviewing the steps of thought above:
1> The DP algorithm originates from divide and conquer: a solution to a problem can first be decomposed into solutions to a series of subproblems, and those subproblems overlap. This gives the first golden criterion of DP: the problem has independent, overlapping subproblems. If the subproblems are not independent, the problem cannot be decomposed at all; if the subproblems do not overlap, there is no need for DP, and plain divide and conquer suffices.
2> The second golden criterion of DP: optimal substructure. The optimal solutions of the subproblems can be combined into an optimal solution of the original problem.
Look again at the decision tree above. Clearly, the essence of DP lies in caching; looking up a DP result amounts to traversing that tree to find the optimal solution. In some cases, however, what we need is not an optimal solution but merely a feasible one. In such cases DFS or a simple loop is often more effective; we will give an example later. For now, just remember that the second condition for dynamic programming is optimal substructure.
Therefore, the right design habit is not to leap straight to a DP algorithm the moment you see a problem, but first to check whether the problem can be decomposed. If it can, divide and conquer or exhaustive search will solve it; if, in addition, it has overlapping subproblems and optimal substructure, use dynamic programming.
Dynamic Programming Analysis Summary: How to Design and Implement Dynamic Programming Algorithms