When designing algorithms, you have probably had this experience: once you know a problem can be solved with dynamic programming, it is usually easy to work out the corresponding dynamic programming algorithm and implement it. The difficulty of dynamic programming is not the implementation but the analysis and design: first you need to recognize that the problem calls for dynamic programming at all.
This article focuses on that analysis and design step, and explains the principle, design, and implementation of dynamic programming. In many cases a dynamic programming algorithm comes to mind intuitively; in other cases it is more subtle and harder to find.
The main questions this article answers are: what kinds of problems can be solved with dynamic programming, and how should a dynamic programming algorithm be designed?
Dynamic programming, part one--caching and dynamic programming
Example One: there is a staircase with 10 steps. Each move may climb either one step or two. How many different ways are there to reach the 10th step?
Analysis: the problem clearly corresponds to the recurrence f(n) = f(n-1) + f(n-2), with f(1) = 1 and f(2) = 2.
The natural first attempt is a plain recursive function:
int solution(int n) {
    if (n == 1 || n == 2) return n;              // base cases: f(1) = 1, f(2) = 2
    return solution(n - 1) + solution(n - 2);
}
Suppose we compute f(10): we first need f(9) and f(8), but computing f(9) also requires f(8), so f(8) is obviously computed more than once. These repeated calculations pile up, and a small value such as f(3) is recomputed far more often. The core of algorithm analysis and design is to reduce repeated calculation by exploiting the characteristics of the problem. Without changing the structure of the algorithm, we can make the following improvement:
int dp[11];                                      // cache; dp[n] == 0 means "not computed yet"
int solution(int n) {
    if (n == 1 || n == 2) return n;              // base cases
    if (dp[n] != 0) return dp[n];                // reuse a cached result
    dp[n] = solution(n - 1) + solution(n - 2);
    return dp[n];
}
This is still written recursively (top-down with a cache); going a step further, we can remove the recursion and compute bottom-up:
int solution(int n) {
    if (n == 1 || n == 2) return n;
    int dp[n + 1];
    dp[1] = 1;
    dp[2] = 2;
    for (int i = 3; i <= n; ++i) {
        dp[i] = dp[i - 1] + dp[i - 2];           // each state depends only on the two before it
    }
    return dp[n];
}
Of course, we can streamline this further: only two variables are needed to hold the previous two results. That version is left to the reader (one possible sketch follows below).
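For completeness, here is a minimal sketch of that two-variable version; the names prev1 and prev2 are my own choices for illustration, not taken from the original.

int solution(int n) {
    if (n == 1 || n == 2) return n;
    int prev2 = 1, prev1 = 2;                    // f(1) and f(2)
    for (int i = 3; i <= n; ++i) {
        int cur = prev1 + prev2;                 // f(i) = f(i-1) + f(i-2)
        prev2 = prev1;
        prev1 = cur;
    }
    return prev1;
}

It computes the same values as the dp-array version but uses O(1) extra space.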
Example Two: the 0/1 knapsack problem
There are n items whose weights and values are given as vector<int> weight and vector<int> value; the backpack can carry a total weight of at most W. What is the maximum total value of items that can be loaded into the backpack?
Input:  n = 4
        weight = 2, 1, 3, 2
        value  = 3, 2, 4, 2
        W = 5
Output: 7  (for example, take the items with weights 2, 1, 2 and values 3, 2, 2)
Approach one: we can use exhaustive enumeration, listing all combinations of the n items and selecting the maximum value among those that satisfy the weight constraint.
To use exhaustive enumeration, we must be able to list every state exactly once, with no repetition and no omission. There are many ways to enumerate; here our task is to enumerate all subsets of a set of n elements.
There are two common ways to do this: incrementally (to enumerate all numbers in 1~100, just count from 1 to 100), and by case splitting (the subsets of a set of n elements split into two kinds: those that contain element a and those that do not). From the second idea we get the first algorithm for the knapsack problem, exhaustive search by recursive case splitting.
// choose among items i..n-1 with remaining capacity j; return the maximum total value
int rec(int i, int j) {
    int res;
    if (i == n) {
        res = 0;                                         // no items left
    } else if (j < w[i]) {
        res = rec(i + 1, j);                             // item i does not fit, skip it
    } else {
        res = max(rec(i + 1, j),                         // skip item i
                  rec(i + 1, j - w[i]) + v[i]);          // or take item i
    }
    return res;
}
Calling rec(0, W) gives the result. The time complexity is O(2^n). Let us look at how the recursive calls unfold.
(To keep the recursion tree small, its last level is not drawn; note the highlighted calls: the same subproblem rec(i, j) appears more than once.) Just as in Example One, these repeated computations are exactly what caching will eliminate.
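As a preview of where this analysis leads, the caching trick from Example One applies directly here. The following is a minimal sketch under my own naming assumptions (the global arrays w and v, the table dp, and the sentinel value -1 are choices made for illustration, not taken from the original text):

#include <vector>
#include <algorithm>
using namespace std;

int n, W;
vector<int> w, v;            // weights and values of the n items
vector<vector<int>> dp;      // dp[i][j] = best value for items i..n-1 and capacity j; -1 = not computed

int rec(int i, int j) {
    if (dp[i][j] != -1) return dp[i][j];                 // reuse a cached answer
    int res;
    if (i == n)        res = 0;                          // no items left
    else if (j < w[i]) res = rec(i + 1, j);              // item i does not fit
    else               res = max(rec(i + 1, j), rec(i + 1, j - w[i]) + v[i]);
    return dp[i][j] = res;                               // cache before returning
}

// usage: set n, W, w, v, then
//   dp.assign(n + 1, vector<int>(W + 1, -1));
//   int answer = rec(0, W);

Since there are only (n + 1) * (W + 1) distinct (i, j) states, caching brings the running time down from O(2^n) to O(nW).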
Dynamic programming analysis and summary--how to design and implement a dynamic programming algorithm