Dynamic programming analysis and summary: how to design and implement a dynamic programming algorithm

Source: Internet
Author: User

When designing algorithms, you may have had this experience: if you already know that a problem can be solved with dynamic programming, it is easy to work out the corresponding dynamic programming algorithm and implement it. The difficulty of dynamic programming is not the implementation but the analysis and design: first you need to recognize that the problem calls for dynamic programming at all. This article focuses on that analysis-and-design side and explains the principle, design, and implementation of dynamic programming. In many cases we can intuitively think of a dynamic programming algorithm, but in other cases the dynamic programming structure is relatively hidden and hard to find. This article mainly tries to answer the biggest questions: what types of problems can be solved with dynamic programming, and how should a dynamic programming algorithm be designed?

Part One: Caching and dynamic programming


Example One: A staircase has 10 steps, and each move may climb only one or two steps. How many different ways are there to reach the 10th step?


Analysis: Clearly the problem corresponds to the recurrence f(n) = f(n-1) + f(n-2), where f(1) = 1 and f(2) = 2. The natural approach is to solve it with a recursive function:

int solution(int n) {
	if (n >= 1 && n <= 2) return n;
	return solution(n-1) + solution(n-2);
}
To compute f(10), we first need f(9) and f(8); but computing f(9) also requires f(8), so f(8) is evidently computed more than once, and f(3) is recomputed many more times. The core of algorithm analysis and design is to reduce such repeated computation by exploiting the structure of the problem. Without changing the structure of the algorithm, we can make the following improvement:
int dp[11];
int solution(int n) {
	if (n >= 1 && n <= 2) return n;
	if (dp[n] != 0) return dp[n];	// already computed: return the cached value
	dp[n] = solution(n-1) + solution(n-2);
	return dp[n];
}
This is still written recursively; going further, we can remove the recursion entirely:
int solution(int n) {
	if (n <= 2) return n;
	int dp[n+1];
	dp[1] = 1; dp[2] = 2;
	for (int i = 3; i <= n; ++i) {
		dp[i] = dp[i-1] + dp[i-2];
	}
	return dp[n];
}
Of course, we can streamline further and keep only two variables holding the previous two results; this version is left to the reader, but one possible sketch is shown below.
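A minimal sketch of that two-variable version (not part of the original article; it assumes the same base cases f(1) = 1 and f(2) = 2):

int solution(int n) {
	if (n <= 2) return n;
	int prev = 1, cur = 2;	// f(1) and f(2)
	for (int i = 3; i <= n; ++i) {
		int next = prev + cur;	// f(i) = f(i-1) + f(i-2)
		prev = cur;
		cur = next;
	}
	return cur;
}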

Example Two: the 0-1 knapsack problem

There are n items whose weights and values are given by vector<int> weight and vector<int> value, and the maximum load of the backpack is W. Find the maximum total value of items that can be loaded into the backpack.
Input: n = 4
weight = 2, 1, 3, 2
value = 3, 2, 4, 2
W = 5
Output: 7
(The optimum takes the items with weights 2, 1, 2 and values 3, 2, 2: total weight 5, total value 7.)


Approach one: We can use exhaustive enumeration to list all combinations of the n items and pick, among those that fit in the backpack, the one with maximum value.

To use exhaustive enumeration, we must be able to list all states without repetition or omission. There are many ways to enumerate; here our task is to enumerate all subsets of n elements. Two main styles exist: incremental enumeration (e.g., all numbers within 1~100, counted from 1 to 100) and split-style enumeration (e.g., the subsets of an n-element set split into those containing element a and those not containing it). From the split style we get the first algorithm for the knapsack problem: recursion and divide and conquer.

int rec(int i, int j) {	// choose among items i..n-1 a subset with total weight at most j; return its maximum value
	int res;
	if (i == n) {
		res = 0;	// no items left
	}
	else if (j < w[i]) {
		res = rec(i+1, j);	// item i does not fit: skip it
	}
	else {
		res = max(rec(i+1, j), rec(i+1, j-w[i]) + v[i]);	// skip item i, or take it
	}
	return res;
}

Call rec(0, W) to get the result. The time complexity is O(2^n). Let us analyze the pattern of recursive calls.


Drawing the recursion tree for the sample input (shown below, with the last level of calls omitted), we find that the subproblem (3, 2) is computed twice. Clearly, if the problem is large enough and the data diverse enough, this repeated computation wastes a great deal of time.
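A reconstruction of that recursion tree for the sample input (each node is a call rec(i, j); the i = 4 leaf calls are omitted, as in the original figure):

rec(0,5)
├── rec(1,5)
│   ├── rec(2,5)
│   │   ├── rec(3,5)
│   │   └── rec(3,2)   <- computed here
│   └── rec(2,4)
│       ├── rec(3,4)
│       └── rec(3,1)
└── rec(1,3)
    ├── rec(2,3)
    │   ├── rec(3,3)
    │   └── rec(3,0)
    └── rec(2,2)
        └── rec(3,2)   <- computed a second time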


Improvement: recursion plus caching (memoized search).
With caching, each subproblem (i, j) is computed at most once, so the time complexity is O(nW). The code is omitted here, but a possible sketch is shown below.
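A minimal sketch of that memoized version (an assumption about how it might look, not the author's code; MAX_N and MAX_W are hypothetical size constants):

int memo[MAX_N][MAX_W + 1];	// MAX_N, MAX_W: assumed upper bounds; memo[i][j] == -1 means "not yet computed"

int rec(int i, int j) {
	if (i == n) return 0;
	if (memo[i][j] != -1) return memo[i][j];	// return the cached result
	int res;
	if (j < w[i]) res = rec(i+1, j);
	else res = max(rec(i+1, j), rec(i+1, j-w[i]) + v[i]);
	return memo[i][j] = res;	// cache before returning
}

// Before calling rec(0, W), initialize the cache: memset(memo, -1, sizeof(memo));
// (-1 works here because its byte pattern is all ones.)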


Approach two: If the memoized search above is turned from recursion into a loop, we get dynamic programming. The corresponding recurrence is as follows:

dp[i][j] = max(dp[i+1][j], dp[i+1][j-w[i]] + v[i]);	// dp[i][j]: maximum value using items i..n-1 with capacity j
The procedure is as follows (a calculation table for the sample input is shown after the code):
int solution() {	// dp is a global 2D array with at least n+1 rows and W+1 columns
	fill(dp[n], dp[n] + W + 1, 0);	// base row: no items left, value 0
	for (int i = n-1; i >= 0; --i) {
		for (int j = 0; j <= W; ++j) {
			if (j < w[i]) dp[i][j] = dp[i+1][j];
			else dp[i][j] = max(dp[i+1][j], dp[i+1][j-w[i]] + v[i]);
		}
	}
	return dp[0][W];
}
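The calculation table for the sample input (n = 4, weight = 2, 1, 3, 2, value = 3, 2, 4, 2, W = 5), filled from the i = 4 row upward; the answer is dp[0][5] = 7:

i \ j | 0  1  2  3  4  5
------+-----------------
  4   | 0  0  0  0  0  0
  3   | 0  0  2  2  2  2
  2   | 0  0  2  4  4  6
  1   | 0  2  2  4  6  6
  0   | 0  2  3  5  6  7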

Approach three: variations of the recurrence


In the recurrence we just used, the i dimension runs backward; we can also run it forward. Define dp[i][j] as the maximum value obtainable by choosing among the first i items with total weight at most j; then the recurrence is:
dp[i][j] = max(dp[i-1][j], dp[i-1][j-w[i]] + v[i]);
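A minimal sketch of this forward version (an illustration, not the author's code; it assumes the same global n, W, w, v as above, with the item arrays 0-indexed, so the i-th item's data is w[i-1] and v[i-1]):

int solutionForward() {
	vector<vector<int>> dp(n + 1, vector<int>(W + 1, 0));	// dp[0][j] = 0: choosing among zero items
	for (int i = 1; i <= n; ++i) {
		for (int j = 0; j <= W; ++j) {
			if (j < w[i-1]) dp[i][j] = dp[i-1][j];	// the i-th item does not fit
			else dp[i][j] = max(dp[i-1][j], dp[i-1][j-w[i-1]] + v[i-1]);	// skip it, or take it
		}
	}
	return dp[n][W];
}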

Approach four: How do we come up with the recurrence in the first place?


Perhaps the real difficulty of DP is not implementing the algorithm once you are told the problem needs DP; the hard part is realizing on your own that the problem should be solved this way. Here we summarize the typical features of a DP algorithm by analyzing the thought steps above:
1> The DP algorithm originates from divide and conquer: the problem can be decomposed into a series of subproblems, and those subproblems overlap. So we get the first golden rule of DP: the problem has independent, overlapping subproblems. If the subproblems are not independent, the problem cannot be decomposed at all; if there are no overlapping subproblems, DP is unnecessary and ordinary divide and conquer suffices.
2> The second golden rule of DP: optimal substructure, i.e., the optimal solution of the original problem can be derived from the optimal solutions of its subproblems.

Looking again at the decision tree above, it is clear that the essence of DP is caching. To obtain the DP answer we effectively traverse that tree and keep the optimal solution at each node. In some cases, however, what we need is not an optimal solution but merely a feasible one, and then DFS or a simple loop is often more efficient; we will give an example of this later. For now, just remember the second condition of dynamic programming: optimal substructure.

So the idea is not to see at a glance that a problem can use DP, but first to check whether exhaustive enumeration works: if the problem can be decomposed, divide and conquer plus enumeration can solve it; if, in addition, the problem contains overlapping subproblems and asks for an optimal solution, then use dynamic programming.
