Dynamic programming can be used to solve many search and optimization problems efficiently. These problems share common characteristics: the original problem can be divided into subproblems, and either the subproblems recur (overlapping subproblems), or the optimal solution of the original problem is built from the optimal solutions of its subproblems (optimal substructure), so the optimal solution of a small subproblem can be extended into the optimal solution of the larger problem.
It is efficient because it computes values bottom-up and stores the intermediate results, so they can be reused later when computing the required solution.
Although the idea is easy to understand, applying dynamic programming comes down to two steps: first identify the optimal substructure of the subproblems, then work out how the optimal solution of a small subproblem extends to the optimal solution of a larger one. In practice, however, concrete problems raise all kinds of difficulties. Several dynamic programming problems are discussed below, in the hope of deepening the understanding of dynamic programming.
The Fibonacci sequence, a classic dynamic programming example
f(1) = f(2) = 1, f(n) = f(n-1) + f(n-2)
Solving it directly with recursion, the code is easy to write:
def fib(n):
    if n == 1:
        return 1
    elif n == 2:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)
Compute f(5) and expand the call chain:
f(5) = f(4) + f(3)
     = f(3) + f(2) + f(2) + f(1)
     = f(2) + f(1) + f(2) + f(2) + f(1)
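To make the duplication concrete, here is a small sketch that counts how many times the naive recursive version is invoked (the fib_calls helper is my addition, not part of the original post):

def fib_calls(n, counter):
    # counter is a one-element list used as a mutable call counter
    counter[0] += 1
    if n == 1 or n == 2:
        return 1
    return fib_calls(n - 1, counter) + fib_calls(n - 2, counter)

counter = [0]
fib_calls(5, counter)
print(counter[0])  # 9 calls just for n = 5; the count grows exponentially with n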
We can see that f(3) is computed while computing f(4), and then computed again after f(4) returns, so the recursion repeats a lot of work. If the recursion is rewritten as a loop that computes from the bottom up, the repetition disappears:
def fib(n):
    if n == 1:
        return 1
    elif n == 2:
        return 1
    else:
        f = [1 for i in range(n)]      # f[i] holds fib(i + 1)
        for i in range(2, n):
            f[i] = f[i - 1] + f[i - 2]
        return f[n - 1]
The bottom-up loop stores n values, but computing f(n) only depends on the previous two values, f(n-1) and f(n-2), so there is no need to keep all n entries. The code can be simplified:
def fib(n):
    n2, n1 = 0, 1
    for i in range(n - 2):
        n2, n1 = n1, n1 + n2
    return n2 + n1
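A quick check of this constant-space version against the first few Fibonacci numbers:

print([fib(n) for n in range(1, 8)])  # [1, 1, 2, 3, 5, 8, 13]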
This version effectively defines f(0) = 0; see http://20bits.com/article/introduction-to-dynamic-programming
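Besides rewriting the recursion as a bottom-up loop, the repeated work can also be avoided by keeping the recursive form and caching results as they are computed. A memoization sketch, my addition using the standard functools.lru_cache:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # each distinct n is computed only once; later calls return the cached value
    if n == 1 or n == 2:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)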
The Fibonacci sequence has many overlapping subproblems, and computing bottom-up avoids the repeated work and improves efficiency. For optimal substructure, the maximum-sum contiguous subsequence of an integer sequence is a good example. This overlaps with the reference cited in the previous article, but it is the better illustration.
The maximum-sum subsequence of an integer sequence
Given an integer sequence, find the contiguous subsequence with the largest sum. For example, with array = [1, 2, -5, 4, 7, -2], the maximum subsequence is 4 + 7 = 11. How can a computer solve this?
Compute the sum of every subsequence? There are n(n+1)/2 = sum(i, i = 1, ..., n) contiguous subsequences, so examining them all takes O(n^2) comparisons (and recomputing each sum from scratch, as the code below does, costs even more).
The code is as follows,
def msum(a):
    # enumerate every contiguous slice a[j:i] and keep the one with the largest sum
    return max((sum(a[j:i]), (j, i))
               for i in range(1, len(a) + 1)
               for j in range(i))
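As written, msum calls sum() on every slice, which costs more than the n(n+1)/2 bound suggests. A variant that reuses the running sum instead of recomputing it stays within O(n^2); this msum_quadratic sketch is my addition for comparison, not from the reference:

def msum_quadratic(a):
    best = (a[0], (0, 1))
    for j in range(len(a)):
        running = 0
        for i in range(j, len(a)):
            running += a[i]  # sum of a[j:i+1], built incrementally
            if running > best[0]:
                best = (running, (j, i + 1))
    return best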
Is there a simpler way?
Dynamic programming. To use dynamic programming, the solution for n elements must extend from the solution for n-1 elements. Suppose we know the maximum subsequence sum of the first n-1 numbers; what is it once the n-th number is added? Considering the new element a[n]: if a[n] >= 0, adding it can only increase the sum, so the maximum subsequence should include it. But what if the maximum subsequence of the first n-1 numbers is not adjacent to a[n]?
Look at it from another angle: keep a running sum for the subsequence currently being extended. If that running sum drops below 0, adding it to the numbers that follow can only reduce their total, so it should be discarded.
More concretely, let a[j:k] be the best subsequence found so far, s its sum, and t the running sum of the current candidate a[j:i]. When the next element a[i] is processed, t becomes t + a[i]. If this new t exceeds s, then a[j:i+1] is the new best subsequence and s is updated to t. If instead t drops below 0, the current run would only drag down anything that follows, so reset t = 0 and start a fresh candidate at the next element; the remaining problem is solved on a[i+1:]. Note that s keeps the best sum found so far throughout.
The code is as follows,
def msum2(a):
    # s: best sum found so far; bounds: its slice (start, end)
    # t: running sum of the current candidate, which starts at index j
    bounds, s, t, j = (0, 0), -float('infinity'), 0, 0
    for i in range(len(a)):
        t = t + a[i]
        if t > s:
            bounds, s = (j, i + 1), t
        if t < 0:
            t, j = 0, i + 1
    return (s, bounds)
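Running it on the example array from above:

print(msum2([1, 2, -5, 4, 7, -2]))  # (11, (3, 5)): a[3:5] == [4, 7] sums to 11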
The code above is adapted from the reference link; seeing it already written so cleanly, I had no motivation to write my own.
The time complexity of the dynamic programming algorithm is O(n): the sequence only needs to be scanned once.
These two typical problems show the essentials of dynamic programming: break the big problem into small ones, avoid repeated computation when solving the small ones, and build up from small to large. The key is to understand how the solution extends and what the optimal substructure of the subproblems is.
Even with the idea of dynamic programming understood, concrete problems still raise all kinds of difficulties, so I would like to collect as many dynamic programming problems as possible and give answers. This dynamic programming series starts with this article; more examples will follow.