I. **The basic idea of dynamic programming**

In general, if a problem can be divided into smaller sub-problems, and the optimal solution of the original problem contains optimal solutions of those sub-problems, dynamic programming is worth considering. The essence of dynamic programming is divide-and-conquer plus the elimination of redundancy: it is an algorithmic strategy that decomposes a problem instance into smaller, similar sub-problems, and saves the solutions of those sub-problems so that repeated sub-problems are never computed twice.

Dynamic programming is thus similar to divide-and-conquer and to the greedy method: all three reduce a problem instance to smaller, similar sub-problems and produce a globally optimal solution by solving the sub-problems. In the greedy method, the current choice may depend on all the choices already made, but it does not depend on choices still to be made or on the solutions of sub-problems; the greedy method therefore proceeds top-down, making one greedy choice at each step. In divide-and-conquer, the sub-problems are independent (they share no common sub-sub-problems), so once the sub-problem solutions are obtained recursively, they can be merged into a solution of the whole problem. However, if the current choice may depend on the solutions of sub-problems, it is hard to reach a global optimum through a local greedy strategy; and if the sub-problems are not independent, divide-and-conquer does a great deal of unnecessary work, repeatedly solving the common sub-problems. The answer to both difficulties is dynamic programming.

Dynamic programming is mainly applied to optimization problems. Such a problem has many possible solutions, each with a value, and dynamic programming finds a solution with the optimal (maximum or minimum) value.
If several solutions attain the optimal value, dynamic programming returns only one of them. Like divide-and-conquer and the greedy method, it reaches the global optimum by solving local sub-problems; unlike them, it allows the sub-problems to be dependent, solves each sub-problem only once, and saves the result, avoiding recomputation on every later encounter. A problem suited to dynamic programming is therefore characterized by a large number of repetitions in its sub-problem tree. The key of the method is that a recurring sub-problem is solved only on its first encounter; its answer is saved, and every later encounter simply looks the answer up instead of re-solving it.


II. **Examples of dynamic programming**

**1. The 0-1 knapsack problem**

**Problem Description**

Suppose we have n items, numbered 1, ..., n. Item i has value vi and weight wi; to simplify the problem, assume both values and weights are integers. We also have a backpack that can carry a total weight of W. We want to put items into the backpack so as to maximize the total value it holds; how do we choose which items to load? The problem structure is shown in the following figure:

**Preliminary Analysis**

At first glance the problem looks easy to get started with: a pile of goods, each with a weight and a value, a limit on the total weight we can carry, and the goal of maximizing the value loaded. For each of the n items we may either take it or leave it, so there are 2^n possible combinations in total. Checking them all gives exponential time complexity, which is clearly infeasible.

Now let's change the idea. Since each item has both a value and a weight, we might prefer the items with the highest value per unit weight. For example, in the image below we have 3 items with weights 10kg, 20kg, and 30kg and values 60, 100, and 120, and a backpack that can carry 50kg.

Ranking by value per unit weight, we should first pick the item worth 60; after selecting it, the backpack has 50 - 10 = 40kg left. Continuing the same way, we next pick the item worth 100, bringing the total value in the backpack to 60 + 100 = 160 at a total weight of 30kg, leaving 20kg. The next item to pick weighs 30kg, which already exceeds the remaining capacity, so the most this strategy can take is the first two items. The following figure:

According to our expectation, this choice should yield the greatest value. But because of the backpack's weight limit, only 30kg of capacity is used and 20kg is wasted. Is this really the best option? Let's look at all the options:

Unfortunately, among all these combinations, the one we just chose turns out to be the least valuable of the candidates: picking the 20kg and 30kg items instead yields the greatest value, 100 + 120 = 220. So the strategy of always taking the best unit price does not work for the 0-1 knapsack problem.

**Solving the problem with dynamic programming**

Since neither of the previous two methods works, let's look for another way. We need to select some number of the n items, say k, to form an optimal solution. For these k items a1, a2, ..., ak, the combination must satisfy two conditions: their total weight is <= the backpack's weight limit W, and their total value is the largest possible (they are the optimal choice by assumption). Suppose ak is the last item we put in, with weight wk and value vk, and let V be the optimal total value. If we take ak out, the remaining k-1 items fit within the weight range 0 to W - wk, and their total value is V - vk. Do these remaining k-1 items constitute an optimal solution for the capacity W - wk?

We can argue by contradiction. Suppose that after removing ak, the remaining items do not form the most valuable selection for the capacity W - wk. Then there must exist another set of k-1 items that is more valuable within that capacity. But then taking that set plus ak would give a selection whose value within the full capacity W exceeds V, contradicting the assumption that our original k items were optimal. So we can be sure that after removing the last item, the remaining items still form an optimal solution for the reduced capacity.

The preceding reasoning gives us the basic recursive relationship: the sub-selections of an optimal solution are themselves optimal. But how do we obtain the optimal solution itself? Define c[i, w] as the value of the optimal selection drawn from the first i items with the total weight limited to w. The optimal selection either contains item i or it does not; these two cases cover everything. If we select item i, the optimal value is c[i-1, w - wi] + vi; if we do not, it is c[i-1, w]. So whether or not to take item i is decided simply by comparing these two cases and keeping whichever value is better.

One more case must be considered in this relationship: if the weight of item i by itself exceeds the current limit w, we cannot choose it at all, and c[i, w] is simply the same as c[i-1, w].

In addition, there are the initial conditions. Obviously c[0, w] = 0 no matter what w is, because it means we have not chosen any items. Likewise c[i, 0] = 0: when the total weight limit is 0, the value is necessarily 0.

So, based on the three parts discussed above, we can get a recursive formula like this:

    c[i, w] = 0                                      if i = 0 or w = 0
    c[i, w] = c[i-1, w]                              if wi > w
    c[i, w] = max(c[i-1, w - wi] + vi, c[i-1, w])    otherwise

With this relationship we can consider the implementation. In the recurrence, each result depends only on earlier results, so we can start from the base cases and fill in the answers step by step from front to back, bottom-up.

Let's look at the specifics of the implementation. The items have values and weights, so we define two arrays, int value[] and int weight[], where value[i] is the value of item i and weight[i] is its weight. To represent c[i, w] we use a matrix int table[i][w], where the maximum i is the number of items and the maximum w is the weight limit. By the recurrence, table[i][0] and table[0][w] are all 0, and the final answer we want is table[n][W], so the matrix we actually create has dimensions (n + 1) x (W + 1).

Specific data: number of items n = 5, item weights weight[] = {0, 2, 2, 6, 5, 4}, item values value[] = {0, 6, 3, 5, 4, 6}, capacity W = 10 (index 0 is a placeholder so that item i lives at index i).

**Code Implementation**


#include <iostream>
#include <algorithm>
using namespace std;

int weight[] = {0, 2, 2, 6, 5, 4};   // weight of item i (index 0 unused)
int value[]  = {0, 6, 3, 5, 4, 6};   // value of item i
int table[6][11];                    // table[i][j]: best value using the first i items with weight limit j

int main(void)
{
    // Row 0 and column 0 are already 0 (globals are zero-initialized),
    // which covers the base cases, so no special handling of the first row is needed.
    for (int i = 1; i <= 5; i++)
    {
        for (int j = 1; j <= 10; j++)
        {
            if (weight[i] > j)
                table[i][j] = table[i-1][j];                          // item i does not fit
            else
                table[i][j] = max(table[i-1][j-weight[i]] + value[i], // take item i
                                  table[i-1][j]);                     // or skip it
        }
    }
    cout << "maxvalue = " << table[5][10] << endl;
    return 0;
}

**2. The maximum subarray sum problem**

**Problem Description**

Given a one-dimensional array of n integers (a[0], a[1], ..., a[n-1]), find the maximum sum over all of its subarrays. Note: a subarray must be contiguous and non-empty; the exact location of the subarray need not be returned; and the array may contain positive, negative, and zero integers.

For example:

int a[5] = { -1,2,3,-4,2};

The maximum-sum subarray is {2, 3}, i.e. the answer is 5;

Brute-force method:


int maxsubstringsum (int *a, int n)
{
    int maxsum = a[0];
    int sum = 0;
    for (int i = 0; i < n; i++)        // try every starting index i
    {
        sum = 0;
        for (int j = i; j < n; j++)    // extend to every ending index j
        {
            sum += a[j];
            maxsum = max(maxsum, sum);
        }
    }
    return maxsum;
}

Brute force is the most direct approach, but of course also time-consuming: the complexity is O(n^2).

**Further Analysis**

The exhaustive method is easy to understand, but its time complexity is large, so let's try to optimize. Consider the first element a[0] and the maximum subarray (a[i], ..., a[j]). There are three possible relationships:

1) i = j = 0: the maximum subarray is a[0] itself;

2) j > i = 0: the maximum subarray begins with a[0];

3) i > 0: a[0] is not part of the maximum subarray.

These three cases show that the large problem (an array of n elements) can be converted into a smaller problem (an array of n-1 elements). Suppose we already know the maximum subarray sum of (a[1], ..., a[n-1]), call it maxsum[1], and also know the maximum sum among subarrays of (a[1], ..., a[n-1]) that begin with a[1], call it tempmaxsum[1]. Then the maximum subarray sum of (a[0], ..., a[n-1]) is maxsum[0] = max{a[0], a[0] + tempmaxsum[1], maxsum[1]}.

**Code Implementation**


int maxsubstringsum (int *a, int n)
{
    int maxsum = a[0];
    int tempmaxsum = a[0];    // maximum sum of a subarray ending at the current index
    for (int i = 1; i < n; i++)
    {
        tempmaxsum = max(a[i], tempmaxsum + a[i]);   // extend the previous subarray or start anew at a[i]
        maxsum = max(maxsum, tempmaxsum);
    }
    return maxsum;
}

To be Continued ...