Dynamic Programming Algorithm
Definition
Dynamic programming is a method used in mathematics, computer science, and economics to solve complex problems by breaking them down into simpler sub-problems. It applies to problems that exhibit overlapping sub-problems and optimal substructure, and it often takes far less time than a naïve solution.
The basic idea behind dynamic programming is simple. To solve a given problem, we solve its different parts (the sub-problems) and then combine the sub-problem solutions to obtain a solution to the original problem. Many of these sub-problems are identical, so dynamic programming solves each sub-problem only once, reducing the amount of computation: once the solution to a sub-problem has been computed, it is stored in a table, so that the next time the same sub-problem arises its solution can simply be looked up. This practice is particularly useful when the number of repeated sub-problems grows exponentially with the size of the input.
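As a minimal illustration of storing sub-problem solutions in a table (this example is not from the original text), consider the classic Fibonacci recursion: the naïve version recomputes the same sub-problems exponentially often, while a memoized version solves each one once.

```java
import java.util.HashMap;
import java.util.Map;

public class FibMemo {
    // Table of already-solved sub-problems: once fib(n) is computed,
    // it is stored here and looked up instead of being recomputed.
    private static final Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) {
            return n;
        }
        Long cached = memo.get(n);
        if (cached != null) {
            return cached; // sub-problem already solved: read it from the table
        }
        long result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        // Runs in O(n) time; the naive recursion would take O(2^n).
        System.out.println(fib(50));
    }
}
```

With the table, each value of `fib` is computed exactly once, which is precisely the "compute once, then look up" behavior described above.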
The classic example of dynamic programming is the knapsack problem.
Steps
1. Optimal substructure. A problem has the optimal substructure property (i.e. satisfies the principle of optimality) if an optimal solution to the problem contains optimal solutions to its sub-problems. The optimal substructure property provides an important clue that a problem may be solvable by a dynamic programming algorithm.
2. Overlapping sub-problems. When a problem is solved top-down by a recursive algorithm, the sub-problems encountered are not always new: some sub-problems are computed repeatedly. A dynamic programming algorithm exploits this overlap by computing each sub-problem only once and saving its result in a table; when the same sub-problem is needed again, the algorithm simply looks up the stored result, achieving much higher efficiency.
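The two properties above can be seen together in a bottom-up sketch of the 0/1 knapsack problem (this example is an addition for illustration; the item weights, values, and capacity below are made up). Each table entry is the optimal value for a smaller sub-problem, and each sub-problem is solved exactly once.

```java
public class KnapsackDP {
    /**
     * Bottom-up 0/1 knapsack: dp[c] holds the best total value achievable
     * with capacity c using the items processed so far.
     * Optimal substructure: dp[c] is built from optimal answers to smaller
     * capacities. Overlap: each (item prefix, capacity) pair is solved once.
     */
    static int maxValue(int[] weight, int[] value, int capacity) {
        int[] dp = new int[capacity + 1];
        for (int i = 0; i < weight.length; i++) {
            // Iterate capacity downward so each item is used at most once.
            for (int c = capacity; c >= weight[i]; c--) {
                dp[c] = Math.max(dp[c], dp[c - weight[i]] + value[i]);
            }
        }
        return dp[capacity];
    }

    public static void main(String[] args) {
        int[] w = {2, 3, 4}; // hypothetical item weights
        int[] v = {3, 4, 5}; // hypothetical item values
        // Best choice at capacity 5 is the first two items (weight 5, value 7).
        System.out.println(maxValue(w, v, 5));
    }
}
```

This tabular version contrasts with the recursive search shown later in this article, which looks for a combination hitting an exact target rather than maximizing value.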
Knapsack Algorithm
Simple knapsack problem
Each item is available only once; the goal is to find a combination of items whose values sum to exactly 20.
Code
import java.util.ArrayList;
import java.util.List;

public class Knapsack {

    public static void main(final String... args) {
        int[] arr = {11, 8, 7, 5, 3};
        Knapsack k = new Knapsack();
        System.out.println(k.knapsack(arr, 0, 20, new ArrayList<>()));
    }

    /**
     * Searches for a combination of items whose values sum to the target.
     *
     * @param arr    the values of all items available for the knapsack
     * @param start  index of the item currently being considered
     * @param left   the remaining capacity still to be filled
     * @param chosen the items selected so far
     * @return true if a combination was found (it is printed as a side effect)
     */
    public boolean knapsack(int[] arr, int start, int left, List<Integer> chosen) {
        if (left == 0) {
            // Found a combination: print the chosen items.
            for (int v : chosen) {
                System.out.print(v + "\t");
            }
            System.out.println();
            return true;
        }
        if (start == arr.length || left < 0) {
            return false;
        }
        // Case 1: include the current item.
        chosen.add(arr[start]);
        if (knapsack(arr, start + 1, left - arr[start], chosen)) {
            return true;
        }
        chosen.remove(chosen.size() - 1); // backtrack
        // Case 2: skip the current item.
        return knapsack(arr, start + 1, left, chosen);
    }
}
(Code adapted from an online excerpt.)