Dynamic programming is an algorithmic paradigm that solves a complex problem by breaking it into sub-problems and storing the results of those sub-problems, so that the same sub-problem is never computed twice. A problem is a good candidate for dynamic programming if it has the following two properties:
1) Overlapping Subproblems
2) Optimal Substructure
1) Overlapping Subproblems
Like Divide and Conquer, dynamic programming combines solutions to sub-problems. Dynamic programming is mainly useful when solutions to the same sub-problems are needed again and again. Computed solutions to sub-problems are stored in a table so that they do not have to be recomputed. When a problem has no overlapping sub-problems, the table of stored solutions is pointless, because no stored result would ever be needed again; in that situation dynamic programming does not apply. For example, Binary Search has no overlapping sub-problems. By contrast, the following recursive program for Fibonacci numbers computes many of the same sub-problems repeatedly.
/* Simple recursive program for Fibonacci numbers */
int fib(int n)
{
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}
Consider the recursion tree produced by executing fib(5).
We can observe that fib(3) is computed twice. Instead of computing it again, we can store the result of fib(3) the first time and reuse the stored value the next time it is needed. There are two different ways to store the solutions of sub-problems:
1) Memoization (Top Down)
2) Tabulation (Bottom Up)
1) Memoization (Top Down)
The memoized program for a problem is similar to the recursive version, with one small modification: it looks into a lookup table before computing the solution of a sub-problem. We initialize the lookup table with NIL values. Whenever we need the solution to a sub-problem, we first look into the table. If the precomputed value is there, we return it directly; otherwise we compute the value and store the result in the lookup table so that it can be reused later.
Following is a memoized version of the program for the nth Fibonacci number:
/* Memoized version for nth Fibonacci number */
#include <stdio.h>

#define NIL -1
#define MAX 100  /* table size; the original value was lost, 100 is assumed */

int lookup[MAX];

/* Function to initialize NIL values in lookup table */
void _initialize()
{
    int i;
    for (i = 0; i < MAX; i++)
        lookup[i] = NIL;
}

/* Function for nth Fibonacci number */
int fib(int n)
{
    if (lookup[n] == NIL) {
        if (n <= 1)
            lookup[n] = n;
        else
            lookup[n] = fib(n - 1) + fib(n - 2);
    }
    return lookup[n];
}

int main()
{
    int n = 9;  /* the original value was garbled; 9 matches the tabulated example below */
    _initialize();
    printf("Fibonacci number is %d", fib(n));
    getchar();
    return 0;
}
2) Tabulation (Bottom Up)
The tabulated program for a problem builds a lookup table from the bottom up and returns the last entry of the table.
Here is the tabulated version of the same Fibonacci program:
/* Tabulated version */
#include <stdio.h>

int fib(int n)
{
    int f[n + 1];
    int i;
    f[0] = 0;
    f[1] = 1;
    for (i = 2; i <= n; i++)
        f[i] = f[i - 1] + f[i - 2];
    return f[n];
}

int main()
{
    int n = 9;
    printf("Fibonacci number is %d", fib(n));
    getchar();
    return 0;
}
Both memoization and tabulation store the solutions of sub-problems. In the memoized version, entries of the lookup table are filled only on demand, while in the tabulated version all entries are filled one by one, starting from the first. Unlike the tabulated version, the memoized version does not necessarily fill every entry of the lookup table. For example, a memoized solution to the LCS problem does not need to fill all entries of its table.
Dynamic Programming | Set 1 (Overlapping Subproblems Property)