Introduction to Algorithms learning notes: Dynamic Programming


This article is a reproduction; the original is at http://www.cppblog.com/Fox/archive/2008/05/07/Dynamic_programming.html

I first encountered dynamic programming while studying non-numerical algorithms. What follows is a translation of the Wikipedia article on dynamic programming; the figures are also from Wikipedia. If anything is wrong, please point it out.

The article is heavy on terminology, so I have added a small number of notes, shown in bold italics.

Shortcomings in this article will be corrected as they are found. Chapter 15 of MIT's Introduction to Algorithms covers dynamic programming in depth.

_____________________________________________________________

Dynamic Programming

In mathematics and computer science, dynamic programming is used to solve problems that can be decomposed into overlapping subproblems (think of the recursive factorial) and that exhibit optimal substructure (think of shortest-path algorithms), as described below. On such problems, dynamic programming takes much less time than naive approaches.

In the 1940s, Richard Bellman first used the term dynamic programming to describe the process of finding an optimal sequence of decisions. In 1953 he gave the term its modern meaning, and the field was adopted by the IEEE as part of systems analysis and engineering. In honor of Bellman's contribution, the core equation of dynamic programming is called the Bellman equation, which restates an optimization problem in recursive form.

In the term dynamic programming, "programming" has nothing to do with computer programming; it comes from "mathematical programming", also known as optimization. A program here is an optimal plan of activities; preparing a schedule for an exhibition, for example, can be called programming. In this sense, programming means finding a feasible plan of activities.

  • Overview

Figure 1. Finding the shortest path using optimal substructure: a straight line represents a single edge; a wavy line represents the shortest path between its two endpoints (other nodes on that path are not shown); the bold line is the overall shortest path from start to goal.

It is not hard to see that the shortest path from start to goal is determined by the shortest paths from the neighbors of start to goal, plus the cost of reaching each neighbor from start.

Optimal substructure means that optimal solutions of subproblems can be used to find the optimal solution of the whole problem. For example, to find the shortest path from a vertex to a goal vertex in a graph, first compute the shortest path to the goal from every neighbor of that vertex, then pick the best overall path, as shown in Figure 1.

In general, a problem with optimal substructure is solved in three steps:

a) break the problem into smaller subproblems;

b) solve these subproblems optimally, applying the three steps recursively;

c) use the optimal solutions of the subproblems to construct an optimal solution of the original problem.

The subproblems are themselves solved by dividing them into still smaller subproblems, until reaching cases that can be solved in constant time.

Figure 2. The subproblems of the Fibonacci sequence: a directed acyclic graph (DAG), rather than a tree, represents the decomposition into overlapping subproblems.

Why a DAG instead of a tree? Because a tree would imply a great deal of repeated computation. The following explains why.

That a problem decomposes into overlapping subproblems means different larger problems share the same subproblems. For example, in the Fibonacci sequence, F3 = F1 + F2 and F4 = F2 + F3 both use the value F2. Since computing F5 requires both F3 and F4, a naive computation of F5 may compute F2 twice or more. This applies to all overlapping subproblems: a naive approach wastes time recomputing optimal solutions of subproblems it has already solved.

To avoid such recomputation, we can save the solutions of subproblems already obtained and reuse them whenever the same subproblem recurs. This technique is called memoization, not "memorization" (although that word would also fit: compute once, store, and never compute again). When we are sure a solution will no longer be needed, we can discard it to save space; in some cases we can even precompute solutions to subproblems we know will be needed later.

In summary, dynamic programming makes use of:

1) overlapping subproblems

2) optimal substructure

3) memoization

Dynamic programming usually takes one of two approaches:

Top-down: break the problem into subproblems, solve them, and remember the results in case they are needed again. This combines recursion with memoization.

Bottom-up: solve all the subproblems that might be needed first, then use them to build up solutions to larger problems. This approach is slightly better in stack space and number of function calls, but it is sometimes not intuitive to figure out all the subproblems a given problem needs.

Some programming languages can automatically memoize the result of a function call with a particular set of arguments. This improves on call-by-name evaluation and is related to call-by-need (recall the parameter-passing conventions; put simply, call-by-name may re-evaluate an argument on every use). Some languages make this possible with little effort (e.g. Scheme, Common Lisp, and Perl), while others require special extensions (e.g. C++, which passes arguments by value or by reference and has no automatic caching mechanism, so memoization must be implemented by hand; see automated memoization in C++ for an example). In any case, this is only possible for a referentially transparent function (referential transparency means that replacing an expression, or a function call, by its value has no effect on the program's result).
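As a note from me (not in the original article): Python is one language whose standard library provides this kind of automatic memoization, via the functools.lru_cache decorator. A minimal sketch, applied to a referentially transparent function:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # remember every (argument -> result) pair ever computed
def fib(n):
    # each distinct n is computed only once; later calls hit the cache
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # fast: linear in n, instead of exponential
```

Removing the decorator makes the same call take exponentially many recursive steps, which is exactly the repeated-subproblem waste described above.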

  • Examples

1. Fibonacci sequence

To find the nth number of the Fibonacci sequence directly from its mathematical definition:

function fib(n)
    if n = 0
        return 0
    else if n = 1
        return 1
    return fib(n - 1) + fib(n - 2)

If we call fib(5), we produce a call tree that computes the same values many times over:

  1. fib(5)
  2. fib(4) + fib(3)
  3. (fib(3) + fib(2)) + (fib(2) + fib(1))
  4. ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
  5. (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))

In particular, fib(2) is computed three times from scratch. In larger examples, many more values are recomputed, and the whole computation takes exponential time.

Now suppose we have a simple map object M that associates each already-computed value of fib with its result, and we modify the function above to use and update M. The new function requires only O(n) time rather than exponential time:

var M := map(0 → 0, 1 → 1)
function fib(n)
    if map M does not contain key n
        M[n] := fib(n - 1) + fib(n - 2)
    return M[n]
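The memoized pseudocode above translates almost line for line into Python (a sketch of mine, not in the original; the dict M plays the role of the map):

```python
M = {0: 0, 1: 1}  # the map of already-computed values

def fib(n):
    if n not in M:                      # only compute a value once
        M[n] = fib(n - 1) + fib(n - 2)  # recursion + memoization: top-down
    return M[n]

print(fib(50))  # O(n) calls instead of exponentially many
```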

This technique of saving already-computed values is called memoization. It is the top-down approach: we first break the problem into subproblems, then compute and store the values.

In the bottom-up approach, we compute the smaller values of fib first and build the larger values from them. This method also takes only linear (O(n)) time, since it contains a loop that repeats n − 1 times. Moreover, it needs only constant (O(1)) space, whereas the top-down approach requires O(n) space to store the map.

function fib(n)
    var previousFib := 0, currentFib := 1
    if n = 0
        return 0
    else if n = 1
        return 1
    repeat n - 1 times
        var newFib := previousFib + currentFib
        previousFib := currentFib
        currentFib := newFib
    return currentFib
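The same bottom-up pseudocode as a Python sketch (my addition, not in the original):

```python
def fib(n):
    if n == 0:
        return 0
    prev, cur = 0, 1          # fib(0), fib(1)
    for _ in range(n - 1):    # n - 1 iterations: O(n) time, O(1) space
        prev, cur = cur, prev + cur
    return cur

print(fib(10))
```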

In both versions, we compute fib(2) only once and then use it to compute both fib(3) and fib(4), instead of recomputing it every time.

2. A balanced 0-1 Matrix

Consider assigning values to an n × n matrix: only 0s and 1s may be used, n is even, and each row and each column must contain exactly n/2 zeros and n/2 ones. For example, when n = 4, two possible solutions are:

+---------+   +---------+
| 0 1 0 1 |   | 0 0 1 1 |
| 1 0 1 0 |   | 0 0 1 1 |
| 0 1 0 1 |   | 1 1 0 0 |
| 1 0 1 0 |   | 1 1 0 0 |
+---------+   +---------+

Question: for a given n, how many different assignments are there?

There are at least three possible algorithms for this problem: brute force, backtracking, and dynamic programming. Brute force lists all assignments and checks each one against the balance condition. Since there are C(n, n/2)^n candidates (within a single row, the number of arrangements of n/2 zeros and n/2 ones is C(n, n/2): choose n/2 of the n positions for the zeros, and the rest are ones), this approach already becomes impractical around n = 6. Backtracking first fixes some matrix elements to 0 or 1, then fills in the remaining elements of each row and column so that every row and column contains exactly n/2 zeros and n/2 ones. Backtracking is cleverer than brute force, but it still has to visit every solution to count them, and for n = 8 the number of solutions is already 116963796250. Dynamic programming counts the solutions without visiting them all (that is, after decomposing the problem it effectively avoids recomputing the many repeated subproblems).

Solving this problem with dynamic programming turns out to be surprisingly simple. Consider a k × n submatrix (1 ≤ k ≤ n) whose every row contains exactly n/2 zeros and n/2 ones. The function f maps each such state to a vector of n integer pairs, where the pair for each column records how many zeros and how many ones still have to be placed in that column. The problem becomes computing f((n/2, n/2), (n/2, n/2), ..., (n/2, n/2)) with n arguments (a vector of n elements). Its subproblems are constructed as follows:

1) the top row (row k) has C(n, n/2) possible assignments;

2) according to the value (0 or 1) placed in each column of the top row, subtract 1 from the corresponding element of that column's integer pair;

3) if any element of any pair becomes negative, the assignment is invalid and contributes no solutions;

4) otherwise, the top row of the k × n submatrix has been assigned; set k := k − 1 and count the solutions of the remaining (k − 1) × n submatrix;

5) the base case is a 1 × n submatrix, whose number of solutions is 0 or 1 depending on whether its vector consists of exactly n/2 pairs (0, 1) and n/2 pairs (1, 0).
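The steps above can be sketched in Python (my addition, not in the original; the function and variable names are my own). f is memoized on the pair (column-state vector, remaining rows); note that this sketch bottoms out at k = 0 rather than the 1 × n base case of step 5, which is equivalent:

```python
from functools import lru_cache
from itertools import combinations

def count_balanced(n):
    """Count n x n 0-1 matrices with n/2 zeros and n/2 ones in every row and column."""
    @lru_cache(maxsize=None)  # memoize f: the whole point of the DP
    def f(state, k):
        # state[j] = (zeros still needed in column j, ones still needed in column j)
        if k == 0:
            # valid only if every column is exactly used up
            return 1 if all(z == 0 and o == 0 for z, o in state) else 0
        total = 0
        # step 1: the top row places zeros in some n/2 of the n columns
        for zero_cols in combinations(range(n), n // 2):
            new_state, ok = [], True
            for j, (z, o) in enumerate(state):
                # step 2: subtract 1 from the chosen element of each column's pair
                z, o = (z - 1, o) if j in zero_cols else (z, o - 1)
                if z < 0 or o < 0:  # step 3: a negative count means invalid
                    ok = False
                    break
                new_state.append((z, o))
            if ok:
                # step 4: count the remaining (k-1) x n submatrix
                total += f(tuple(new_state), k - 1)
        return total

    half = n // 2
    return f(tuple((half, half) for _ in range(n)), n)

print(count_balanced(4))  # 90, matching the sequence below
```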

For example, for the two solutions shown above, the sequences of vectors are:

k = 4:  ((2,2) (2,2) (2,2) (2,2))     ((2,2) (2,2) (2,2) (2,2))
row:      0 1 0 1                       0 0 1 1
k = 3:  ((1,2) (2,1) (1,2) (2,1))     ((1,2) (1,2) (2,1) (2,1))
row:      1 0 1 0                       0 0 1 1
k = 2:  ((1,1) (1,1) (1,1) (1,1))     ((0,2) (0,2) (2,0) (2,0))
row:      0 1 0 1                       1 1 0 0
k = 1:  ((0,1) (1,0) (0,1) (1,0))     ((0,1) (0,1) (1,0) (1,0))
row:      1 0 1 0                       1 1 0 0
k = 0:  ((0,0) (0,0) (0,0) (0,0))     ((0,0) (0,0) (0,0) (0,0))

The point of dynamic programming is to avoid computing the same f more than once. Furthermore, although the highlighted states of the two solutions have different vectors, the value of f is the same for both. Think about why (hint: f cannot depend on the order of the pairs in the vector, only on how many columns carry each pair).

The number of solutions to this problem (sequence A058527 in OEIS) is 1, 2, 90, 297200, 116963796250, 6736218287430460752, ...

The external links below include a Perl source implementation, as well as MAPLE and C implementations of the dynamic programming approach.

3. Checkerboard

Consider an n × n checkerboard and a cost function c(i, j) that returns the cost associated with square (i, j). Take a 5 × 5 board as an example:

5 | 6 7 4 7 8
4 | 7 6 1 1 4
3 | 3 5 7 8 2
2 | 2 6 7 0 2
1 | 7 3 5 6 1
--+-----------
  | 1 2 3 4 5

We can see that c(1, 3) = 5.

Starting from any square on the first rank (i.e., row 1), we want to find the shortest path (minimizing the sum of the costs of all visited squares) to any square on the last rank, given that a piece may only move one square straight ahead or diagonally ahead to the left or right.

5 |
4 |
3 |
2 |   x x x
1 |     o
--+-----------
  | 1 2 3 4 5

This problem exhibits optimal substructure: the solution of the whole problem depends on solutions of subproblems. Define a function q(i, j) as the minimum cost of reaching square (i, j).

If we can compute q(i, j) for all n squares of the last rank, taking their minimum gives the shortest path.

q(i, j) equals the minimum of the costs of reaching the three squares below square (i, j) (namely (i − 1, j − 1), (i − 1, j), and (i − 1, j + 1)), plus c(i, j). For example:

5 |
4 |     A
3 |   B C D
2 |
1 |
--+-----------
  | 1 2 3 4 5

q(A) = min(q(B), q(C), q(D)) + c(A)

Define Q (I,J:

|-INF.J<1 or
J> N
Q (I,J) =-+-C (I,J) I = 1
|-Min (Q (I-1,J-1), Q (I-1,J), Q (I-1,J+ 1) + C (I,J) Otherwise.

The first line of the equation ensures that the recursion terminates (at the board's edges, the out-of-range squares cost infinity and so never contribute to the minimum). The second line gives the values on the first rank, the starting point of the computation. The recursion in the third line is the heart of the algorithm and matches the A, B, C, D example above. From this definition we can compute q(i, j) directly. In the pseudocode below, n is the size of the board, c(i, j) is the cost function, and min() returns the minimum of a set of values:

function minCost(i, j)
    if j < 1 or j > n
        return infinity
    else if i = 1
        return c(i, j)
    else
        return min(minCost(i-1, j-1), minCost(i-1, j), minCost(i-1, j+1)) + c(i, j)

Note that minCost computes only the path cost, not the actual path. Like the naive Fibonacci function, it is very slow because it recomputes the same shortest paths over and over. Computing bottom-up instead, filling a two-dimensional array q[i, j] rather than calling a function, is much faster: looking up a stored value is far cheaper than recomputing it.

We also want the actual path. The path problem can be solved with a predecessor array p[i, j] that records, for each square, which direction the path came from. The code is as follows:

function computeShortestPathArrays()
    for x from 1 to n
        q[1, x] := c(1, x)
    for y from 1 to n
        q[y, 0] := infinity
        q[y, n + 1] := infinity
    for y from 2 to n
        for x from 1 to n
            m := min(q[y-1, x-1], q[y-1, x], q[y-1, x+1])
            q[y, x] := m + c(y, x)
            if m = q[y-1, x-1]
                p[y, x] := -1
            else if m = q[y-1, x]
                p[y, x] := 0
            else
                p[y, x] := 1

Finding the minimum and printing the path are then straightforward:

function computeShortestPath()
    computeShortestPathArrays()
    minIndex := 1
    min := q[n, 1]
    for i from 2 to n
        if q[n, i] < min
            minIndex := i
            min := q[n, i]
    printPath(n, minIndex)

function printPath(y, x)
    print(x)
    print("<-")
    if y = 2
        print(x + p[y, x])
    else
        printPath(y - 1, x + p[y, x])
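Putting the pieces together, here is a runnable Python sketch of the bottom-up approach (my addition, not in the original; names are my own). It fills q rank by rank, records each square's predecessor direction in p, and walks the predecessors back down to recover the cheapest path on the 5 × 5 board above:

```python
import math

def shortest_path(c):
    """c[i][j] is the cost of square (i+1, j+1); c[0] is the first (bottom) rank."""
    n = len(c)
    q = [[0] * n for _ in range(n)]   # q[i][j]: min cost to reach square (i+1, j+1)
    p = [[0] * n for _ in range(n)]   # predecessor column offset: -1, 0, or +1
    q[0] = list(c[0])                 # first rank: cost of the square itself
    for i in range(1, n):
        for j in range(n):
            best, off = math.inf, 0
            for d in (-1, 0, 1):      # the three squares below (i, j)
                k = j + d
                if 0 <= k < n and q[i - 1][k] < best:
                    best, off = q[i - 1][k], d
            q[i][j] = best + c[i][j]
            p[i][j] = off
    # cheapest square on the last rank, then walk predecessors back down
    j = min(range(n), key=lambda col: q[n - 1][col])
    cost = q[n - 1][j]
    path = [j + 1]
    for i in range(n - 1, 0, -1):
        j += p[i][j]
        path.append(j + 1)
    path.reverse()                    # column numbers from rank 1 up to rank n
    return cost, path

board = [
    [7, 3, 5, 6, 1],   # rank 1
    [2, 6, 7, 0, 2],   # rank 2
    [3, 5, 7, 8, 2],   # rank 3
    [7, 6, 1, 1, 4],   # rank 4
    [6, 7, 4, 7, 8],   # rank 5
]
print(shortest_path(board))
```

On this board the cheapest path costs 8, entering at column 5 of the first rank.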

4. Sequence alignment

Sequence alignment is an important application of dynamic programming. Typically, one sequence is transformed into another using edit operations (replacement, insertion, and deletion of an element). Each operation has a cost, and the goal is to find the sequence of edits with the lowest total cost.

It is natural to think of recursion: the optimal edit from A to B is achieved by one of the following:

insert the first character of B, then optimally align A with the rest of B;

delete the first character of A, then optimally align the rest of A with B;

replace the first character of A by the first character of B, then optimally align the rest of A with the rest of B.

The partial alignments can be tabulated in a matrix, where cell (i, j) holds the cost of an optimal alignment of A[1..i] with B[1..j]. Each cell (i, j) can be computed from cells (i − 1, j − 1), (i − 1, j), and (i, j − 1). For different sequence-alignment algorithms, see Smith-Waterman and Needleman-Wunsch.
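With unit costs for all three operations, this recurrence is the classic Levenshtein edit distance. A Python sketch (my addition, not in the original):

```python
def edit_distance(a, b):
    """d[i][j]: minimum cost of turning a[:i] into b[:j], with unit edit costs."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i            # turn a[:i] into "" by i deletions
    for j in range(n + 1):
        d[0][j] = j            # turn "" into b[:j] by j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete a[i-1]
                          d[i][j - 1] + 1,        # insert b[j-1]
                          d[i - 1][j - 1] + sub)  # substitute (or free match)
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```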

I am not very familiar with sequence alignment and cannot say much more; readers who know the topic are welcome to add an introduction.

  • Algorithms that use dynamic programming

1) many string algorithms, such as longest common subsequence, longest increasing subsequence, and longest common substring;

2) applying dynamic programming to a tree decomposition of a graph, which efficiently solves many algorithmic problems on graphs of bounded treewidth;

3) the Cocke-Younger-Kasami (CYK) algorithm, which decides whether a given string can be generated by a given context-free grammar;

4) the use of transposition and refutation tables in computer chess;

5) the Viterbi algorithm (used with hidden Markov models);

6) the Earley algorithm (a type of chart parser);

7) the Needleman-Wunsch algorithm and other algorithms used in bioinformatics, including sequence alignment, structural alignment, and RNA structure prediction;

8) Levenshtein distance (edit distance);

9) shortest-path algorithms;

10) optimizing the multiplication order of a chain of matrices;

11) pseudo-polynomial-time algorithms for the subset sum, knapsack, and partition problems;

12) the dynamic time warping algorithm for computing the global distance between two time series;

13) the Selinger (also known as System R) algorithm for relational database query optimization;

14) De Boor's algorithm for evaluating B-spline curves;

15) the Duckworth-Lewis method for resolving interrupted cricket matches;

16) the value iteration method for solving Markov decision processes;

17) some image edge-selection methods, such as the magnetic lasso tool in Photoshop;

18) interval scheduling;

19) word wrapping;

20) the traveling salesman problem (also known as the postman or hawker problem);

21) segmented least squares;

22) music information retrieval and tracking.

I have never used most of these algorithms, and some of the terms may even be mistranslated. Since this article is mainly an introduction to dynamic programming, these were translated in haste and without verification.

  • Related

1) Bellman equation

2) Markov decision process

3) greedy algorithm

  • Reference
  • Adda, Jerome, and Cooper, Russell, 2003. Dynamic Economics. MIT Press. An accessible introduction to dynamic programming in economics; the link contains sample programs.
  • Richard Bellman, 1957. Dynamic Programming. Princeton University Press. Dover paperback edition (2003), ISBN 0486428095.
  • Bertsekas, D. P., 2000. Dynamic Programming and Optimal Control, Vols. 1 & 2, 2nd ed. Athena Scientific. ISBN 1-886529-09-4.
  • Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, 2001. Introduction to Algorithms, 2nd ed. MIT Press & McGraw-Hill. ISBN 0-262-03293-7. Especially pp. 323-369.
  • Giegerich, R., Meyer, C., and Steffen, P., 2004. "A Discipline of Dynamic Programming over Sequence Data." Science of Computer Programming 51: 215-263.
  • Nancy Stokey, Robert E. Lucas, and Edward Prescott, 1989. Recursive Methods in Economic Dynamics. Harvard Univ. Press.
  • S. P. Meyn, 2007. Control Techniques for Complex Networks. Cambridge University Press.

    • External links
    • Dyna, a declarative programming language for dynamic programming algorithms
    • Wagner, David B., 1995. "Dynamic Programming." An introductory article on dynamic programming in Mathematica.
    • Ohio State University: CIS 680: class notes on dynamic programming, by Eitan M. Gurari
    • A tutorial on dynamic programming
    • MIT course on algorithms; includes a video lecture on DP along with lecture notes (see lecture 15)
    • More DP notes
    • King, Ian, 2002 (1987). "A Simple Introduction to Dynamic Programming in Macroeconomic Models." An introduction to dynamic programming as an important tool in economic theory.
    • Dynamic Programming: From Novice to Advanced, a TopCoder.com article by Dumitru on dynamic programming
    • Algebraic Dynamic Programming, a formalized framework for dynamic programming, including an entry-level course on DP, University of Bielefeld
    • Dreyfus, Stuart. "Richard Bellman on the Birth of Dynamic Programming."
    • Dynamic programming tutorial
    • An introduction to dynamic programming

    _____________________________________________________________

    This article is just a translation; later I will write about applying dynamic programming to concrete problems.
