Common Algorithms: Dynamic Programming


 

Complex problems often cannot simply be broken down into a few subproblems; they decompose into a whole series of subproblems. If we simply split a large problem into subproblems and combine the subproblem solutions into a solution of the large problem, the same subproblems are solved over and over, and the time needed grows exponentially with the size of the problem.

To avoid repeatedly solving the same subproblems, an array (a table) is introduced: all subproblems are solved and their answers stored in the array, whether or not they turn out to be needed for the final solution. This is the basic technique of dynamic programming. The following example shows how the dynamic programming method is used.
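As a minimal illustration of this table idea (the Fibonacci numbers are not part of the example that follows; this is only a hedged sketch), the C fragment below stores the answer to every subproblem in an array so that each one is computed exactly once:

#include <stdio.h>

#define MAXN 50

/* Sketch only: f[i] holds the answer to subproblem i (the i-th Fibonacci number),
   filled from the bottom up so that no subproblem is ever recomputed. */
long long fib_table(int n)
{
    long long f[MAXN + 1];
    int i;
    f[0] = 0;
    f[1] = 1;
    for (i = 2; i <= n; i++)
        f[i] = f[i - 1] + f[i - 2];   /* each subproblem is solved exactly once */
    return f[n];
}

int main(void)
{
    printf("fib(40) = %lld\n", fib_table(40));
    return 0;
}

Without the array, the naive recursion fib(n) = fib(n-1) + fib(n-2) recomputes the same values an exponential number of times.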

[Problem] Compute the longest common subsequence of two character sequences.

Problem description: A subsequence of a character sequence is a sequence obtained by deleting any number of characters (possibly none, and not necessarily consecutive ones) from the given sequence. Formally, given a sequence X = "x0, x1, ..., x_{m-1}", a sequence Y = "y0, y1, ..., y_{k-1}" is a subsequence of X if there is a strictly increasing index sequence <i0, i1, ..., i_{k-1}> of X such that y_j = x_{i_j} for every j = 0, 1, ..., k-1. For example, for X = "abcbdab", Y = "bcdb" is a subsequence of X.

Given two sequences A and B, a sequence Z is a common subsequence of A and B if Z is a subsequence of both A and B. The problem asks for a longest common subsequence of the two given sequences A and B.

A brute-force approach would enumerate all subsequences of A, check each one against B, keep track of the common subsequences found, and finally report the longest one. Since A has exponentially many subsequences, this method is far too slow to be practical.

Consider how to decompose the longest-common-subsequence problem into subproblems. Let A = "a0, a1, ..., a_{m-1}", B = "b0, b1, ..., b_{n-1}", and let Z = "z0, z1, ..., z_{k-1}" be a longest common subsequence of A and B. It is not hard to prove the following properties:

(1) If a_{m-1} = b_{n-1}, then z_{k-1} = a_{m-1} = b_{n-1}, and "z0, z1, ..., z_{k-2}" is a longest common subsequence of "a0, a1, ..., a_{m-2}" and "b0, b1, ..., b_{n-2}";

(2) If a_{m-1} != b_{n-1} and z_{k-1} != a_{m-1}, then "z0, z1, ..., z_{k-1}" is a longest common subsequence of "a0, a1, ..., a_{m-2}" and "b0, b1, ..., b_{n-1}";

(3) If a_{m-1} != b_{n-1} and z_{k-1} != b_{n-1}, then "z0, z1, ..., z_{k-1}" is a longest common subsequence of "a0, a1, ..., a_{m-1}" and "b0, b1, ..., b_{n-2}".

Thus, when looking for a longest common subsequence of A and B: if a_{m-1} = b_{n-1}, we solve one further subproblem, finding a longest common subsequence of "a0, a1, ..., a_{m-2}" and "b0, b1, ..., b_{n-2}" and appending a_{m-1} to it; if a_{m-1} != b_{n-1}, we solve two subproblems, finding a longest common subsequence of "a0, a1, ..., a_{m-2}" and "b0, b1, ..., b_{n-1}" and a longest common subsequence of "a0, a1, ..., a_{m-1}" and "b0, b1, ..., b_{n-2}", and taking the longer of the two as a longest common subsequence of A and B.

Define c[i][j] as the length of a longest common subsequence of "a0, a1, ..., a_{i-1}" and "b0, b1, ..., b_{j-1}". Then c[i][j] can be computed by the following recursion:

(1) c[i][j] = 0, if i = 0 or j = 0;

(2) c[i][j] = c[i-1][j-1] + 1, if i, j > 0 and a[i-1] = b[j-1];

(3) c[i][j] = max(c[i][j-1], c[i-1][j]), if i, j > 0 and a[i-1] != b[j-1].

From this recursion we can write a function that computes the length of a longest common subsequence of two sequences. Since c[i][j] depends only on c[i-1][j-1], c[i-1][j] and c[i][j-1], we can trace the computation backwards starting from c[m][n] and thereby reconstruct a longest common subsequence. See the program below for details.

#include <stdio.h>
#include <string.h>

#define N 100

char a[N], b[N], str[N];
int c[N][N];                       /* c[i][j]: LCS length of a[0..i-1] and b[0..j-1] */

/* Fill the table c and return the length of a longest common subsequence. */
int lcs_len(char *a, char *b, int c[][N])
{
    int m = strlen(a), n = strlen(b), i, j;

    for (i = 0; i <= m; i++) c[i][0] = 0;
    for (j = 0; j <= n; j++) c[0][j] = 0;
    for (i = 1; i <= m; i++)
        for (j = 1; j <= n; j++)
            if (a[i - 1] == b[j - 1])
                c[i][j] = c[i - 1][j - 1] + 1;
            else if (c[i - 1][j] >= c[i][j - 1])
                c[i][j] = c[i - 1][j];
            else
                c[i][j] = c[i][j - 1];
    return c[m][n];
}

/* Walk back through the table from c[m][n] and build the LCS string in s. */
char *build_lcs(char s[], char *a, char *b)
{
    int k, i = strlen(a), j = strlen(b);

    k = lcs_len(a, b, c);
    s[k] = '\0';
    while (k > 0)
        if (c[i][j] == c[i - 1][j]) i--;           /* a[i-1] is not part of the LCS */
        else if (c[i][j] == c[i][j - 1]) j--;      /* b[j-1] is not part of the LCS */
        else { s[--k] = a[i - 1]; i--; j--; }      /* a[i-1] = b[j-1] belongs to the LCS */
    return s;
}

int main(void)
{
    printf("Enter two strings (shorter than %d characters):\n", N);
    scanf("%s %s", a, b);
    printf("LCS = %s\n", build_lcs(str, a, b));
    return 0;
}

1. Conditions for Dynamic Programming

Every method of thinking has its limits; outside the conditions it requires, it loses its power. Dynamic programming is no exception: a problem solved by dynamic programming must satisfy the principle of optimality and have no aftereffect.

(1) The principle of optimality (optimal substructure)

The principle of optimality can be stated as follows: an optimal policy has the property that, whatever the initial state and initial decisions were, the remaining decisions must form an optimal policy with respect to the state resulting from those earlier decisions. In short, every sub-policy of an optimal policy is itself optimal. A problem satisfying the principle of optimality is also said to have the optimal-substructure property.

 

Figure 2

In Figure 2, if route i followed by route j is an optimal route from A to C, then by the principle of optimality route j must be an optimal route from B to C. This can be proved by contradiction: suppose some other route j' from B to C were better; then going from A to C along i and then j' would be better than going along i and then j, a contradiction. Hence j must be an optimal route from B to C.

The principle of optimality is the foundation of dynamic programming. Without it, a problem cannot be solved by the dynamic programming method. The basic equation of dynamic programming, which is derived from the principle of optimality, is the basis for solving every dynamic programming problem.

(2) No aftereffect (memorylessness)

Once the stages have been arranged in order, then for a given state at some stage, the way the process evolves from that point on does not depend on how the earlier states were reached; earlier stages can influence future decisions only through the current state. In other words, each state is a complete summary of the history so far. This is called the no-aftereffect property.

(3) Overlapping subproblems

The key point of a dynamic programming algorithm is to eliminate redundancy; that is its fundamental purpose. Dynamic programming is essentially a technique that trades space for time: during execution it must store all the states generated along the way, so its space complexity is higher than that of other algorithms. We choose dynamic programming when the space cost is affordable, whereas a plain search algorithm cannot afford the time.

Hence problems suited to dynamic programming have another notable feature: overlapping subproblems. This property is not a necessary condition for applying dynamic programming, but without it a dynamic programming algorithm has no advantage over other algorithms.
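To see the overlap concretely, consider a purely recursive version of the LCS length from the example above; lcs_len_naive is an illustrative name and this sketch is not part of the original program:

/* Hedged sketch: i and j are the lengths of the prefixes a[0..i-1] and b[0..j-1].
   The recursion mirrors the three cases of the c[i][j] formula, but stores nothing. */
int lcs_len_naive(const char *a, int i, const char *b, int j)
{
    if (i == 0 || j == 0)
        return 0;
    if (a[i - 1] == b[j - 1])
        return lcs_len_naive(a, i - 1, b, j - 1) + 1;
    else {
        int up   = lcs_len_naive(a, i - 1, b, j);
        int left = lcs_len_naive(a, i, b, j - 1);
        return up > left ? up : left;
    }
}

The same pair (i, j) is reached along many different call paths, so the running time grows exponentially, whereas the table c[i][j] in the program above solves each pair only once.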

2. Basic Idea of Dynamic Programming

The previous part introduced some of the theoretical basis of dynamic programming. We call dynamic programming with a clear division into stages and explicit state transition equations standard dynamic programming. Standard dynamic programming grew out of the study of multistage decision problems; it has a strict mathematical form and is well suited to theoretical analysis. In practice, however, many problems have no obvious division into stages, and forcing such a division is awkward. In general, as long as the problem can be split into smaller subproblems and an optimal solution of the original problem contains optimal solutions of the subproblems (that is, the problem satisfies the principle of optimality), dynamic programming can be considered.

The essence of dynamic programming is the divide-and-conquer idea combined with the elimination of redundancy. Dynamic programming decomposes a problem instance into smaller, similar subproblems and stores the solutions of the subproblems so that they are never recomputed; in this way it solves optimization problems.

Dynamic programming therefore resembles both the divide-and-conquer method and the greedy method: all of them decompose a problem instance into smaller, similar subproblems and obtain a globally optimal solution by solving subproblems. The greedy method differs in that its current choice may depend on the choices already made, but never on choices still to be made or on the solutions of subproblems; it therefore proceeds top-down, making one greedy choice after another. In the divide-and-conquer method the subproblems are independent (they share no common sub-subproblems), so after solving each subproblem recursively their solutions can be combined bottom-up into a solution of the whole problem. However, if the current choice may depend on the solutions of subproblems, a locally greedy strategy can hardly reach a globally optimal solution; and if the subproblems are not independent, divide and conquer does a great deal of unnecessary work, solving the common subproblems repeatedly.

The remedy for both difficulties is dynamic programming. The method applies mainly to optimization problems: such a problem may have many feasible solutions, each with a value, and dynamic programming finds a solution with the optimal (maximum or minimum) value. If several solutions attain the optimal value, it finds one of them. Like divide and conquer, it reaches the global optimum by solving subproblems, but unlike divide and conquer and the greedy method, it allows the subproblems to be dependent (subproblems may share common sub-subproblems) and allows choices to be made on the basis of subproblem solutions. Each subproblem is solved only once and its result saved, so it is never recomputed when encountered again.

Problems suitable for dynamic programming therefore show a distinctive feature: in the corresponding subproblem tree, the same subproblems appear many times. The key idea of dynamic programming is to solve each repeated subproblem only the first time it is encountered and to save its answer, so that later occurrences can simply look it up instead of solving it again.

3. Basic Steps of the Dynamic Programming Algorithm

To design a standard dynamic programming algorithm, follow these steps:

(1) Divide into stages: split the problem into stages according to its temporal or spatial structure. The stages must be ordered (or orderable); otherwise the problem cannot be solved by dynamic programming.

(2) Choose the states: describe the objective situations the problem can be in at each stage as states. The chosen states must satisfy the no-aftereffect property.

(3) Determine the decisions and write the state transition equation: these two steps go together because decisions and state transitions are naturally linked; a state transition derives the state of the current stage from the state and decision of the previous stage. Once the decisions are fixed, the state transition equation can be written. In practice one often works the other way round, determining the decisions from the relation between the states of two adjacent stages.

(4) Write the recurrence (the planning equation), including the boundary conditions: the basic equation of dynamic programming is the general form of this recurrence.

This last step is usually straightforward once the stages, states, decisions and state transitions have been determined. The main difficulty of dynamic programming lies in this design work; once the design is done, the implementation is simple. Based on the basic equation, the optimal value could be computed recursively, but it is usually computed iteratively instead. The general implementation framework is as follows:

Basic Framework of Standard Dynamic Programming

initialize f_{n+1}(x_{n+1});                              {boundary condition}
for k := n downto 1 do
  for each x_k ∈ X_k do
  begin
    f_k(x_k) := an extreme value;                         {+∞ or -∞}
    for each u_k ∈ U_k(x_k) do
    begin
      x_{k+1} := T_k(x_k, u_k);                           {state transition equation}
      t := φ(f_{k+1}(x_{k+1}), v_k(x_k, u_k));            {basic equation}
      if t is better than f_k(x_k) then f_k(x_k) := t;    {keep the best value of f_k(x_k)}
    end;
  end;
t := an extreme value;                                    {+∞ or -∞}
for each x_1 ∈ X_1 do
  if f_1(x_1) is better than t then t := f_1(x_1);        {the optimal objective value}
output t;
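To make the framework concrete, the following hedged C sketch applies the same backward recursion to a small staged shortest-path problem; the number of stages and states and the cost table are invented purely for illustration and are not taken from the text:

#include <stdio.h>

#define STAGES 4
#define STATES 2
#define INF 1000000

/* cost[k][x][u]: made-up cost of choosing successor state u from state x at stage k */
static const int cost[STAGES - 1][STATES][STATES] = {
    { {2, 5}, {4, 1} },
    { {3, 6}, {2, 7} },
    { {4, 2}, {1, 3} }
};

int main(void)
{
    int f[STAGES][STATES];          /* f[k][x]: best cost from state x of stage k to the end */
    int k, x, u;

    for (x = 0; x < STATES; x++)
        f[STAGES - 1][x] = 0;       /* boundary condition */

    for (k = STAGES - 2; k >= 0; k--)        /* stages, backwards */
        for (x = 0; x < STATES; x++) {       /* states of stage k */
            f[k][x] = INF;                   /* extreme value */
            for (u = 0; u < STATES; u++) {   /* decisions: which successor state */
                int t = cost[k][x][u] + f[k + 1][u];   /* basic equation */
                if (t < f[k][x])
                    f[k][x] = t;
            }
        }

    printf("best total cost from stage 0, state 0: %d\n", f[0][0]);
    return 0;
}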

In practice, however, dynamic programming is often designed without explicitly following the steps above, but rather as follows:

(1) analyze the nature of the optimal solution and characterize its structural features.

(2) recursively define the optimal value.

(3) Compute the optimal value bottom-up, or top-down with memoization (the memorandum method).

(4) construct an optimal solution based on the information obtained when the optimal value is calculated.

Steps (1) to (3) are the core of a dynamic programming algorithm. If only the optimal value is required, step (4) can be omitted. If an optimal solution itself is required, step (4) must be carried out; in that case, extra information is usually recorded while computing the optimal value in step (3), so that an optimal solution can be constructed quickly in step (4).
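As a hedged sketch of the top-down alternative mentioned in step (3), the LCS length from the earlier example can also be computed with memoization; the names lcs_memo, lcs_len_topdown, memo and MAXLEN are illustrative and are not part of the original program:

#include <string.h>

#define MAXLEN 100                      /* mirrors N in the program above */

static int memo[MAXLEN][MAXLEN];        /* -1 means "this subproblem is not solved yet" */

static int lcs_memo(const char *a, int i, const char *b, int j)
{
    int best;
    if (i == 0 || j == 0)
        return 0;
    if (memo[i][j] >= 0)                /* answer already recorded: reuse it */
        return memo[i][j];
    if (a[i - 1] == b[j - 1])
        best = lcs_memo(a, i - 1, b, j - 1) + 1;
    else {
        int up   = lcs_memo(a, i - 1, b, j);
        int left = lcs_memo(a, i, b, j - 1);
        best = up > left ? up : left;
    }
    return memo[i][j] = best;           /* record the answer before returning */
}

static int lcs_len_topdown(const char *a, const char *b)
{
    memset(memo, -1, sizeof memo);      /* every subproblem starts unsolved */
    return lcs_memo(a, (int)strlen(a), b, (int)strlen(b));
}

The table plays the same role as c[i][j] in the bottom-up program, but entries are filled only when the corresponding subproblem is actually needed.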

 

[Problem] Optimal triangulation of a convex polygon

Problem description: A polygon is a piecewise-linear closed curve in the plane, that is, a curve formed by a sequence of line segments joined end to end. The segments are called the edges of the polygon, and the points where two edges meet are called its vertices. If the edges of a polygon meet only at the vertices, the polygon is called a simple polygon. A simple polygon divides the plane into three parts: the points enclosed by the curve form the interior of the polygon, the curve itself forms its boundary, and the remaining points of the plane form its exterior. A simple polygon is convex when the union of its boundary and interior is a convex set; equivalently, every line segment joining two points on the boundary or in the interior of the polygon lies entirely in the interior or on the boundary of the polygon.

A convex polygon is usually represented by the counterclockwise sequence of its vertices, P = <v0, v1, ..., v_{n-1}>, meaning the polygon with the n edges v0v1, v1v2, ..., v_{n-1}v_n, with the convention that v_n = v_0.

If v_i and v_j are two non-adjacent vertices of the polygon, the segment v_i v_j is called a chord of the polygon. A chord splits the polygon into two convex subpolygons <v_i, v_{i+1}, ..., v_j> and <v_j, v_{j+1}, ..., v_i>. A triangulation of a polygon is a set T of chords that divide the polygon into triangles that do not overlap. Figure 1 shows two different triangulations of the same convex polygon.

 

Figure 1 Two different triangulations of a convex polygon

In a triangulation T of a convex polygon P, no two chords cross, and the number of chords is maximal; that is, every chord of P not in T crosses some chord in T. A triangulation of a convex polygon with n vertices contains exactly n-3 chords and divides the polygon into exactly n-2 triangles.

The optimal triangulation problem for convex polygons is: given a convex polygon P = <v0, v1, ..., v_{n-1}> and a weight function ω defined on triangles formed by edges and chords of P, find a triangulation of P whose weight, defined as the sum of the weights of the triangles in the triangulation, is minimal.

Many different weight functions ω can be defined on triangles. For example, one may define ω(△v_iv_jv_k) = |v_iv_j| + |v_iv_k| + |v_kv_j|, where |v_iv_j| denotes the Euclidean distance between vertices v_i and v_j. With this weight function, the optimal triangulation is the one whose total edge and chord length is smallest.

(1) optimal substructure

The optimal triangulation problem for convex polygons has the optimal-substructure property. Indeed, suppose an optimal triangulation T of the convex (n+1)-gon P = <v0, v1, ..., vn> contains the triangle v0v_kv_n for some 1 ≤ k ≤ n-1. Then the weight of T is the sum of three parts: the weight of the triangle v0v_kv_n, the weight of the triangulation of the subpolygon <v0, v1, ..., v_k>, and the weight of the triangulation of the subpolygon <v_k, v_{k+1}, ..., v_n>. The triangulations of these two subpolygons determined by T must themselves be optimal, for if <v0, v1, ..., v_k> or <v_k, v_{k+1}, ..., v_n> had a triangulation of smaller weight, T would not be an optimal triangulation of P.

(2) Recursive structure of the optimal triangulation weight

First, define t[i, j] (1 ≤ i < j ≤ n) as the weight of an optimal triangulation of the convex subpolygon <v_{i-1}, v_i, ..., v_j>, i.e., the optimal value. For convenience, the degenerate polygon <v_{i-1}, v_i> is given weight 0. With this definition, the optimal weight of the whole convex (n+1)-gon P is t[1, n].

The value of t[i, j] can be computed recursively by using the optimal substructure. Since a degenerate two-vertex polygon has weight 0, we have t[i, i] = 0 for i = 1, 2, ..., n. For j - i ≥ 1, the subpolygon <v_{i-1}, v_i, ..., v_j> has at least three vertices, and t[i, j] equals t[i, k] plus t[k+1, j] plus the weight of the triangle △v_{i-1}v_kv_j, minimized over i ≤ k ≤ j-1. Therefore t[i, j] can be defined recursively as:

t[i, j] = 0,                                                              if i = j;
t[i, j] = min{ t[i, k] + t[k+1, j] + ω(△v_{i-1}v_kv_j) : i ≤ k ≤ j-1 },   if i < j.

(3) Calculate the optimal value

The following dynamic programming algorithm minimum_weight computes the weight of an optimal triangulation of the convex (n+1)-gon P = <v0, v1, ..., vn>. Its input is the polygon P = <v0, v1, ..., vn> and the weight function ω; its output is the optimal value t[i, j] together with s[i, j], the position k at which t[i, k] + t[k+1, j] + ω(△v_{i-1}v_kv_j) attains its minimum, for 1 ≤ i ≤ j ≤ n.

procedure minimum_weight(P, ω);
begin
  n := length[P] - 1;
  for i := 1 to n do t[i, i] := 0;
  for ll := 2 to n do
    for i := 1 to n - ll + 1 do
    begin
      j := i + ll - 1;
      t[i, j] := ∞;
      for k := i to j - 1 do
      begin
        q := t[i, k] + t[k + 1, j] + ω(△v_{i-1}v_kv_j);
        if q < t[i, j] then
        begin
          t[i, j] := q;
          s[i, j] := k;
        end;
      end;
    end;
  return (t, s);
end;

The algorithm minimum_weight uses Θ(n²) space and runs in Θ(n³) time.

(4) Construct an optimal triangulation

As shown above, for every 1 ≤ i ≤ j ≤ n the algorithm minimum_weight, while computing the weight t[i, j] of an optimal triangulation of the subpolygon <v_{i-1}, v_i, ..., v_j>, also records in s[i, j] the index of the third vertex of the triangle that rests on the edge (or chord) v_{i-1}v_j in that optimal triangulation. Using this optimal-substructure information, an optimal triangulation of the convex (n+1)-gon P = <v0, v1, ..., vn> can easily be constructed from s[i, j], 1 ≤ i ≤ j ≤ n, in O(n) time.
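A hedged C sketch of this construction step, assuming the table s has already been filled by minimum_weight (the 2-D array s[i][j] here stands for s[i, j] above; the bound MAXV and the function name print_triangulation are illustrative assumptions):

#include <stdio.h>

#define MAXV 100

/* Print the triangles of an optimal triangulation of <v_{i-1}, ..., v_j>. */
void print_triangulation(int s[MAXV][MAXV], int i, int j)
{
    int k;
    if (i >= j)                        /* degenerate two-vertex polygon: nothing to output */
        return;
    k = s[i][j];                       /* triangle v_{i-1} v_k v_j splits the polygon */
    printf("triangle (v%d, v%d, v%d)\n", i - 1, k, j);
    print_triangulation(s, i, k);      /* left subpolygon  <v_{i-1}, ..., v_k> */
    print_triangulation(s, k + 1, j);  /* right subpolygon <v_k, ..., v_j>     */
}

/* For the whole convex (n+1)-gon P = <v0, v1, ..., vn>, call print_triangulation(s, 1, n);
   each subpolygon is visited once, which gives the O(n) bound mentioned above. */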

 

Exercises:

1. Car refueling problem:

There are m gas stations along a highway, located at positions p_i (i = 1, 2, ..., m). With a full tank (the tank holds at most k liters of fuel) the car can travel n kilometers. Design a plan that minimizes the number of times the car refuels while traveling the highway (the tank is full when the car sets out).

2. Shortest Path:

Given a network, find a shortest path from one given vertex to another.

3. Horse jumping (knight's tour) problem:

On an 8*8 chessboard, starting from any given square, find a path along which the knight visits every square exactly once.

4. binary tree traversal

5. Knapsack problems

6. Use the divide-and-conquer method to multiply two big integers

7. Let x1, x2, ..., xn be n points on a line. If we want to cover all n points with closed intervals of unit length, how many such intervals are needed at minimum?

8. When three numbers A, B and C are ordered using the relations "<" and "=", there are 13 different order relations:

A = B = C, A = B < C, A < B = C, A < B < C, A < C < B, A = C < B, B < A = C,

B < A < C, B < C < A, B = C < A, C < A = B, C < A < B, C < B < A.

For n numbers ordered in this way, design a dynamic programming algorithm to compute the number of different order relations.

9. A single-player game: there are n (2 <= n <= 200) piles of pieces, numbered 0 to n-1. In extreme cases some piles may contain no pieces. In each move of the game, several pieces are moved from one pile to its adjacent pile(s), as follows: when k pieces are moved out of pile i, k pieces go to pile i-1 (if i > 0) and k pieces go to pile i+1 (if i < n-1). Therefore, if pile i has two adjacent piles, it must originally contain at least 2k pieces; if it has only one adjacent pile, it must originally contain at least k pieces.

The goal of the game is, given the number of pieces on each pile, to move pieces according to the rules above so that in the end every pile contains the same number of pieces. To keep the number of moves small, the following estimate is made of which pile to move from and how many pieces to move:

Let

c_i: the number of pieces in pile i (0 <= i < n, 0 <= c_i <= 200);

v: the average number of pieces per pile;

a_i: the number of pieces that each pile adjacent to pile i takes from pile i.

The estimation method is as follows:

v = c0 + a1 - a0,                      hence  a1 = v + a0 - c0

v = c1 + a0 + a2 - 2a1,                hence  a2 = v + 2a1 - a0 - c1

......

v = c_i + a_{i-1} + a_{i+1} - 2a_i,    hence  a_{i+1} = v + 2a_i - a_{i-1} - c_i

There is no need to determine a0 through a_{n-1} exactly. Instead, set a0 = 0 and compute a1 through a_{n-1} from the formulas above; then find the minimum of the a values and subtract it from every a value, so that the number of pieces moved out of each pile is greater than or equal to 0.
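A hedged C sketch of this estimation (the function name estimate_moves is an illustrative assumption, and this is only the a_i computation described above, not a solution of the exercise):

/* c[0..n-1]: pieces in each pile; on return, a[0..n-1] holds the shifted estimates. */
void estimate_moves(const int c[], int n, int a[])
{
    int i, v = 0, min_a;

    for (i = 0; i < n; i++)            /* v: average number of pieces per pile */
        v += c[i];
    v /= n;

    a[0] = 0;                          /* assume a0 = 0 and unroll the formulas above */
    if (n > 1)
        a[1] = v + a[0] - c[0];        /* a1 = v + a0 - c0 */
    for (i = 1; i + 1 < n; i++)
        a[i + 1] = v + 2 * a[i] - a[i - 1] - c[i];   /* a_{i+1} = v + 2a_i - a_{i-1} - c_i */

    min_a = a[0];                      /* shift so that every a[i] >= 0 */
    for (i = 1; i < n; i++)
        if (a[i] < min_a) min_a = a[i];
    for (i = 0; i < n; i++)
        a[i] -= min_a;
}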

In the actual moves, the following greedy strategy is used:

(1) Scan the piles in order, starting from the first. Whenever a pile i is found from which pieces can be moved, perform the move: each pile adjacent to pile i takes a_i pieces from pile i. Pieces can be moved out of pile i only if it holds enough of them: if pile i is at either end (i = 0 or i = n-1), c_i >= a_i is required; if pile i is in the middle, c_i >= 2a_i is required.

(2) When strategy (1) finds no pile satisfying its condition, use this strategy instead: among all piles with a_i > 0, the pile that currently holds the most pieces has its pieces taken by its adjacent pile(s).

 
