1. Multiplication of large integers

There are two ways to do this:

(1) The ordinary column ("grade-school") method

(2) The divide-and-conquer method

2. Integer partition problem

Let Q(n, m) denote the number of different partitions of a positive integer n in which the largest addend is no greater than m.

This gives the following base case and recurrences:

Base case:

When n >= 1, Q(n, 1) = 1;

Recursive cases:

When m = n, Q(n, m) = Q(n, m-1) + 1;

When m < n, Q(n, m) = Q(n, m-1) + Q(n-m, m);

When m > n, Q(n, m) = Q(n, n);

Note: The key to this problem is to relate the size of the largest addend to the number of partitions.
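The recurrence above transcribes directly into code; a minimal sketch (the function name mirrors the notation in the text):

```python
def Q(n, m):
    """Number of partitions of n whose largest addend is at most m."""
    if m == 1:                       # base case: only the partition 1+1+...+1
        return 1
    if m > n:                        # the largest addend cannot exceed n
        return Q(n, n)
    if m == n:                       # either use n itself, or cap addends at n-1
        return Q(n, n - 1) + 1
    return Q(n, m - 1) + Q(n - m, m) # skip m, or use m once and partition the rest
```

For example, Q(6, 6) enumerates all partitions of 6, of which there are 11.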

3. Matrix multiplication problem: finding the best order in which to multiply a chain of matrices

For example, for the matrix chain A1A2A3A4A5A6, find a breakpoint (say after A4) that splits the chain into two parts, (A1A2A3A4) and (A5A6), so that the total cost is minimized: the cost of computing (A1A2A3A4), plus the cost of computing (A5A6), plus the cost of multiplying the two resulting sub-matrices, is the total cost of computing A1A2A3A4A5A6.

This breakpoint is not known in advance, but it must exist. Let A[i:j] denote the product of the chain Ai ... Aj. If the breakpoint k splits the chain into A[i:k] and A[k+1:j], then these two sub-chains must themselves be computed with minimum cost; this is the optimal-substructure property. Multiplying the sub-chains is a problem of the same form as the original, and the sub-problems overlap, so the problem fits the design requirements of dynamic programming.

Let m[i][j] record the minimum cost of computing A[i:j]. The following recurrence holds:

When i = j, m[i][j] = 0;

When i < j, m[i][j] = min_{i<=k<j} { m[i][k] + m[k+1][j] + p(i-1)·p(k)·p(j) }, where matrix Ai has dimensions p(i-1) × p(i).

Computing this recurrence directly by recursion takes exponential time. Notice, however, that the number of distinct sub-problems is only O(n²): the ordered pairs (i, j) with 1 <= i <= j <= n each correspond to one sub-problem, and the same sub-problems recur many times during the recursion. A two-dimensional table can therefore store the solutions of all sub-problems. Since i <= j whenever m[i][j] is needed, only the upper-triangular part of the table is used. Let k = j - i; the value of m[i][j] depends only on { m[p][q] | q - p < k, p < q, p >= i, q <= j }, so every such m[p][q] must already be computed before m[i][j] is solved. The sub-problems are therefore solved in order of k = 0, 1, 2, ..., n-1. The last step, k = n-1, computes m[1][n], which is the value of the solution of the original problem.
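The diagonal-by-diagonal filling order described above can be sketched as follows (function and variable names are mine; `d` plays the role of k = j - i, and `k` is the breakpoint):

```python
def matrix_chain(p):
    """A_i has dimensions p[i-1] x p[i]; len(p) == n + 1.
    Returns m[1][n], the minimum number of scalar multiplications."""
    n = len(p) - 1
    # m[i][j]: minimum cost of computing A[i:j]; only the upper triangle is used
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for d in range(1, n):                     # d = j - i, solved in increasing order
        for i in range(1, n - d + 1):
            j = i + d
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]
```

For instance, with dimensions 10×100, 100×5, 5×50 the best order is (A1A2)A3 at a cost of 7500 scalar multiplications.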

4. The longest common subsequence

Assume two sequences X = {x1, x2, x3, ..., xm-1, xm} and Y = {y1, y2, y3, ..., yn-1, yn}, and let their longest common subsequence be Z = {z1, z2, z3, ..., zk-1, zk}. The following conclusions hold:

(1) When xm = yn, then zk = xm = yn, and Zk-1 is a longest common subsequence of Xm-1 and Yn-1.

(2) When xm ≠ yn and zk ≠ xm, then Z is a longest common subsequence of Xm-1 and Y.

(3) When xm ≠ yn and zk ≠ yn, then Z is a longest common subsequence of X and Yn-1.

Because the longest common subsequence Z is not known in advance, when xm ≠ yn it is also unknown whether Z is a longest common subsequence of Xm-1 and Y or of X and Yn-1. So the longest common subsequences of (Xm-1, Y) and of (X, Yn-1) are both computed, and the longer of the two is taken as the longest common subsequence of X and Y. When xm = yn, the longest common subsequence of Xm-1 and Yn-1 is computed first, and xm (= yn) is appended to its end.

Overlapping sub-problems: computing the longest common subsequences of (Xm-1, Y) and of (X, Yn-1) both contain the sub-problem of the longest common subsequence of Xm-1 and Yn-1.

Optimal-substructure property: the longest common subsequence of the two original sequences contains longest common subsequences of their prefixes.

Let c[i][j] record the length of the longest common subsequence of the prefixes Xi = {x1, x2, ..., xi-1, xi} and Yj = {y1, y2, ..., yj-1, yj}. The following recurrence holds:

(1) When i = 0 or j = 0, c[i][j] = 0.

(2) When i, j > 0 and xi = yj, c[i][j] = c[i-1][j-1] + 1.

(3) When i, j > 0 and xi ≠ yj, c[i][j] = max{ c[i-1][j], c[i][j-1] }.

Direct recursion again takes exponential time. The bottom-up dynamic programming algorithm therefore solves the smallest sub-problems first and builds up to the more complex ones, each of which depends only on simpler sub-problems already solved. The recurrence gives the computation order (draw a two-dimensional table to simulate it): first fill row i = 0 and column j = 0, then process the table row by row, from top to bottom and left to right.
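A minimal sketch of this bottom-up table filling (the function name is mine):

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of sequences x and y."""
    m, n = len(x), len(y)
    # c[i][j]: LCS length of the prefixes x[:i] and y[:j]; row 0 / column 0 stay 0
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):                 # top to bottom
        for j in range(1, n + 1):             # left to right
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]
```

For example, "ABCBDAB" and "BDCABA" have the longest common subsequence "BCBA" of length 4.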

5. Maximum subsegment sum

Problem Solving Ideas:

(1) Simple algorithm: for each i = 0, 1, 2, ..., n-1, compute the maximum sum of subsegments beginning at position i; then select the maximum of the n results as the solution to the original problem.

(2) Divide-and-conquer algorithm: split the sequence into two halves of approximately equal length. The maximum subsegment lies either entirely within one of the two halves, or spans both halves (in which case it must contain the dividing point). The maximum-subsegment problems on the two halves are of the same form as the original but half the size, and can be solved recursively.

(3) Dynamic programming algorithm: dynamic programming solves a problem starting from the smallest sub-problems, with the solution of each larger problem depending on the solutions of smaller ones; in short, the smaller solutions are combined step by step until the original problem is solved. For the maximum subsegment sum, the idea is: for each i = 0, 1, 2, ..., n-1, compute the maximum sum of subsegments ending at position i (when solving for i = x, the results for i < x are fully reused); the largest of the n results is the solution to the original problem.
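The dynamic programming idea in (3) can be sketched as follows, where `b` holds the maximum sum of a subsegment ending at the current index (names are mine):

```python
def max_subsegment_sum(a):
    """Maximum sum over all contiguous, non-empty subsegments of a."""
    best = b = a[0]
    for x in a[1:]:
        b = max(b + x, x)          # extend the previous segment, or start a new one
        best = max(best, b)        # the answer is the best value of b seen so far
    return best
```

For example, for [-2, 11, -4, 13, -5, -2] the maximum subsegment is 11 - 4 + 13 = 20.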

6. 0-1 knapsack problem

Optimal substructure: after deciding whether to take the first item, the problem on the remaining items is itself a 0-1 knapsack problem of the same form as the original. An optimal solution to the original problem contains optimal solutions to its sub-problems.

Overlapping sub-problems: whether an item is taken or skipped, the remaining items form a 0-1 knapsack sub-problem, and the sub-problems generated along different decision paths may coincide in both state (remaining capacity) and size (remaining items).

7. Huffman code


8. Single-source shortest path

Dijkstra's algorithm is a greedy algorithm for the single-source shortest path problem. The basic idea is to maintain a vertex set S and keep making greedy choices to enlarge it. A vertex belongs to S if and only if the shortest path length from the source to that vertex is already known. Initially, S contains only the source. For a vertex u of the graph G, a path from the source to u that passes only through vertices in S (apart from u itself) is called a special path to u; the array dist records, for each vertex, the length of the shortest special path to it. At each step, Dijkstra's algorithm selects the vertex u in V - S with the smallest dist[u], adds u to S, and updates dist. Once S contains all vertices of V, dist records the shortest path lengths from the source to all other vertices.

Dijkstra's algorithm generates the shortest paths from the source to the other vertices in order of increasing path length.
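A minimal sketch of the algorithm, assuming nonnegative edge weights and an adjacency-list representation (the heap-based variant; names and representation are mine):

```python
import heapq

def dijkstra(graph, s):
    """graph: {u: [(v, w), ...]}. Returns dist[v] for every vertex reachable from s."""
    dist = {s: 0}
    heap = [(0, s)]                          # (current shortest distance, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                         # stale entry: u was already finalized
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd                 # a shorter special path to v was found
                heapq.heappush(heap, (nd, v))
    return dist
```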

9. Minimum spanning tree: Prim algorithm

The basic idea of Prim's algorithm for the minimum spanning tree is to set S = {1} first and then, as long as S is a proper subset of V, make the following greedy choice: select the edge (i, j) with i ∈ S, j ∈ V - S, and c[i][j] minimal, and add the vertex j to S. This process continues until S = V. All the edges selected in this process (n-1 edges in total) constitute a minimum spanning tree of G.

The correctness of this greedy choice rests on a property called the minimum spanning tree (MST) property. MST property: let G = (V, E) be a connected weighted graph and U a proper subset of V. If (u, v) ∈ E with u ∈ U and v ∈ V - U, and among all such edges the weight c[u][v] is the smallest, then some minimum spanning tree of G contains the edge (u, v).
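A minimal heap-based sketch of Prim's algorithm, assuming a connected graph given as a cost matrix with float('inf') for missing edges (names and representation are mine):

```python
import heapq

def prim_mst_weight(n, c):
    """n vertices labelled 0..n-1; c[i][j] is the edge weight.
    Starts with S = {0} and repeatedly adds the cheapest edge crossing (S, V-S)."""
    INF = float('inf')
    in_s = [False] * n
    in_s[0] = True
    heap = [(c[0][j], j) for j in range(1, n) if c[0][j] != INF]
    heapq.heapify(heap)
    total, chosen = 0, 1
    while heap and chosen < n:
        w, j = heapq.heappop(heap)
        if in_s[j]:
            continue                          # edge no longer crosses the cut
        in_s[j] = True                        # greedy choice: add j to S
        total += w
        chosen += 1
        for k in range(n):
            if not in_s[k] and c[j][k] != INF:
                heapq.heappush(heap, (c[j][k], k))
    return total
```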

10. Minimum spanning tree: Kruskal algorithm

The basic idea of Kruskal's algorithm is to regard the n vertices of G as n isolated connected branches and sort all the edges from small to large. Then, starting from the first edge, examine each edge in ascending order of weight and merge different connected branches as follows: when examining the k-th edge (v, w), if the endpoints v and w currently lie in two different connected branches T1 and T2, use the edge (v, w) to join T1 and T2 into one connected branch and move on to the (k+1)-th edge; if v and w already lie in the same connected branch, skip directly to the (k+1)-th edge. This process continues until only one connected branch remains. That branch is a minimum spanning tree of G.

Kruskal's algorithm selects an edge at each step and terminates when the n separate connected branches have merged into one; its time complexity is O(e log e). Prim's algorithm selects a vertex at each step and terminates when all vertices have been gathered into one set; its time complexity is O(n²).
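The merge test on connected branches is usually implemented with a union-find structure; a minimal sketch (names are mine):

```python
def kruskal_mst_weight(n, edges):
    """edges: list of (w, u, v) with vertices 0..n-1. Returns the MST weight,
    assuming the graph is connected. Union-find tracks the connected branches."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x
    total = 0
    for w, u, v in sorted(edges):             # examine edges in ascending weight
        ru, rv = find(u), find(v)
        if ru != rv:                          # endpoints in different branches
            parent[ru] = rv                   # join the two branches
            total += w
    return total
```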

11. Solving the single-source shortest path problem by the branch-and-bound method (the following steps find the shortest path from the source vertex s to the target vertex t):

(1) The source vertex is expanded first: all of its child nodes are generated and placed into a minimum priority queue that serves as the live-node list. A live node's priority is its current path length (also called the node's lower bound).

(2) Take a node from the minimum priority queue and expand it, examining all child nodes of the expansion node (that is, examining several branches). If the child's current path length is shorter than the shortest known distance from the source to that child (implementation: keep an array recording each vertex's current distance from the source) — the basic constraint — and the child's current path length is also less than the length of the shortest complete route found so far — the lower-bound condition — add the child to the live-node list; otherwise discard it.

(3) Repeat step (2) until the live-node list is empty. Every live node in the list must be examined, because the first path found from the source to the target is not necessarily the shortest. For example, the final expansion step may not lead to the optimal value: if two nodes have the same lower bound and each reaches the target after one more expansion, the one that arrives first is not necessarily the optimal answer.

(4) For every child node (every branch) of the expansion node, whether to add the child to the live-node list is decided by two pruning functions: the constraint function and the bounding function.

Constraint function: the child's current path length must be shorter than the shortest known distance from the source to that child; this is the basic constraint that must be satisfied.

Bounding function (lower-bound constraint): assuming the constraint function is satisfied, and a route from the source to the target has already been found, the child's current path length (a lower bound on the length of every path corresponding to the nodes of the subtree rooted at that child in the solution-space tree) must be smaller than the shortest route found so far for the child to be added to the live-node list. During expansion, once a node's lower bound is no smaller than the shortest route currently found, the algorithm prunes the subtree rooted at that node. The purpose of the bounding function is to decide whether the current path still has a chance of producing the optimal value; it is usually compared against the best value found so far.
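The steps above can be sketched as follows, assuming nonnegative edge weights and an adjacency-list graph (names and representation are mine):

```python
import heapq

def bb_shortest_path(graph, s, t):
    """graph: {u: [(v, w), ...]}. Returns the shortest s-t distance."""
    dist = {s: 0}                  # best known distance from s to each vertex
    best = float('inf')            # length of the shortest complete route found
    heap = [(0, s)]                # live-node list, keyed by current path length
    while heap:
        d, u = heapq.heappop(heap)
        if d >= best:
            continue               # bounding: this subtree cannot improve best
        for v, w in graph.get(u, []):
            nd = d + w
            # constraint: must improve dist[v]; bound: must be below best
            if nd < dist.get(v, float('inf')) and nd < best:
                dist[v] = nd
                if v == t:
                    best = nd      # a complete route; keep testing live nodes
                else:
                    heapq.heappush(heap, (nd, v))
    return best
```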

12. Priority-queue branch-and-bound for the traveling salesman problem:

1. Three issues must be considered when using the priority-queue branch-and-bound method:

(1) How should the live node and the live-node list be designed, and what is the priority of a live node?

(2) Pruning functions: the constraint function and the bounding function. How are these two functions determined?

(3) Should the solution of the problem be stored in the live nodes, or stored in a subset tree or permutation tree?

Answers to these questions for the traveling salesman problem:

A(1): Use a priority queue (minimum heap) as the live-node list; a node's priority is lcost, the predicted lower bound on the cost of the subtree rooted at it.

A(2): For the traveling salesman problem, the constraint function is the tour constraint, and the bounding function compares lcost, the predicted lower bound on the subtree cost, with the current optimal value.

The purpose of the tour constraint is to determine whether the current path satisfies the basic requirement of forming a tour.

The purpose of the lower bound lcost is to be compared with the current optimal value so as to decide whether the current path should be extended, that is, whether any branch of the current expansion node still has a chance of producing the optimal value of the problem.

A tour must contain all vertices and form a cycle. This is enforced by storing, in each live node, an array of the vertices chosen so far, and checking during the solution whether the final array forms a cycle.

Method for predicting the lower bound on the subtree cost: find the minimum outgoing-edge cost of every vertex. The cost of the portion of the tour whose vertex order is already fixed, plus the sum of the minimum outgoing-edge costs of the remaining vertices whose order is not yet fixed, is the node's predicted lower bound lcost.

A(3): Since each live node faces a different current situation, the partial solution corresponding to that situation differs from node to node, so the (partial) solution of the problem can be stored in each live node. The final solution of the traveling salesman problem is then read from a live node; constructing a partial permutation tree to store the solution is not used here.
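Putting the pieces together, a minimal sketch of the priority-queue branch-and-bound TSP under the lcost bound described above (names are mine; each live node stores its path, playing the role of the vertex array):

```python
import heapq

def tsp_branch_bound(c):
    """c: cost matrix with c[i][j] = float('inf') where there is no edge.
    Returns the cost of a minimum tour starting and ending at vertex 0."""
    n = len(c)
    INF = float('inf')
    # minimum outgoing-edge cost of every vertex
    min_out = [min(c[i][j] for j in range(n) if j != i) for i in range(n)]

    def lcost(cost, path):
        # cost of the fixed part of the tour, plus the cheapest possible
        # outgoing edge of the current vertex and of every unvisited vertex
        return cost + min_out[path[-1]] + sum(
            min_out[w] for w in range(n) if w not in path)

    best = INF
    heap = [(lcost(0, (0,)), 0, (0,))]       # live-node list keyed by lcost
    while heap:
        bound, cost, path = heapq.heappop(heap)
        if bound >= best:
            break                            # no live node can beat the best tour
        u = path[-1]
        if len(path) == n:                   # all vertices fixed: close the loop
            if c[u][0] < INF:
                best = min(best, cost + c[u][0])
            continue
        for v in range(n):
            if v not in path and c[u][v] < INF:   # tour constraint: no repeats
                ncost = cost + c[u][v]
                npath = path + (v,)
                nb = lcost(ncost, npath)
                if nb < best:                # bounding test before enqueueing
                    heapq.heappush(heap, (nb, ncost, npath))
    return best
```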

Classic Algorithm Experience