Analysis of five classical algorithms

Source: Internet
Author: User


I. Basic concepts

Dynamic programming solves processes in which each decision depends on the current state and in turn causes a state transition. A decision sequence is generated as the state changes, so this process of solving a problem by multi-stage optimal decision-making is called dynamic programming.
II. Basic ideas and strategies
The basic idea is similar to divide and conquer: decompose the problem to be solved into several subproblems (stages), and solve the sub-stages in order, so that the solution of each sub-stage provides useful information for solving the next. When solving any subproblem, list the various possible local solutions, keep by decision those that may lead to the optimum, and discard the rest. Solve the subproblems in turn; the last subproblem yields the solution of the original problem.
Because most of the subproblems that dynamic programming solves overlap, each subproblem is solved only once and the states of the different stages are saved in a two-dimensional array, which avoids repeated computation.
The biggest difference from divide and conquer is that, for problems suited to dynamic programming, the subproblems obtained after decomposition are usually not independent of each other (that is, the solution of the next sub-stage is built on the solution of the previous sub-stage).

III. Applicable situations
A problem solvable by dynamic programming has three properties:
(1) Optimal substructure: if the optimal solution of the problem contains solutions of subproblems that are themselves optimal, the problem is said to have optimal substructure, i.e. it satisfies the principle of optimality.
(2) No aftereffect: once the state of a stage is determined, it is not affected by decisions made after that state. In other words, the process after a given state does not affect earlier states; it is related only to the current state.
(3) Overlapping subproblems: the subproblems are not independent, and a subproblem may be used more than once in the decisions of later stages. (This property is not a necessary condition for dynamic programming to apply, but without it a dynamic programming algorithm has no advantage over other algorithms.)

IV. Basic steps of the solution
Dynamic programming deals with multi-stage decision problems, which usually start from an initial state and reach an end state through decisions chosen at intermediate stages. These decisions form a decision sequence, which determines a route (usually the optimal route) for completing the whole process. The design of a dynamic programming solution follows a certain pattern, usually going through the following steps.
Initial state → decision 1 → decision 2 → ... → decision n → end state
Figure 1: The dynamic programming decision process
(1) Divide into stages: divide the problem into several stages according to its characteristics in time or space. Note that the stages must be ordered or orderable after the division; otherwise the problem cannot be solved.
(2) Determine the states and state variables: express the objective situations into which the problem develops at the various stages as different states. Of course, the chosen states must satisfy the no-aftereffect property.
(3) Determine the decisions and write the state transition equation: decisions and state transitions are naturally connected, since a state transition derives the state of the current stage from the state and decision of the previous stage. So once the decisions are determined, the state transition equation can be written. In practice this is often done the other way round: the decisions and the state transition equation are determined from the relationship between the states of two adjacent stages.
(4) Find the boundary conditions: the state transition equation is a recurrence, so it needs a terminating or boundary condition.
In general, once the stages, states, and state-transition decisions of the problem are determined, the state transition equation (including its boundary conditions) can be written.
In practice the design can be simplified to the following steps:
(1) Analyze the properties of an optimal solution and characterize its structure.
(2) Define the optimal value recursively.
(3) Compute the optimal value bottom-up, or top-down with memoization.
(4) Construct an optimal solution of the problem from the information recorded while computing the optimal value.

V. Description of the algorithm implementation
The main difficulty of dynamic programming lies in the theoretical design, i.e. determining the four steps above; once the design is complete, the implementation part is very simple.
When solving a problem with dynamic programming, the most important thing is to determine its three elements:
(1) the stages of the problem;
(2) the states at each stage;
(3) the recurrence relation between one stage and the next.
The recurrence relation must transform smaller problems into larger ones. From this point of view, dynamic programming can often be implemented with a recursive program, but an implementation that fully exploits the saved solutions of earlier subproblems to avoid repeated computation has an advantage that plain recursion cannot match on large-scale problems; this reuse is also the heart of the dynamic programming algorithm.
Once the three elements of dynamic programming are determined, the whole process can be described by an optimal decision table. The table is two-dimensional: the rows represent the stages of decision-making, the columns represent the states of the problem, and the data filled into each cell is generally the optimal value of the problem at a given state of a given stage (such as the maximum or minimum value). The table is filled in according to the recurrence relation, starting from row 1, column 1, in row-major or column-major order; finally, the optimal solution of the problem is obtained from the completed table by simple comparison or calculation. A typical recurrence of this kind (the 0/1 knapsack problem, with weights w and values p) is:
f(n, m) = max{ f(n-1, m), f(n-1, m-w[n]) + p[n] }
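As an illustration, the recurrence above can be filled into a two-dimensional table bottom-up. The sketch below is a minimal Python implementation of the 0/1 knapsack table; the names `knapsack`, `weights`, `values`, and `capacity` are illustrative choices, not from the original text:

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack via the recurrence
    f(i, m) = max(f(i-1, m), f(i-1, m - w[i]) + p[i])."""
    n = len(weights)
    # f[i][m]: best value using the first i items with capacity m
    f = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for m in range(capacity + 1):
            f[i][m] = f[i - 1][m]            # decision: skip item i
            if weights[i - 1] <= m:          # decision: take item i if it fits
                f[i][m] = max(f[i][m],
                              f[i - 1][m - weights[i - 1]] + values[i - 1])
    return f[n][capacity]
```

For example, with items of weights (2, 3, 4), values (3, 4, 5), and capacity 5, the table yields 7 (take the first two items).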

VI. Basic framework of a dynamic programming algorithm

for (j = 1; j <= m; j = j + 1)          // the first stage
    xn[j] = initial value;

for (i = n - 1; i >= 1; i = i - 1)      // the other n-1 stages
    for (j = 1; j <= f(i); j = j + 1)   // f(i) is an expression related to i
        xi[j] = max (or min) { g(xi-1[j1:j2]), ..., g(xi-1[jk:jk+1]) };

t = g(x1[j1:j2]);   // derive the optimal solution of the whole problem
                    // from the optimal solutions of the subproblems

print(x1[j1]);

for (i = 2; i <= n - 1; i = i + 1)
{
    t = t - xi-1[ji];

    for (j = 1; j <= f(i); j = j + 1)
        if (t == xi[ji])
            break;
}

Divide and conquer algorithm
I. Basic concept
In computer science, divide and conquer is a very important algorithm design technique. Literally it means "divide and rule": split a complex problem into two or more identical or similar subproblems, split those subproblems into even smaller ones, and so on, until the final subproblems can be solved directly; the solution of the original problem is then the combination of the solutions of the subproblems. This technique is the basis of many efficient algorithms, such as sorting algorithms (quicksort, merge sort) and the fast Fourier transform.
The computation time required by any problem solvable on a computer is related to its size: the smaller the problem, the easier it is to solve directly, and the less time it takes. For example, for sorting n elements: when n = 1 no computation is needed; when n = 2 a single comparison suffices; when n = 3 three comparisons are enough; and so on. When n is large the problem is no longer so easy to handle, and directly solving a large problem can be quite difficult.
II. Basic ideas and strategies
The design idea of divide and conquer is to split a big problem that is difficult to solve directly into smaller instances of the same problem, conquer each of them, and combine the results.
The divide-and-conquer strategy is: for a problem of size n, if it can be solved easily (for example, because n is small), solve it directly; otherwise divide it into k smaller subproblems, which are independent of each other and have the same form as the original problem, solve these subproblems recursively, and then combine their solutions into a solution of the original problem. This algorithm design strategy is called divide and conquer.
If the original problem can be divided into k subproblems, 1 < k ≤ n, and these subproblems can all be solved, and their solutions can be used to derive a solution of the original problem, then this kind of division is feasible. The subproblems produced by divide and conquer are often smaller instances of the original problem, which makes recursive techniques convenient: the subproblems keep the same type as the original problem while their size shrinks steadily, until they become small enough to solve directly. This naturally leads to a recursive process. Divide and conquer and recursion are like twin brothers, often applied together in algorithm design, and together they produce many efficient algorithms.
III. Conditions for applying divide and conquer
Problems solvable by divide and conquer generally have the following characteristics:
1) The problem can be solved easily once its size is reduced to a certain point;
2) The problem can be decomposed into several smaller instances of the same problem, i.e. it has the optimal substructure property;
3) The solutions of the subproblems decomposed from the problem can be combined into a solution of the problem;
4) The subproblems are independent of each other, i.e. no subproblem shares a common sub-subproblem with another.
Most problems satisfy the first characteristic, because the computational complexity of a problem usually grows with its size.
The second characteristic is the premise for applying divide and conquer; most problems satisfy it too, and it reflects the use of recursive thinking.
The third characteristic is the key: whether divide and conquer can be used depends entirely on whether the problem has it. If a problem has the first and second characteristics but not the third, consider the greedy method or dynamic programming instead.
The fourth characteristic concerns the efficiency of divide and conquer: if the subproblems are not independent, divide and conquer does a lot of unnecessary work, solving the shared subproblems repeatedly; it can still be used, but dynamic programming is generally better in that case.
IV. Basic steps of divide and conquer
Divide and conquer takes three steps at each level of the recursion:
Step 1, divide: decompose the original problem into several smaller, independent subproblems of the same form as the original;
Step 2, conquer: if a subproblem is small and easy to solve, solve it directly; otherwise solve each subproblem recursively;
Step 3, combine: merge the solutions of the subproblems into a solution of the original problem.
The general algorithm design pattern is as follows:
Divide-and-conquer(P)
1. if |P| ≤ n0
2.     then return Adhoc(P)
3. decompose P into smaller subproblems P1, P2, ..., Pk
4. for i ← 1 to k
5.     do yi ← Divide-and-conquer(Pi)   // solve Pi recursively
6. T ← Merge(y1, y2, ..., yk)           // combine the subproblem solutions
7. return T

Here |P| denotes the size of problem P, and n0 is a threshold: when the size of P does not exceed n0, the problem is easy enough to solve that further decomposition is unnecessary. Adhoc(P) is the basic sub-algorithm of the method, used to solve a small problem P directly; so when |P| ≤ n0, P is solved directly with Adhoc(P). Merge(y1, y2, ..., yk) is the combining sub-algorithm, which merges the solutions y1, y2, ..., yk of the subproblems P1, P2, ..., Pk into a solution of P.
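Merge sort is a direct instance of this pattern: Adhoc is the trivial sort of a list of length at most 1, the decomposition splits the list in half, and Merge interleaves two sorted halves. A minimal sketch, assuming plain Python lists:

```python
def merge_sort(a):
    # Adhoc base case: a problem of size <= 1 is already solved
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # recursively solve subproblem P1
    right = merge_sort(a[mid:])   # recursively solve subproblem P2
    # Merge: combine the two sorted halves into one sorted list
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```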

V. Complexity analysis of divide and conquer
Consider a divide-and-conquer algorithm that splits a problem of size n into k subproblems of size n/m. Set the decomposition threshold to n0 = 1, and suppose Adhoc solves a problem of size 1 in one unit of time. Suppose further that dividing the original problem into k subproblems and merging the k subproblem solutions into a solution of the original problem takes f(n) units of time. If T(n) denotes the computation time required to solve a problem of size |P| = n, then:
T(n) = k·T(n/m) + f(n)
Solving this recurrence by repeated expansion (iteration) gives:
T(n) = n^(log_m k) + Σ_{j=0}^{(log_m n) - 1} k^j · f(n/m^j)
The recurrence and its solution only give the value of T(n) when n is a power of m, but if T(n) is assumed to be sufficiently smooth, its values at the powers of m estimate its growth rate. It is generally assumed that T(n) is monotonically increasing, so that when m^i ≤ n < m^(i+1), we have T(m^i) ≤ T(n) < T(m^(i+1)).
VI. Classical problems solvable by divide and conquer

(1) Binary search
(2) Large integer multiplication
(3) Strassen matrix multiplication
(4) Board covering
(5) Merge sort
(6) Quicksort
(7) Linear-time selection
(8) Closest pair of points
(9) Round-robin tournament scheduling
(10) Towers of Hanoi
The thinking process of designing a program with divide and conquer

It is in fact similar to mathematical induction: find the recurrence that solves the problem, then design the recursive program according to that recurrence.
1. First find a way to solve the problem at its smallest size;
2. Then consider how the solution method changes as the problem size grows;
3. After finding the recurrence (over sizes or other factors), the recursive program can be designed.
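The Towers of Hanoi problem from the list above follows these steps exactly: the smallest size (n = 0) needs no moves, and the solution for n disks is built from two solutions for n - 1 disks. A minimal sketch; the move-list representation is an illustrative choice:

```python
def hanoi(n, src, aux, dst, moves):
    """Move n disks from peg src to peg dst using peg aux;
    append each move as a (from, to) pair."""
    if n == 0:                           # smallest problem: nothing to do
        return
    hanoi(n - 1, src, dst, aux, moves)   # subproblem: clear the top n-1 disks
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)   # subproblem: restack the n-1 disks

moves = []
hanoi(3, 'A', 'B', 'C', moves)
# the recurrence gives 2^n - 1 moves, so 7 moves for n = 3
```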



Greedy Algorithm
I. Basic concept:

A greedy algorithm always makes the choice that looks best at the moment. In other words, without considering global optimality, it makes only a locally optimal choice in some sense.
Greedy algorithms have no fixed algorithmic framework; the key to the design is the choice of greedy strategy. It must be noted that a greedy algorithm does not yield the globally optimal solution for every problem. The chosen greedy strategy must have no aftereffect, i.e. the process after a given state must not affect earlier states and must be related only to the current state.
Therefore, one must carefully analyze whether the greedy strategy used satisfies the no-aftereffect property.

II. Basic idea of the greedy algorithm:
1. Build a mathematical model that describes the problem.
2. Divide the problem to be solved into several subproblems.
3. Solve each subproblem, obtaining a locally optimal solution.
4. Combine the locally optimal solutions of the subproblems into a solution of the original problem.

III. Problems the greedy algorithm applies to
The precondition for using a greedy strategy is that locally optimal choices lead to a globally optimal solution.
In practice, greedy algorithms apply to relatively few problems. In general, to judge whether a problem is suited to a greedy algorithm, it helps to try the strategy on a few concrete instances of the problem.

IV. Implementation framework of the greedy algorithm
Start from an initial solution of the problem;
while (a step toward the given overall goal can still be made)
{
    use a feasible decision to obtain one element of the solution;
}
Combine all the solution elements into a feasible solution of the problem;
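As a concrete instance of this loop, consider making change with coins of denominations 25, 10, 5, and 1 (an illustrative example, not from the original text): the feasible decision at each step is to take the largest coin that still fits. A minimal sketch:

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Greedy coin change: repeatedly take the largest coin that fits.
    This greedy strategy is correct for canonical coin systems such as
    (25, 10, 5, 1), but NOT for arbitrary coin sets."""
    result = []
    for c in coins:             # coins listed largest first
        while amount >= c:      # feasible decision: one more coin c
            result.append(c)
            amount -= c
    return result
```

For example, 63 cents becomes two 25s, one 10, and three 1s. Note that the same loop fails on non-canonical coin sets (e.g. coins 1, 3, 4 and amount 6), which is exactly why the chosen greedy strategy must be proved correct.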

V. Choosing the greedy strategy
Because a greedy algorithm reaches a globally optimal solution only through a chain of locally optimal choices, we must pay attention to whether the problem is suited to a greedy algorithm, and whether the solution found is really an optimal solution.

VI. Example analysis
Below is a problem on which greedy algorithms can be tried; the greedy solutions look reasonable, but they are not necessarily optimal.
[Knapsack problem] There is a knapsack of capacity m = 150 and 7 items, each of which can be divided into pieces of any size.
The goal is to maximize the total value of the items loaded into the knapsack without exceeding the total capacity.
Item:   A   B   C   D   E   F   G
Weight: 35  30  60  50  40  10  25
Value:  10  40  30  50  35  40  30
Analysis:
Objective function: max Σ pi
Constraint: the total weight of the loaded items must not exceed the knapsack capacity: Σ wi ≤ m (m = 150)
(1) Greedy strategy: each time, load the most valuable remaining item. Is the result optimal?
(2) Greedy strategy: each time, select the lightest remaining item. Does this yield the optimal solution?
(3) Greedy strategy: each time, select the item with the greatest value per unit weight.
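Since this instance allows items to be divided into pieces of any size, strategy (3), greedy by value per unit weight, is in fact provably optimal for it; the counterexamples later in this section assume items must be taken whole. A minimal sketch, using the table above:

```python
def fractional_knapsack(weights, values, capacity):
    """Greedy by value per unit weight; optimal when items are divisible."""
    items = sorted(zip(weights, values),
                   key=lambda wv: wv[1] / wv[0], reverse=True)
    total = 0.0
    for w, v in items:
        if capacity >= w:                # take the whole item
            total += v
            capacity -= w
        else:                            # take the fraction that still fits
            total += v * capacity / w
            break
    return total

weights = [35, 30, 60, 50, 40, 10, 25]   # items A..G from the table
values  = [10, 40, 30, 50, 35, 40, 30]
```

On this data the greedy loads F, B, G, and D whole, plus 35/40 of E, for a total value of 190.625.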
It is worth noting that greedy algorithms are not useless: once a greedy strategy has been proved correct, it gives an efficient algorithm.
The greedy algorithm is also a very common algorithm, because it is simple and constructing a greedy strategy is usually not very difficult.
Unfortunately, the strategy must be proved correct before it can really be used in an algorithm for the problem.
In general, the proof of a greedy algorithm revolves around showing that the optimal solution of the whole problem must be obtainable from the locally optimal choices made by the greedy strategy.
For the three greedy strategies in the example, correctness cannot be established (cannot be proved) when items must be taken whole, as the following counterexamples show:
(1) Greedy strategy: choose the most valuable item first. Counterexample:
W = 30
Item:   A   B   C
Weight: 28  12  12
Value:  30  20  20
Following the strategy, item A is chosen first and nothing else fits; however, choosing B and C is better.
(2) Greedy strategy: choose the lightest item first. Its counterexample is similar to that of the first strategy.
(3) Greedy strategy: choose the item with the greatest value per unit weight. Counterexample:
W = 30
Item:   A   B   C
Weight: 28  20  10
Value:  28  20  10
Here all three items have the same value per unit weight, so the strategy cannot decide between them on the given data; if A is chosen first, the answer is wrong.


Backtracking Method
1. Concept
The backtracking algorithm is essentially an enumeration-like search process: it tries to find a solution of the problem by searching, and when it finds that the current path cannot satisfy the solution conditions, it "backtracks" and tries another path.
Backtracking is a search method for optimal solutions: it searches forward according to a selection criterion in order to reach the goal. When the exploration reaches a step where the earlier choice turns out to be poor or cannot reach the goal, the search steps back and chooses again. This technique of going back after a failed attempt is called backtracking, and a state from which the search backtracks is called a "backtracking point".
Many complex, large-scale problems can be solved by backtracking, which has earned it the reputation of a "general problem-solving method".
2. Basic idea
Starting from the root node, explore the solution space tree, which contains all solutions of the problem, using a depth-first search strategy. When exploring a node, first determine whether it can contain a solution of the problem: if it can, continue exploring from that node; if it cannot, backtrack level by level to its ancestor nodes. (In fact, backtracking is a depth-first search algorithm on an implicit graph.)
When backtracking is used to find all solutions of the problem, the search must backtrack to the root, and it terminates when every feasible subtree of the root has been searched.
When backtracking is used to find any one solution, the search can stop as soon as one solution of the problem is found.
3. General steps of solving a problem by backtracking:
(1) For the given problem, determine its solution space:
First, the solution space of the problem must be clearly defined, and it must contain at least one (optimal) solution of the problem.
(2) Determine the node expansion rules for the search.
(3) Search the solution space depth-first, using pruning functions during the search to avoid useless searching.
4. Algorithm Framework
(1) Problem framework
The solution of the problem is an n-dimensional vector (a1, a2, ..., an); the constraint is that each ai (i = 1, 2, ..., n) satisfies some condition, written f(ai).
(2) Non-recursive backtracking framework
int a[n], i;
initialize the array a[];
i = 1;
while (i > 0 (there is still a path) and (the goal has not been reached))   // not yet back at the start
{
    if (i > n)                        // reached a leaf node
    {
        record or output the solution found;
    }
    else                              // process element i
    {
        a[i] = the first possible value;
        while (a[i] does not satisfy the constraints and is within the search space)
        {
            a[i] = the next possible value;
        }
        if (a[i] is within the search space)
        {
            mark the occupied resources;
            i = i + 1;                // expand the next node
        }
        else
        {
            release the occupied state space;   // backtrack
            i = i - 1;
        }
    }
}
(3) Recursive backtracking framework
Backtracking is a depth-first search of the solution space, and in general it is simpler to implement with a recursive function, where i is the depth of the search. The framework is as follows:
int a[n];
try(int i)
{
    if (i > n)
        output the result;
    else
    {
        for (j = lower bound; j <= upper bound; j = j + 1)   // enumerate all possible values for a[i]
        {
            if (fun(j))                // satisfies the pruning function and the constraints
            {
                a[i] = j;
                ...                    // other operations
                try(i + 1);
                clean up before backtracking (e.g. reset a[i]);
            }
        }
    }
}
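The classic n-queens problem instantiates this recursive framework: a[i] is the column of the queen in row i, the pruning check plays the role of fun(j), and reaching depth n means a full solution. A minimal sketch that counts solutions instead of printing them (an illustrative choice):

```python
def solve_queens(n):
    """Count placements of n non-attacking queens via backtracking."""
    a = [0] * n          # a[i]: column of the queen in row i
    count = 0

    def ok(i, j):
        # pruning: no shared column or diagonal with rows 0..i-1
        return all(a[k] != j and abs(a[k] - j) != i - k for k in range(i))

    def try_row(i):
        nonlocal count
        if i == n:                 # reached a leaf: a complete solution
            count += 1
            return
        for j in range(n):         # enumerate all candidate columns
            if ok(i, j):           # satisfies the pruning function
                a[i] = j
                try_row(i + 1)     # expand the next node
                # a[i] is overwritten on the next iteration: backtracking

    try_row(0)
    return count
```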


Branch and Bound Method
I. Basic description
Similar to backtracking, branch and bound is an algorithm that searches for a solution of the problem on its solution space tree T. Under normal circumstances, however, branch and bound and backtracking differ in their solution goals: the goal of backtracking is to find all solutions in T that satisfy the constraints, while the goal of branch and bound is to find one solution that satisfies the constraints, or, among the solutions satisfying the constraints, one that maximizes or minimizes the value of some objective function, i.e. an optimal solution in some sense.
(1) Branch search algorithm
"Branching" means using a breadth-first strategy to search all branches of the current expansion node (E-node), i.e. all its adjacent nodes, discarding the nodes that violate the constraints and adding the remaining nodes to the live-node list. A node is then selected from the list as the next E-node, and the search continues.
Depending on how the next E-node is selected, there are several different ways of searching the branches:
1) FIFO search
2) LIFO search
3) Priority-queue search
(2) Branch and bound search algorithm
II. General process of the branch and bound method
Because their goals differ, branch and bound and backtracking also search the solution space tree T differently: backtracking searches T depth-first, while branch and bound searches T breadth-first or in least-cost (best-first) order.
The search strategy of branch and bound is: at an expansion node, generate all of its children (the branches) at once, and then select the next expansion node from the current live-node list. To select the next expansion node effectively and speed up the search, a bound function value is computed at each live node, and based on these values the most promising node in the live-node list is chosen as the expansion node, steering the search toward the branch of the solution space tree that contains an optimal solution, so that an optimal solution is found as soon as possible.
Branch and bound often searches the solution space tree of the problem in breadth-first or least-cost (greatest-benefit) first order. The solution space tree is an ordered tree representing the solution space of the problem; common forms are the subset tree and the permutation tree. When searching the solution space tree, branch and bound expands the current expansion node differently from backtracking: in branch and bound, each live node has only one chance to become an expansion node. Once a live node becomes the expansion node, all its children are generated at once; among these children, those that lead to infeasible or non-optimal solutions are discarded, and the rest are added to the live-node list. After that, a node is removed from the live-node list to become the current expansion node, and the expansion process above is repeated. This process continues until the desired solution is found or the live-node list becomes empty.
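The process above can be sketched on the 0/1 knapsack problem with a best-first (priority-queue) search. This is a minimal illustration, not from the original text: each live node records its depth, remaining capacity, and accumulated value; the bound function is the fractional-knapsack relaxation; and the priority queue always expands the node with the best bound.

```python
import heapq

def knapsack_bb(weights, values, capacity):
    """Best-first branch and bound for the 0/1 knapsack problem."""
    items = sorted(zip(weights, values),
                   key=lambda wv: wv[1] / wv[0], reverse=True)

    def bound(i, cap, val):
        # Optimistic estimate: fill the remaining capacity fractionally
        for w, v in items[i:]:
            if cap >= w:
                cap -= w
                val += v
            else:
                return val + v * cap / w
        return val

    best = 0
    # Live-node list as a max-heap keyed on -bound:
    # entries are (-bound, depth, remaining capacity, accumulated value)
    heap = [(-bound(0, capacity, 0), 0, capacity, 0)]
    while heap:
        neg_b, i, cap, val = heapq.heappop(heap)
        if -neg_b <= best:        # prune: bound cannot beat the incumbent
            continue
        if i == len(items):
            continue
        w, v = items[i]
        if w <= cap:              # branch 1: take item i
            val_take = val + v
            best = max(best, val_take)
            heapq.heappush(heap, (-bound(i + 1, cap - w, val_take),
                                  i + 1, cap - w, val_take))
        b = bound(i + 1, cap, val)
        if b > best:              # branch 2: skip item i, if still promising
            heapq.heappush(heap, (-b, i + 1, cap, val))
    return best
```

For example, with weights (2, 3, 4), values (3, 4, 5), and capacity 5, the search returns 7, matching the dynamic programming table from the first section.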
Some differences between backtracking and branch and bound
In fact, some problems can be solved well by either backtracking or branch and bound, while others cannot. So some concrete analysis is needed: when exactly should branch and bound be used, and when backtracking?
Some differences between the two methods:
Backtracking: depth-first search of the solution space tree; nodes are stored on a stack; all feasible children of a live node are traversed before the node is popped from the stack; commonly used to find all solutions that satisfy the constraints.
Branch and bound: breadth-first or least-cost (best-first) search; nodes are stored in a queue or priority queue; each node has only one chance to become an expansion node; commonly used to find one solution satisfying the constraints, or an optimal solution in some specific sense.
