Five commonly used algorithms: divide and conquer, dynamic programming, backtracking, branch and bound, and the greedy algorithm


Divide-and-conquer algorithm
First, the basic concept

In computer science, divide and conquer is a very important algorithm design technique. The name explains the idea literally: a complex problem is divided into two or more identical or similar sub-problems, those sub-problems are divided into still smaller sub-problems, and so on, until the final sub-problems are simple enough to be solved directly; the solution of the original problem is then the combination of the solutions of the sub-problems. This technique is the basis of many efficient algorithms, such as sorting algorithms (quicksort, merge sort) and the fast Fourier transform.

The computation time required for any problem that can be solved on a computer is related to its size: the smaller the problem, the easier it is to solve directly and the less computation time it takes. For example, for sorting n elements, when n = 1 no computation is needed; when n = 2 the elements can be ordered after a single comparison; when n = 3 only 3 comparisons are needed; and so on. When n is large, however, the problem is not so easy to handle, and solving a large problem directly is sometimes quite difficult.


--------------------------------------------------------------------------------

Second, basic ideas and strategies

The design idea of divide and conquer is to split a large problem that is difficult to solve directly into several smaller instances of the same problem, conquer each of them separately, and combine the results; hence "divide and conquer".

The divide-and-conquer strategy is: for a problem of size n, if it can be solved easily (for example, because n is small), solve it directly; otherwise split it into k smaller sub-problems that are independent of each other and have the same form as the original problem, solve these sub-problems recursively, and then combine the sub-solutions into a solution of the original problem. This algorithm design strategy is called the divide-and-conquer method.

If the original problem can be divided into k sub-problems, 1 < k ≤ n, these sub-problems can be solved, and their solutions can be combined into a solution of the original problem, then this way of dividing is feasible. The sub-problems produced by divide and conquer are usually smaller instances of the original problem, which makes recursion a natural tool: the sub-problems have the same type as the original problem while their size keeps shrinking, until they become small enough to be solved directly. This naturally leads to a recursive process. Division and recursion are like twin brothers, often applied together in algorithm design, and together they produce many efficient algorithms.


--------------------------------------------------------------------------------

Third, conditions for applying the divide-and-conquer method

Problems that can be solved with divide and conquer generally have the following characteristics:

1) the problem can be solved easily once its size is reduced to a certain point;

2) the problem can be decomposed into several smaller instances of the same problem, i.e. the problem has the optimal-substructure property;

3) the solutions of the sub-problems obtained from the decomposition can be combined into a solution of the whole problem;

4) the sub-problems obtained from the decomposition are independent of each other, i.e. they do not share common sub-sub-problems.

The first characteristic is satisfied by most problems, since the computational complexity of a problem usually grows with its size.

The second characteristic is the precondition for applying divide and conquer. It, too, is satisfied by most problems, and it reflects the use of recursive thinking.

The third characteristic is the key: whether divide and conquer can be used depends entirely on whether the problem has this third characteristic. If a problem has the first and second characteristics but not the third, one can consider a greedy method or dynamic programming instead.

The fourth characteristic concerns the efficiency of divide and conquer. If the sub-problems are not independent, divide and conquer does a lot of unnecessary work, solving the common sub-problems repeatedly; divide and conquer can still be used in this case, but dynamic programming is generally better.


--------------------------------------------------------------------------------

Fourth, basic steps of the divide-and-conquer method

The divide-and-conquer method takes three steps at each level of the recursion:

Step 1, decompose: split the original problem into several smaller, independent sub-problems of the same form as the original problem;

Step 2, solve: if a sub-problem is small and easy to solve, solve it directly; otherwise solve each sub-problem recursively;

Step 3, merge: combine the solutions of the sub-problems into a solution of the original problem.

Its general algorithm design pattern is as follows:

Divide-and-conquer(P)
1. if |P| ≤ n0
2.    then return(Adhoc(P))
3. decompose P into smaller sub-problems P1, P2, ..., Pk
4. for i ← 1 to k
5.    do yi ← Divide-and-conquer(Pi)    // recursively solve Pi
6. T ← Merge(y1, y2, ..., yk)           // merge the sub-solutions
7. return(T)

Here |P| denotes the size of problem P, and n0 is a threshold: when the size of P does not exceed n0, the problem is easy enough that no further decomposition is necessary. Adhoc(P) is the basic sub-algorithm of the divide-and-conquer scheme, used to solve a small problem P directly; therefore, when the size of P does not exceed n0, P is solved directly by the algorithm Adhoc(P). The algorithm Merge(y1, y2, ..., yk) is the merging sub-algorithm, used to combine the solutions y1, y2, ..., yk of the sub-problems P1, P2, ..., Pk of P into a solution of P.
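
As a concrete illustration of this pattern, a minimal Python sketch of merge sort is shown below; it follows the Divide-and-conquer(P) shape above with n0 = 1 and k = 2 (the function name is chosen for illustration only):

def divide_and_conquer_sort(p):
    # |P| <= n0 = 1: the problem is small enough to solve directly (the Adhoc step)
    if len(p) <= 1:
        return p
    # Decompose P into k = 2 smaller sub-problems P1 and P2
    mid = len(p) // 2
    y1 = divide_and_conquer_sort(p[:mid])   # recursively solve P1
    y2 = divide_and_conquer_sort(p[mid:])   # recursively solve P2
    # Merge(y1, y2): combine the sub-solutions into a solution of P
    merged, i, j = [], 0, 0
    while i < len(y1) and j < len(y2):
        if y1[i] <= y2[j]:
            merged.append(y1[i]); i += 1
        else:
            merged.append(y2[j]); j += 1
    merged.extend(y1[i:])
    merged.extend(y2[j:])
    return merged

print(divide_and_conquer_sort([5, 2, 9, 1, 7, 3]))   # [1, 2, 3, 5, 7, 9]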


--------------------------------------------------------------------------------

Fifth, complexity analysis of the divide-and-conquer method

Consider a divide-and-conquer algorithm that splits a problem of size n into k sub-problems of size n/m. Let the decomposition threshold be n0 = 1, and let Adhoc take one unit of time to solve a problem of size 1. Suppose further that splitting the original problem into k sub-problems and merging the k sub-solutions into a solution of the original problem takes f(n) units of time. If T(n) denotes the computation time required to solve a problem P of size |P| = n, then:

T(n) = k · T(n/m) + f(n)

The solution of this recurrence can be obtained by iteration.
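
Assuming n is a power of m and taking T(1) = 1, iterating the recurrence gives the standard closed form

T(n) = n^(log_m k) + sum over j = 0, 1, ..., (log_m n) - 1 of k^j · f(n/m^j).

For example, merge sort has k = 2, m = 2 and f(n) = O(n), and the formula then yields T(n) = O(n log n).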

The recurrence and its solution only give the value of T(n) when n is a power of m, but if T(n) is assumed to be sufficiently smooth, the values of T(n) at powers of m can be used to estimate its growth for all n. It is generally assumed that T(n) is monotonically increasing, so that when m^i ≤ n < m^(i+1), we have T(m^i) ≤ T(n) ≤ T(m^(i+1)).

--------------------------------------------------------------------------------

Sixth, some classical problems that can be solved with the divide-and-conquer method


(1) Binary search (see the sketch following this list)
(2) Large-integer multiplication
(3) Strassen matrix multiplication
(4) Chessboard coverage
(5) Merge sort
(6) Quicksort
(7) Linear-time selection
(8) Closest pair of points
(9) Round-robin tournament schedule
(10) Tower of Hanoi
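
As one example from this list, a minimal iterative sketch of binary search in Python might look like the following (the function name and return convention are illustrative, not taken from the article):

def binary_search(sorted_list, target):
    # Each step halves the search range: one sub-problem of size n/2
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid              # index of the target
        elif sorted_list[mid] < target:
            lo = mid + 1            # discard the left half
        else:
            hi = mid - 1            # discard the right half
    return -1                       # target not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3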

--------------------------------------------------------------------------------

The thinking process of designing a program according to the divide-and-conquer method


It is in fact similar to mathematical induction: find the solution equation (recurrence) that solves the problem, and then design the recursive program according to that equation.
1. First find the solution for the smallest problem size;
2. then work out how the solution changes as the problem size grows;
3. once the recurrence (over the various sizes or factors) has been found, the recursive program can be designed.

The second of the five most common algorithms: Dynamic Programming algorithm
First, the basic concept

Dynamic programming is a multi-stage decision process: each decision depends on the current state and in turn causes a state transition. A decision sequence is thus generated as the state changes, and this way of solving a problem through multi-stage optimal decisions is called dynamic programming.

Second, basic ideas and strategies

The basic idea is similar to that of divide and conquer: the problem to be solved is decomposed into several sub-problems (stages), and the sub-problems are solved in order, with the solution of each earlier sub-problem providing useful information for the solution of the later ones. When solving any sub-problem, the various possible local solutions are enumerated, the local solutions that may lead to an optimum are kept by the decision step, and the other local solutions are discarded. The sub-problems are solved in turn, and the solution of the last sub-problem is the solution of the original problem.

Because most of the sub-problems solved by dynamic programming overlap, each sub-problem is solved only once in order to reduce repeated computation, and the states of the different stages are usually saved in a two-dimensional array.

The biggest difference from divide and conquer is that, in problems suited to dynamic programming, the sub-problems obtained after decomposition are often not independent of each other (that is, the solution of the next stage is built on top of the solution of the previous stage).


--------------------------------------------------------------------------------


Third, situations in which it applies

A problem that can be solved with dynamic programming has three properties:

(1) Optimality principle: if the sub-problem solutions contained in an optimal solution of the problem are themselves optimal, the problem is said to have optimal substructure, i.e. it satisfies the optimality principle.

(2) No aftereffect: once the state of a stage is determined, it is not affected by decisions made after that state. In other words, the subsequent process depends only on the current state, not on the earlier states.

(3) Overlapping sub-problems: the sub-problems are not independent, and a sub-problem may be used several times in the decisions of later stages. (This property is not a necessary condition for dynamic programming, but without it a dynamic programming algorithm has no advantage over other algorithms.)


--------------------------------------------------------------------------------

Fourth, basic steps of the solution

The problems handled by dynamic programming are multi-stage decision problems: one usually starts from an initial state and reaches an end state through a sequence of decisions made at the intermediate stages. These decisions form a decision sequence and determine an activity route (usually the optimal one) for completing the whole process, as shown in Figure 1 below. The design of a dynamic program follows a certain pattern and usually goes through the following steps.

initial state → decision 1 → decision 2 → ... → decision n → end state

Figure 1. Schematic diagram of the dynamic programming decision process

(1) Dividing into stages: divide the problem into several stages according to its temporal or spatial characteristics. Note that the stages obtained from the division must be ordered or orderable, otherwise the problem cannot be solved this way.

(2) Determining the states and state variables: express the various objective situations the problem can be in at the various stages as distinct states. Of course, the choice of states must satisfy the no-aftereffect property.

(3) Determining the decisions and writing out the state transition equation: decisions and state transitions are naturally connected, since a state transition derives the state of the current stage from the state and decision of the previous stage; so once the decisions are determined, the state transition equation can be written out. In practice one often works the other way round, determining the decisions and the state transition equation from the relationship between the states of two adjacent stages (a concrete example follows after this list).

(4) Finding the boundary conditions: the state transition equation is a recurrence, so a terminating condition or boundary condition for the recurrence is needed.

Generally speaking, as long as the stages, the states and the state-transition decisions of a problem are determined, the state transition equation (including the boundary conditions) can be written out.
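
As a concrete illustration (using the 0/1 knapsack problem, which is not discussed in this article, purely as an example): with stage i being the decision on item i and state j the remaining capacity, a typical state transition equation is dp[i][j] = max(dp[i-1][j], dp[i-1][j - w[i]] + v[i]) whenever j ≥ w[i], and dp[i][j] = dp[i-1][j] otherwise, with the boundary condition dp[0][j] = 0.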

In practical applications, the design can follow these simplified steps:

(1) Analyze the properties of an optimal solution and characterize its structure.

(2) Define the optimal value recursively.

(3) Compute the optimal value bottom-up, or top-down with memoization (the "memo method"); see the sketch after this list.

(4) Construct an optimal solution of the problem from the information recorded while computing the optimal value.
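
As a small illustration of step (3), a top-down "memo method" sketch in Python, using the Fibonacci numbers purely as an example of overlapping sub-problems:

from functools import lru_cache

@lru_cache(maxsize=None)             # the memo: each sub-problem is solved only once
def fib(n):
    if n <= 1:                       # boundary condition
        return n
    return fib(n - 1) + fib(n - 2)   # recurrence between adjacent stages

print(fib(50))   # 12586269025, computed in linear rather than exponential time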


--------------------------------------------------------------------------------

Fifth, implementation of the algorithm

The main difficulty of dynamic programming lies in the theoretical design, that is, in settling the four steps above; once the design is complete, the implementation part is very simple.

When using dynamic programming to solve a problem, the most important thing is to determine the three elements of dynamic programming:

(1) the stages of the problem; (2) the states at each stage;

(3) the recurrence relation between one stage and the next.

The recurrence relation must transform smaller problems into larger ones. From this point of view, dynamic programming can often be implemented with a recursive program, but because the iterative, bottom-up computation can make full use of the previously saved sub-problem solutions to reduce repetition, it has an advantage for large-scale problems that plain recursion cannot match; this is also the core of the dynamic programming algorithm.

Once the three elements of dynamic programming are determined, the whole solution process can be described by an optimal decision table. The optimal decision table is a two-dimensional table in which the rows represent the decision stages and the columns represent the states of the problem; each cell generally holds the optimal value for the problem at a certain stage and in a certain state (for example, the maximum or minimum value). The table is filled in according to the recurrence relation, starting from row 1 and column 1 and proceeding in row-major or column-major order; finally, the optimal solution of the problem is obtained from the completed table by simple comparisons or calculations.
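
To make the optimal decision table concrete, here is a minimal bottom-up 0/1 knapsack sketch in Python (the weights, values and capacity are made-up illustration data): the rows of dp correspond to decision stages (items considered so far) and the columns to states (remaining capacity).

def knapsack(weights, values, capacity):
    n = len(weights)
    # dp[i][j]: best total value using the first i items with capacity j
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):              # stage: decide on item i
        for j in range(capacity + 1):      # state: remaining capacity j
            dp[i][j] = dp[i - 1][j]        # decision: skip item i
            if j >= weights[i - 1]:        # decision: take item i, if it fits
                dp[i][j] = max(dp[i][j],
                               dp[i - 1][j - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack([2, 3, 4], [3, 4, 5], 5))   # 7 (take the items of weight 2 and 3)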


The third of the five most common algorithms: Backtracking method

1. Concept
The backtracking algorithm is essentially an enumeration-like search process: it searches forward, trying to find a solution of the problem, and when it discovers that the conditions for a solution can no longer be satisfied, it "backtracks" and returns to try another path.

The backtracking method is a selective search method: it searches forward according to selection criteria in order to reach the goal. When, at some step, it finds that the earlier choice is not good or cannot reach the goal, it steps back one step and chooses again. This technique of going back and retrying after a failed attempt is called backtracking, and a state point that satisfies the backtracking condition is called a "backtracking point".

Many complex, large-scale problems can be solved with backtracking, which has earned it the reputation of being a "general method for solving problems".

2. Basic ideas
In the solution space tree that contains all solutions of the problem, the tree is explored in depth starting from the root node, following the depth-first search strategy. When a node is explored, it is first checked whether the node can contain a solution of the problem; if it can, exploration continues from this node, and if it cannot, the search backtracks, layer by layer, to the node's ancestors. (In fact, the backtracking method is a depth-first search of an implicit graph.)

If backtracking is used to find all solutions of the problem, the search must backtrack all the way to the root, and it ends only when all feasible subtrees of the root node have been searched.

If backtracking is used to find any one solution, the search can end as soon as one solution of the problem has been found.
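
As a small illustration of this depth-first exploration of the solution space tree (collecting all solutions), here is a minimal N-queens backtracking sketch in Python; the problem choice and identifiers are for illustration only:

def solve_n_queens(n):
    solutions, cols = [], []          # cols[r] = column of the queen placed in row r

    def safe(row, col):
        # pruning test: does (row, col) conflict with any queen already placed?
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row):
        if row == n:                  # a leaf: every row holds a queen
            solutions.append(cols[:])
            return
        for col in range(n):          # try every child of the current node
            if safe(row, col):
                cols.append(col)      # search deeper from this node
                place(row + 1)
                cols.pop()            # backtrack and try the next column

    place(0)
    return solutions

print(len(solve_n_queens(8)))   # 92 solutions to the 8-queens problem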

3. General steps of solving a problem with backtracking:
(1) For the given problem, determine the solution space of the problem:

First, the solution space of the problem must be clearly defined, and it must contain at least one (optimal) solution of the problem.

(2) Determine the rules for expanding the nodes of the search.
