LeetCode notes: the five main algorithm design techniques

Source: Internet
Author: User
Divide and conquer algorithm

I. Basic concepts

In computer science, the divide-and-conquer method is a very important algorithm design technique. Literally it means "divide and rule": decompose a complex problem into two or more identical or similar subproblems, decompose those subproblems into still smaller ones, and so on, until the remaining subproblems can be solved simply and directly; the solution of the original problem is then the combination of the subproblem solutions. This technique is the basis of many efficient algorithms, such as sorting algorithms (quicksort, merge sort) and the fast Fourier transform (FFT).

The computation time needed for any problem a computer can solve is related to the problem's size: the smaller the problem, the easier it is to solve directly and the less time it takes. For the sorting of n elements, for example, no computation is needed when n = 1; when n = 2, a single comparison suffices to put the elements in order; when n = 3, at most three comparisons are needed; and so on. When n is large, however, the problem is no longer so easy to handle, and solving a large problem directly can be quite difficult.

II. Basic ideas and strategy

The design idea of the divide-and-conquer method is to split a big problem that is hard to solve directly into smaller instances of the same problem, so that they can be conquered separately ("divide and rule").

The divide-and-conquer strategy is: for a problem of size n, if it can be solved easily (for example, because the size is small), solve it directly; otherwise decompose it into k smaller subproblems that are independent of one another and of the same form as the original problem, solve these subproblems recursively, and then merge their solutions to obtain the solution of the original problem. This algorithm design strategy is called the divide-and-conquer method.

If the original problem can be divided into k subproblems, 1 < k ≤ n, and these subproblems can all be solved and their solutions can be used to construct a solution of the original problem, then divide and conquer is feasible. The subproblems produced by the method are often smaller models of the original problem, which makes recursive techniques convenient: applying the method repeatedly keeps the subproblems of the same type as the original while shrinking their size, until at last they are small enough to be solved directly. This naturally leads to a recursive process. Divide and conquer and recursion are like twins, often applied together in algorithm design, and together they produce many efficient algorithms.

III. Applicability of the divide-and-conquer method

The problems that can be solved by the divide-and-conquer method generally have the following characteristics:

1. The problem can be reduced to a size small enough to be solved easily;

2. The problem can be decomposed into several smaller instances of the same problem, i.e., it has the optimal substructure property;

3. The solutions of the subproblems can be merged into a solution of the original problem;

4. The subproblems are independent of one another, i.e., they contain no common subproblems.

The first characteristic is satisfied by most problems, since the computational complexity of a problem generally grows with its size;

The second characteristic is the precondition for applying divide and conquer; it too is satisfied by most problems, and it reflects the application of recursive thinking;

The third characteristic is the key: whether divide and conquer can be used depends entirely on whether the problem has it. If a problem has the first and second characteristics but not the third, consider using a greedy algorithm or dynamic programming instead.

The fourth characteristic concerns the efficiency of divide and conquer: if the subproblems are not independent, the method does much unnecessary work, solving the common subproblems repeatedly. Divide and conquer can still be used in that case, but dynamic programming is generally better.

IV. Basic steps of the divide-and-conquer method

At each level of the recursion, the divide-and-conquer method has three steps:

Step 1, divide: decompose the original problem into several smaller, independent subproblems of the same form as the original;

Step 2, conquer: if a subproblem is small and easily solved, solve it directly; otherwise solve it recursively;

Step 3, merge: combine the solutions of the subproblems into the solution of the original problem.

Its general algorithm design pattern is as follows:

Divide-and-Conquer(P)

1. if |P| ≤ n0
2.     then return ADHOC(P)
3. decompose P into smaller subproblems P1, P2, ..., Pk
4. for i ← 1 to k
5.     do yi ← Divide-and-Conquer(Pi)    // solve Pi recursively
6. T ← MERGE(y1, y2, ..., yk)            // merge the subproblem solutions
7. return T

Here |P| denotes the size of problem P, and n0 is a threshold meaning that once the size of P does not exceed n0, the problem is easy enough to solve directly and no further decomposition is necessary. ADHOC(P) is the basic subalgorithm of the method, used to solve small instances of P directly; thus when the size of P is at most n0, the instance is solved by ADHOC(P). The algorithm MERGE(y1, y2, ..., yk) is the merging subalgorithm, which combines the solutions y1, y2, ..., yk of the subproblems P1, P2, ..., Pk into a solution of P.
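The pattern above can be instantiated directly. As a sketch (not from the original text), here is merge sort in Python, where the base case plays the role of ADHOC and the final loop plays the role of MERGE:

```python
def merge_sort(p):
    """Divide-and-Conquer(P) instantiated for sorting a list."""
    # |P| <= n0: an instance of size <= 1 is solved directly (ADHOC).
    if len(p) <= 1:
        return p
    # Decompose P into smaller subproblems P1 and P2.
    mid = len(p) // 2
    y1 = merge_sort(p[:mid])   # solve P1 recursively
    y2 = merge_sort(p[mid:])   # solve P2 recursively
    # MERGE(y1, y2): combine the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(y1) and j < len(y2):
        if y1[i] <= y2[j]:
            merged.append(y1[i]); i += 1
        else:
            merged.append(y2[j]); j += 1
    return merged + y1[i:] + y2[j:]
```

Here k = 2 and each subproblem has size n/2, so the analysis in the next section applies with k = m = 2.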

V. Complexity analysis of the divide-and-conquer method

Suppose a divide-and-conquer algorithm splits a problem of size n into k subproblems of size n/m. Set the decomposition threshold n0 = 1, and assume ADHOC takes one unit of time to solve an instance of size 1, and that decomposing the problem into k subproblems and merging the k subproblem solutions into the solution of the original problem together take f(n) units of time. If T(n) denotes the computation time required to solve an instance of size |P| = n, then:

T(n) = k T(n/m) + f(n)

The solution of this recurrence can be obtained by iterative expansion:
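Expanding the recurrence repeatedly (assuming n is a power of m and T(1) = 1), a standard derivation gives:

```latex
\begin{aligned}
T(n) &= k\,T(n/m) + f(n) \\
     &= k\bigl(k\,T(n/m^2) + f(n/m)\bigr) + f(n) \\
     &= k^2\,T(n/m^2) + k\,f(n/m) + f(n) \\
     &\;\;\vdots \\
     &= k^{\log_m n}\,T(1) + \sum_{j=0}^{\log_m n - 1} k^j f(n/m^j)
      = n^{\log_m k} + \sum_{j=0}^{\log_m n - 1} k^j f(n/m^j).
\end{aligned}
```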

The recurrence and its solution only give the values of T(n) when n is a power of m, but if T(n) is assumed to be sufficiently smooth, its values at the powers of m can estimate its growth rate. It is generally assumed that T(n) is monotonically nondecreasing, so that when m^i ≤ n < m^(i+1), we have T(m^i) ≤ T(n) < T(m^(i+1)).

VI. Some classical problems solvable by divide and conquer

(1) Binary search

(2) Large integer multiplication

(3) Strassen matrix multiplication

(4) Chessboard coverage

(5) Merge sort

(6) Quicksort

(7) Linear-time selection

(8) Closest pair of points

(9) Round-robin tournament scheduling

(10) Tower of Hanoi
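As a sketch of problem (1), binary search reduces the instance to a single half-sized subproblem (k = 1, m = 2), with no merge step needed:

```python
def binary_search(a, target, lo=0, hi=None):
    """Search a sorted list; return an index of target, or -1 if absent."""
    if hi is None:
        hi = len(a) - 1
    if lo > hi:                  # empty subproblem: solved directly
        return -1
    mid = (lo + hi) // 2
    if a[mid] == target:
        return mid
    if target < a[mid]:          # recurse on the left half only
        return binary_search(a, target, lo, mid - 1)
    return binary_search(a, target, mid + 1, hi)   # or the right half
```

With f(n) = O(1), the recurrence T(n) = T(n/2) + O(1) gives the familiar O(log n) bound.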

VII. The thought process of designing a program by the divide-and-conquer method

It is actually similar to mathematical induction: find the equation that expresses the solution of the problem, then design the recursive program according to that equation.

1. First find the way to solve the problem at its minimum size;

2. Then consider how the solution changes as the problem size grows;

3. Having found the recurrence of the solution (over the size or some other factor), design the recursive program.
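These three steps can be illustrated with the Tower of Hanoi, problem (10) above: the minimum-size problem (n = 0) needs no moves, and for n disks the recurrence solves two instances of size n − 1 around one direct move. A minimal Python sketch (peg names are illustrative):

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Return the list of (from, to) moves transferring n disks src -> dst."""
    if n == 0:               # step 1: minimum problem size, nothing to move
        return []
    # steps 2-3: recurrence — park n-1 disks on aux, move the largest,
    # then bring the n-1 disks from aux onto dst.
    return (hanoi(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi(n - 1, aux, src, dst))
```

The recurrence for the move count is T(n) = 2T(n−1) + 1, i.e., 2^n − 1 moves.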
Dynamic Programming

I. Basic concepts

In a dynamic programming process, each decision depends on the current state and in turn causes a state transition. A decision sequence is thus produced as the state changes, so this multistage, optimized decision process for solving a problem is called dynamic programming.

II. Basic ideas and strategy

The basic idea is similar to the divide-and-conquer method: decompose the problem to be solved into several subproblems (stages), solve the stages in order, with the solution of each earlier subproblem providing useful information for the later ones. When solving any subproblem, we list the various possible local solutions, keep by decision those local solutions that may lead to an optimum, and discard the rest. Each subproblem is solved in order, and the last subproblem solved yields the solution of the original problem.

Because most problems handled by dynamic programming have overlapping subproblems, to reduce repeated computation we solve each subproblem only once, saving the different states of the different stages, typically in a two-dimensional array.

The biggest difference from the divide-and-conquer method: in problems suited to dynamic programming, the subproblems obtained after decomposition are often not independent of one another (i.e., the solution of the next stage is built on the solution of the previous stage).

III. Applicability

The problems that can be solved by dynamic programming generally have three characteristics:

(1) Principle of optimality: if the solutions of the subproblems contained in an optimal solution of the problem are themselves optimal, the problem is said to have optimal substructure, i.e., to satisfy the principle of optimality.

(2) No aftereffect: once the state of a certain stage is determined, it is not affected by decisions made after that state. In other words, the process after a state depends only on the current state, not on how it was reached.

(3) Overlapping subproblems: the subproblems are not independent; a subproblem may be used many times in the decisions of later stages. (This property is not a necessary condition for dynamic programming, but without it the dynamic programming algorithm has no advantage over other algorithms.)
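Characteristic (3) is easy to see with Fibonacci numbers: naive recursion solves the same subproblems over and over, while saving each state once makes the computation linear. A minimal sketch using memoization (this example is an illustration, not from the original text):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """fib(n-1) and fib(n-2) overlap heavily across the recursion tree;
    the cache solves each state n exactly once and reuses it thereafter."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache the call tree has exponentially many nodes; with it, each of the n states is computed once.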

IV. Basic steps of the solution

The problems dynamic programming deals with are multistage decision problems: starting from an initial state, the end state is reached through the decisions chosen at the intermediate stages. These decisions form a decision sequence and at the same time determine an activity route (usually the optimal one) through the whole process, as shown in the figure. The design of a dynamic program follows a certain pattern, usually going through the following steps.

Initial state → decision 1 → decision 2 → ... → decision n → end state

Fig. 1. Schematic diagram of the dynamic programming decision process

(1) Divide into stages: divide the problem into several stages according to its temporal or spatial characteristics. Note that after the division the stages must be ordered or orderable, otherwise the problem cannot be solved this way.

(2) Determine the states and state variables: express the various objective situations the problem reaches at each stage as different states. Of course, the chosen states must satisfy the no-aftereffect property.

(3) Determine the decisions and write the state transition equation: decisions and state transitions are naturally connected, since a state transition derives the state of this stage from the state and decision of the previous stage. So once the decisions are determined, the state transition equation can be written. In practice, however, one often works in the reverse direction, determining the decisions and the state transition equation from the relationship between the states of two adjacent stages.

(4) Find the boundary conditions: the state transition equation is a recurrence, which requires a termination condition or boundary conditions.

In general, once the stages, states, and state transition decisions of the problem are determined, the state transition equation (including the boundary conditions) can be written.

In practice, the design can follow these simplified steps:

(1) Analyze the properties of an optimal solution and characterize its structure.

(2) Define the optimal value recursively.

(3) Compute the optimal value bottom-up, or top-down with memoization (the memo method).

(4) Construct an optimal solution of the problem from the information recorded while computing the optimal value.

V. Implementation of the algorithm

The main difficulty of dynamic programming lies in the theoretical design, i.e., determining the four steps above; once the design is complete, the implementation part is very simple.

When solving a problem with dynamic programming, the most important thing is to determine the three elements of the dynamic program:

(1) the stages of the problem;

(2) the states of each stage;

(3) the recurrence relation between one stage and the next.

The recurrence relation must be a transformation from smaller problems to larger ones. From this point of view, dynamic programming can often be implemented with a recursive program, but because bottom-up iteration can fully reuse the saved solutions of earlier subproblems to reduce repeated computation, for large-scale problems iteration has an advantage that plain recursion cannot match; this is the core of the dynamic programming algorithm.

Once the three elements of the dynamic program are determined, the whole solving process can be described by an optimal decision table. The optimal decision table is a two-dimensional table in which the rows represent the decision stages and the columns represent the problem states; the data to be filled in generally correspond to the optimal value of some state at some stage of the problem (e.g., the shortest path, the longest common subsequence, the maximum value, etc.). The table is filled in according to the recurrence relation, starting from row 1, column 1, in row- or column-major order; finally, the optimal solution of the problem is obtained from the data of the whole table by a simple selection or computation.

f(n, m) = max{ f(n-1, m), f(n-1, m - w[n]) + p[n] }
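This recurrence is the 0/1 knapsack: row n is the stage (items considered so far), column m is the state (remaining capacity), and each entry is filled from the previous row. A minimal bottom-up sketch (function name and data layout are illustrative):

```python
def knapsack(w, p, capacity):
    """f[n][m] = max(f[n-1][m], f[n-1][m - w[n]] + p[n]):
    the best value using the first n items within capacity m."""
    n = len(w)
    f = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):            # stage: items 1..i considered
        for m in range(capacity + 1):    # state: capacity m available
            f[i][m] = f[i - 1][m]                    # decision: skip item i
            if w[i - 1] <= m:                        # decision: take item i
                f[i][m] = max(f[i][m], f[i - 1][m - w[i - 1]] + p[i - 1])
    return f[n][capacity]
```

Row 0 (no items, value 0) supplies the boundary condition, and the final answer is read off at f[n][capacity].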

VI. basic framework of dynamic programming algorithm
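The framework amounts to filling the optimal-value table stage by stage. The following is only an illustrative outline under assumed interfaces (`stages`, `states`, `transitions`, and `initial` are hypothetical names, not from the original):

```python
def dp_solve(stages, states, transitions, initial):
    """Generic bottom-up dynamic programming outline.

    stages:      iterable of stage indices, processed in order
    states:      iterable of the possible states at each stage
    transitions: transitions(k, s) yields (prev_state, gain) pairs
                 by which state s at stage k is reached
    initial:     dict mapping starting states to their base values
    """
    best = dict(initial)                  # boundary conditions
    for k in stages:                      # process the stages in order
        new_best = {}
        for s in states:
            # state transition equation: optimal value of state s at
            # stage k from the optimal values of the previous stage
            candidates = [best[p] + g
                          for p, g in transitions(k, s) if p in best]
            if candidates:
                new_best[s] = max(candidates)
        best = new_best
    return best
```

Any concrete problem plugs its own stages, states, and transition equation into this skeleton.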

Greedy Algorithm

I. Basic concept

The so-called greedy algorithm always makes the choice that is best at the moment when solving a problem. In other words, instead of considering global optimality, what it makes is only a locally optimal choice in some sense.

The greedy algorithm has no fixed algorithmic framework; the key to the design is the choice of the greedy strategy. Note that the greedy algorithm does not obtain the globally optimal solution for all problems. The chosen greedy strategy must have the no-aftereffect property, i.e., the process after a state must not affect earlier states and may depend only on the current state.

Therefore, one must carefully analyze whether the greedy strategy being used satisfies the no-aftereffect property.

II. Basic idea of the greedy algorithm

1. Build a mathematical model describing the problem.

2. Divide the problem to be solved into several subproblems.

3. Solve each subproblem, obtaining a locally optimal solution of the subproblem.

4. Synthesize the locally optimal solutions of the subproblems into a solution of the original problem.

III. Problems suited to the greedy algorithm

The precondition for a greedy strategy is that locally optimal choices lead to a globally optimal solution.

In practice, greedy algorithms are rarely applicable directly. To judge whether a problem can be solved with a greedy algorithm, one can generally try it on a few instances of actual data for the problem and then make the judgment.

IV. Implementation framework of the greedy algorithm

Start from an initial solution of the problem;

while (a step toward the given overall goal can still be taken)

{

    Using a feasible decision, obtain one element of the feasible solution;

}

Combine all the solution elements into a feasible solution of the problem;
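As one instance of this loop (a coin-change example, not taken from the text), each iteration makes the feasible decision of taking the largest coin that still fits; for this canonical denomination set the greedy choice happens to be optimal:

```python
def make_change(amount, coins=(25, 10, 5, 1)):
    """Greedy coin change: repeatedly take the largest usable coin."""
    solution = []                    # start from an initial (empty) solution
    while amount > 0:                # a step toward the goal can be taken
        coin = next(c for c in coins if c <= amount)  # feasible decision
        solution.append(coin)        # one element of the feasible solution
        amount -= coin
    return solution                  # combination of all solution elements
```

For arbitrary denomination sets this greedy choice can fail (e.g., coins 1, 3, 4 with amount 6), which is exactly why the strategy must be analyzed before use.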

V. Choosing the greedy strategy

Because the greedy algorithm reaches a global solution only through locally optimal choices, we must take care to judge whether the problem is suited to a greedy strategy, and whether the solution found is indeed an optimal solution of the problem.

VI. Example analysis

Below is a problem for which greedy strategies are tempting: the greedy solutions are quite good, but not the best.

[Knapsack problem] There is a knapsack of capacity m = 150 and 7 items; the items cannot be split into smaller pieces.

The total value of the items packed into the knapsack should be as large as possible, but their total weight must not exceed the capacity.

Item    A   B   C   D   E   F   G
Weight  35  30  60  50  40  10  25
Value   10  40  30  50  35  40  30

Analysis:

Objective function: ∑pi → max

Constraint: the total weight of the loaded items must not exceed the knapsack capacity: ∑wi ≤ m (m = 150)

(1) Greedy strategy: each time, pick the most valuable item and put it into the knapsack. Is the result optimal?

(2) Can an optimal solution be obtained by picking the item of smallest weight each time?

(3) Each time, pick the item with the greatest value per unit weight. Does this strategy solve the problem?
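Strategy (3) can be tried directly on the instance above; a minimal sketch (the function name is illustrative, the data comes from the table):

```python
def greedy_by_unit_value(items, capacity):
    """Strategy (3): take items in decreasing value/weight order,
    skipping any item that no longer fits. Items are (name, weight, value)."""
    chosen, total_w, total_v = [], 0, 0
    for name, w, v in sorted(items, key=lambda t: t[2] / t[1], reverse=True):
        if total_w + w <= capacity:
            chosen.append(name)
            total_w += w
            total_v += v
    return chosen, total_w, total_v

# Data from the table above: (name, weight, value).
ITEMS = [("A", 35, 10), ("B", 30, 40), ("C", 60, 30), ("D", 50, 50),
         ("E", 40, 35), ("F", 10, 40), ("G", 25, 30)]
```

On this instance the strategy takes F, B, G, and D first; E and C no longer fit, but A does, for total weight 150 and total value 170.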

It is worth noting that the greedy algorithm is not completely useless: once a correct greedy strategy is established, it is an efficient algorithm.

The greedy algorithm is still a very common algorithm, because it is simple and easy, and constructing the greedy strategy is not very difficult.

Unfortunately, a greedy strategy must be proved correct before it can really be applied to the problem's algorithm.

In general, the proof of a greedy algorithm revolves around this: an optimal solution of the whole problem must be derivable, under the greedy strategy, from optimal solutions of the subproblems.

For the example above, none of the 3 greedy strategies holds (none can be proved), as explained below:

(1) Greedy strategy: pick the most valuable item. Counterexample:

W = 30

Item:   A   B   C
Weight: 28  12  12
Value:  30  20  20

By this strategy, item A is chosen first, after which nothing else can be chosen; but choosing B and C is better.

(2) Greedy strategy: pick the item of smallest weight. Its counterexample is similar to that of the first strategy.

(3) Greedy strategy: pick the item with the greatest value per unit weight. Counterexample:

W = 30

Item:   A   B   C
Weight: 28  20  10
Value:  28  20  10

Under this strategy, the three items have equal value per unit weight, so the program cannot decide among them from the strategy alone; and if it picks A, the answer is wrong (B and C together fill the knapsack exactly, with value 30).
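This counterexample can be checked mechanically; a small sketch comparing the greedy outcome of taking A first (value 28) against a brute-force optimum over all subsets:

```python
from itertools import combinations

def best_subset_value(items, capacity):
    """Brute-force optimum over all subsets (fine for three items).
    Items are (weight, value) pairs."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for w, _ in combo) <= capacity:
                best = max(best, sum(v for _, v in combo))
    return best

# W = 30; every item has unit value 1, so the ratio strategy cannot
# distinguish them. Taking A first leaves room for nothing else: value 28,
# while the true optimum is B + C.
ITEMS_3 = [(28, 28), (20, 20), (10, 10)]
```

The brute force confirms that the optimum (30) beats the greedy pick of A (28).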

Backtracking Method

1. Concept
