Algorithm analysis and design paper


1: Recursive algorithm

A recursive algorithm is one in which a procedure or function calls itself, directly or indirectly, in its own definition or description. It typically reduces a large, complex problem to one or more smaller problems of the same form.

A recursive strategy needs only a small amount of code to describe the many repeated computations involved in solving a problem, greatly shortening the program. The advantage of recursion is that an infinite set of objects can be defined with a finite set of statements, and programs written recursively are often simple and easy to understand.

Recursion requires a boundary condition, a recursive advance segment, and a recursive return segment: the recursion advances while the boundary condition is not met, and returns once it is met. When using recursion there must be a definite exit condition; otherwise the recursion continues indefinitely.

Recursive algorithms can be inefficient: during recursive calls, the system maintains a stack that stores the return point and local variables of each level. Too many levels of recursion can cause a stack overflow.

Example: the Fibonacci sequence

The Fibonacci sequence was first studied by the Italian mathematician Leonardo Fibonacci as a recursively defined series: each term is the sum of the two preceding terms, so the first few terms are 1, 1, 2, 3, 5, and so on. Many biological phenomena exhibit Fibonacci numbers, and the ratio of two adjacent Fibonacci numbers tends to the golden ratio. Its recursive definition is:

F(0) = F(1) = 1, and F(n) = F(n-1) + F(n-2) for n >= 2.

A recursive algorithm for the Fibonacci sequence:

int fib(int n)
{
    if (n <= 1) return 1;            // boundary condition: F(0) = F(1) = 1
    return fib(n - 1) + fib(n - 2);  // recursive advance
}

This algorithm is very inefficient because of the enormous number of repeated recursive calls; the usual remedy is to compute the sequence iteratively, saving intermediate results:

int Fib[50];  // array that saves intermediate results

void Fibonacci(int n)
{
    Fib[0] = 1;
    Fib[1] = 1;
    for (int i = 2; i <= n; i++)
        Fib[i] = Fib[i - 1] + Fib[i - 2];
}

Using the array to save results that have already been computed eliminates the repeated recursive calls and improves the efficiency of the computation.
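As a quick check (a minimal driver added here for illustration, assuming the fib() and Fibonacci() definitions above are in the same file), the two versions can be run side by side; the iterative version finishes instantly at inputs where the plain recursive version is already noticeably slow:

#include <iostream>

int main()
{
    Fibonacci(40);
    std::cout << Fib[40] << std::endl;  // computed with 39 additions
    std::cout << fib(40) << std::endl;  // same value, but hundreds of millions of calls
}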

2: Divide and conquer algorithm

In computer science, divide and conquer is a very important algorithm design method. The name explains the idea literally: divide a complex problem into two or more identical or similar sub-problems, divide those into still smaller sub-problems, and so on, until the final sub-problems can be solved directly; the solution of the original problem is then the combination of the solutions of the sub-problems. This technique is the basis of many efficient algorithms, such as sorting algorithms (quicksort, merge sort) and the fast Fourier transform.

For any problem that can be solved on a computer, the required computation time is related to the problem's size: the smaller the problem, the easier it is to solve directly and the less time it takes. For example, when sorting n elements: for n = 1, no computation is needed; for n = 2, one comparison suffices; for n = 3, three comparisons are enough; and so on. But when n is large, the problem is no longer so easy to handle, and solving a large problem directly can be quite difficult.

The design idea of divide and conquer is to break a big problem that is hard to solve directly into smaller problems of the same form, so that each can be conquered separately: divide, then conquer.

The divide-and-conquer strategy is: for a problem of size n, if it can be solved easily (for example, n is small), solve it directly; otherwise divide it into k smaller sub-problems that are mutually independent and have the same form as the original problem, solve these sub-problems recursively, and then combine their solutions to obtain the solution of the original problem. This algorithm design strategy is called the divide-and-conquer method.

If the original problem can be divided into k sub-problems, 1 < k <= n, each of which is solvable, and the solution of the original problem can be obtained from the solutions of these sub-problems, then this division is feasible. The sub-problems produced by divide and conquer are often smaller instances of the original problem, which makes recursion convenient: the sub-problems keep the same type as the original problem while their size keeps shrinking, until they become easy enough to solve directly. This naturally gives rise to recursive procedures. Divide and conquer combined with recursion appears throughout algorithm design and has produced many efficient algorithms.
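Merge sort, mentioned above, is a textbook instance of this pattern. The sketch below (written here for illustration; it is not code from the original text) divides the array in half, recursively sorts each half, and combines the results by merging the two sorted halves:

#include <iostream>
#include <vector>

// Merge the sorted halves a[left..mid] and a[mid+1..right].
void Merge(std::vector<int>& a, int left, int mid, int right)
{
    std::vector<int> tmp;
    int i = left, j = mid + 1;
    while (i <= mid && j <= right)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= mid) tmp.push_back(a[i++]);
    while (j <= right) tmp.push_back(a[j++]);
    for (int k = 0; k < (int)tmp.size(); k++)
        a[left + k] = tmp[k];
}

// Divide the array in half, conquer each half recursively, combine by merging.
void MergeSort(std::vector<int>& a, int left, int right)
{
    if (left >= right) return;  // a sub-array of one element is already sorted
    int mid = (left + right) / 2;
    MergeSort(a, left, mid);
    MergeSort(a, mid + 1, right);
    Merge(a, left, mid, right);
}

int main()
{
    std::vector<int> a = {38, 27, 43, 3, 9, 82, 10};
    MergeSort(a, 0, (int)a.size() - 1);
    for (int x : a) std::cout << x << " ";  // prints 3 9 10 27 38 43 82
    std::cout << std::endl;
}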

Problems that can be solved by divide and conquer generally have the following characteristics:

(1) Once the problem is reduced to a small enough size, it can be solved easily.

(2) The problem can be decomposed into several smaller problems of the same form, that is, the problem has the optimal substructure property.

(3) The solutions of the sub-problems into which the problem is decomposed can be combined into a solution of the original problem.

(4) The sub-problems are independent of each other, that is, they do not share common sub-problems.

Example: binary search

Given n elements a[0:n-1], we want to find a particular element x among them. The n elements must be sorted first, for which the C++ standard library function sort() can be used. The obvious approach is sequential search: compare x with the elements of a[0:n-1] one by one until x is found or the whole array has been examined, in which case x is not present.

In the worst case, sequential search therefore requires O(n) comparisons. Binary search takes full advantage of the fact that the n elements are already sorted, applies the divide-and-conquer strategy, and completes the search in O(log n) time in the worst case.

A sorted example array (index in the first row, value in the second):

Index:  0   1   2   3   4   5   6   7   8   9   10
Value:  7  14  17  21  27  31  38  42  46  53   75

The basic idea of binary search is to divide the n elements into two halves of roughly equal size and compare a[n/2] with x:

If x = a[n/2], then x has been found and the algorithm terminates.

If x < a[n/2], we simply continue to search for x in the left half of array a.

If x > a[n/2], we simply continue to search for x in the right half of array a.

The binary search algorithm:

// Array a[] holds n elements sorted in ascending order; x is the element to find.
template<class Type>
int BinarySearch(Type a[], const Type& x, int n)
{
    int left = 0;       // left boundary
    int right = n - 1;  // right boundary
    while (left <= right)
    {
        int middle = (left + right) / 2;       // midpoint
        if (x == a[middle]) return middle;     // found x: return its position in the array
        if (x > a[middle]) left = middle + 1;  // continue in the right half
        else right = middle - 1;               // continue in the left half
    }
    return -1;  // x not found
}
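For example (a hypothetical driver added here for illustration, assuming the BinarySearch template above is in scope), searching the sorted array shown earlier:

#include <iostream>

int main()
{
    int a[] = {7, 14, 17, 21, 27, 31, 38, 42, 46, 53, 75};
    int n = sizeof(a) / sizeof(a[0]);
    std::cout << BinarySearch(a, 21, n) << std::endl;  // prints 3: a[3] == 21
    std::cout << BinarySearch(a, 22, n) << std::endl;  // prints -1: 22 is not present
}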

3: Dynamic programming

Dynamic programming (DP) is often used to solve problems that have some kind of optimality property. Such a problem may have many feasible solutions; each solution has a value, and we want a solution with the optimal value. Dynamic programming resembles divide and conquer: the basic idea is to decompose the problem into sub-problems, solve the sub-problems first, and then obtain the solution of the original problem from their solutions. Unlike divide and conquer, however, the sub-problems in dynamic programming are usually not independent of one another. If such a problem were attacked with divide and conquer, the number of sub-problems would be far too large, because some sub-problems would be recomputed many times over. If we instead save the answer to each solved sub-problem and look it up whenever it is needed again, a great deal of repeated computation is avoided. Typically a table records the answers of all solved sub-problems: whether or not a sub-problem is needed later, once it has been computed its result is filled into the table. This is the basic idea of dynamic programming. Concrete dynamic programming algorithms vary widely, but they all share this pattern.

Steps in designing a dynamic programming algorithm:

(1) Identify the properties of the optimal solution and characterize its structure.

(2) Define the optimal value recursively (write the dynamic programming equation).

(3) Compute the optimal value bottom-up.

(4) Construct an optimal solution from the information recorded while computing the optimal value.

The effectiveness of dynamic programming rests on two important properties of the problem itself: optimal substructure and overlapping sub-problems.

(1) Optimal substructure: a problem has the optimal substructure property when its optimal solution contains optimal solutions of its sub-problems.

(2) Overlapping sub-problems: when a recursive algorithm solves the problem top-down, the sub-problems it generates are not always new; some are computed repeatedly. Dynamic programming exploits this overlap by solving each sub-problem only once, saving the answer in a table, and reusing those answers whenever possible afterwards.

Example: the matrix chain multiplication problem

Matrix chain multiplication problem: given n matrices {A1, A2, ..., An}, where Ai and Ai+1 are compatible for multiplication, i = 1, 2, ..., n-1, determine an order (a parenthesization) for computing the product that minimizes the total number of scalar multiplications.
A chain of consecutive matrices (Ai ... Aj) is split into two parts, (Ai Ai+1 ... Ak)(Ak+1 Ak+2 ... Aj), where the split position k is chosen so that the total cost of computing the two parenthesized halves and multiplying them together is minimized.
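Writing m[i][j] for the minimum number of scalar multiplications needed to compute Ai...Aj, where Ai has dimensions p[i-1] x p[i], this gives the standard recurrence that the code below computes:

m[i][j] = 0, if i = j
m[i][j] = min{ m[i][k] + m[k+1][j] + p[i-1]*p[k]*p[j] : i <= k < j }, if i < j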

#include <iostream>
#include <cstdlib>
#include <climits>
#include <ctime>
using namespace std;

#define MAX_VALUE 100
#define N 201                          // the number of matrices multiplied is N-1
#define RANDOM() (rand() % MAX_VALUE)  // controls the size of the matrix dimensions

int c[N][N], s[N][N], p[N];

int MatrixChain(int n)  // bottom-up version: triple for loop
{
    for (int k = 1; k <= n; k++)
        c[k][k] = 0;                   // a chain of one matrix costs nothing
    for (int d = 1; d < n; d++)        // d: chain length minus one
        for (int i = 1; i <= n - d; i++)
        {
            int j = i + d;
            c[i][j] = INT_MAX;
            for (int m = i; m < j; m++)  // try every split position m
            {
                int t = c[i][m] + c[m + 1][j] + p[i - 1] * p[m] * p[j];
                if (t < c[i][j])
                {
                    c[i][j] = t;       // record the smaller cost
                    s[i][j] = m;       // record the split position
                }
            }
        }
    return c[1][n];
}

void Print(int s[][N], int i, int j)  // output the computation order of the matrix chain product
{
    if (i == j)
        cout << "A" << i;
    else
    {
        cout << "(";
        Print(s, i, s[i][j]);      // the left half of the product
        Print(s, s[i][j] + 1, j);  // the right half of the product
        cout << ")";
    }
}

int LookupChain(int i, int j)  // memoized (top-down) version
{
    if (c[i][j] > 0)           // already computed: look up the stored answer
        return c[i][j];
    if (i == j)
        return 0;
    int u = LookupChain(i, i) + LookupChain(i + 1, j) + p[i - 1] * p[i] * p[j];
    s[i][j] = i;
    for (int k = i + 1; k < j; k++)
    {
        int t = LookupChain(i, k) + LookupChain(k + 1, j) + p[i - 1] * p[k] * p[j];
        if (t < u)
        {
            u = t;
            s[i][j] = k;
        }
    }
    c[i][j] = u;
    return u;
}

int main()
{
    srand((int)time(NULL));
    for (int i = 0; i < N; i++)  // randomly generate p[]; each element lies in the range 1..MAX_VALUE
        p[i] = RANDOM() + 1;
    clock_t start, end;
    double elapsed;
    start = clock();
    cout << "Count: " << MatrixChain(N - 1) << endl;     // triple for loop version
    cout << "Count: " << LookupChain(1, N - 1) << endl;  // memoized version
    end = clock();
    elapsed = (double)(end - start) / CLOCKS_PER_SEC;
    cout << "Time: " << elapsed << endl;
    Print(s, 1, N - 1);  // output the computation order of the matrix chain product
    cout << endl;
    return 0;
}

The time complexity of both algorithms is O(n^3). As the amount of data grows, the memoized version takes longer; I think this is because it is recursive: as the data grows, the number of function calls grows, and executing those calls consumes more time.

4: Greedy algorithm
A greedy algorithm always makes the choice that looks best at the moment. In other words, instead of considering global optimality, it makes a choice that is optimal only in a local sense.

A greedy algorithm does not yield the globally optimal solution for every problem; the key is the choice of greedy strategy. The chosen strategy must have no aftereffect: states reached earlier in the process must not influence later states, which may depend only on the current state.

The basic idea of a greedy algorithm is to start from some initial solution of the problem and proceed one step at a time according to an optimization criterion, each step guaranteeing a locally optimal solution. Each step considers only one element, chosen to satisfy the condition of local optimization. If adding the next element to the partial solution would make it infeasible, that element is not added; the process continues until every element has been considered or nothing more can be added.

Problems solved by greedy algorithms usually have the following features:

As the algorithm progresses, two other sets accumulate: one contains the candidates that have been considered and chosen, the other the candidates that have been considered and discarded.

There is a function that checks whether a set of candidates constitutes a solution to the problem. This function does not consider whether that solution is optimal.

There is also a function that checks whether a set of candidates is feasible, that is, whether it is still possible to reach a solution by adding further candidates to the set. Like the previous function, it does not consider optimality.

The selection function indicates which of the remaining candidates is the most promising contribution to a solution. Finally, the objective function gives the value of a solution.

To solve the problem, we look for a set of candidates that optimizes the objective function; the greedy algorithm builds this set step by step. Initially, the set of chosen candidates is empty. At each subsequent step, the selection function picks, from the remaining candidates, the one most likely to lead toward a solution. If adding that candidate would make the set infeasible, it is discarded and never reconsidered; otherwise it is added to the set. Each time the set grows, we check whether it now constitutes a solution. If the greedy algorithm works correctly for the problem, the first solution found in this way is usually optimal.

Solving a problem with a greedy algorithm involves the following components:

(1) Candidate set A: the set of possible contributions to a solution; the final solution of the problem is drawn from the candidate set A.

(2) Solution set S: as greedy choices are made, the solution set S keeps growing until it constitutes a complete solution of the problem.

(3) Solution function solution: checks whether the solution set S constitutes a complete solution of the problem.

(4) Selection function select: the greedy strategy itself, and the key to the greedy method; it indicates which candidate is the most promising contribution to a solution. The selection function is usually related to the objective function.

(5) Feasibility function feasible: checks whether it is feasible to add a candidate to the solution set, that is, whether the constraints are still satisfied after the set grows.

// A is the input set of the problem, i.e. the candidate set
Greedy(A)
{
  S = {};                  // the initial solution set is empty
  while (not solution(S))  // the set S does not yet constitute a solution
  {
    x = select(A);         // greedy selection from the candidate set A
    A = A - {x};           // x will not be considered again
    if (feasible(S, x))    // check whether S remains feasible after adding x
      S = S + {x};
  }
  return S;
}

The general flow of a greedy algorithm.

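As a concrete illustration (a sketch written for this discussion, not code from the original text), the classic activity-selection problem fits this template: sort the activities by finish time, then greedily keep each activity whose start time is no earlier than the finish time of the last activity chosen:

#include <algorithm>
#include <iostream>
#include <vector>

struct Activity { int start, finish; };

// Greedy choice: always take the compatible activity that finishes earliest.
std::vector<Activity> SelectActivities(std::vector<Activity> a)
{
    std::sort(a.begin(), a.end(),
              [](const Activity& x, const Activity& y) { return x.finish < y.finish; });
    std::vector<Activity> chosen;     // the solution set S
    int lastFinish = -1;
    for (const Activity& act : a)     // scan the candidate set A in sorted order
        if (act.start >= lastFinish)  // feasibility check: no overlap with the last choice
        {
            chosen.push_back(act);    // add the candidate to the solution set
            lastFinish = act.finish;
        }
    return chosen;
}

int main()
{
    std::vector<Activity> acts = {{1, 4}, {3, 5}, {0, 6}, {5, 7}, {3, 9}, {5, 9}, {6, 10}};
    for (const Activity& act : SelectActivities(acts))
        std::cout << "[" << act.start << ", " << act.finish << ") ";
    std::cout << std::endl;  // prints [1, 4) [5, 7)
}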

5: Backtracking method

The backtracking method, also known as the trial-and-error method, is a search method for optimal solutions: it searches forward according to optimization criteria in order to reach the goal. Backtracking tries to solve a problem step by step; whenever it finds that the steps taken so far cannot lead to a valid answer, it cancels the last step, or even several previous steps, and tries other possible choices in order to find the answer.

Backtracking explores the solution space tree, which contains all solutions of the problem, starting from the root and following a depth-first search strategy. When exploring a node, it first decides whether the node can contain a solution of the problem: if it can, exploration continues from that node; if it cannot, the search backtracks level by level toward the node's ancestors. (In fact, the backtracking method is a depth-first search of an implicit graph.) To find all solutions of the problem, the search backtracks all the way to the root, finishing only when every feasible subtree of the root has been searched; to find any one solution, the search can stop as soon as one solution is found.

A problem P solved by backtracking is usually expressed as follows: given an n-tuple (x1, x2, ..., xn), a state space E = {(x1, x2, ..., xn) | xi ∈ Si, i = 1, 2, ..., n}, and a set D of constraints on the components of the n-tuple, find all n-tuples in E that satisfy every constraint in D. Here Si is the domain of the component xi and |Si| is finite, i = 1, 2, ..., n. Any n-tuple in E that satisfies all the constraints of D is called a solution of problem P.

The simplest way to solve problem P is enumeration: test every n-tuple in E one by one against all the constraints of D; any tuple that satisfies them is a solution of P. Obviously, however, the amount of computation is enormous.
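To make this concrete, here is a small backtracking sketch (illustrative, not part of the original text) for the n-queens problem: each level of the search places one queen, and the search retreats as soon as a partial placement violates the constraints:

#include <cstdlib>
#include <iostream>
#include <vector>

int solutionCount = 0;

// Can a queen go in column col of row `row`, given the queens in earlier rows?
bool Feasible(const std::vector<int>& pos, int row, int col)
{
    for (int r = 0; r < row; r++)
        if (pos[r] == col || row - r == std::abs(col - pos[r]))
            return false;  // same column or same diagonal: constraint violated
    return true;
}

void Solve(std::vector<int>& pos, int row, int n)
{
    if (row == n) { solutionCount++; return; }  // all queens placed: one solution
    for (int col = 0; col < n; col++)           // try every extension of the partial solution
        if (Feasible(pos, row, col))
        {
            pos[row] = col;        // advance one step
            Solve(pos, row + 1, n);
            // returning here undoes the choice: this is the backtracking step
        }
}

int main()
{
    int n = 8;
    std::vector<int> pos(n);
    Solve(pos, 0, n);
    std::cout << solutionCount << std::endl;  // 92 solutions for n = 8
}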

6: Branch and bound algorithm

The branch and bound method searches the solution space tree of a problem in breadth-first or least-cost (greatest-benefit) first order. In branch and bound, each live node has only one chance to become the expansion node. Once a live node becomes the expansion node, all of its children are generated at once. Children that lead to infeasible or non-optimal solutions are discarded, and the remaining children are added to the live-node list. Then a node is removed from the live-node list to become the new expansion node, and the expansion process repeats. This continues until the desired solution is found or the live-node list becomes empty.


Two common forms of branch and bound:

(1) Queue-based (FIFO) branch and bound

Selects the next expansion node from the live-node list in first-in, first-out order.

(2) Priority-queue-based branch and bound

Selects the node with the highest priority in the priority queue as the current expansion node.
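As an illustration of the priority-queue variant (a simplified sketch written for this discussion, not taken from the original text), a best-first branch-and-bound search for the 0/1 knapsack problem expands the live node with the greatest optimistic bound first and prunes nodes whose bound cannot beat the best solution found so far:

#include <iostream>
#include <queue>
#include <vector>

struct Item { int weight, value; };

struct Node
{
    int level;     // next item index to decide on
    int weight;    // weight used so far
    int value;     // value collected so far
    double bound;  // optimistic estimate of the best value reachable from here
};

struct ByBound
{
    bool operator()(const Node& a, const Node& b) const { return a.bound < b.bound; }
};

// Fractional-knapsack relaxation; assumes items are sorted by value/weight descending.
double Bound(const Node& n, const std::vector<Item>& items, int capacity)
{
    double b = n.value;
    int w = n.weight;
    for (size_t i = n.level; i < items.size(); i++)
    {
        if (w + items[i].weight <= capacity) { w += items[i].weight; b += items[i].value; }
        else { b += (double)(capacity - w) * items[i].value / items[i].weight; break; }
    }
    return b;
}

int Knapsack(const std::vector<Item>& items, int capacity)
{
    std::priority_queue<Node, std::vector<Node>, ByBound> live;  // the live-node list
    int best = 0;
    Node root{0, 0, 0, 0.0};
    root.bound = Bound(root, items, capacity);
    live.push(root);
    while (!live.empty())
    {
        Node n = live.top(); live.pop();  // expand the most promising live node
        if (n.bound <= best || n.level == (int)items.size())
            continue;                     // pruned: cannot improve the best solution found
        // Child 1: take item n.level (if it fits).
        Node take{n.level + 1, n.weight + items[n.level].weight,
                  n.value + items[n.level].value, 0.0};
        if (take.weight <= capacity)
        {
            if (take.value > best) best = take.value;
            take.bound = Bound(take, items, capacity);
            if (take.bound > best) live.push(take);
        }
        // Child 2: skip item n.level.
        Node skip{n.level + 1, n.weight, n.value, 0.0};
        skip.bound = Bound(skip, items, capacity);
        if (skip.bound > best) live.push(skip);
    }
    return best;
}

int main()
{
    // Items already sorted by value/weight ratio in descending order.
    std::vector<Item> items = {{2, 40}, {3, 50}, {5, 60}, {4, 40}};
    std::cout << Knapsack(items, 8) << std::endl;  // prints 110: take the 3kg and 5kg items
}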

Differences between the branch and bound method and the backtracking method:

(1) Goal: backtracking finds all solutions in the solution space tree that satisfy the constraints, while branch and bound finds one solution that satisfies the constraints, or the solution that is optimal in some sense among those satisfying the constraints.

(2) Search order: backtracking searches the solution space tree depth-first, while branch and bound searches it breadth-first or least-cost first.
