"Many useful algorithms are recursive in structure: to solve a given problem, they call themselves recursively... These algorithms typically follow a divide-and-conquer approach." This statement highlights the importance of the divide-and-conquer idea and also pins down the relationship between recursion and divide and conquer: recursion is a structure, a means of implementation, while divide and conquer is the idea it expresses.
Divide and conquer: break the original problem into several subproblems that are smaller instances of the same problem, solve these subproblems recursively, and then combine their solutions to build a solution to the original problem.
The divide-and-conquer paradigm takes three steps at each level of the recursion:
Divide: split the problem into several subproblems that have the same form as the original problem but are smaller in scale.
Conquer: solve the subproblems recursively; if a subproblem is small enough, solve it directly and stop recursing.
Combine: combine the solutions of the subproblems into a solution for the original problem.
Sometimes, besides subproblems that have the same form as the original problem, we also need to solve subproblems that are not quite the same as the original; we treat solving these as part of the combine step.
The above is an overview of the divide-and-conquer idea.
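Before moving on to sorting, the three steps can be seen in a smaller setting. Below is a minimal sketch, not from the original text, that finds the maximum of an array by divide and conquer; the helper name range_max is my own choice for illustration:

```cpp
#include <algorithm>

// Divide-and-conquer maximum of a[p..r], illustrating the three steps.
int range_max(const int* a, int p, int r) {
    if (p == r) return a[p];            // small enough: solve directly
    int q = (p + r) / 2;                // Divide: split at the midpoint
    int left = range_max(a, p, q);      // Conquer: solve the left half
    int right = range_max(a, q + 1, r); // Conquer: solve the right half
    return std::max(left, right);       // Combine: keep the larger answer
}
```

Here the combine step is a single comparison, so it costs constant time; in merge sort below, the combine step is the interesting part.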
Now let's think about sorting. Insertion sort runs in Theta(n^2). If we shrink the problem by splitting it into two subproblems of size n/2, each costs about (n/2)^2 = n^2/4, so the two halves together cost roughly half of the original. But reducing the problem scale this way only helps if the split problem can still be solved: can we recover a sorted whole from two sorted halves?
Suppose we have a sequence A[1..n]. If we first insertion-sort A[1..n/2] and then insertion-sort A[n/2+1..n], can we obtain the fully sorted array A[1..n]?
Of course we can. Let the sorted A[1..n/2] be L1 and the sorted A[n/2+1..n] be L2. Compare the two numbers at the heads of the two sequences (initially A[1] and A[n/2+1]), take out the smaller one, and repeat this process until both sequences are exhausted. So what is the complexity of this merging process? And will sorting this way be more efficient than plain insertion sort?
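This take-the-smaller-head process is exactly what the standard library's std::merge implements, so it can be sketched in a couple of lines before we write it out by hand:

```cpp
#include <algorithm>
#include <vector>

// Merge two already-sorted sequences into one sorted sequence.
// std::merge examines each element exactly once, so this is linear time.
std::vector<int> merge_sorted(const std::vector<int>& L1,
                              const std::vector<int>& L2) {
    std::vector<int> out(L1.size() + L2.size());
    std::merge(L1.begin(), L1.end(), L2.begin(), L2.end(), out.begin());
    return out;
}
```

The hand-written pseudocode below does the same work explicitly, which makes its cost easy to analyze.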
Pseudocode:
MERGE(A, p, q, r)
    n1 = q - p + 1
    n2 = r - q
    let L[1..n1+1] and R[1..n2+1] be new arrays
    for i = 1 to n1
        L[i] = A[p + i - 1]
    for j = 1 to n2
        R[j] = A[q + j]
    L[n1 + 1] = INF
    R[n2 + 1] = INF
    i = 1
    j = 1
    for k = p to r
        if L[i] <= R[j]
            A[k] = L[i]
            i = i + 1
        else
            A[k] = R[j]
            j = j + 1
The above is the pseudocode of the merge step: A[p..q] and A[q+1..r] are already sorted, and after merging, A[p..r] is sorted as well.
Here is a knowledge point: the sentinel. Each of the arrays L and R gets one extra element, set to infinity, as a sentinel. What is the sentinel for? It simplifies the code. Without the two infinity values, we would have to check on every step whether L or R has been exhausted, i.e. whether i <= n1 and j <= n2 still hold. With the sentinels in place, those checks are unnecessary: once one array runs out, its sentinel is never smaller than any remaining real element, so the other array keeps being chosen.
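For contrast, here is a sketch of the sentinel-free variant (my own illustration, 0-indexed): it must test the index bounds on every step and decide which side to take from accordingly, which is exactly the bookkeeping the sentinels eliminate.

```cpp
#include <cstddef>
#include <vector>

// Merge A[p..q] and A[q+1..r] (both already sorted) without sentinels.
// Instead of INF guards, we check the index bounds on every iteration.
void merge_no_sentinel(int* A, int p, int q, int r) {
    std::vector<int> L(A + p, A + q + 1);      // copy of the left half
    std::vector<int> R(A + q + 1, A + r + 1);  // copy of the right half
    std::size_t i = 0, j = 0;
    for (int k = p; k <= r; ++k) {
        if (j >= R.size() || (i < L.size() && L[i] <= R[j]))
            A[k] = L[i++];  // right half exhausted, or left head is smaller
        else
            A[k] = R[j++];  // left half exhausted, or right head is smaller
    }
}
```

Both versions do the same linear amount of work; the sentinel version just keeps the loop body shorter.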
Analysis:
1. Correctness of this procedure: the loop over k maintains the following loop invariant:
At the start of each iteration, the subarray A[p..k-1] contains the k - p smallest elements of L[1..n1+1] and R[1..n2+1], in sorted order.
This loop invariant can be proved in three stages: initialization (it holds when k = p), maintenance (each iteration preserves it), and termination (when k = r + 1 it gives the desired result).
2. Time complexity: Theta(n), since each of the r - p + 1 iterations does constant work.
Merge sort pseudocode:
MERGE-SORT(A, p, r)
    if p < r
        q = floor((p + r) / 2)  // floor rounds down
        MERGE-SORT(A, p, q)
        MERGE-SORT(A, q + 1, r)
        MERGE(A, p, q, r)
// when p = r there is only one element, which is already sorted,
// so no processing is required and we simply return
C++ code implementation:
#include <climits>

const int INF = INT_MAX;

void merge(int* A, int p, int q, int r) {
    int n1 = q - p + 1;        // number of elements in the left half
    int n2 = r - q;            // number of elements in the right half
    int* L = new int[n1 + 1];  // left array, holds A[p..q] plus a sentinel
    int* R = new int[n2 + 1];  // right array, holds A[q+1..r] plus a sentinel
    // fill L and R
    for (int i = 0; i < n1; ++i) {
        L[i] = A[p + i];
    }
    for (int i = 0; i < n2; ++i) {
        R[i] = A[q + 1 + i];
    }
    // set the sentinels
    L[n1] = INF;
    R[n2] = INF;
    // initialize the running indices i and j
    int i = 0, j = 0;
    // merge A[p..q] and A[q+1..r], producing a sorted A[p..r]
    for (int k = p; k <= r; ++k) {
        if (L[i] <= R[j]) {
            A[k] = L[i];
            i += 1;
        } else {
            A[k] = R[j];
            j += 1;
        }
    }
    delete[] L;
    delete[] R;
}

void merge_sort(int* A, int p, int r) {
    if (p < r) {
        int q = (p + r) / 2;
        merge_sort(A, p, q);
        merge_sort(A, q + 1, r);
        merge(A, p, q, r);
    }
}
Test case:
#include <iostream>

int main() {
    int A[6] = {5, 2, 4, 6, 1, 3};
    merge_sort(A, 0, 5);
    for (int i = 0; i < 6; ++i) {
        std::cout << A[i] << " ";
    }
    std::cout << std::endl;
}
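A single hand-written case is a weak test; a stronger check is to sort many random arrays and compare against std::sort. The sketch below uses a compact, self-contained merge sort built on the standard library's std::inplace_merge so it can stand alone; the same cross-check applies to the implementation above.

```cpp
#include <algorithm>
#include <vector>

// A compact merge sort built on std::inplace_merge, convenient for
// cross-checking: sort the same data with it and with std::sort,
// then compare the results.
void merge_sort_vec(std::vector<int>& a, int p, int r) {
    if (p >= r) return;           // 0 or 1 elements: already sorted
    int q = (p + r) / 2;          // Divide at the midpoint
    merge_sort_vec(a, p, q);      // Conquer the left half
    merge_sort_vec(a, q + 1, r);  // Conquer the right half
    // Combine: merge the two sorted halves in place
    std::inplace_merge(a.begin() + p, a.begin() + q + 1, a.begin() + r + 1);
}
```

In practice one would loop over, say, 100 random arrays with a fixed seed, sort each copy with merge_sort_vec and std::sort, and assert the two results are equal.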
The complexity analysis of a recursive algorithm relies on a recurrence equation. If T(n) denotes the running time on an input of size n, then for merge sort:
T(n) = Theta(1) if n = 1; T(n) = 2T(n/2) + Theta(n) if n > 1,
where the 2T(n/2) term covers the two recursive calls and the Theta(n) term covers the divide and merge steps.
Evaluate T(n) using the recursion-tree method: each level of the tree costs cn, and the tree has lg n + 1 levels, so the total cost is cn lg n + cn, i.e. the complexity is Theta(n lg n).
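The recursion-tree result can be sanity-checked numerically. As a sketch, taking unit cost c = 1 and T(1) = 1 (my choice of constants), the recurrence has the closed form T(n) = n lg n + n for powers of two, which the direct recursive evaluation below reproduces:

```cpp
#include <cstdint>

// Evaluate T(n) = 2*T(n/2) + n with T(1) = 1, for n a power of two.
// With these constants the closed form is T(n) = n*lg(n) + n.
std::int64_t T(std::int64_t n) {
    if (n == 1) return 1;       // base case: constant cost, taken as 1
    return 2 * T(n / 2) + n;    // two half-size subproblems plus linear merge
}
```

Dividing T(n) by n lg n for growing n approaches the constant 1, matching the Theta(n lg n) bound.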