Recurrence Analysis and Divide-and-Conquer Algorithms


Recurrence analysis generally relies on the master theorem. The secondary methods are the substitution method and the recursion-tree method ~

Master theorem:
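
For reference, the theorem in its standard form (as stated in Introduction to Algorithms):

\[
T(n) = a\,T(n/b) + f(n), \qquad a \ge 1,\ b > 1
\]
\[
T(n) =
\begin{cases}
\Theta\big(n^{\log_b a}\big) & \text{if } f(n) = O\big(n^{\log_b a-\varepsilon}\big) \text{ for some } \varepsilon > 0,\\
\Theta\big(n^{\log_b a}\lg n\big) & \text{if } f(n) = \Theta\big(n^{\log_b a}\big),\\
\Theta\big(f(n)\big) & \text{if } f(n) = \Omega\big(n^{\log_b a+\varepsilon}\big) \text{ for some } \varepsilon > 0 \text{ and } a f(n/b) \le c f(n) \text{ for some } c < 1.
\end{cases}
\]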

Recursion tree:

The master theorem can be proved with the recursion-tree method;

 

The applicability of the master theorem is limited: some recurrences fall into none of its cases. In those cases we need to use the recursion-tree method.

Case 1 of the master theorem requires f(n) to be polynomially smaller than n^(log_b a); the theorem states this as f(n) = O(n^(log_b a - ε)) for some ε > 0. This is subtly different from f(n) = Θ(n^(log_b a)) in case 2, so there is a gap: f(n) may be smaller than n^(log_b a) but not polynomially smaller;

Likewise, the gap between case 2 and case 3 is that f(n) may be larger than n^(log_b a) without being polynomially larger; in that situation the master theorem does not apply;

The other gap is when f(n) in case 3 fails the regularity condition that the theorem also requires, namely a f(n/b) <= c f(n) for some constant c < 1;

For example, if the closest-pair algorithm (see section 6) sorted the strip by y with a quicksort inside every merge step, the combine time would be n lg n and the recurrence would be T(n) = 2T(n/2) + n lg n. This falls into the gap between case 2 and case 3, so we use the recursion tree:

T(n) = n lg n + n(lg n - 1) + n(lg n - 2) + ... + n
     = n (lg n + (lg n - 1) + ... + 1)
     = n lg n (lg n + 1)/2
     = Θ(n lg^2 n);

However, the master theorem as given in the MIT lectures differs from the version in Introduction to Algorithms and does cover the situation above, as shown below:
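
The extended case usually taught in the MIT lectures (presumably what the missing figure showed) adds a logarithmic factor to case 2:

\[
\text{If } f(n) = \Theta\big(n^{\log_b a}\lg^{k} n\big) \text{ for some } k \ge 0, \text{ then } T(n) = \Theta\big(n^{\log_b a}\lg^{k+1} n\big).
\]

With a = b = 2 and f(n) = n lg n (so k = 1), this gives T(n) = Θ(n lg^2 n), matching the recursion-tree result above.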

The example above may be exactly this case;

So the example I picked here is one that the (CLRS) master theorem does not cover ~

Therefore, when the master theorem does not apply, plug the recurrence into a recursion tree! If your mathematical skills are strong enough, the bound can still be worked out; after all, the master theorem itself is proved with recursion trees. If they are not, it just means the bound is hard to derive...

 

However, here is a lazy way to analyze it, assuming f(n) has the form n^k:

T(n) = a T(n/b) + n^k

T(n/b) = a T(n/b^2) + (n/b)^k

...

So T(n) = a (a T(n/b^2) + (n/b)^k) + n^k = ... = n^k (1 + a/b^k + (a/b^k)^2 + ... + (a/b^k)^h) with h = log_b n, which is roughly (n^k - n^(log_b a)) / (1 - a/b^k). What remains is to compare a with b^k and decide whether the result is Θ(n^k) or Θ(n^(log_b a)); if a/b^k = 1, it is Θ(n^k log_b n).

A simple derivation, though not rigorous;
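
Stated compactly, the geometric sum gives three outcomes (my restatement of the argument above):

\[
T(n) \approx n^{k}\sum_{i=0}^{\log_b n}\left(\frac{a}{b^{k}}\right)^{i} =
\begin{cases}
\Theta\big(n^{k}\big) & \text{if } a < b^{k},\\
\Theta\big(n^{k}\log_b n\big) & \text{if } a = b^{k},\\
\Theta\big(n^{\log_b a}\big) & \text{if } a > b^{k}.
\end{cases}
\]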

 

Here is an example of the substitution method:
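
A typical substitution (guess-and-verify) argument, given here only as an illustration: for T(n) = 2T(n/2) + n, guess T(n) <= c n lg n and verify it by induction:

\[
T(n) \le 2\left(c\,\tfrac{n}{2}\lg\tfrac{n}{2}\right) + n = c\,n\lg n - c\,n + n \le c\,n\lg n \quad \text{for } c \ge 1,
\]

so T(n) = O(n lg n).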

Divide and conquer is a common design technique in algorithm design and can greatly reduce an algorithm's time complexity ~ The idea is simple: split a problem into two or more independent subproblems of the same form, solve the subproblems, and then combine their solutions into a solution of the larger problem. A divide-and-conquer algorithm therefore has three steps: divide, conquer (the subproblems have the same form and are independent, so they are solved recursively), and combine (once the subproblems are solved, their solutions must be merged back into a solution of the bigger problem). Different problems still need different divide-and-conquer schemes, so what matters is mastering the idea of shrinking the problem size; the more advanced techniques of dynamic programming and greedy algorithms also reduce time complexity by shrinking the problem scale. So I feel the best way to learn divide and conquer is through concrete instances; then, when a strange problem comes up, I can relate it to a familiar one. Below I analyze some divide-and-conquer examples from Introduction to Algorithms. (How does map-reduce work?)

The following are covered: 1. merge sort 2. binary search 3. Fibonacci (approximate evaluation) 4. large-integer multiplication 5. matrix multiplication 6. closest pair of points

1. Merge sort

Divide: split the sequence of numbers into two halves of n/2 each

Conquer: merge sort is applied to the two subproblems. The subproblems have the same form as the parent problem, so it can be called recursively.

Combine: merge the two sorted halves. Keep a pointer at the start of each array and advance them, taking the smaller element each time; in this way two sorted arrays can be merged into one large sorted array in linear time.

Implementation Code:

#include <cstring>  // memcpy

bool mergesort(unsigned int *array, unsigned int *arrayassit, int begin, int end);
bool merge(unsigned int *array, unsigned int *arrayassit, int begin, int mid, int end);

// The merge step needs extra space, so this wrapper allocates the auxiliary array
// before the actual sort runs and frees it afterwards.
bool premergesort(unsigned int *array, int begin, int end) {
    unsigned int *arrayassit = new unsigned int[end - begin + 1];
    mergesort(array, arrayassit, begin, end);
    delete[] arrayassit;
    return true;
}

// The main part of merge sort: recursively sort the first half and the second half, then merge.
bool mergesort(unsigned int *array, unsigned int *arrayassit, int begin, int end) {
    // recursion end condition
    if (end == begin) return true;
    int mid = (begin + end) / 2;
    mergesort(array, arrayassit, begin, mid);
    mergesort(array, arrayassit, mid + 1, end);
    // merge the two sorted halves
    merge(array, arrayassit, begin, mid, end);
    return true;
}

// The merge step after the recursion. Two indices point at the start of the two
// sub-arrays; compare, output the smaller element each time and advance its index.
// When one side is exhausted, copy out the rest; the result is the merged array.
bool merge(unsigned int *array, unsigned int *arrayassit, int begin, int mid, int end) {
    int i = begin, j = mid + 1, k = begin;
    // output the smaller element each round
    while (i <= mid && j <= end) {
        if (array[i] <= array[j]) arrayassit[k++] = array[i++];
        else arrayassit[k++] = array[j++];
    }
    // one side exhausted: copy the remaining elements
    while (i <= mid) arrayassit[k++] = array[i++];
    while (j <= end) arrayassit[k++] = array[j++];
    // copy the temporary array back to the original array
    memcpy(array + begin, arrayassit + begin, (end - begin + 1) * sizeof(array[0]));
    return true;
}

Time Complexity Analysis:

Divide: Θ(1)

Conquer: 2T(n/2)

Combine: Θ(n)

Recurrence:

T(n) = 2T(n/2) + Θ(n) + Θ(1),  n > 1

T(n) = Θ(1),  n = 1

According to the master theorem, T(n) = Θ(n lg n)
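
As a check, this is case 2 of the master theorem:

\[
a = 2,\ b = 2,\ n^{\log_b a} = n, \qquad f(n) = \Theta(n) = \Theta\big(n^{\log_b a}\big) \;\Rightarrow\; T(n) = \Theta(n \lg n).
\]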

2. Binary Search

Input: a sorted array ~

Divide: compare the target element with the element at the middle position.

Conquer: based on the comparison, recurse into either the left half or the right half ~

Combine: this problem needs no combine step
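
A minimal recursive sketch of this scheme (my own illustration, not code from the original post; it follows the array conventions of the merge-sort code above):

// Illustrative binary search: returns the index of key in the sorted array, or -1 if absent.
int binarysearch(const unsigned int *array, int begin, int end, unsigned int key) {
    if (begin > end) return -1;               // empty range: not found
    int mid = begin + (end - begin) / 2;      // divide: look at the middle position
    if (array[mid] == key) return mid;
    if (key < array[mid])
        return binarysearch(array, begin, mid - 1, key);   // conquer: left half
    return binarysearch(array, mid + 1, end, key);         // conquer: right half
}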

 

Time Complexity Analysis:

Divide: Θ(1)

Conquer: T(n/2)

Combine: Θ(1)

Recurrence:

T(n) = T(n/2) + Θ(1) + Θ(1)

According to the master theorem, T(n) = Θ(lg n)

3. Fibonacci sequence

The divide-and-conquer algorithms for the Fibonacci sequence are based on the closed-form approximation below and on the matrix power identity below:
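
The two identities referred to (shown as figures in the original) are the closed-form Binet formula and the matrix power identity, valid for n >= 1:

\[
F_n = \frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n} - \left(\frac{1-\sqrt{5}}{2}\right)^{n}\right],
\qquad
\begin{pmatrix}1 & 1\\ 1 & 0\end{pmatrix}^{n}
=
\begin{pmatrix}F_{n+1} & F_{n}\\ F_{n} & F_{n-1}\end{pmatrix}.
\]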

The divide-and-conquer idea is fast exponentiation: pow(a, n) can be split into pow(a, n/2) * pow(a, n/2).

Divide: Θ(1)

Conquer: T(n/2)

Combine: Θ(1) (a constant number of multiplications)

From the master theorem: T(n) = Θ(lg n)

Code implementation:

 

#include <cmath>

// Left term: ((1 + sqrt(5)) / 2)^n, computed by divide-and-conquer (fast) exponentiation
double leftn(int n) {
    if (n == 1) return (1 + sqrt(5.0)) / 2;
    if (n >= 2 && n % 2 != 0) {              // odd: square the (n-1)/2 power, multiply once more
        double tmp = leftn((n - 1) / 2);
        return tmp * tmp * (1 + sqrt(5.0)) / 2;
    }
    double tmp = leftn(n / 2);               // even: square the n/2 power
    return tmp * tmp;
}

// Right term: ((1 - sqrt(5)) / 2)^n, same scheme
double rightn(int n) {
    if (n == 1) return (1 - sqrt(5.0)) / 2;
    if (n % 2 != 0) {
        double tmp = rightn((n - 1) / 2);
        return tmp * tmp * (1 - sqrt(5.0)) / 2;
    }
    double tmp = rightn(n / 2);
    return tmp * tmp;
}

// Approximate evaluation of the n-th Fibonacci number via the closed form
double fibonacci(int n) {
    return (leftn(n) - rightn(n)) / sqrt(5.0);
}
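
A quick usage check (my own addition, calling the fibonacci function above): rounding the result gives the integer value for moderate n, though floating-point error grows with n.

#include <cstdio>

int main() {
    // should print approximately 55.000000, since F(10) = 55
    printf("%f\n", fibonacci(10));
    return 0;
}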

  

 

The recursion for the matrix method is the same: divide and conquer on the matrix power, just like pow above.

Code implementation:

// 2x2 matrix for the Fibonacci matrix power; x01 x02 is the first row, x11 x12 the second.
struct Rect {
    long long x01, x02, x11, x12;
};

// Computes the n-th power of the base matrix 1 1 / 1 0 by divide and conquer.
// The n-th Fibonacci number can then be read off as fibonaccirect(n).x02 (or .x11).
Rect fibonaccirect(int n) {
    Rect rect, tmprect1, tmprect2;
    if (n == 0) {                 // zero matrix (the n = 0 case is not used by the recursion)
        rect.x01 = 0; rect.x02 = 0; rect.x11 = 0; rect.x12 = 0;
        return rect;
    }
    if (n == 1) {                 // base matrix: 1 1 / 1 0
        rect.x01 = 1; rect.x02 = 1; rect.x11 = 1; rect.x12 = 0;
        return rect;
    }
    if (n >= 2 && n % 2 == 0) {   // even: square the n/2 power
        tmprect1 = fibonaccirect(n / 2);
        rect.x01 = tmprect1.x01 * tmprect1.x01 + tmprect1.x02 * tmprect1.x11;
        rect.x02 = tmprect1.x01 * tmprect1.x02 + tmprect1.x02 * tmprect1.x12;
        rect.x11 = tmprect1.x11 * tmprect1.x01 + tmprect1.x12 * tmprect1.x11;
        rect.x12 = tmprect1.x11 * tmprect1.x02 + tmprect1.x12 * tmprect1.x12;
        return rect;
    }
    // odd: square the (n-1)/2 power, then multiply once more by the base matrix 1 1 / 1 0
    tmprect1 = fibonaccirect((n - 1) / 2);
    tmprect2.x01 = tmprect1.x01 * tmprect1.x01 + tmprect1.x02 * tmprect1.x11;
    tmprect2.x02 = tmprect1.x01 * tmprect1.x02 + tmprect1.x02 * tmprect1.x12;
    tmprect2.x11 = tmprect1.x11 * tmprect1.x01 + tmprect1.x12 * tmprect1.x11;
    tmprect2.x12 = tmprect1.x11 * tmprect1.x02 + tmprect1.x12 * tmprect1.x12;
    rect.x01 = tmprect2.x01 + tmprect2.x02;
    rect.x02 = tmprect2.x01;
    rect.x11 = tmprect2.x11 + tmprect2.x12;
    rect.x12 = tmprect2.x11;
    return rect;
}

Time Complexity:

Divide: Θ(1)

Conquer: T(n/2)

Combine: Θ(1) (a constant number of multiplications)

From the master theorem: T(n) = Θ(lg n)

4. Divide and conquer for large-integer multiplication

c = a * b = (a1*10^(n/2) + a0) * (b1*10^(n/2) + b0) = (a1*b1)*10^n + (a1*b0 + b1*a0)*10^(n/2) + a0*b0

= c0*10^n + c1*10^(n/2) + c2

If c1 is computed directly as (a1*b0 + b1*a0),

Algorithm analysis is as follows:

Divide: Θ(1)

Conquer: 4T(n/2)

Combine: Θ(n) (additions and shifts)

From the master theorem: T(n) = Θ(n^2)

The time complexity has not improved, because the number of multiplications has not been reduced. So compute c1 as (a1 + a0) * (b1 + b0) - (c0 + c2) instead:

The analysis is as follows:

Divide: Θ(1)

Conquer: 3T(n/2)

Combine: Θ(n) (additions and shifts)

From the master theorem: T(n) = Θ(n^(log_2 3)) = Θ(n^1.585)
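
A minimal sketch of this three-multiplication (Karatsuba) scheme, given only as an illustration and not as the author's code; it works on 64-bit integers split by decimal digits, so it is limited to small inputs:

#include <cstdint>
#include <cstdio>

// Helpers for the sketch: powers of ten and decimal digit count.
static uint64_t pow10u(int e) { uint64_t p = 1; while (e-- > 0) p *= 10; return p; }
static int numdigits(uint64_t x) { int d = 1; while (x >= 10) { x /= 10; ++d; } return d; }

// a*b using three recursive multiplications: c1 = (a1+a0)(b1+b0) - c0 - c2.
uint64_t karatsuba(uint64_t a, uint64_t b) {
    if (a < 10 || b < 10) return a * b;                    // base case: a one-digit factor
    int half = numdigits(a > b ? a : b) / 2;
    uint64_t p = pow10u(half);
    uint64_t a1 = a / p, a0 = a % p;                       // a = a1*10^half + a0
    uint64_t b1 = b / p, b0 = b % p;                       // b = b1*10^half + b0
    uint64_t c0 = karatsuba(a1, b1);                       // high part
    uint64_t c2 = karatsuba(a0, b0);                       // low part
    uint64_t c1 = karatsuba(a1 + a0, b1 + b0) - c0 - c2;   // middle part from the third product
    return c0 * p * p + c1 * p + c2;
}

int main() {
    printf("%llu\n", (unsigned long long)karatsuba(1234, 5678));  // prints 7006652
    return 0;
}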

5. Matrix Multiplication

The divide and conquer for matrix multiplication is similar to the above; again the point is to reduce the number of multiplications. The ordinary block decomposition gives:

T(n) = 8T(n/2) + Θ(n^2)

By the master theorem, T(n) = Θ(n^3), so the time complexity is not improved.

Reducing the number of multiplications to seven (Strassen's method) gives:

T(n) = 7T(n/2) + Θ(n^2)

By the master theorem, T(n) = Θ(n^(log_2 7)) = Θ(n^2.81).
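
For reference, the seven products and the recombination are the standard Strassen identities (the original post presumably showed them in a figure):

\[
\begin{aligned}
M_1 &= (A_{11}+A_{22})(B_{11}+B_{22}), \qquad & M_2 &= (A_{21}+A_{22})B_{11},\\
M_3 &= A_{11}(B_{12}-B_{22}), & M_4 &= A_{22}(B_{21}-B_{11}),\\
M_5 &= (A_{11}+A_{12})B_{22}, & M_6 &= (A_{21}-A_{11})(B_{11}+B_{12}),\\
M_7 &= (A_{12}-A_{22})(B_{21}+B_{22}), & &
\end{aligned}
\]
\[
C_{11} = M_1 + M_4 - M_5 + M_7,\quad C_{12} = M_3 + M_5,\quad C_{21} = M_2 + M_4,\quad C_{22} = M_1 - M_2 + M_3 + M_6.
\]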

6. Closest pair of points

The divide step of the closest-pair problem is easy to come up with: split the point set down the middle and find the closest pair on each side. The difficulty lies in the merge, because the closest pair may straddle the two halves, and a badly chosen merge strategy raises the time complexity of the whole algorithm;

The recursion bottoms out at 2 or 3 points, which can be solved directly by brute force;

T(n) = 2T(n/2) + ?, where the question mark is the merge time. If the algorithm is to reach n lg n overall, what must go there? It has to be linear time! Note: this is the crux ~

When merging, only the strip shown below needs to be considered: here d is the smaller of the two one-side minimum distances, and a crossing pair can only beat d if both of its points lie inside the strip, so only the strip needs to be searched

However, comparing every point of the strip against every other by brute force can cost Θ(n^2) in the worst case;

So consider a single point and ask which points actually have to be compared with it, to see whether the number of comparisons per point can be bounded. The approach above traverses every point of the strip; instead, we can restrict attention to a small region around the chosen point:

For a point on the left side, at most six points can lie inside the d x 2d rectangle on the right, so only six comparisons are needed for it, namely against the six points on the right whose y-coordinates are closest to that point;

 

At most six such points are possible. To see why, note that every two points of S_R are at least d apart, where d is the minimum pairwise distance already found within S_L and within S_R; the d x 2d rectangle can be cut into six cells whose diagonals are shorter than d, so if it contained seven points of S_R, two of them would share a cell and be closer than d, a contradiction. Therefore the point p on the left does not need its distance computed to every point inside the dashed box on the right; it is enough to check the six points of S_R whose y-coordinates are closest to p, which bounds the number of comparisons. The key to a linear-time merge is to keep an array sorted by y and to reuse the preprocess-and-merge idea of merge sort ~

The Code is as follows:

 

#include <cstring>    // memcpy
#include <cmath>      // fabs
#include <algorithm>  // std::min
using std::min;

// A point: x and y coordinates plus its index (rank) in the x-sorted array X.
struct Point {
    double x, y;
    int index;
};

// Helpers not shown in this excerpt (defined elsewhere in the original post):
// enumpair - brute force over at most 3 points, records the closest pair in px, py
// compudis - Euclidean distance between two points
double enumpair(Point X[], int begin, int end, Point &px, Point &py);
double compudis(const Point &a, const Point &b);

// The idea is the merge step of merge sort: merge the two y-sorted halves of M into Y.
void merge(Point Y[], Point M[], int begin, int end, int mid) {
    int i, j, k;
    // i starts at the left half, j at the right half, k at begin;
    // compare M[i] and M[j] and move the smaller y forward; M holds the already-sorted halves
    for (i = begin, j = mid + 1, k = begin; i <= mid && j <= end; ) {
        if (M[i].y > M[j].y) { Y[k++] = M[j]; j++; }
        else { Y[k++] = M[i]; i++; }
    }
    while (i <= mid) Y[k++] = M[i++];
    while (j <= end) Y[k++] = M[j++];
    // copy the sorted result back from Y to M
    memcpy(M + begin, Y + begin, (end - begin + 1) * sizeof(Y[0]));
}

double closepair(Point X[], Point Y[], Point M[], int begin, int end, Point &px, Point &py) {
    // only one point: nothing to pair up
    if (end - begin == 0) return 0;
    // 2 or 3 points: brute force
    if (end - begin <= 2) return enumpair(X, begin, end, px, py);

    int mid = (begin + end) / 2;
    int i, j, k;
    double dl, dr, dm;
    // split the y-sorted array Y into a y-sorted left part and a y-sorted right part in M,
    // using each point's rank in the x-sorted array to decide its side
    for (i = begin, j = begin, k = mid + 1; i <= end; i++) {
        if (Y[i].index <= mid) M[j++] = Y[i];
        else M[k++] = Y[i];
    }
    // recurse on both halves (Y and M swap roles at each level)
    dl = closepair(X, M, Y, begin, mid, px, py);
    dr = closepair(X, M, Y, mid + 1, end, px, py);
    dm = min(dl, dr);
    // merge the two y-sorted halves back
    merge(Y, M, begin, end, mid);
    // collect the strip: points within dm of the dividing line, still sorted by y
    for (i = begin, k = begin; i <= end; i++) {
        if (fabs(Y[i].x - X[mid].x) < dm) M[k++] = Y[i];
    }
    // scan the strip: for each point, only the following points whose y-gap is below dm
    double shortest = dm;
    for (i = begin; i < k; i++) {
        for (j = i + 1; j < k && M[j].y - M[i].y < dm; j++) {
            double tmp = compudis(M[i], M[j]);
            if (tmp < shortest) {
                // record the minimum distance and the closest pair
                shortest = tmp;
                px.x = M[i].x; px.y = M[i].y;
                py.x = M[j].x; py.y = M[j].y;
            }
        }
    }
    return shortest;
}

In the code above we do not explicitly pick out six points: for each strip point we simply scan forward in the y-sorted array while the y-gap is still smaller than dm. By the argument above that inner loop only examines a constant number of points, so the merge remains linear; the property of the y-sorted array is used directly instead of the counting proof.

The merge step is the complicated part of the closest-pair algorithm; here the difficulty of the divide and conquer lies entirely in the merge, and it is the merge that decides the time complexity of the whole algorithm.

T(n) = 2T(n/2) + Θ(n)

So T(n) = Θ(n lg n)!

 

Corrections are welcome ~ Please indicate the source when reprinting. Thank you.

 

 
