Recursion and Iteration _2 2016.4.22


VIII. Recursion elimination

Following the idea of recursion, we can grasp the essence of an applied problem from a macroscopic point of view,

dig deeply into the main threads and general patterns of the algorithmic process,

and finally design and write algorithms that are simple, elegant, precise, and compact.


However, the recursive pattern is not perfect; behind its many advantages lie certain costs.


(1) Space cost

First, it is not hard to see from recursion-trace analysis that the amount of space consumed by a recursive algorithm depends mainly on the recursion depth.

Therefore, compared with an iterative version of the same algorithm, the recursive version usually requires more space, which in turn affects the actual running speed.


Moreover, from the operating system's point of view, creating, maintaining, and destroying the recursive instances needed to implement recursive calls takes considerable extra time, which further adds to the computational burden.


For this reason, when the requirement on running speed is very high or storage space must be carefully budgeted, the recursive algorithm should be rewritten as an equivalent non-recursive version.


(2) Tail recursion and its elimination

In a linear recursive algorithm, if the recursive call occurs as the very last step of every recursive instance, it is called tail recursion.

For example, the last step of the reverse(num, low, high) algorithm is to recursively reverse the sub-array that remains, two cells shorter, after the first and last elements have been swapped; this is typical tail recursion.
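
For reference, here is a minimal sketch of that tail-recursive reverse(); the exact listing is not included in this excerpt, so the signature simply mirrors the iterative version given below.

#include <utility>
using std::swap;

void reverse(int* num, int low, int high) {  // reverse num[low, high] (tail-recursive sketch)
    if (low < high) {                        // intervals of length < 2 form the trivial recursive base
        swap(num[low], num[high]);           // swap the first and last elements,
        reverse(num, low + 1, high - 1);     // then recursively reverse the rest: the very last step
    }
}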

In fact, any algorithm in tail-recursive form can easily be converted into an equivalent iterative version.


void reverse(int* num, int low, int high) {  // iterative version
    while (low < high) {
        swap(num[low++], num[high--]);
    }
}

Please note that whether something is tail recursion should be judged by analyzing the actual execution process of the algorithm, not merely by its outward syntactic form.

For example, a recursive call that appears on the last line of the function body is not necessarily tail recursion.

Strictly speaking, it is tail recursion only if every instance of the algorithm (other than those at the trivial recursive base) terminates exactly at that recursive call.

Take the linear recursive sum() algorithm: although the recursive call appears to be on the last line, it is not in fact the last operation; the last operation is, in essence, the addition.
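
For reference, a minimal sketch of the linear recursive sum() discussed here; the exact listing is not included in this excerpt, and the inclusive interval convention follows the binary version given later.

int sum(int num[], int low, int high) {              // sum of num[low, high] (linear recursive sketch)
    if (low > high) {
        return 0;                                    // recursive base: empty interval
    } else {
        return sum(num, low, high - 1) + num[high];  // the recursive call is not the last operation:
    }                                                // the addition must still be performed afterwards
}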

Interestingly, this kind of algorithm can nevertheless be converted to a non-recursive form in essentially the same way as tail recursion.


IX. Binary recursion
(1) Divide and conquer

Faced with an applied problem whose input is of large scale, one often feels lost in the apparent clutter and does not know where to begin. We may as well draw inspiration from the ancient sage Sun Tzu's famous saying: "Managing a large force is like managing a small one; it is a matter of dividing it into parts."

Indeed, one effective way to solve such problems is to decompose them into several smaller sub-problems and then solve the sub-problems separately through the recursion mechanism.

This decomposition continues until the scale of the sub-problems shrinks to a trivial case.

This is the so-called divide-and-conquer strategy.


As with the decrease-and-conquer strategy, this also requires re-expressing the original problem so that the sub-problems have the same interface form as the original problem.

Since each recursive instance may make multiple recursive calls, this is called multi-way recursion.

The original problem is usually split into two, hence the name binary recursion.

It should be emphasized that whether the problem is decomposed into two sub-problems or some other constant number of them has no real effect on the asymptotic complexity of the overall algorithm.



(2) Array summation

The following solves the array summation problem again, this time applying the divide-and-conquer strategy in the binary recursion pattern.

The idea of the new algorithm is:

Split the array in two at the center element, recursively sum each sub-array, and finally add the two partial sums to obtain the total of the original array.


int sum(int num[], int low, int high) {                      // array summation (binary recursive version)
    if (low == high) {                                       // recursive base (interval length reduced to 1):
        return num[low];                                     // return the element directly
    } else {                                                 // otherwise (generally low < high),
        int mid = (low + high) >> 1;                         // split the original interval in two at the center element
        return sum(num, low, mid) + sum(num, mid + 1, high); // recursively sum each sub-array, then add the results
    }
}  // O(high - low + 1), linearly proportional to the length of the interval




To analyze its complexity, it suffices to consider inputs whose length is of the form n = 2^m.

After the algorithm starts, it takes m = log2(n) successive recursive calls for the length of the array interval to shrink from the initial n down to 1, at which point the first recursive base is reached.

In fact, whenever a recursive base is first reached, the number of recursive calls already performed always exceeds the number of recursive returns by m = log2(n).

More generally, before any recursive instance whose interval length is 2^k is reached, the recursive calls performed always exceed the recursive returns by m - k.

Therefore, the recursion depth (that is, the number of recursive instances active at any one moment) never exceeds m + 1.

Given that each recursive instance requires only constant space, apart from the space occupied by the array itself the algorithm needs only O(m + 1) = O(log n) additional space.
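
The same conclusion can be written as a recurrence for the extra space, restating the argument above under the assumption that n is a power of two:

$$S(n) = S(n/2) + O(1), \qquad S(1) = O(1) \;\Longrightarrow\; S(n) = O(\log n)$$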


Recall that the linear recursive sum() algorithm requires O(n) additional space; in this respect, the new binary recursive sum() algorithm is a great improvement.


As in the linear recursive sum() algorithm, the non-recursive computation within each recursive instance here takes only constant time.

There are 2n - 1 recursive instances in total, so the running time of the new algorithm is O(2n - 1) = O(n), the same as the linear recursive version.
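
Equivalently, again assuming n is a power of two, the running time satisfies the recurrence

$$T(n) = 2\,T(n/2) + O(1), \qquad T(1) = O(1) \;\Longrightarrow\; T(n) = O(n)$$

and the 2n - 1 recursive instances are exactly the nodes of a full binary recursion tree with n leaves and n - 1 internal nodes.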


Since each recursive instance here may spawn two deeper recursive calls, the algorithm belongs to binary recursion.

Binary recursion differs greatly from the linear recursion introduced earlier.

For example, in the entire computation of a linear recursion the recursive base is reached only once, whereas in a binary recursion recursive bases occur very frequently; in general, more than half of the recursive instances are recursive bases (in a full binary recursion tree, the n leaves outnumber the n - 1 internal nodes).


(3) Efficiency

Of course, not every problem is suited to the divide-and-conquer strategy.

In fact, apart from the recursion itself, the computational cost of such algorithms mainly comes from two sources.

The first is dividing the problem, that is, decomposing the original problem into several sub-problems of the same form but smaller size.

The second is combining the sub-solutions, that is, assembling the solutions obtained recursively for the sub-problems into a complete solution of the original problem.


For the divide-and-conquer strategy to be truly effective, we must not only ensure that both of the above steps can be carried out efficiently, but also ensure that the sub-problems are independent of one another:

each sub-problem can be solved on its own, without sharing raw data or intermediate results with the other sub-problems.

Otherwise, either data must be passed between the sub-problems, or the sub-problems must call one another; in either case, the time and space complexity will increase to no purpose.


(4) Fibonacci numbers: binary recursion



int fibonacci(int n) {                              // computes the nth Fibonacci number (binary recursive version): O(2^n)
    if (n < 2) {
        return n;                                   // recursive base: take the value directly
    } else {
        return fibonacci(n - 1) + fibonacci(n - 2); // otherwise, recursively compute the preceding two items; their sum is the answer
    }
}

This implementation, based directly on the original definition of the Fibonacci sequence, is not only correct but also concise and natural.

Unfortunately, using the binary recursion strategy in situations like this is extremely inefficient.

In fact, the algorithm needs O(2^n) time to compute the nth Fibonacci number.

An algorithm of exponential complexity like this has no practical value in the real world.


The reason the time complexity of the algorithm is as high as exponential is that its recursive computation is massively repetitive: the same sub-problems are solved over and over again.
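
Concretely, the running time of the binary recursive version satisfies

$$T(n) = T(n-1) + T(n-2) + O(1)$$

so T(n) grows at least as fast as the Fibonacci numbers themselves, on the order of $\phi^n$ with $\phi = (1 + \sqrt{5})/2 \approx 1.618$, which is why it is bounded by O(2^n) yet already hopeless for even moderate n.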


(5) Optimization strategy

To eliminate repeated recursive instances in a recursive algorithm, a natural line of thinking and technique can be summarized as follows:

spend a certain amount of auxiliary space to record the answer to each sub-problem as soon as it has been solved.


For example, one can work top-down from the original problem: every time a sub-problem is encountered, first check whether it has already been solved, so that a recorded answer can be reused directly and recomputation avoided.

Alternatively, one can work bottom-up from the recursive base, deriving the solution of each sub-problem in turn, from small scale to large, until the solution of the original problem is obtained.

The former is the so-called tabulation or memoization strategy.

The latter is the so-called dynamic programming strategy.
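
As an illustration of the former strategy, here is a minimal memoized sketch, not taken from the original text; the table memo and the wrapper function are illustrative assumptions.

#include <vector>

int fibonacci(int n, std::vector<int>& memo) {   // top-down, with a table of recorded answers
    if (memo[n] >= 0) {
        return memo[n];                          // answer already recorded: reuse it directly
    }
    memo[n] = (n < 2) ? n                        // recursive base
                      : fibonacci(n - 1, memo) + fibonacci(n - 2, memo);  // solve once, then record
    return memo[n];
}

int fibonacci(int n) {                           // wrapper: each sub-problem is now solved at most once, O(n)
    std::vector<int> memo(n + 1, -1);            // -1 marks "not yet computed"
    return fibonacci(n, memo);
}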


(6) Fibonacci numbers: linear recursion

int pre;

int fibonacci(int n, int& pre) {          // computes the nth Fibonacci number (linear recursive version)
    if (n == 0) {                         // if the recursive base is reached,
        pre = 1;                          // take the values directly: fibonacci(-1) = 1, fibonacci(0) = 0
        return 0;
    } else {                              // otherwise,
        int prevPrev;
        pre = fibonacci(n - 1, prevPrev); // recursively compute the preceding two items;
        return prevPrev + pre;            // their sum is the answer
    }
}  // an auxiliary variable records the previous item; e.g. fibonacci(7, pre) = 13

Note that the other recursive call of the original binary version, the one corresponding to fibonacci(n - 2), has been omitted here.

Its answer is instead recorded in, and "accessed" directly through, the reference parameter pre.


The algorithm follows the linear recursion pattern: the recursion depth is linearly proportional to the input n, there are only O(n) recursive instances in total, and the accumulated running time is no more than O(n).

Unfortunately, the algorithm still requires O(n) additional space.


(7) Fibonacci numbers: iteration

Tracing the linear recursive version of the fibonacci() algorithm in reverse, we can see that each recorded sub-problem answer is used only once.

Once the algorithm reaches the recursive base, it returns level by level, and the answers of the lower levels no longer need to be retained.

If this level-by-level return process is instead viewed as starting from the recursive base and solving the sub-problems one by one, from small scale to large, then the dynamic programming strategy can be adopted.


int fibonacci(int n) {         // computes the nth Fibonacci number (iterative version): O(n)
    int pre = 1, ret = 0;      // initialization: fibonacci(-1), fibonacci(0)
    while (n > 0) {            // advance to the next pair of adjacent items
        ret += pre;
        pre = ret - pre;
        --n;
    }
    return ret;
}


Only two intermediate variables are used here, recording the current pair of adjacent Fibonacci numbers.

The whole algorithm needs only a linear number of iteration steps, so the time complexity is O(n).

More importantly, this version needs only constant-size additional space, so its space efficiency is also greatly improved.


(8) Finding the two largest elements: recursion + divide-and-conquer




void max2(int a[], int low, int high, int& x1, int& x2) {  // indices of the two largest elements of a[low, high]: recursion + divide-and-conquer
    if (low + 1 == high) {                    // base case: two elements
        x1 = low; x2 = high;
        if (a[x1] < a[x2]) { swap(x1, x2); }
        return;
    }
    if (low + 2 == high) {                    // base case: three elements
        x1 = low; x2 = low + 1;
        if (a[x1] < a[x2]) { swap(x1, x2); }
        if (a[x2] < a[high]) {
            x2 = high;
            if (a[x2] > a[x1]) { swap(x1, x2); }
        }
        return;
    }
    if (low + 3 == high) {                    // base case: four elements, handled by a short scan
        x1 = low; x2 = low + 1;
        if (a[x1] < a[x2]) { swap(x1, x2); }
        for (int i = low + 2; i <= high; ++i) {
            if (a[i] > a[x2]) {
                x2 = i;
                if (a[x2] > a[x1]) { swap(x1, x2); }
            }
        }
        return;
    }
    int mid = (low + high) >> 1;                      // otherwise, split the interval in two at the midpoint
    int x1l, x2l; max2(a, low, mid, x1l, x2l);        // the two largest of the left half
    int x1r, x2r; max2(a, mid + 1, high, x1r, x2r);   // the two largest of the right half (no overlap)
    if (a[x1l] > a[x1r]) {                            // combine: the champion, then the better of the two runner-up candidates
        x1 = x1l; x2 = (a[x2l] > a[x1r]) ? x2l : x1r;
    } else {
        x1 = x1r; x2 = (a[x2r] > a[x1l]) ? x2r : x1l;
    }
}
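
As a usage sketch (not part of the original excerpt; the sample array and the expected output are illustrative):

#include <cstdio>
// assumes the max2() defined above, together with <utility> / std::swap, is in scope

int main() {
    int a[] = {4, 9, 2, 7, 5, 1};
    int x1, x2;
    max2(a, 0, 5, x1, x2);       // search the whole (inclusive) interval a[0, 5]
    std::printf("largest: a[%d] = %d, runner-up: a[%d] = %d\n", x1, a[x1], x2, a[x2]);
    return 0;                    // expected: largest: a[1] = 9, runner-up: a[3] = 7
}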


Selected from:
Data Structures (C++ Language Version), 3rd Edition, Deng Junhui
Slightly adapted

