Analysis of the Time Complexity of Recursive Algorithms

Source: Internet
Author: User

In algorithm analysis, when an algorithm contains a recursive call, the analysis of its time complexity leads to a recurrence equation. Solving such an equation is essentially a mathematical problem of determining the asymptotic order of its solution. Recurrence equations come in many forms, and so do the methods for solving them; the four most commonly used are the following:

(1) Substitution method

The basic steps of the substitution method are to first guess an explicit solution of the recurrence equation, and then use mathematical induction to verify that the guess is correct.

(2) Iteration method

The basic step of the iteration method is to repeatedly expand the right-hand side of the recurrence equation into a non-recursive sum, and then estimate the left-hand side, that is, the solution of the equation, by bounding that sum.

(3) Master method (applying a general formula)

This method applies to recurrence equations of the form T(n) = aT(n/b) + f(n). Such a recurrence captures the time complexity of divide-and-conquer algorithms: a problem of size n is divided into a subproblems of size n/b, each subproblem is solved recursively, and the solution of the original problem is then obtained by combining the subproblem solutions.

(4) Difference equation method

Some recurrence equations can be regarded as difference equations; the methods for solving difference equations can then be used to solve them, and the asymptotic order of the solution is estimated from the result.
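As a small illustration of this approach (an example of my own, not from the original article), take the recurrence T(n) = 2T(n-1) + 1 with T(0) = 0. Viewed as a linear difference equation, its homogeneous solution is c·2^n and a particular solution is -1, giving the closed form T(n) = 2^n - 1, i.e. T(n) = O(2^n):

```python
# Hypothetical example (not from the original article): solve the
# recurrence T(n) = 2*T(n-1) + 1, T(0) = 0 by the difference-equation
# method. The closed form obtained is T(n) = 2^n - 1.

def t_recursive(n):
    """Direct evaluation of the recurrence."""
    return 0 if n == 0 else 2 * t_recursive(n - 1) + 1

def t_closed(n):
    """Closed form from the difference-equation method."""
    return 2 ** n - 1

# The two agree for small n, confirming the closed form.
for n in range(12):
    assert t_recursive(n) == t_closed(n)
```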

The following describes some examples of the above methods.

I. Substitution Method

The recurrence equation for the running time of big-integer multiplication is T(n) = 4T(n/2) + O(n), with T(1) = O(1). We guess the solution T(n) = O(n^2). To make the induction go through, we strengthen the guess by subtracting a lower-order term: for n > n0, assume T(n) ≤ cn^2 - en (the subtracted term en is of lower order and does not affect the asymptotic bound when n is large enough). Substituting this guess into the recurrence, we obtain:

T(n) = 4T(n/2) + O(n)
≤ 4(c(n/2)^2 - e(n/2)) + O(n)
= cn^2 - 2en + O(n)
≤ cn^2 - en

where c and e are positive constants, with e chosen at least as large as the constant hidden in the O(n) term. The last line matches the inductive hypothesis T(n) ≤ cn^2 - en, so by mathematical induction O(n^2) is a solution of the recurrence.
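As a sanity check on the guess (a sketch of my own, assuming the O(n) term is exactly n and T(1) = 1), the recurrence can be evaluated directly for powers of 2:

```python
# Sketch (my own simplification, not from the article): evaluate
# T(n) = 4*T(n/2) + n with T(1) = 1 for n a power of 2, and verify
# a quadratic bound of the form T(n) <= c*n^2 - e*n.

def t(n):
    """Direct evaluation of the recurrence for n a power of 2."""
    return 1 if n == 1 else 4 * t(n // 2) + n

# With these constants the recurrence solves exactly to T(n) = 2n^2 - n,
# so the strengthened inductive bound holds with c = 2 and e = 1.
for k in range(11):
    n = 2 ** k
    assert t(n) == 2 * n * n - n
    assert t(n) <= 2 * n * n   # hence T(n) = O(n^2)
```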

II. Iteration Method

The running time of a certain algorithm satisfies T(n) = 3T(n/4) + O(n), with T(1) = O(1). Expanding the right-hand side twice gives:

T(n) = 3T(n/4) + O(n)
= O(n) + 3(O(n/4) + 3T(n/4^2))
= O(n) + 3(O(n/4) + 3(O(n/4^2) + 3T(n/4^3)))

A pattern emerges from the expansion. After the i-th iteration we can write:

T(n) = O(n) + 3O(n/4) + 3^2 O(n/4^2) + ... + 3^i O(n/4^i) + 3^(i+1) T(n/4^(i+1))

When n/4^(i+1) = 1, we have T(n/4^(i+1)) = T(1) = O(1), and then

T(n) ≤ n + (3/4)n + (3^2/4^2)n + ... + (3^i/4^i)n + 3^(i+1) T(1)
< 4n + 3^(i+1)

since the geometric series 1 + 3/4 + (3/4)^2 + ... sums to less than 4. From n/4^(i+1) = 1 we get i < log4(n), so

3^(i+1) ≤ 3^(log4(n) + 1) = 3 · 3^(log4 n) = 3 · n^(log4 3)

using the identity 3^(log4 n) = n^(log4 3).

Substituting this back, we get:

T(n) < 4n + 3n^(log4 3). Since log4(3) < 1, this gives T(n) = O(n).
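The bound obtained by iteration can be checked numerically; here is a minimal sketch of my own, assuming the O(n) term is exactly n and T(1) = 1:

```python
# Sketch (my own simplification, not from the article): evaluate
# T(n) = 3*T(n/4) + n with T(1) = 1 for n a power of 4, and check
# the bound T(n) < 4n derived by the iteration method.

def t(n):
    """Direct evaluation of the recurrence for n a power of 4."""
    return 1 if n == 1 else 3 * t(n // 4) + n

for k in range(1, 9):
    n = 4 ** k
    # Iterating to the bottom gives exactly T(4^k) = 4^(k+1) - 3^(k+1),
    # which stays below 4n, consistent with T(n) = O(n).
    assert t(n) == 4 ** (k + 1) - 3 ** (k + 1)
    assert t(n) < 4 * n
```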

III. Master Method

This method applies to recurrence equations of the form:

T(n) = aT(n/b) + f(n)

where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function. According to three cases of f(n), we obtain the asymptotic order of T(n):

1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).

2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).

3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

For example, if T(n) = 4T(n/2) + n, then a = 4, b = 2, and f(n) = n. We compute n^(log_b a) = n^(log_2 4) = n^2, while f(n) = n = O(n^(2-ε)) with ε = 1. By case 1, we get T(n) = Θ(n^2).
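The case analysis can be mechanized when f(n) is a polynomial. The helper below is a hypothetical sketch of my own (assuming f(n) = n^d, so the comparison reduces to d versus log_b a):

```python
import math

# Hypothetical helper (my own sketch, not from the article): classify a
# recurrence T(n) = a*T(n/b) + n^d by the master method, assuming f(n)
# is exactly the polynomial n^d.

def master_order(a, b, d):
    """Return the asymptotic order of T(n) = a*T(n/b) + n^d as a string."""
    e = math.log(a, b)            # the critical exponent log_b(a)
    # Note: comparing floats with == is only safe for exact values of
    # log_b(a), as in the examples below; a real tool would use a tolerance.
    if d < e:                     # case 1: f(n) polynomially smaller
        return f"Theta(n^{e:g})"
    if d == e:                    # case 2: same order, gain a log factor
        return f"Theta(n^{e:g} log n)"
    return f"Theta(n^{d:g})"      # case 3: f(n) polynomially larger

print(master_order(4, 2, 1))   # big-integer multiplication -> Theta(n^2)
print(master_order(2, 2, 1))   # merge sort -> Theta(n^1 log n)
```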

The three cases all compare f(n) with n^(log_b a), and the asymptotic order of the solution is determined by the larger of the two functions. In case 1, the function n^(log_b a) is larger, and T(n) = Θ(n^(log_b a)); in case 3, the function f(n) is larger, and T(n) = Θ(f(n)); in case 2, the two functions are of the same order, and T(n) = Θ(n^(log_b a) · log n), that is, the common order multiplied by a logarithmic factor.

However, the three cases do not cover all possible f(n). There is a gap between cases 1 and 2: f(n) may be smaller than n^(log_b a) but not polynomially smaller. A similar gap exists between cases 2 and 3. In these situations, the formula does not apply.

 

Reproduced in: http://blog.csdn.net/metasearch/article/details/4428865
