In algorithm analysis, when an algorithm contains recursive calls, the analysis of its time complexity turns into solving a recurrence equation. This is essentially the mathematical problem of determining the asymptotic order of the solution. Recurrence equations come in many forms, and so do the methods for solving them; the following four methods are the most commonly used:
(1) Substitution method
The basic steps of the substitution method are to first guess an explicit solution of the recurrence equation, and then use mathematical induction to verify that the guess is correct.
(2) Iteration method
The basic steps of the iteration method are to expand the right-hand side of the recurrence equation iteratively into a non-recursive sum, and then estimate the asymptotic order of the left-hand side from the asymptotic order of that sum.
(3) Master method
This method applies to recurrence equations of the form T(n) = aT(n/b) + f(n). Such a recurrence is exactly the relation satisfied by the running time of a divide-and-conquer algorithm: a problem of size n is divided into a subproblems of size n/b, each subproblem is solved recursively, and the solutions of the subproblems are then combined into a solution of the original problem.
(4) Difference equation method
Some recurrence equations can be regarded as difference equations; the methods for solving difference equations can then be used to solve the recurrence, and the asymptotic order of the solution is estimated from the result.
Below are some examples of the above methods.
I. Substitution Method
The recurrence for the running time of divide-and-conquer big-integer multiplication is T(n) = 4T(n/2) + O(n), with T(1) = O(1). We guess the solution T(n) = O(n^2). By the definition of the O notation, this means there are constants c > 0 and n0 such that T(n) ≤ c·n^2 − 2e·n for all n > n0. (Note: we subtract the lower-order term 2e·n; because it is of lower order, it does not affect the asymptotic bound when n is large enough, but it gives the induction room to absorb the O(n) term.) Writing the O(n) term as at most d·n and substituting the guess into the recurrence, we obtain:
T(n) = 4T(n/2) + O(n)
≤ 4[c·(n/2)^2 − 2e·(n/2)] + d·n
= c·n^2 − 4e·n + d·n
≤ c·n^2 − 2e·n ≤ c·n^2
where c is a positive constant and e is any constant with 2e ≥ d (e = 1 suffices when d ≤ 2). The result matches the assumed form T(n) ≤ c·n^2 − 2e·n, so the guess is maintained and O(n^2) is indeed a solution of the recurrence; mathematical induction makes this verification rigorous.
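The same check can be sketched numerically. The following Python sketch assumes, purely for illustration, that the O(n) term is exactly n and that T(1) = 1 (neither choice is part of the proof above); under these assumptions T(n) = 2n^2 − n on powers of two, so the bound T(n) ≤ c·n^2 already holds with c = 2.

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Evaluate the recurrence T(n) = 4*T(n//2) + n with T(1) = 1 (illustrative assumptions)."""
    if n <= 1:
        return 1
    return 4 * T(n // 2) + n

if __name__ == "__main__":
    c = 2  # constant for the guessed bound T(n) <= c*n^2
    for k in range(1, 11):      # powers of two keep n/2 exact
        n = 2 ** k
        assert T(n) <= c * n * n
        print(f"n = {n:5d}   T(n) = {T(n):9d}   c*n^2 = {c * n * n:9d}")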
II. Iteration Method
Suppose the running time of an algorithm satisfies T(n) = 3T(n/4) + O(n), with T(1) = O(1). After two iterations, the right-hand side can be expanded as follows:
T(n) = 3T(n/4) + O(n)
= O(n) + 3(O(n/4) + 3T(n/4^2))
= O(n) + 3(O(n/4) + 3(O(n/4^2) + 3T(n/4^3)))
The pattern of the expansion is now clear, and we can write down the equation after the i-th iteration:
T(n) = O(n) + 3(O(n/4) + 3(O(n/4^2) + ... + 3(O(n/4^i) + 3T(n/4^(i+1))) ... ))
When n/4^(i+1) = 1 we have T(n/4^(i+1)) = T(1) = O(1), and (taking the constants in the O terms and in T(1) to be 1) the sum becomes
T(n) = n + (3/4)n + (3^2/4^2)n + ... + (3^i/4^i)n + 3^(i+1)·T(1)
< 4n + 3^(i+1)
since the geometric series n·(1 + 3/4 + (3/4)^2 + ...) is bounded by n/(1 − 3/4) = 4n. From n/4^(i+1) = 1 we get i < log_4 n, and therefore
3^(i+1) ≤ 3^(log_4 n + 1) = 3·3^(log_4 n) = 3·(3^(log_3 n))^(log_4 3) = 3·n^(log_4 3)
Substituting back, we get
T(n) < 4n + 3·n^(log_4 3), and since log_4 3 < 1, T(n) = O(n).
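The expansion can also be traced numerically. The following Python sketch again assumes, for illustration only, that the O(n) term is exactly n and that T(1) = 1; it sums the per-level costs (3/4)^k·n for n a power of 4 and compares the total with the bound 4n + 3·n^(log_4 3) derived above.

import math

def T_expand(n):
    """Iteratively expand T(n) = 3*T(n/4) + n down to T(1) = 1,
    accumulating the per-level costs n, (3/4)n, (3/4)^2 n, ..."""
    total = 0.0
    level_cost = float(n)   # total work contributed by the current level
    leaves = 1              # number of subproblems at the current level
    while n > 1:
        total += level_cost
        level_cost *= 3 / 4
        leaves *= 3
        n //= 4
    return total + leaves   # 'leaves' subproblems of size 1, each costing T(1) = 1

if __name__ == "__main__":
    for m in range(1, 9):
        n = 4 ** m                                # powers of 4 keep n/4 exact
        bound = 4 * n + 3 * n ** math.log(3, 4)   # 4n + 3*n^(log_4 3)
        assert T_expand(n) < bound
        print(f"n = {n:7d}   T(n) = {T_expand(n):12.1f}   bound = {bound:12.1f}")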
III. Master Method
This method applies to recurrences of the form
T(n) = aT(n/b) + f(n)
where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function. Depending on which of three cases f(n) falls into, the asymptotic order of T(n) is given as follows:
1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a)·log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
For example, if T(n) = 4T(n/2) + n, then a = 4, b = 2, f(n) = n, and n^(log_b a) = n^(log_2 4) = n^2. Since f(n) = n = O(n^(2 − ε)) with ε = 1, case 1 applies and T(n) = Θ(n^2).
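As a small illustration, the following Python sketch classifies recurrences of the form T(n) = aT(n/b) + n^k, i.e., it assumes a purely polynomial driving function (a simplification: for f(n) = n^k the regularity condition of case 3 holds automatically, so only the exponents need to be compared).

import math

def master(a, b, k):
    """Return the asymptotic order of T(n) = a*T(n/b) + n^k by the master method."""
    crit = math.log(a, b)                        # critical exponent log_b(a)
    if math.isclose(k, crit):
        return f"Theta(n^{crit:g} * log n)"      # case 2: same order
    if k < crit:
        return f"Theta(n^{crit:g})"              # case 1: n^(log_b a) dominates
    return f"Theta(n^{k:g})"                     # case 3: f(n) dominates

if __name__ == "__main__":
    print(master(4, 2, 1))   # T(n) = 4T(n/2) + n  ->  Theta(n^2)
    print(master(2, 2, 1))   # T(n) = 2T(n/2) + n  ->  Theta(n^1 * log n)
    print(master(1, 2, 1))   # T(n) = T(n/2) + n   ->  Theta(n^1)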
The three cases compare f(n) with n^(log_b a), and the asymptotic order of the solution is determined by the larger of the two functions. In case 1, n^(log_b a) is the larger and T(n) = Θ(n^(log_b a)); in case 3, f(n) is the larger and T(n) = Θ(f(n)); in case 2, the two functions have the same order and T(n) = Θ(n^(log_b a)·log n), i.e., their common order multiplied by a logarithmic factor.
However, these three cases do not cover every possible f(n). There is a gap between cases 1 and 2, where f(n) is smaller than n^(log_b a) but not polynomially smaller, and a similar gap exists between cases 2 and 3; in these gaps the formula does not apply.