In algorithm analysis, when an algorithm contains a recursive call, the analysis of its time complexity reduces to solving a recurrence equation. This is essentially the mathematical problem of determining the asymptotic order of the solution. Recurrence equations take many forms, and no single solution method covers them all; the four most commonly used methods are the following:
(1) Substitution method
The basic steps of the substitution method are to first guess an explicit form of the solution of the recurrence, and then use mathematical induction to verify that the guess is correct.
(2) Iteration method
The basic steps of the iteration method are to expand the right-hand side of the recurrence repeatedly until it becomes a non-recursive sum, and then estimate that sum to obtain a bound on the left-hand side, that is, on the solution of the equation.
(3) Master method
This method applies to recurrences of the form T(n) = aT(n/b) + f(n). A recurrence of this form describes the time complexity of a divide-and-conquer algorithm: a problem of size n is divided into a subproblems of size n/b, each subproblem is solved recursively, and the problem is then solved by combining the solutions of the subproblems, with f(n) accounting for the cost of dividing and combining.
(4) Difference equation method
Some recurrences can be regarded as difference equations; the methods for solving difference equations can then be used to solve them, after which the asymptotic order of the solution is estimated (a brief sketch follows below).
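Since the difference-equation method is not revisited in the worked examples later on, here is a minimal sketch of the idea, assuming the illustrative recurrence T(n) = 2T(n-1) + 1 with T(1) = 1 (this recurrence is not taken from the article); SymPy's rsolve solves it as a linear difference equation, and the asymptotic order is read off from the closed form:

```python
# A minimal sketch of the difference-equation method, assuming the
# illustrative recurrence T(n) = 2*T(n-1) + 1 with T(1) = 1.
from sympy import Function, rsolve, symbols

n = symbols('n', integer=True)
T = Function('T')

# Treat the recurrence as a linear difference equation and solve it exactly.
closed_form = rsolve(T(n) - 2*T(n - 1) - 1, T(n), {T(1): 1})
print(closed_form)  # expected: 2**n - 1, hence T(n) = O(2**n)
```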
Examples of these methods are described below.
I. Substitution Method
The recurrence for the running time of the divide-and-conquer algorithm for multiplying large integers is T(n) = 4T(n/2) + O(n), with T(1) = O(1). We guess the solution T(n) = O(n^2). By the definition of the O-notation, this means that for n > n0 there is a constant c > 0 with T(n) ≤ cn^2. To make the induction go through, we strengthen the guess by subtracting a lower-order term and assume T(n) ≤ cn^2 - en (note: subtracting a linear term is harmless, because it is of lower order and does not affect the asymptotic bound when n is large enough). Substituting this guess into the recurrence gives:

T(n) = 4T(n/2) + O(n)
     ≤ 4[c(n/2)^2 - e(n/2)] + O(n)
     = cn^2 - 2en + O(n)
     ≤ cn^2 - en ≤ cn^2

where c and e are positive constants, with e chosen large enough that en dominates the O(n) term. The guess reproduces itself, so T(n) = O(n^2) is a solution of the recurrence, which mathematical induction confirms.
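As a quick numeric sanity check of this guess, the sketch below (with the illustrative concrete choices T(1) = 1 and an exact additive term n standing in for O(n)) evaluates the recurrence and prints the ratio T(n)/n^2, which stays bounded, consistent with T(n) = O(n^2):

```python
# Numeric check of the guess T(n) = O(n^2) for T(n) = 4*T(n/2) + n, T(1) = 1.
# The concrete constants are illustrative; the article only fixes the
# asymptotic form of the recurrence.
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n: int) -> int:
    if n <= 1:
        return 1
    return 4 * T(n // 2) + n

# If T(n) = O(n^2), the ratio T(n) / n^2 stays bounded as n grows.
for k in range(1, 21):
    n = 2 ** k
    print(n, T(n) / n ** 2)
```

For these powers of two the value works out to 2n^2 - n, so the ratio approaches 2 from below, in line with the O(n^2) bound.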
II. Iteration Method
Suppose the running time of an algorithm satisfies T(n) = 3T(n/4) + O(n), with T(1) = O(1). Expanding the right-hand side twice gives:

T(n) = 3T(n/4) + O(n)
     = O(n) + 3(O(n/4) + 3T(n/4^2))
     = O(n) + 3(O(n/4) + 3(O(n/4^2) + 3T(n/4^3)))

A pattern emerges from this expansion, and after i iterations we can write:

T(n) = O(n) + 3(O(n/4) + 3(O(n/4^2) + ... + 3(O(n/4^i) + 3T(n/4^(i+1))) ... ))

When n/4^(i+1) = 1, the recursion bottoms out and T(n/4^(i+1)) = T(1) = O(1) (treated as 1 for simplicity), so

T(n) = n + (3/4)n + (3^2/4^2)n + ... + (3^i/4^i)n + 3^(i+1) T(1)
     < 4n + 3^(i+1)

since the geometric series n(1 + 3/4 + (3/4)^2 + ...) is bounded by n/(1 - 3/4) = 4n. From n/4^(i+1) = 1 we get i < log_4 n, and therefore

3^(i+1) ≤ 3^(log_4 n + 1) = 3 * 3^(log_4 n) = 3n^(log_4 3)

using the identity a^(log_b n) = n^(log_b a). Putting the pieces together:

T(n) < 4n + 3n^(log_4 3)

and since log_4 3 < 1 the linear term dominates, so T(n) = O(n).
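The same expansion can be carried out numerically. The sketch below (again with the illustrative choices T(1) = 1 and an exact additive term n in place of O(n)) sums the work level by level and compares the result with the bound 4n + 3n^(log_4 3) derived above:

```python
# Iterative (level-by-level) expansion of T(n) = 3*T(n/4) + n with T(1) = 1.
# Constants are illustrative; the article only fixes the asymptotic form.
import math

def T(n: int) -> int:
    total, copies = 0, 1
    while n > 1:
        total += copies * n   # level j has 3^j subproblems, each doing n/4^j work
        copies *= 3
        n //= 4
    return total + copies     # 3^(i+1) base cases, each costing T(1) = 1

for k in range(1, 11):
    n = 4 ** k
    bound = 4 * n + 3 * n ** math.log(3, 4)   # the bound 4n + 3*n^(log_4 3)
    print(n, T(n), round(bound))              # T(n) stays below the bound
```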
III. Master Method
This method applies to recurrences of the form:

T(n) = aT(n/b) + f(n)

where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function. Depending on which of three cases f(n) falls into, the order of T(n) is given as follows:
1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) * log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if af(n/b) ≤ cf(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
For example, if T(n) = 4T(n/2) + n, then a = 4, b = 2, and f(n) = n. We compute n^(log_b a) = n^(log_2 4) = n^2, while f(n) = n = O(n^(2 - ε)) with ε = 1. By case 1, T(n) = Θ(n^2).
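For driving functions that are pure powers, f(n) = n^k, the case analysis can be automated. The following is a minimal sketch under that simplifying assumption (the helper name master_case and the restriction to n^k are illustrative, not from the article); it reproduces the classification of the example above:

```python
# Master-method case analysis, restricted to f(n) = n**k for simplicity.
import math

def master_case(a: int, b: int, k: float) -> str:
    """Classify T(n) = a*T(n/b) + n**k and report its asymptotic order."""
    crit = math.log(a, b)   # critical exponent log_b(a)
    if k < crit:
        return f"case 1: T(n) = Theta(n^{crit:g})"
    if k == crit:
        return f"case 2: T(n) = Theta(n^{crit:g} * log n)"
    # k > crit: for f(n) = n**k the regularity condition holds automatically,
    # since a*(n/b)**k = (a / b**k) * n**k and a / b**k < 1 when k > log_b(a).
    return f"case 3: T(n) = Theta(n^{k:g})"

print(master_case(4, 2, 1))   # the example above: case 1, Theta(n^2)
print(master_case(2, 2, 1))   # e.g. merge sort: case 2, Theta(n * log n)
print(master_case(1, 2, 1))   # a = 1, b = 2, f(n) = n: case 3, Theta(n)
```

Note that the equality test on the floating-point exponent is adequate only for small illustrative inputs; a robust tool would compare the exponents symbolically.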
In all three cases, f(n) is compared with n^(log_b a), and the order of the solution of the recurrence is determined by the larger of the two functions. In the first case, n^(log_b a) is the larger function, so T(n) = Θ(n^(log_b a)). In the third case, f(n) is the larger function, so T(n) = Θ(f(n)). In the second case, the two functions are of the same order, and T(n) = Θ(n^(log_b a) * log n), that is, their common order multiplied by a logarithmic factor of n.
However, these three cases do not cover all possible f(n). There is a gap between case 1 and case 2: f(n) may be smaller than n^(log_b a) but not polynomially smaller, that is, not smaller by a factor of n^ε for any constant ε > 0. A similar gap exists between case 2 and case 3, where f(n) is larger than n^(log_b a) but not polynomially larger; for example, in T(n) = 2T(n/2) + n log n, the function f(n) = n log n grows faster than n^(log_2 2) = n, but only by a logarithmic factor. In such cases the master theorem is not applicable.