2.8 The Asymptotic Growth of Functions
Let's determine which of two algorithms, A and B, is better. Assume the input size of both algorithms is n. Algorithm A performs 2n + 3 operations: you can think of this as one loop of n iterations, then another loop of n iterations, plus 3 assignments or arithmetic operations, for 2n + 3 operations in total. Algorithm B performs 3n + 1 operations. Which do you think is faster?
Strictly speaking, the answer is: it depends (as shown in Table 2-8-1).
When n = 1, algorithm A is less efficient than algorithm B (it performs more operations: 5 versus 4). When n = 2, the two are equally efficient (7 operations each). When n > 2, algorithm A is superior to algorithm B, and as n increases, algorithm A pulls further and further ahead (its operation count stays below B's). So we can conclude that, overall, algorithm A is better than algorithm B.
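This crossover can be checked with a short script. It is only a sketch of the comparison in the table; the function names `op_count_A` and `op_count_B` are illustrative, not from any library:

```python
# Operation counts for the two hypothetical algorithms.
def op_count_A(n):
    return 2 * n + 3  # two n-iteration loops plus 3 extra operations

def op_count_B(n):
    return 3 * n + 1

# Compare the counts for a few input sizes.
for n in (1, 2, 3, 10, 100):
    a, b = op_count_A(n), op_count_B(n)
    verdict = "A better" if a < b else ("tie" if a == b else "B better")
    print(f"n={n:3d}  A={a:4d}  B={b:4d}  {verdict}")
```

Running this shows B ahead at n = 1, a tie at n = 2, and A ahead for every larger n.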
Based on this, we introduce a definition: if, as the input size n grows without bound, one function eventually always exceeds the other, we say that it grows asymptotically faster.
Asymptotic growth of functions: given two functions f(n) and g(n), if there exists an integer N such that for all n > N, f(n) is always greater than g(n), then we say that f(n) grows faster than g(n).
As n increases, the "+3" and "+1" have no effect on the outcome of the comparison; the stripped-down algorithms A' (2n operations) and B' (3n operations) compare exactly the same way. Therefore, we can ignore these additive constants. In the following example, the insignificance of such constants will be even more obvious.
Let's look at a second example. Algorithm C performs 4n + 8 operations, and algorithm D performs 2n² + 1 (as shown in Table 2-8-2).
When n <= 3, algorithm C is inferior to algorithm D (it performs more operations), but once n > 3, algorithm C's advantage over algorithm D grows and grows, until eventually it is far better. When we remove the additive constants, the outcome does not change. We can even observe that removing the constant multiplied by n does not change it either: the operation count of algorithm C' grows with n far more slowly than that of algorithm D'. In other words, the constant multiplying the highest-order term is not what matters.
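The same kind of numerical check confirms that dropping the additive constant, and even the leading constant factor, leaves the outcome unchanged. The labels C'/D' (constants dropped) and C''/D'' (leading factors also dropped) are just names for the stripped-down variants:

```python
pairs = {
    "C  vs D  ": (lambda n: 4 * n + 8, lambda n: 2 * n * n + 1),
    "C' vs D' ": (lambda n: 4 * n,     lambda n: 2 * n * n),
    "C'' vs D''": (lambda n: n,        lambda n: n * n),
}

# In every variant, the linear function eventually needs fewer operations.
for label, (c, d) in pairs.items():
    n = 100
    print(f"{label}  {c(n):6d} < {d(n):6d}  ->  {c(n) < d(n)}")
```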
Let's look at a third example. Algorithm E performs 2n² + 3n + 1 operations, and algorithm F performs 2n³ + 3n + 1 (as shown in Table 2-8-3).
When n = 1, algorithm E performs the same number of operations as algorithm F, but once n > 1, algorithm E pulls ahead of algorithm F, and as n increases the gap becomes enormous. The observation here is that the exponent of the highest-order term dominates: the larger that exponent, the faster the operation count grows with n.
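A quick sketch makes the divergence concrete: the two counts agree at n = 1, then the cubic races away from the quadratic.

```python
def count_E(n):
    return 2 * n * n + 3 * n + 1       # quadratic leading term

def count_F(n):
    return 2 * n ** 3 + 3 * n + 1      # cubic leading term

# Equal at n = 1; the gap widens rapidly afterwards.
for n in (1, 2, 10, 100):
    print(f"n={n:3d}  E={count_E(n):8d}  F={count_F(n):8d}")
```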
Let's look at one last example. Algorithm G performs 2n² operations, algorithm H performs 3n + 1, and algorithm I performs 2n² + 3n + 1 (as shown in Table 2-8-4).
This set of data makes the point clearly. As n gets larger and larger, 3n + 1 becomes insignificant next to 2n², eventually negligible. In other words, as n becomes very large, algorithm G's count gets very close to algorithm I's. We can therefore draw a conclusion: when judging the efficiency of an algorithm, the constants and lower-order terms in the function can often be ignored; what deserves attention is the order of the leading (highest-order) term.
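The claim that the lower-order terms become negligible can be seen numerically: the ratio of algorithm I's count (2n² + 3n + 1) to algorithm G's count (2n²) approaches 1 as n grows. A minimal sketch:

```python
def count_G(n):
    return 2 * n * n

def count_I(n):
    return 2 * n * n + 3 * n + 1

# The ratio I(n) / G(n) tends to 1: the 3n + 1 part fades into noise.
for n in (1, 10, 100, 10_000):
    print(f"n={n:6d}  ratio={count_I(n) / count_G(n):.5f}")
```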
To judge whether an algorithm is good, we cannot rely on a small amount of data for an accurate verdict. From the preceding examples, we found that by comparing the asymptotic growth of the key operation-count functions of several algorithms, we can generally determine that one algorithm, as n increases, will be better (or worse) than another. This is, in fact, the theoretical basis of a priori analysis: using the time complexity of an algorithm to estimate its time efficiency.