Why should we understand algorithm efficiency?
In general, programming means solving a problem by applying known algorithms in your own code. Understanding the efficiency of the various algorithms therefore helps us choose a suitable one.
What determines the efficiency of the algorithm?
In the theory of algorithm analysis, the efficiency of an algorithm is usually evaluated by its complexity, which is expressed in asymptotic notation, most commonly the O, Θ (Theta), and Ω (Omega) notations. "Asymptotic" means we look at how the time needed to solve the problem grows as the problem size grows.
(Note: when the problem size is small, the gap between an efficient algorithm and an inefficient one is not obvious, and measurements on small inputs can easily be misleading. Algorithm analysis therefore targets large-scale inputs.)
How is the efficiency of the algorithm measured?
An algorithm is composed of control structures (sequence, branch, and loop) and basic operations (operations on built-in data types). Its running time is proportional to the number of statements it executes: the more statements executed, the longer it runs. To compare the efficiency of different algorithms for the same problem, the usual practice is to pick the algorithm's basic operation and use the number of times that operation is repeated as the time measure of the algorithm (sometimes called the time frequency), written T(n), where n is the problem size; as n changes, T(n) changes with it.

If there is an auxiliary function f(n) such that the limit of T(n)/f(n) as n approaches infinity is a nonzero constant, then f(n) is a function of the same order of magnitude as T(n), written T(n) = O(f(n)). O(f(n)) is called the asymptotic time complexity of the algorithm, or time complexity for short.
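As a concrete example (ours, not from the lecture): suppose an algorithm performs T(n) = 2n² + 3n + 1 basic operations. Then T(n)/n² approaches the constant 2 as n grows, so T(n) = O(n²). A minimal sketch that checks this numerically:

```python
def T(n):
    # hypothetical operation count: 2n^2 + 3n + 1
    return 2 * n**2 + 3 * n + 1

for n in (10, 100, 1000, 10000):
    # the ratio T(n)/n^2 approaches the nonzero constant 2
    print(n, T(n) / n**2)
```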
Several scenarios in which an algorithm can run (for example, searching for an element in a sequence of n elements; a sketch follows the list):
- Best case: the element is found at the first position, denoted O(1)
- Expected (average) case: the element is found around the middle of the sequence, after about n/2 steps, which is still O(n)
- Worst case: the element is found at the end of the sequence, or not at all, denoted O(n)
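A minimal linear-search sketch illustrating the three cases (our own example; the function name and data are ours):

```python
def linear_search(seq, target):
    """Return the index of target in seq, or -1 if absent."""
    for i, x in enumerate(seq):
        if x == target:  # the basic operation: one comparison per element
            return i
    return -1

data = list(range(100))
linear_search(data, 0)    # best case: 1 comparison
linear_search(data, 50)   # expected-like case: ~n/2 comparisons
linear_search(data, 999)  # worst case: n comparisons, then -1
```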
In practice, we generally consider only the worst-case behavior of the algorithm, that is, the maximum running time over all inputs of size n. The reasons are:
- The worst-case running time of an algorithm is an upper bound on the running time for any input: whatever the input, the algorithm can take no longer
- For some algorithms, the worst case occurs quite often; for example, searching a database for a record that does not actually exist
- Often the expected behavior of an algorithm is as bad as its worst case. For example, insertion sort is O(n²) both in the worst case (the array is in reverse order) and on average (assuming about half the elements are out of order); see the sketch below
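A minimal insertion-sort sketch (our own illustration; the shift counter is added for demonstration) showing why reverse-ordered input is the worst case:

```python
def insertion_sort(a):
    """Sort the list a in place and return the number of element shifts."""
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # shift larger elements one slot right to make room for key
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
            shifts += 1
        a[j + 1] = key
    return shifts

print(insertion_sort(list(range(10, 0, -1))))  # reverse order: n(n-1)/2 = 45 shifts
print(insertion_sort(list(range(10))))         # already sorted: 0 shifts
```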
Big O notation (the O stands for Omicron, the 15th letter of the Greek alphabet) is defined as:
For an input of size n, O gives an upper bound on the running time of the function as n grows.
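Stated formally (the standard textbook definition, equivalent to the limit-based one above):

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 > 0:\quad 0 \le f(n) \le c \cdot g(n) \ \text{ for all } n \ge n_0
```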
Types of algorithmic complexity:
- O(1) --- constant complexity
- O(n) --- linear complexity
- O(n²) --- quadratic complexity
- O(log n) --- logarithmic complexity
- O(cⁿ) --- exponential complexity
Others include O(n³) (cubic complexity), O(n!) (factorial complexity), and so on.
Let's use examples to illustrate the different types of algorithmic complexity:
Suppose we want to write our own code to compute a**b (where b is a positive integer).
Method One:
```python
def exp1(a, b):
    ans = 1
    while b > 0:       # the loop runs b times
        ans *= a       # one multiplication per iteration
        b -= 1
    return ans
```
The basic steps of this algorithm number 3b + 2 (each loop iteration takes 3 steps: the test, the multiplication, and the decrement; the loop runs b times; plus the initial assignment to ans and the return of ans, two more steps). When b is large enough, the constant terms no longer matter, so this is a linear-complexity algorithm.
Method Two:
```python
def exp2(a, b):
    if b == 1:
        return a
    return a * exp2(a, b - 1)   # one multiplication per recursive call
```
The basic steps of this algorithm number 3b - 1 (the derivation is omitted here; see the reference video at the 12th minute). So this algorithm also has linear complexity.
Method Three:
```python
def exp3(a, b):
    if b == 1:
        return a
    if b % 2 == 0:
        # if b is even: a**b == (a*a)**(b/2), so square a and halve b
        return exp3(a * a, b // 2)
    # if b is odd: a**b == a * a**(b-1)
    return a * exp3(a, b - 1)
```
The basic steps of this algorithm number about log b (the derivation is omitted here; see the reference video at the 17th minute): at least every other call halves b, so only O(log b) calls are needed. This is therefore a logarithmic-complexity algorithm.
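A quick way to see the logarithmic growth (a sketch of ours; exp3_counted is a hypothetical instrumented copy of exp3):

```python
calls = 0

def exp3_counted(a, b):
    global calls
    calls += 1
    if b == 1:
        return a
    if b % 2 == 0:
        return exp3_counted(a * a, b // 2)
    return a * exp3_counted(a, b - 1)

for b in (16, 256, 4096):
    calls = 0
    exp3_counted(3, b)
    print(b, calls)   # the call count grows like log2(b), not like b
```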
Method Four:
```python
def exp4(a, b):
    ans = 0
    for i in range(a):
        for j in range(b):
            ans += 1   # the increment runs a * b times in total
    return ans
```
The basic steps of this algorithm come from the two nested loops, which execute the increment a · b times, on the order of b² when a and b are comparable (the derivation is omitted here; see the reference video at the 20th minute). This algorithm therefore has quadratic complexity.
Comparing how running time grows for the different types of algorithmic complexity (a small sketch follows the list):
- O(1) --- input grows 10 times, the time to solve the problem stays constant
- O(n) --- input grows 10 times, the time to solve the problem grows 10 times
- O(log n) --- input grows 10 times, the time to solve the problem grows only by a constant additive amount (for example, log₁₀ n goes from 1 to 2 as n goes from 10 to 100)
- O(n²) --- input grows 10 times, the time to solve the problem grows 100 times
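A minimal sketch (our own) that prints these scale factors directly:

```python
import math

growth = {
    "O(1)":     lambda n: 1,
    "O(log n)": lambda n: math.log10(n),
    "O(n)":     lambda n: n,
    "O(n^2)":   lambda n: n ** 2,
}

n = 1000
for name, f in growth.items():
    # factor by which the cost grows when the input grows 10x
    print(name, f(10 * n) / f(n))
```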
Reference: MIT OpenCourseWare, Introduction to Computer Science and Programming (Lecture 8)