The time complexity and space complexity of an algorithm are collectively called the complexity of the algorithm.
1. Time Complexity
(1) Time frequency. The time an algorithm takes to execute cannot, in theory, be calculated; it must be measured by running the algorithm on a machine. But we neither can nor need to test every algorithm; we only need to know which algorithm takes more time and which takes less. The time an algorithm takes is proportional to the number of times its statements are executed: the algorithm whose statements execute more times takes more time. The number of times the statements in an algorithm are executed is called the statement frequency or time frequency, denoted T(n).

(2) Time complexity. In the time frequency just defined, n is called the size of the problem; as n changes, the time frequency T(n) changes with it. But sometimes we want to know what law governs this change, and for that we introduce the concept of time complexity. In general, the number of times the basic operations of an algorithm are repeated is a function of the problem size n, denoted T(n). If there is an auxiliary function f(n) such that, as n approaches infinity, the limit of T(n)/f(n) is a nonzero constant, then f(n) is a function of the same order of magnitude as T(n). We write T(n) = O(f(n)) and call O(f(n)) the asymptotic time complexity of the algorithm, or time complexity for short.

Two algorithms may have different time frequencies but the same time complexity. For example, T(n) = n² + 3n + 4 and T(n) = 4n² + 2n + 1 have different frequencies, but both have time complexity O(n²).

In ascending order of magnitude, the common time complexities are: constant order O(1), logarithmic order O(log n), linear order O(n), linear-logarithmic order O(n log n), square order O(n²), cubic order O(n³), ..., k-th power order O(n^k), and exponential order O(2^n). As the problem size n grows, the time complexity grows and the algorithm becomes less efficient.
(3) Worst-case and average time complexity. The time complexity in the worst case is called the worst-case time complexity. Unless otherwise stated, the time complexity under discussion is the worst-case time complexity. The reason is that the worst-case time complexity is an upper bound on the algorithm's running time over every input instance: it guarantees that the algorithm will never run longer than that. For example, if the worst-case time complexity is T(n) = O(n), then on any input instance the algorithm's running time cannot exceed O(n). The average time complexity is the expected running time of the algorithm when all possible input instances occur with equal probability. An algorithm whose time complexity is of exponential order O(2^n) is clearly so inefficient that it cannot be applied once n becomes even slightly large.

(4) Finding the time complexity.

① If the execution time of an algorithm does not grow with the problem size n, then even if the algorithm contains thousands of statements, its execution time is just a large constant, and its time complexity is O(1). For example:

    x = 91; y = 100;
    while (y > 0)
        if (x > 100) { x = x - 10; y--; }
        else x++;
Answer: T(n) = O(1).
This program looks a little scary, running about 1,100 times in total, but do we see an n anywhere?
We don't. The running time of this program is independent of n.
Even if the loop ran for another ten thousand years, we would not care: it is still just a constant-order function.

② When there are several loop statements, the time complexity of the algorithm is determined by the frequency f(n) of the innermost statement of the loop with the most nesting levels. For example:

    x = 1;
    for (i = 1; i <= n; i++)
        for (j = 1; j <= i; j++)
            for (k = 1; k <= j; k++)
                x++;

The most frequently executed statement in this segment is x++. The number of executions of the inner loops is not directly related to the problem size but to the values of the outer loop variables, while the iteration count of the outermost loop is directly related to n. Analyzing from the inner loop outward, x++ executes n(n+1)(n+2)/6 times in total, so the time complexity of the segment is T(n) = O(n³/6 + lower-order terms) = O(n³).

③ The time complexity of an algorithm depends not only on the problem size but also on the initial state of the input instance. An algorithm that searches for a given value k in the array a[0..n-1] is roughly as follows:

    i = n - 1;
    while (i >= 0 && a[i] != k)
        i--;
    return i;

The frequency of the statement i-- in this algorithm depends not only on the problem size n but also on the element values of a and the value of k in the input instance: (a) if no element of a equals k, the frequency of i-- is f(n) = n; (b) if the last element of a equals k, the frequency of i-- is the constant 0.

(5) Evaluating performance with time complexity. Two algorithms, A1 and A2, solve the same problem with time complexities T1(n) = 100n² and T2(n) = 5n³. (1) When the input size is n < 20, T1(n) > T2(n), so A2 takes less time. (2) As the problem size n increases, the ratio of the two algorithms' time costs, 5n³/100n² = n/20, increases with it. That is, when the problem size is large, algorithm A1 is more effective than algorithm A2.
Their asymptotic time complexities, O(n²) and O(n³), evaluate the time quality of the two algorithms at a macroscopic level. In algorithm analysis, the time complexity and the asymptotic time complexity of an algorithm are often not distinguished: the asymptotic time complexity T(n) = O(f(n)) is simply called the time complexity, where f(n) is usually the frequency of the most frequently executed statement in the algorithm.
2. Space Complexity

The space complexity of a program is the amount of memory required to run it. Knowing a program's space complexity lets us estimate in advance how much memory it will need. Besides the storage space for its own instructions, constants, variables, and input data, a program also needs some working units for manipulating data and some auxiliary space for storing the information needed during the computation. The storage space required for program execution consists of the following two parts.

(1) The fixed part. The size of this part is independent of the amount of input/output data. It mainly includes the instruction space (i.e., code space) and the data space for constants and simple variables. This part is static.

(2) The variable part. This part mainly includes dynamically allocated space and the space required for the recursion stack. Its size is related to the algorithm.

The storage space required by an algorithm is denoted f(n), and S(n) = O(f(n)), where n is the problem size and S(n) denotes the space complexity.