You don't know how stressful life is until you've been through a written coding test.
An algorithm's time complexity and space complexity are together called the complexity of the algorithm.
1. Time Frequency:
The time an algorithm takes to execute cannot be computed theoretically; it can only be known by running tests on a machine. But we neither can nor need to test every algorithm on a machine; we only need to know which algorithms take more time and which take less. The time an algorithm costs is proportional to the number of times the statements in the algorithm are executed: the algorithm whose statements are executed more times takes more time. The number of times a statement is executed in an algorithm is called its statement frequency, or time frequency, denoted T(n).
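As a minimal sketch of counting statement frequency (my illustration, not from the original post), the C program below counts how many times the innermost statement of a double loop executes; for this fragment the frequency of count++ is T(n) = n²:

#include <stdio.h>

int main(void) {
    int n = 10;
    long count = 0;                 /* records the statement frequency */
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            count++;                /* the counted statement: runs n*n times */
    printf("T(%d) = %ld\n", n, count);   /* prints T(10) = 100 */
    return 0;
}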
2. Time Complexity:
In the time frequency just mentioned, n is called the problem size; as n keeps changing, the time frequency T(n) changes with it. But sometimes we want to know what pattern it follows as it changes, and for this we introduce the concept of time complexity. In general, the number of times the basic operation in an algorithm is repeated is a function of the problem size n, denoted T(n). If there is an auxiliary function f(n) such that the limit of T(n)/f(n), as n approaches infinity, is a nonzero constant, then f(n) is said to be a function of the same order of magnitude as T(n). We write T(n) = O(f(n)) and call O(f(n)) the asymptotic time complexity of the algorithm, or time complexity for short.
Different time frequencies can share the same time complexity. For example, T(n) = n² + 3n + 4 and T(n) = 4n² + 2n + 1 have different frequencies but the same time complexity: both are O(n²).
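As a quick worked check against the limit definition above (a step added here for clarity, not in the original post):

\[
\lim_{n\to\infty}\frac{4n^2 + 2n + 1}{n^2}
  = \lim_{n\to\infty}\left(4 + \frac{2}{n} + \frac{1}{n^2}\right)
  = 4 \neq 0,
\]

so f(n) = n² is a same-order-of-magnitude function for T(n) = 4n² + 2n + 1; the same computation gives the limit 1 for n² + 3n + 4, so both frequencies are O(n²).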
In order of increasing magnitude, the common time complexities are: constant order O(1), logarithmic order O(log₂n), linear order O(n), linear-logarithmic order O(n log₂n), square order O(n²), cubic order O(n³), ..., k-th power order O(n^k), and exponential order O(2^n). As the problem size n grows, the time complexity grows with it and the algorithm becomes less efficient.

3. Worst-Case and Average Time Complexity:
The time complexity in the worst case is called the worst-case time complexity. Unless stated otherwise, the time complexity under discussion is the worst-case time complexity. The reason is that the worst-case time complexity is an upper bound on the algorithm's running time over all input instances: it guarantees that the running time will never be longer than that bound. For example, a worst-case time complexity of T(n) = O(n) means that for any input instance the algorithm's running time is at most O(n). The average time complexity is the expected running time of the algorithm when all possible input instances appear with equal probability. As for the exponential order O(2^n): an algorithm with exponential time complexity is obviously very inefficient and becomes unusable once n gets even slightly large.

4. Time Complexity Examples:
(1) If the execution time of an algorithm does not grow with the problem size n, then even if the algorithm contains thousands of statements, its execution time is just a large constant, and its time complexity is O(1).

x = 91; y = 100;
while (y > 0)
    if (x > 100) { x = x - 10; y--; }
    else x++;
Answer: T(n) = O(1).
This program looks a little scary: it runs 1,100 loop iterations in total. But did we see an n anywhere?
No.
The running time of this program is independent of n.
Even if it kept looping for another ten thousand years, we wouldn't care: it is just a constant-order function.

(2) When there are several loop statements, the time complexity of the algorithm is determined by the frequency f(n) of the innermost statement in the loop with the most nesting levels.

x = 1;
for (i = 1; i <= n; i++)
    for (j = 1; j <= i; j++)
        for (k = 1; k <= j; k++)
            x++;

The most frequently executed statement in this segment is the innermost x++. Its number of executions in the inner loops is not directly related to the problem size n but depends on the values of the outer loop variables, while the number of iterations of the outermost loop is directly related to n. Analyzing the execution count of x++ from the inner loop outward gives T(n) = O(n³/6 + lower-order terms) = O(n³). (A small verification sketch appears at the end of this section.)

(3) The time complexity of an algorithm depends not only on the problem size but also on the initial state of the input instance. An algorithm to find a given value k in the array a[0..n-1] is roughly the following:

i = n - 1;
while (i >= 0 && (a[i] != k))
    i--;
return i;

The frequency of the statement i-- in this algorithm is related not only to the problem size n but also to the element values of a and the value of k in the input instance:
① if no element of a equals k, the frequency of i-- is f(n) = n;
② if the last element of a equals k, the frequency of i-- is the constant 0.

5. Evaluating Performance with Time Complexity:
Suppose two algorithms A1 and A2 solve the same problem, with time complexities T1(n) = 100n² and T2(n) = 5n³ respectively.
(1) When the input size n < 20, T1(n) > T2(n): the latter takes less time.
(2) As the problem size n grows, the ratio of the two costs, 5n³/100n² = n/20, grows with it. That is, when the problem size is large, algorithm A1 is more effective than algorithm A2.
Their asymptotic time complexities O(n²) and O(n³) evaluate the time quality of the two algorithms at a macroscopic level. In algorithm analysis, the time complexity and the asymptotic time complexity of an algorithm are often not distinguished: the asymptotic time complexity T(n) = O(f(n)) is simply called the time complexity, and f(n) is generally the frequency of the most frequently executed statement in the algorithm.
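The verification sketch promised in example (2) above (my own check, not from the original post): it counts the executions of the innermost x++ directly and compares the count with n(n+1)(n+2)/6, whose leading term is n³/6:

#include <stdio.h>

int main(void) {
    for (long n = 1; n <= 6; n++) {
        long count = 0;
        for (long i = 1; i <= n; i++)
            for (long j = 1; j <= i; j++)
                for (long k = 1; k <= j; k++)
                    count++;        /* innermost statement, frequency f(n) */
        printf("n=%ld: count=%ld, n(n+1)(n+2)/6=%ld\n",
               n, count, n * (n + 1) * (n + 2) / 6);
    }
    return 0;
}

Each line prints two matching values, confirming f(n) = n(n+1)(n+2)/6 and hence O(n³).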
Space Complexity:
The space complexity of a program is the amount of memory required to run it. With the space complexity of a program, you can estimate in advance how much memory the program will need. Besides the storage space for its own instructions, constants, variables, and input data, a running program also needs some working units for manipulating data and some auxiliary space for holding the information needed by the actual computation. The storage space required during program execution consists of the following two parts:
(1) The fixed part. The size of this part is independent of the amount of input/output data. It mainly includes the instruction space (i.e., code space) and the data space for constants and simple variables. This part is static.
(2) The variable part. This part mainly includes dynamically allocated space as well as the space required by the recursion stack. Its size depends on the algorithm. If the storage space required by an algorithm is denoted f(n), then S(n) = O(f(n)), where n is the problem size and S(n) is the space complexity of the algorithm.
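As an illustrative sketch of the variable part (my example, not from the referenced posts): both functions below compute 1 + 2 + ... + n, but the recursive version needs O(n) space for its recursion stack, while the iterative version needs only O(1) auxiliary space:

#include <stdio.h>

/* Recursive sum: each pending call occupies a stack frame, so the
   recursion stack grows to depth n -> auxiliary space S(n) = O(n). */
long sum_recursive(int n) {
    if (n <= 0) return 0;
    return n + sum_recursive(n - 1);
}

/* Iterative sum: a fixed number of variables regardless of n
   -> auxiliary space S(n) = O(1). */
long sum_iterative(int n) {
    long s = 0;
    for (int i = 1; i <= n; i++)
        s += i;
    return s;
}

int main(void) {
    printf("%ld %ld\n", sum_recursive(1000), sum_iterative(1000));  /* 500500 500500 */
    return 0;
}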
Original reference: http://blog.csdn.net/booirror/article/details/7707551. Other reference: http://blog.csdn.net/qiantujava/article/details/12898461