1. Time Complexity
(1) Time frequency. The time required to execute an algorithm cannot, in general, be computed theoretically; you would have to run it on a machine to know it exactly. But it is neither possible nor necessary to test every algorithm on a machine: we only need to know which algorithm takes more time and which takes less. The time an algorithm spends is proportional to the number of statement executions it performs; an algorithm that executes more statements takes more time. The number of statement executions in an algorithm is called its statement frequency or time frequency, denoted T(n).
(2) Time complexity. In the time frequency just mentioned, n is called the scale of the problem. As n changes, the time frequency T(n) changes with it, and we usually want to know by what rule it grows. For this we introduce the concept of time complexity. In general, the number of times the basic operation of an algorithm is repeated is a function of the problem scale n, written T(n). If there is an auxiliary function f(n) such that the limit of T(n)/f(n) as n approaches infinity is a nonzero constant, then f(n) is said to be of the same order of magnitude as T(n). This is written T(n) = O(f(n)), and O(f(n)) is called the asymptotic time complexity of the algorithm.
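As a quick illustration of this definition (the numbers here are chosen only as an example, they are not from the text): if an algorithm's time frequency is T(n) = 3n^2 + 2n + 1 and we take f(n) = n^2, then T(n)/f(n) = 3 + 2/n + 1/n^2, whose limit as n approaches infinity is 3, a nonzero constant. Hence T(n) = O(n^2).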
In different algorithms, if the number of statement executions is a constant, the time complexity is O(1). The time complexity can also be the same while the time frequency differs: for example, T(n) = n^2 + 3n + 4 and T(n) = 4n^2 + 2n + 1 have different frequencies but the same time complexity, O(n^2). Arranged in increasing order of magnitude, common time complexities are: constant order O(1), logarithmic order O(log2n), linear order O(n), linear-logarithmic order O(nlog2n), square order O(n^2), cubic order O(n^3), ..., k-th power order O(n^k), and exponential order O(2^n). As the problem scale n grows, the time complexity grows and the execution efficiency of the algorithm falls.
Space complexity is defined analogously: it measures the storage space an algorithm requires when executed on a computer, written S(n) = O(f(n)). We generally discuss only the auxiliary storage units used beyond the memory occupied by the input itself; the method of discussion is the same as for time complexity (see section 2 below).
(3) Evaluating an algorithm by asymptotic time complexity. The time performance of an algorithm is evaluated mainly by the order of magnitude of its time complexity, that is, by its asymptotic time complexity.
[Example 3.7] Two algorithms, A1 and A2, solve the same problem with time complexities T1(n) = 100n^2 and T2(n) = 5n^3, respectively.
(1) For inputs with n < 20, T1(n) > T2(n), so the latter (A2) takes less time.
(2) As the problem scale n grows, the ratio of the two algorithms' costs, 5n^3 / 100n^2 = n/20, also grows. That is, when the problem scale is large, algorithm A1 is more efficient than algorithm A2. Their asymptotic time complexities O(n^2) and O(n^3) evaluate, at the level of orders of magnitude, the time quality of the two algorithms. In algorithm analysis, the asymptotic time complexity and the time complexity of an algorithm are usually not distinguished: T(n) = O(f(n)) is simply called the time complexity, and f(n) is generally the frequency of the statement executed most often in the algorithm.
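To make the comparison concrete (the numbers below are just illustrative substitutions into the two formulas): at n = 10, T1 = 10000 > T2 = 5000, so A2 is cheaper; at n = 20, T1 = T2 = 40000, so the two cost the same; at n = 100, T1 = 1,000,000 < T2 = 5,000,000, so A1 is cheaper. This is exactly the crossover behavior described by the ratio n/20.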
[Example 3.8] The time complexity of matrix multiplication (matrixmultiply) is T(n) = O(n^3), where f(n) = n^3 is the frequency of statement (5) in that algorithm; a sketch of such a routine is given below, and the examples after it show how the time complexity of an algorithm is computed.
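The matrixmultiply routine itself is not reproduced in this excerpt; the following is only a minimal C99 sketch of the standard algorithm, with the innermost multiply-accumulate marked as statement (5) purely for illustration (the function name and parameter layout are our assumptions, not the original code):

/* Minimal sketch: c = a * b for n x n matrices. */
void matrixmultiply(int n, double a[n][n], double b[n][n], double c[n][n])
{
    for (int i = 0; i < n; i++)              /* row loop: n times        */
        for (int j = 0; j < n; j++)          /* column loop: n^2 times   */
        {
            c[i][j] = 0;                     /* n^2 times                */
            for (int k = 0; k < n; k++)
                c[i][j] += a[i][k] * b[k][j];   /* (5) executed n^3 times */
        }
}

The innermost statement runs n times for each of the n^2 (i, j) pairs, which is where f(n) = n^3 comes from.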
[Example 3.9] Exchange the contents of i and j.
temp = i; i = j; j = temp;
The frequency of each of the three statements above is 1, so the execution time of this program segment is a constant independent of the problem scale n. The time complexity of the algorithm is of constant order, written T(n) = O(1). If the execution time of an algorithm does not grow as n grows, then even if the algorithm contains thousands of statements its execution time is only a large constant, and its time complexity is O(1).
[Example 3.10] Variable count 1.
(1) x = 0; y = 0;
(2) for (k = 1; k <= n; k++)
(3)     x++;
(4) for (i = 1; i <= n; i++)
(5)     for (j = 1; j <= n; j++)
(6)         y++;
Generally, for a for statement we only need to count the executions of the statements in the loop body, ignoring the components of the statement itself such as the step increment, the test of the final value, and the transfer of control. The statement with the highest frequency in the segment above is (6), whose frequency is f(n) = n^2, so the time complexity of the segment is T(n) = O(n^2). When several loop statements are present, the time complexity of the algorithm is determined by the frequency f(n) of the innermost statement in the most deeply nested loop.
[Example 3.11] Variable count 2.
(1) x = 1;
(2) for (i = 1; i <= n; i++)
(3)     for (j = 1; j <= i; j++)
(4)         for (k = 1; k <= j; k++)
(5)             x++;
The statement with the highest frequency in this segment is (5). Although the number of executions of the innermost loop is not directly determined by the problem scale n, it depends on the loop variables of the outer loops, and the number of iterations of the outermost loop is directly determined by n. The execution count of statement (5) can therefore be analyzed from the inside outward: summing 1 over k = 1..j, j = 1..i and i = 1..n gives n(n+1)(n+2)/6, so the time complexity of the segment is T(n) = O(n^3/6 + lower-order terms) = O(n^3).
(4) The time complexity of an algorithm depends not only on the problem scale but also on the initial state of the input instance.
[Example 3.12] The algorithm for finding a given value k in the array a[0..n-1] is roughly as follows:
(1) i = n - 1;
(2) while (i >= 0 && a[i] != k)
(3)     i--;
(4) return i;
The frequency of statement (3) in this algorithm depends not only on the problem scale n but also on the values of the elements of a and of k in the input instance: ① if no element of a equals k, the frequency of statement (3) is f(n) = n; ② if the last element of a equals k, the frequency of statement (3) is the constant 0.
(5) Worst-case time complexity and average time complexity. The worst-case time complexity is the time complexity in the worst case. Unless otherwise stated, the time complexity discussed is the worst-case time complexity.
The reason is that the worst-case time complexity is an upper bound on the running time of the algorithm over every input instance, which guarantees that the algorithm will not run longer than this on any input.
[Example 3.19] Search algorithm.
The search algorithm of [Example 1-8] has, in the worst case, time complexity T(n) = O(n), which means that for any input instance the running time of the algorithm is at most of order O(n).
The average time complexity is the expected running time of the algorithm when all possible input instances occur with equal probability.
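For instance, applying this to the search algorithm of Example 3.12 (a worked illustration under the stated equal-probability assumption): if k is equally likely to be found at each of the n positions of a, then when k is at position i (0 <= i <= n-1) statement (3) executes n-1-i times, so the expected frequency is (1/n) * [(n-1) + (n-2) + ... + 1 + 0] = (n-1)/2, and the average time complexity of the search is still O(n).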
Common time complexities, arranged by order of magnitude: constant order O(1), logarithmic order O(log2n), linear order O(n), linear-logarithmic order O(nlog2n), O(n^2), O(n^3), ..., k-th power order O(n^k), exponential order O(2^n). Clearly, an algorithm whose time complexity is the exponential order O(2^n) is extremely inefficient and becomes unusable as soon as n is even moderately large.
2. Space Complexity
Similar to the discussion of time complexity, the space complexity S(n) of an algorithm is defined as the storage space the algorithm consumes; it too is a function of the problem scale n. The asymptotic space complexity is often referred to simply as the space complexity.
Space complexity is a measure of how much storage an algorithm temporarily occupies while it runs. The memory an algorithm occupies includes the space occupied by the algorithm itself, the space occupied by its input and output data, and the space it occupies temporarily during execution. The space occupied by the input and output data is determined by the problem being solved and is passed in through the parameter list of the calling function; it does not change with the algorithm. The space occupied by the algorithm itself is proportional to the length of the algorithm's text; to reduce it one must write a shorter algorithm. The temporary working space an algorithm occupies varies from algorithm to algorithm. Some algorithms need only a few temporary working units, and this number does not change with the size of the problem; such algorithms are called "in-place" and are economical of storage, as with the algorithms discussed in this section. Other algorithms need a number of temporary working units that depends on the problem scale n and grows with it; when n is large they occupy many storage units. The quicksort and merge sort algorithms described in Chapter 9 are of this kind.
The storage an algorithm occupies must therefore be analyzed from several angles. For example, a recursive algorithm is usually short, so the algorithm text itself occupies little space, but at run time it needs an additional stack and thus occupies many temporary working units. An equivalent non-recursive algorithm may be longer, so its text occupies more space, but at run time it may need only a few working units.
The space complexity of an algorithm considers only the storage allocated for local variables during execution. This includes two parts: the storage allocated for the formal parameters in the parameter list and the storage allocated for the local variables defined in the function body. If the algorithm is recursive, its space complexity is the size of the recursion stack, which equals the size of the temporary storage allocated for one call multiplied by the number of calls (that is, the number of recursive calls plus 1, where the extra 1 stands for the initial non-recursive call). The space complexity of an algorithm is likewise given as an order of magnitude: if it is a constant that does not change with n, it is written O(1); if it is proportional to the base-2 logarithm of n, it is written O(log2n); if it is linearly proportional to n, it is written O(n). If a parameter is an array, only the space for one address pointer passed by the actual argument needs to be allocated, i.e., one machine word; if a parameter is passed by reference, only one address needs to be allocated for it, storing the address of the corresponding actual-argument variable so that the system can automatically access that variable.
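A minimal sketch of this point, using a factorial function chosen here purely as an illustration (it is not an example from the text): the recursive version allocates one stack frame per call, so its space complexity is O(n), while the equivalent loop uses a constant number of local variables, so its space complexity is O(1).

/* Recursive version: about n nested calls => recursion stack depth O(n). */
long factorial_recursive(int n)
{
    if (n <= 1)
        return 1;
    return n * factorial_recursive(n - 1);
}

/* Iterative version: only two local variables => O(1) extra space. */
long factorial_iterative(int n)
{
    long result = 1;
    for (int i = 2; i <= n; i++)
        result *= i;
    return result;
}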
For a given algorithm, time complexity and space complexity often influence each other. Pursuing a better time complexity may worsen the space complexity, that is, cause the algorithm to occupy more storage space; conversely, pursuing a better space complexity may worsen the time complexity, that is, cause the algorithm to run longer. Moreover, all the performance aspects of an algorithm affect one another to some degree. Therefore, when designing an algorithm (especially a large one), one must weigh factors such as the algorithm's overall performance, how frequently it will be used, the size of the data it processes, the characteristics of the language used to describe it, and the machine environment in which it will run; only then can a good algorithm be designed.
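As a small illustration of this trade-off (the code, constants and names below are our own example, not from the text): computing a Fibonacci number on demand with a loop uses O(1) extra space but O(n) time per request, whereas precomputing the values into a table spends O(n) extra space once so that each later request costs only O(1) time.

#define MAXN 90                    /* largest index precomputed; fits in 64 bits */
long long fib_table[MAXN + 1];

void build_fib_table(void)         /* done once: O(n) time, O(n) extra space */
{
    fib_table[0] = 0;
    fib_table[1] = 1;
    for (int i = 2; i <= MAXN; i++)
        fib_table[i] = fib_table[i - 1] + fib_table[i - 2];
}

long long fib_query(int n)         /* each later query: O(1) time */
{
    return fib_table[n];
}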
The time complexity and space complexity of an algorithm are collectively referred to as the complexity of an algorithm.
------------------------------------------------
O(1)
temp = i; i = j; j = temp;
The frequency of each of the three statements above is 1, and the execution time of this program segment is a constant independent of the problem scale n, so the time complexity of the algorithm is of constant order, written T(n) = O(1). If an algorithm's execution time does not grow as the problem scale n grows, then even if it contains thousands of statements its execution time is only a large constant, and its time complexity is O(1).
O(n^2)
2.1.
sum = 0;                          (1 time)
for (i = 1; i <= n; i++)          (n times)
    for (j = 1; j <= n; j++)      (n^2 times)
        sum++;                    (n^2 times)
Solution: T(n) = 2n^2 + n + 1 = O(n^2).
2.2.
for (i = 1; i < n; i++)
{
    y = y + 1;                        ①
    for (j = 0; j <= (2 * n); j++)
        x++;                          ②
}
Solution: The frequency of statement ① is n - 1.
The frequency of statement ② is (n - 1) * (2n + 1) = 2n^2 - n - 1.
f(n) = 2n^2 - n - 1 + (n - 1) = 2n^2 - 2.
The time complexity of this program segment is T(n) = O(n^2).
O(n)
2.3.
a = 0;
b = 1;                        ①
for (i = 1; i <= n; i++)      ②
{
    s = a + b;                ③
    b = a;                    ④
    a = s;                    ⑤
}
Solution: Frequency of statement ①: 2.
Frequency of statement ②: n.
Frequency of statement ③: n - 1.
Frequency of statement ④: n - 1.
Frequency of statement ⑤: n - 1.
T(n) = 2 + n + 3(n - 1) = 4n - 1 = O(n).
O(log2n)
2.4.
i = 1;              ①
while (i <= n)
    i = i * 2;      ②
Solution: The frequency of statement ① is 1.
Let the frequency of statement ② be f(n); then 2^f(n) <= n, so f(n) <= log2n.
Taking the maximum value, f(n) = log2n, so
T(n) = O(log2n).
O(n^3)
2.5.
for (i = 0; i < n; i++)
{
    for (j = 0; j < i; j++)
    {
        for (k = 0; k < j; k++)
            x = x + 2;
    }
}
Solution: When i = m and j = k, the innermost loop executes k times. For i = m, j takes the values 0, 1, ..., m - 1, so the innermost statement executes 0 + 1 + ... + (m - 1) = (m - 1)m / 2 times in total. Letting i run over 0, 1, ..., n - 1, the total number of executions is 0 + (1 - 1) * 1 / 2 + ... + (n - 2)(n - 1) / 2 = n(n - 1)(n - 2) / 6, so the time complexity is O(n^3).
We should also distinguish an algorithm's worst-case behavior from its expected behavior. For example, the worst-case running time of quicksort is O(n^2), but its expected running time is O(nlogn). By choosing the pivot carefully each time, we can reduce the probability of the quadratic, i.e. O(n^2), case to almost zero. In practice, a well-implemented quicksort generally runs in O(nlogn) time.
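A minimal C sketch of the idea (our own illustration, not code from the text): choosing the pivot index at random makes the O(n^2) worst case extremely unlikely, so the expected running time stays O(nlogn).

#include <stdlib.h>

/* Sort a[lo..hi] in place around a randomly chosen pivot. */
void quicksort(int a[], int lo, int hi)
{
    if (lo >= hi)
        return;
    int p = lo + rand() % (hi - lo + 1);   /* random pivot index */
    int pivot = a[p];
    a[p] = a[hi]; a[hi] = pivot;           /* move the pivot to the end */
    int i = lo;
    for (int j = lo; j < hi; j++)          /* standard partition pass */
        if (a[j] < pivot) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    a[hi] = a[i]; a[i] = pivot;            /* put the pivot in its final place */
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}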
Below are some common rules of thumb:
Accessing an element of an array is a constant-time, i.e. O(1), operation. If an algorithm discards half of the remaining data elements at every step, as binary search does, it usually takes O(logn) time. Comparing two strings of n characters with strcmp takes O(n) time. The ordinary matrix multiplication algorithm is O(n^3), because computing each element requires multiplying and adding n pairs of numbers, and there are n^2 elements in all.
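For example, a minimal C sketch of binary search over a sorted array (our own illustration, not code from the text), which discards half of the remaining range on every iteration and therefore runs in O(logn) time:

/* Return the index of key in the sorted array a[0..n-1], or -1 if absent. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* middle of the remaining range */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* discard the left half */
        else
            hi = mid - 1;               /* discard the right half */
    }
    return -1;
}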
Exponential-time algorithms usually arise from enumerating all possible results. For example, a set of n elements has 2^n subsets, so an algorithm that must examine every subset will be O(2^n). Exponential algorithms are generally too expensive unless n is very small, because adding one element to the problem doubles the running time. Unfortunately, there are indeed many problems (such as the famous Traveling Salesman Problem) for which all algorithms found so far are exponential. When this happens, we usually fall back on an algorithm that finds an approximately optimal result.