Computing the time complexity of code: http://hi.baidu.com/dbfr2011818/item/f99fe7df0d65471bd68ed0ee
Definition: if the scale of a problem is n, the time required by an algorithm to solve it is T(n), a function of n. T(n) is called the "time complexity" of the algorithm.
The specific steps for finding the time complexity of an algorithm are as follows:
[1] Find the basic statement of the algorithm: the statement executed the most times is the basic statement, usually the body of the innermost loop.
[2] Count how many times the basic statement executes: only the highest-order term of that count matters; its coefficient and all lower-order terms can be ignored. This simplifies the analysis and keeps the focus on the most important point: the growth rate.
[3] Express the time performance of the algorithm in big-O notation.
If the algorithm contains nested loops, the basic statement is usually the innermost loop body. If the algorithm contains sequential loops, the time complexities of the loops are added. For example:
for (i = 1; i <= n; i++)
    x++;
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        x++;
The time complexity of the first for loop is O(n) and that of the second is O(n^2), so the time complexity of the whole algorithm is O(n + n^2) = O(n^2).
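The addition rule above is easy to check empirically. The sketch below (a toy of ours, not from the source) counts how many times x++ executes across the two loops, confirming the total n + n^2:

```c
/* Count how many times the loop bodies in the example above execute.
   The first loop contributes n executions, the nested one n*n. */
long count_ops(int n) {
    long ops = 0;
    for (int i = 1; i <= n; i++)
        ops++;                      /* first loop body: runs n times */
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            ops++;                  /* nested loop body: runs n*n times */
    return ops;                     /* n + n^2, dominated by the n^2 term */
}
```

For n = 100 the count is 100 + 10000 = 10100; the quadratic term dominates, matching O(n^2).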
Binary search is O(log n); that is, searching a sorted array of n elements takes on the order of log n steps. The notation O(f(n)) means that as n grows, the running time grows at most proportionally to f(n).
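A minimal binary search sketch illustrating the claim: each iteration discards half of the remaining range, so at most about log2(n) iterations run.

```c
/* Iterative binary search on a sorted int array.
   Each step halves the remaining range, so at most ~log2(n) steps run.
   Returns the index of key, or -1 if it is absent. */
int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* discard the lower half */
        else
            hi = mid - 1;               /* discard the upper half */
    }
    return -1;
}
```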
The time complexities of common algorithms, ordered from small to large:
O(1) < O(log2 n) < O(n) < O(n log2 n) < O(n^2) < O(n^3) < ... < O(2^n) < O(n!)
O(1) means the number of executions of the basic statement is a constant; in general, as long as an algorithm contains no loop statements, its time complexity is O(1). O(log2 n), O(n), O(n log2 n), O(n^2), and O(n^3) are called polynomial time, while O(2^n) and O(n!) are called exponential time. Computer scientists generally regard the former as efficient algorithms; problems of that kind are called P problems, while the latter kind are called NP problems.
O(1): temp = i; i = j; j = temp;  /* exchange the contents of i and j */
Each of the three statements above executes once, so the running time of this fragment is a constant, independent of the problem size n. The algorithm's time complexity is constant order, written T(n) = O(1). If an algorithm's execution time does not grow as n grows, then even if it contains thousands of statements its execution time is just a large constant, and its time complexity is still O(1).
O(n^2)
sum = 0;                      (executed once)
for (i = 1; i <= n; i++)      (executed n times)
    for (j = 1; j <= n; j++)  (executed n^2 times)
        sum++;                (executed n^2 times)
Solution: T(n) = 1 + n + n^2 + n^2 = 2n^2 + n + 1 = O(n^2)
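The statement count above can be verified mechanically. In this sketch (an instrumented toy of ours), every execution that the text charges to a line bumps a counter, so the total can be compared against 2n^2 + n + 1:

```c
/* Instrumented version of the fragment above: bump a counter once per
   counted statement execution and check it against 2n^2 + n + 1. */
long count_statements(int n) {
    long t = 0;
    int sum = 0;
    t++;                                /* sum = 0;        executed once */
    for (int i = 1; i <= n; i++) {
        t++;                            /* outer loop entry: n times */
        for (int j = 1; j <= n; j++) {
            t++;                        /* inner loop entry: n^2 times */
            sum++;
            t++;                        /* sum++;          n^2 times */
        }
    }
    return t;                           /* 1 + n + n^2 + n^2 = 2n^2 + n + 1 */
}
```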
for (i = 1; i < n; i++)
{
    y = y + 1;                ①
    for (j = 0; j <= 2 * n; j++)
        x++;                  ②
}
Solution: the frequency of statement ① is n - 1.
The frequency of statement ② is (n - 1) * (2n + 1) = 2n^2 - n - 1.
f(n) = 2n^2 - n - 1 + (n - 1) = 2n^2 - 2.
The time complexity of this program is T(n) = O(n^2).
O(n)
a = 0;
b = 1;                        ①
for (i = 1; i <= n; i++)      ②
{
    s = a + b;                ③
    b = a;                    ④
    a = s;                    ⑤
}
Solution: statement ① has frequency 2,
statement ② has frequency n,
statement ③ has frequency n - 1,
statement ④ has frequency n - 1,
statement ⑤ has frequency n - 1,
T(n) = 2 + n + 3(n - 1) = 4n - 1 = O(n).
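Incidentally, the fragment above is an iterative Fibonacci computation (a = F(0) = 0, b = F(1) = 1). A runnable wrapper (the function name is ours) makes its linear behavior easy to exercise:

```c
/* Iterative Fibonacci: exactly the loop above. The body runs n times,
   which is why the statement count T(n) = 4n - 1 is linear in n. */
long fib(int n) {
    long a = 0, b = 1;
    for (int i = 1; i <= n; i++) {
        long s = a + b;
        b = a;      /* old a becomes the previous term */
        a = s;      /* a now holds F(i) */
    }
    return a;
}
```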
O(log2 n)
i = 1;                        ①
while (i <= n)
    i = i * 2;                ②
Solution: the frequency of statement ① is 1.
Let the frequency of statement ② be f(n); then 2^f(n) <= n, so f(n) <= log2 n.
Taking the maximum value, f(n) = log2 n,
so T(n) = O(log2 n).
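A quick empirical check of the doubling loop above (the counting wrapper is ours): i takes the values 1, 2, 4, ... until it exceeds n, so the body runs floor(log2 n) + 1 times.

```c
/* Counts iterations of the doubling loop above.
   The body runs floor(log2(n)) + 1 times, i.e. O(log2 n). */
int doubling_steps(long n) {
    int steps = 0;
    long i = 1;
    while (i <= n) {
        i = i * 2;      /* i doubles each pass: 1, 2, 4, 8, ... */
        steps++;
    }
    return steps;
}
```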
O(n^3)
for (i = 0; i < n; i++)
{
    for (j = 0; j < i; j++)
    {
        for (k = 0; k < j; k++)
            x = x + 2;
    }
}
Solution: when i = m and j = k, the innermost loop runs k times. When i = m, j can take the values 0, 1, ..., m - 1, so the innermost statement executes 0 + 1 + ... + (m - 1) = m(m - 1)/2 times. As i runs from 0 to n - 1, the total count is the sum of m(m - 1)/2 for m = 0, 1, ..., n - 1, which equals n(n - 1)(n - 2)/6. Hence the time complexity is O(n^3).
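The closed form can be checked directly: the sketch below runs the triple loop, tallies the executions of the innermost statement, and the total matches n(n - 1)(n - 2)/6 (the binomial coefficient C(n, 3)).

```c
/* Brute-force count of how often the innermost statement runs in the
   triple loop above. Closed form: C(n,3) = n(n-1)(n-2)/6. */
long triple_loop_count(int n) {
    long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < i; j++)
            for (int k = 0; k < j; k++)
                count++;        /* one execution of the innermost body */
    return count;
}
```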
We should also distinguish the worst-case behavior of an algorithm from its expected behavior. For example, the worst-case running time of quicksort is O(n^2), but its expected time is O(n log n). By choosing the pivot carefully each time, we can reduce the probability of quadratic (that is, O(n^2)) behavior to almost zero. In practice, a well-implemented quicksort generally runs in O(n log n) time.
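A minimal randomized quicksort sketch illustrating the point: picking the pivot at random makes it very unlikely that any particular input triggers the O(n^2) worst case, so the expected running time is O(n log n). (This is one common pivot strategy, not necessarily the one the source has in mind.)

```c
#include <stdlib.h>

/* Randomized quicksort on an int array over indices [lo, hi]. */
static void swap_ints(int *a, int *b) { int t = *a; *a = *b; *b = t; }

void quicksort(int *a, int lo, int hi) {
    if (lo >= hi)
        return;
    /* Choose a random pivot, move it to the end, then Lomuto partition. */
    int p = lo + rand() % (hi - lo + 1);
    swap_ints(&a[p], &a[hi]);
    int pivot = a[hi], store = lo;
    for (int i = lo; i < hi; i++)
        if (a[i] < pivot)
            swap_ints(&a[i], &a[store++]);  /* grow the "< pivot" region */
    swap_ints(&a[store], &a[hi]);           /* pivot into final position */
    quicksort(a, lo, store - 1);
    quicksort(a, store + 1, hi);
}
```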
Below are some common rules of thumb:
Accessing an element of an array is a constant-time, O(1), operation. An algorithm that can discard half of the remaining data at each step, such as binary search, usually takes O(log n) time. Comparing two strings of n characters with strcmp takes O(n) time. The standard matrix multiplication algorithm is O(n^3), because each of the n^2 elements of the result is computed by multiplying and adding n pairs of values.
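The matrix case can be made concrete. This is the standard triple-loop algorithm (row-major storage is our assumption): n^2 result elements, each needing n multiply-adds, hence O(n^3) overall.

```c
/* Naive n x n matrix multiplication, c = a * b, row-major storage.
   n^2 output elements times n multiply-adds each gives O(n^3). */
void matmul(const double *a, const double *b, double *c, int n) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double s = 0.0;
            for (int k = 0; k < n; k++)
                s += a[i * n + k] * b[k * n + j];   /* n multiply-adds */
            c[i * n + j] = s;
        }
}
```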
Exponential-time algorithms usually come from enumerating all possible results. For example, a set of n elements has 2^n subsets, so an algorithm that examines every subset will be O(2^n). Exponential algorithms are generally too expensive unless n is very small, since adding a single element to the problem doubles the running time. Unfortunately, there are indeed many problems (such as the famous "traveling salesman problem") for which all algorithms found so far are exponential. When that is the case, we usually settle instead for an algorithm that finds an approximately best result.
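The 2^n growth can be demonstrated with the usual bitmask enumeration (a sketch of ours): each value of an n-bit counter names one distinct subset, so visiting all of them takes exactly 2^n steps.

```c
/* Enumerates all subsets of {0, ..., n-1} with an n-bit counter:
   bit i of `mask` says whether element i is in the subset.
   Returns the number of subsets visited, which is 2^n. */
long count_subsets(int n) {
    long visited = 0;
    for (long mask = 0; mask < (1L << n); mask++)
        visited++;              /* each mask is one distinct subset */
    return visited;             /* exponential in n */
}
```

Adding one element doubles the return value, which is exactly the doubling of running time described above.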