Time complexity: if the scale of a problem is n, the time an algorithm needs to solve the problem is T(n), a function of n; T(n) is called the "time complexity" of the algorithm.

Asymptotic time complexity: as the input n grows, the limiting behavior of the time complexity is called the "asymptotic time complexity" of the algorithm. When evaluating the time performance of an algorithm, the main criterion is its asymptotic time complexity, so in algorithm analysis the two are usually not distinguished. It is written T(n) = O(f(n)), where f(n) is generally taken to be the frequency of the most frequently executed statement in the algorithm. In addition, the frequency of the statements depends not only on the problem size but also on the values of the elements in the input instance; however, we always consider the worst-case time complexity, which guarantees that the algorithm never runs longer than that bound.

Common time complexities: constant order O(1), logarithmic order O(log2 n), linear order O(n), linearithmic order O(n log2 n), square order O(n^2), cubic order O(n^3), k-th power order O(n^k), and exponential order O(2^n). The examples below show how this kind of analysis is carried out.

1. Let the functions f, g and h be f(n) = 100n^3 + n^2 + 1000, g(n) = 25n^3 + 5000n^2, and h(n) = n^1.5 + 5000·n·lg n. Determine whether the following relations hold:
(1) f(n) = O(g(n))
(2) g(n) = O(f(n))
(3) h(n) = O(n^1.5)
(4) h(n) = O(n·lg n)

Let us first review the notation for asymptotic time complexity, T(n) = O(f(n)). Here "O" is a mathematical symbol with a strict definition: if T(n) and f(n) are two functions defined on the positive integers, then T(n) = O(f(n)) means there exist positive constants c and n0 such that 0 <= T(n) <= c·f(n) whenever n >= n0. An easy way to see it: if, as the integer variable n tends to infinity, the ratio of the two functions tends to a constant that is not 0, the relation holds. With that, the answers are easy (a small numerical check appears after item 3 below):

◆ (1) Holds. The highest-order term of both functions is n^3, so as n → ∞ their ratio tends to a constant, and the relation is true.
◆ (2) Holds. Same reasoning as above.
◆ (3) Holds. Same reasoning as above.
◆ (4) Does not hold. As n → ∞, n^1.5 grows faster than n·lg n, so the ratio of h(n) to n·lg n is not bounded by a constant, and the relation is false.

2. Let n be a positive integer. Use the big-"O" notation to express the execution time of the following program segments as functions of n (a counting sketch appears after item 3 below).

(1)
    i = 1; k = 0;
    while (i < n) {
        k = k + 10 * i;
        i++;
    }
Answer: T(n) = n - 1, so T(n) = O(n). This function grows linearly.

(2)
    x = n;   // n > 1
    while (x >= (y + 1) * (y + 1))
        y++;
Answer: T(n) = n^(1/2), so T(n) = O(n^(1/2)). In the worst case y starts at 0, and the loop repeats until (y + 1)^2 exceeds n, about n^(1/2) times; the running time grows with the square root of n.

(3)
    x = 91; y = 100;
    while (y > 0)
        if (x > 100) { x = x - 10; y--; }
        else x++;
Answer: T(n) = O(1). This program looks a little scary: the loop body runs about 1,100 times. But do we see n anywhere? No. The running time has nothing to do with n; even if it looped ten thousand times, it would still be a constant-order function.

3. Constant order O(1)
    temp = i; i = j; j = temp;
Each of the three statements above has frequency 1; the segment simply exchanges the contents of i and j, and its execution time is a constant unrelated to the problem size n. The time complexity of such an algorithm is constant order, written T(n) = O(1). If the execution time of an algorithm does not grow as the problem size n grows, then even if the algorithm contains thousands of statements its execution time is just a large constant, and its time complexity is still O(1).
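As an informal check on the reasoning in Example 1, the following sketch (not part of the original article) prints the ratios f(n)/g(n), h(n)/n^1.5 and h(n)/(n·lg n) for growing n; the first two tend toward constants while the last keeps growing, which matches conclusions (1)-(3) holding and (4) failing. Here lg is assumed to mean log base 10; the conclusion does not depend on the base.

    /* A small sketch (not from the original article) that prints the ratios
     * used in Example 1 for increasing n.  f/g and h/n^1.5 approach nonzero
     * constants, while h/(n*lg n) keeps growing, so (1)-(3) hold and (4) does not. */
    #include <stdio.h>
    #include <math.h>

    static double f(double n) { return 100 * n * n * n + n * n + 1000; }
    static double g(double n) { return 25 * n * n * n + 5000 * n * n; }
    static double h(double n) { return pow(n, 1.5) + 5000 * n * log10(n); }  /* lg taken as log10 */

    int main(void) {
        for (double n = 1e2; n <= 1e12; n *= 100) {
            printf("n = %.0f: f/g = %.3f, h/n^1.5 = %.3f, h/(n*lg n) = %.3f\n",
                   n, f(n) / g(n), h(n) / pow(n, 1.5), h(n) / (n * log10(n)));
        }
        return 0;
    }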
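The counts derived in Example 2 can also be checked empirically. Below is a minimal sketch (again not from the original article) that wraps the three segments in a main function and counts how often each loop body runs; variable names follow the original segments, and y is initialized to 0 as in the worst case described above.

    /* A minimal verification sketch: the three program segments from Example 2
     * are instrumented with counters, so the derived counts -- n - 1, about
     * sqrt(n), and a constant -- can be observed directly. */
    #include <stdio.h>

    int main(void) {
        long long n = 1000000;                 /* any problem size n > 1 */

        /* Segment (1): the body runs n - 1 times -> O(n) */
        long long c1 = 0, k = 0, i = 1;
        while (i < n) { k = k + 10 * i; i++; c1++; }

        /* Segment (2): starting from y = 0, loops until (y+1)^2 > n -> O(n^(1/2)) */
        long long c2 = 0, x = n, y = 0;
        while (x >= (y + 1) * (y + 1)) { y++; c2++; }

        /* Segment (3): about 1,100 iterations no matter what n is -> O(1) */
        long long c3 = 0, a = 91, b = 100;
        while (b > 0) {
            if (a > 100) { a = a - 10; b--; } else { a++; }
            c3++;
        }

        printf("n = %lld: segment(1) ran %lld times, segment(2) %lld times, "
               "segment(3) %lld times\n", n, c1, c2, c3);
        return 0;
    }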
4. Square order O(n^2)

(1)
    sum = 0;                      // 1 time
    for (i = 1; i <= n; i++)      // n times
        for (j = 1; j <= n; j++)  // n^2 times
            sum++;                // n^2 times
Solution: T(n) = 2n^2 + n + 1 = O(n^2).

(2)
    for (i = 1; i < n; i++) {
        y = y + 1;                        // ①
        for (j = 0; j <= (2 * n); j++)
            x++;                          // ②
    }
Solution: the frequency of statement ① is n - 1. The frequency of statement ② is (n - 1)(2n + 1) = 2n^2 - n - 1, so f(n) = 2n^2 - n - 1 + (n - 1) = 2n^2 - 2, and the time complexity of the program is T(n) = O(n^2).

5. Linear order O(n)

(1)
    a = 0; b = 1;              // ①
    for (i = 2; i <= n; i++) { // ②
        s = a + b;             // ③
        b = a;                 // ④
        a = s;                 // ⑤
    }
Solution: the frequency of statement ① is 2, of statement ② is n, of statement ③ is n - 1, of statement ④ is n - 1, and of statement ⑤ is n - 1, so T(n) = 2 + n + 3(n - 1) = 4n - 1 = O(n).

6. Logarithmic order O(log2 n)

(1)
    i = 1;            // ①
    while (i <= n)
        i = i * 2;    // ②
Solution: the frequency of statement ① is 1. If the frequency of statement ② is f(n), then 2^f(n) <= n, so f(n) <= log2 n. Taking the maximum value f(n) = log2 n gives T(n) = O(log2 n).

7. Cubic order O(n^3)

    for (i = 0; i < n; i++)
        for (j = 0; j < i; j++)
            for (k = 0; k < j; k++)
                x = x + 2;
Solution: when i = m and j = k, the innermost loop runs k times. For i = m, j takes the values 0, 1, ..., m - 1, so the innermost statement runs 0 + 1 + ... + (m - 1) = m(m - 1)/2 times in total. Letting i run from 0 to n - 1 and summing, the total count is n(n - 1)(n - 2)/6, so the time complexity is O(n^3).

We should also distinguish between the worst-case behavior and the expected behavior of an algorithm. For example, the worst-case running time of quicksort is O(n^2), but its expected running time is O(n log n). By carefully choosing the pivot each time, we can make the probability of quadratic, i.e. O(n^2), behavior almost zero; in practice, a well-implemented quicksort generally runs in O(n log n) time (a randomized-pivot sketch appears after the rules below).

The following are some common rules of thumb:
(1) Accessing an element of an array is a constant-time, O(1), operation.
(2) An algorithm that discards half of the data elements at each step, such as binary search, usually takes O(log n) time (a small example follows this list).
(3) Comparing two strings of n characters with strcmp takes O(n) time.
(4) The general matrix multiplication algorithm is O(n^3), because each element of the result requires about n multiplications and additions, and there are n^2 elements in total.
(5) Exponential-time algorithms generally come from enumerating all possible results. For example, a set of n elements has 2^n subsets, so an algorithm that must examine every subset will be O(2^n). Exponential algorithms are generally too expensive unless n is very small, because adding a single element to the problem doubles the running time. Unfortunately, there are indeed many problems (such as the famous Traveling Salesman Problem) for which only exponential algorithms have been found so far; in such cases, we usually settle for an algorithm that looks for a near-optimal result instead.
(6) Empirical rule: c < log2 n < n < n·log2 n < n^2 < n^3 < 2^n < 3^n < n!, where c is a constant.
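Rule (2) above cites binary search as the typical O(log n) pattern. As an illustration only (not the article's code), here is a standard iterative binary search in C: each step discards half of the remaining range, so at most about log2 n comparisons are made.

    /* Illustrative only: iterative binary search over a sorted int array.
     * Each iteration halves the remaining range, so the loop runs O(log n) times.
     * Returns the index of key, or -1 if key is not present. */
    #include <stdio.h>

    int binary_search(const int *a, int n, int key) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) / 2 */
            if (a[mid] == key)      return mid;
            else if (a[mid] < key)  lo = mid + 1;
            else                    hi = mid - 1;
        }
        return -1;
    }

    int main(void) {
        int a[] = { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 };
        int n = (int)(sizeof a / sizeof a[0]);
        printf("index of 13: %d\n", binary_search(a, n, 13));   /* prints 5 */
        printf("index of 4:  %d\n", binary_search(a, n, 4));    /* prints -1 */
        return 0;
    }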
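The earlier remark about quicksort's worst case versus its expected O(n log n) time can be made concrete. The following sketch (my illustration, not the article's code) chooses the pivot at random, which makes the O(n^2) worst case extremely unlikely on any fixed input.

    /* Illustrative quicksort with a randomized pivot.  Expected running time is
     * O(n log n); the O(n^2) worst case would require consistently bad pivots,
     * which random selection makes very unlikely. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    static void quicksort(int *a, int lo, int hi) {
        if (lo >= hi) return;
        /* pick a random pivot and move it to the end of the range */
        int p = lo + rand() % (hi - lo + 1);
        swap_int(&a[p], &a[hi]);
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)        /* partition around the pivot */
            if (a[j] < pivot) swap_int(&a[i++], &a[j]);
        swap_int(&a[i], &a[hi]);
        quicksort(a, lo, i - 1);
        quicksort(a, i + 1, hi);
    }

    int main(void) {
        int a[] = { 9, 1, 8, 2, 7, 3, 6, 4, 5, 0 };
        int n = (int)(sizeof a / sizeof a[0]);
        srand((unsigned)time(NULL));
        quicksort(a, 0, n - 1);
        for (int i = 0; i < n; i++) printf("%d ", a[i]);   /* prints 0 1 2 ... 9 */
        printf("\n");
        return 0;
    }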
If the complexity of an algorithm is c, log2 n, n, or n·log2 n, its time efficiency is relatively high. If it is 2^n, 3^n, or n!, then even a slightly larger n makes the algorithm impractical, and the orders in between are only moderately efficient.