Computing time complexity

Source: Internet
Author: User

From: http://blog.csdn.net/flyyyri/article/details/5154618

 

1. Algorithm complexity is divided into time complexity and space complexity. Time complexity measures how long an algorithm takes to execute, while space complexity measures how much storage the algorithm requires.
2. Generally, the number of times an algorithm's basic operation repeats is a function f(n) of the problem size n, so the time complexity of the algorithm is written T(n) = O(f(n)).
Analysis: as n grows, the growth rate of the algorithm's execution time is proportional to the growth rate of f(n). Therefore, the more slowly f(n) grows, the lower the time complexity of the algorithm and the higher its efficiency.
3. To calculate the time complexity, first identify the algorithm's basic operation, then determine from the corresponding statements how many times it executes, and then find the order of magnitude of T(n) (the common orders of magnitude are: 1, log2n, n, nlog2n, n^2, n^3, 2^n, n!). That order of magnitude is f(n). If the limit of T(n)/f(n) as n tends to infinity is a constant c, then the time complexity is T(n) = O(f(n)).

 

Example:

for (i = 1; i <= n; ++i) {
    for (j = 1; j <= n; ++j) {
        c[i][j] = 0;                       // this statement executes n^2 times
        for (k = 1; k <= n; ++k)
            c[i][j] += a[i][k] * b[k][j];  // this statement executes n^3 times
    }
}

Then T(n) = n^2 + n^3. Among the orders of magnitude listed above, n^3 has the same order as T(n), so f(n) = n^3. Since T(n)/f(n) tends to the constant 1 as n tends to infinity, the time complexity of the algorithm is T(n) = O(n^3).

[Repost] Computing algorithm complexity

Algorithm complexity appears in the first chapter of the Data Structure course. Because it involves a little mathematics, and because the concept is rather abstract, many students find it hard to learn, so let us analyze it here.

First, a few concepts. One is time complexity; the other is asymptotic time complexity. The former is the time consumption of an algorithm as a function of the problem size n; the latter is the order of magnitude of the time complexity as the problem size tends to infinity. When evaluating the time performance of an algorithm, the main criterion is its asymptotic time complexity, so in algorithm analysis the two are usually not distinguished, and one simply speaks of the time complexity T(n) = O(f(n)), where f(n) is generally the frequency of the most frequently executed statement in the algorithm.

The statement frequencies in an algorithm depend not only on the problem size but also on the values of the elements in the input instance. However, we always consider the time complexity in the worst case, which guarantees that the algorithm never runs longer than that bound.

Common time complexities: constant order O(1) (hash table lookup), logarithmic order O(log2n) (binary search), linear order O(n), linearithmic order O(nlog2n) (average complexity of quicksort), square order O(n^2) (bubble sort), cubic order O(n^3) (Floyd's shortest-path algorithm), k-th power order O(n^k), and exponential order O(2^n) (Tower of Hanoi).

The following examples show how to solve typical problems.
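But first, since the list above cites binary search as the standard O(log2n) example, here is a minimal sketch in C (the function name binary_search and the sample array are illustrative assumptions, not from the original post; the array must already be sorted in ascending order). Each iteration halves the remaining range, so the loop runs at most about log2n times:

#include <stdio.h>

/* Search a sorted int array for key; return its index or -1.
 * Each iteration halves [lo, hi], hence O(log2 n) steps. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* written this way to avoid overflow */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;              /* discard the lower half */
        else
            hi = mid - 1;              /* discard the upper half */
    }
    return -1;
}

int main(void)
{
    int a[] = {1, 3, 5, 7, 9, 11};
    printf("%d\n", binary_search(a, 6, 7));  /* prints 3 */
    return 0;
}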
1. Let f(n) = 100n^3 + n^2 + 1000, g(n) = 25n^3 + 5000n^2, and h(n) = n^1.5 + 5000nlgn.
Determine whether each of the following relations holds:
(1) f(n) = O(g(n))
(2) g(n) = O(f(n))
(3) h(n) = O(n^1.5)
(4) h(n) = O(nlgn)
Let us first review the notation for asymptotic time complexity, T(n) = O(f(n)). Here "O" is a mathematical symbol with a strict definition: if T(n) and f(n) are two functions defined on the positive integers, then T(n) = O(f(n)) means that there exist positive constants c and n0 such that 0 ≤ T(n) ≤ c·f(n) whenever n ≥ n0. Intuitively, in the typical case, the ratio of the two functions tends to a nonzero constant as the integer variable n tends to infinity. With this in mind, the question is easy to settle.

(1) Holds. The highest-order terms of both functions have degree n^3, so as n → ∞ the ratio f(n)/g(n) tends to the constant 100/25 = 4, and the relation is true.
(2) Holds, by the same argument: the ratio g(n)/f(n) tends to the constant 25/100 = 1/4.
(3) Holds. The dominant term of h(n) is n^1.5, so h(n)/n^1.5 tends to the constant 1.
(4) Does not hold. As n → ∞, n^1.5 grows faster than nlgn, so the ratio h(n)/(nlgn) tends to infinity rather than to a constant, and the relation is false.

2. Let n be a positive integer. Using big-O notation, express the execution time of each of the following program segments as a function of n.
(1) i = 1; k = 0;
    while (i < n) {
        k = k + 10 * i;
        i++;
    }
Answer: the loop body executes n - 1 times, so T(n) = n - 1 = O(n). This function grows linearly.
(2) x = n;  // n > 1
    y = 0;  // y must start at 0 for the analysis below
    while (x >= (y + 1) * (y + 1))
        y++;
Answer: starting from y = 0, the loop continues while (y + 1)^2 ≤ n, so it executes about n^(1/2) times; T(n) = n^(1/2) = O(n^(1/2)). This function grows on the order of the square root.
(3) x = 91; y = 100;
    while (y > 0) {
        if (x > 100) {
            x = x - 10;
            y--;
        } else {
            x++;
        }
    }
Answer: T(n) = O(1). This program looks a little scary: it iterates a fixed number of times (about 1,100 in total), but do we see n anywhere? No. The running time of this program has nothing to do with n. Even if it looped for ten thousand years, we would not care: it is just a constant-order function.

Rule: the common complexities are ordered as follows: c < log2n < n < n*log2n < n^2 < n^3 < 2^n < 3^n < n!, where c is a constant. If the complexity of an algorithm is c, log2n, n, or n*log2n, its time efficiency is relatively high; if it is 2^n, 3^n, or n!, even a slightly larger n will render the algorithm unusable; the orders in between are merely passable.

We often need to describe the workload of a particular algorithm relative to n, the number of input elements. Retrieving an item from unordered data takes time proportional to n; with binary search on sorted data, the time spent is proportional to logn; sorting may take time proportional to n^2 or to nlogn.

We would like to compare the running time and space requirements of algorithms in a way that is independent of complicating factors such as programming language, compiler, machine architecture, processor speed, and system load. For this purpose a standard notation called "big O" has been proposed. In this notation the basic parameter is n, the size of the problem instance, and the complexity or running time is expressed as a function of n. Here "O" denotes the order of growth: for example, "binary search is O(logn)" means that it retrieves an element from an array of size n in on the order of logn steps. The notation O(f(n)) says that as n increases, the running time grows at most proportionally to f(n).

This kind of asymptotic estimate is very valuable for the theoretical analysis and broad comparison of algorithms, but details can make a difference in practice. For example, an O(n^2) algorithm with low overhead may run faster than an O(nlogn) algorithm with high overhead when n is small. Of course, once n is large enough, the algorithm with the more slowly rising function is certain to be faster.

Example 2.1 (exchange the contents of i and j):

temp = i; i = j; j = temp;

The frequency of each of the three statements above is 1. The execution time of this segment is a constant independent of the problem size n, so its time complexity is constant order, written T(n) = O(1). If an algorithm's execution time does not grow with the problem size n, then even if it contains thousands of statements its execution time is just a large constant, and its time complexity is O(1).

Another example, counting each statement's frequency:

1) sum = 0;                          (1 time)
2) for (i = 1; i <= n; i++)          (n times)
3)     for (j = 1; j <= n; j++)      (n^2 times)
4)         sum++;                    (n^2 times)
Solution: T(n) = 2n^2 + n + 1 = O(n^2).

Example 2.2:

for (i = 1; i < n; i++)
{
    y = y + 1;                       ①
    for (j = 0; j <= 2 * n; j++)
        x++;                         ②
}

Solution: the frequency of statement ① is n - 1, and the frequency of statement ② is (n - 1)(2n + 1) = 2n^2 - n - 1.
f(n) = (2n^2 - n - 1) + (n - 1) = 2n^2 - 2, so the time complexity of this program is T(n) = O(n^2).

Example 2.3:

a = 0; b = 1;                        ①
for (i = 1; i <= n; i++)             ②
{
    s = a + b;                       ③
    b = a;                           ④
    a = s;                           ⑤
}
Solution: statement ① has frequency 2, statement ② has frequency n, and statements ③, ④, and ⑤ each have frequency n - 1, so T(n) = 2 + n + 3(n - 1) = 4n - 1 = O(n).

Example 2.4:

i = 1;                               ①
while (i <= n)
    i = i * 2;                       ②

Solution: the frequency of statement ① is 1. Let the frequency of statement ② be f(n); then 2^f(n) ≤ n, so f(n) ≤ log2n.
Taking the maximum value f(n) = log2n, the time complexity of the program is T(n) = O(log2n).

Example 2.5:

for (i = 0; i < n; i++)
{
    for (j = 0; j < i; j++)
    {
        for (k = 0; k < j; k++)
            x = x + 2;
    }
}

Solution: for fixed i = m and j = k, the innermost statement executes k times; for i = m, j ranges over 0, 1, ..., m - 1, so the innermost statement executes 0 + 1 + ... + (m - 1) = m(m - 1)/2 times in total. Summing over i from 0 to n - 1 gives n(n - 1)(n - 2)/6 executions in all, so the time complexity is O(n^3).

We should also distinguish the worst-case behavior of an algorithm from its expected behavior. For example, the worst-case running time of quicksort is O(n^2), but its expected time is O(nlogn). By carefully choosing the pivot value each time, we can push the probability of the quadratic (that is, O(n^2)) case down to almost zero; in practice, a well-implemented quicksort generally runs in O(nlogn) time (see the randomized-pivot sketch after the notes below).

Here are some common observations. Accessing an element of an array is a constant-time, O(1), operation. An algorithm that can discard half of the data elements at each step, such as binary search, usually takes O(logn) time. Comparing two strings of n characters with strcmp takes O(n) time. The ordinary matrix multiplication algorithm is O(n^3), because each element is computed by multiplying and adding n pairs of elements, and there are n^2 elements. Exponential-time algorithms usually arise from enumerating all possible results: a set of n elements has 2^n subsets, so an algorithm that must examine every subset will be O(2^n). Exponential algorithms are generally too expensive unless n is very small, since adding one element to the problem doubles the running time. Unfortunately, there are indeed many problems (such as the famous "traveling salesman problem") for which all algorithms found so far are exponential. When that is the case, we usually settle for an algorithm that looks for a good approximation to the best result.
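To illustrate the expected-case point about quicksort above, here is a minimal randomized-pivot sketch in C (the helper names and the use of rand() for pivot selection are illustrative assumptions, not the original author's code). Choosing the pivot at random makes the O(n^2) worst case extremely unlikely, giving an expected running time of O(nlogn):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition of a[lo..hi] around a randomly chosen pivot.
 * The random choice is what makes the O(n^2) case very unlikely. */
static int partition(int a[], int lo, int hi)
{
    swap_int(&a[hi], &a[lo + rand() % (hi - lo + 1)]);  /* move random pivot to a[hi] */
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot)
            swap_int(&a[i++], &a[j]);
    swap_int(&a[i], &a[hi]);
    return i;
}

static void quicksort(int a[], int lo, int hi)
{
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quicksort(a, lo, p - 1);   /* expected recursion depth O(logn) */
        quicksort(a, p + 1, hi);
    }
}

int main(void)
{
    int a[] = {9, 2, 7, 4, 1, 8};
    srand((unsigned)time(NULL));
    quicksort(a, 0, 5);
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);       /* prints: 1 2 4 7 8 9 */
    return 0;
}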
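Likewise, the exponential order mentioned in the notes can be made concrete with a sketch that enumerates all 2^n subsets of an n-element set (the bitmask encoding below is an illustrative choice, not from the original text). The outer loop necessarily runs 2^n times, which is exactly why any algorithm that must examine every subset is O(2^n):

#include <stdio.h>

/* Print all 2^n subsets of {a[0], ..., a[n-1]}.
 * Bit i of mask decides whether a[i] belongs to the current subset,
 * so counting mask from 0 to 2^n - 1 visits every subset once. */
void print_subsets(const int a[], int n)
{
    for (unsigned long mask = 0; mask < (1UL << n); mask++) {
        printf("{ ");
        for (int i = 0; i < n; i++)
            if (mask & (1UL << i))
                printf("%d ", a[i]);
        printf("}\n");
    }
}

int main(void)
{
    int a[] = {1, 2, 3};
    print_subsets(a, 3);   /* prints all 8 subsets of {1, 2, 3} */
    return 0;
}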
