The number of times a statement in an algorithm executes is called the frequency of that statement, or the time frequency, denoted T(n). Here n is the size of the problem, and as n changes, the time frequency T(n) changes with it. Often what we want to know is the pattern of that change; to describe it, we introduce the concept of time complexity.
In general, the number of times the basic operation of an algorithm is repeated is a function of the problem size n, denoted T(n). If there is an auxiliary function f(n) such that the limit of T(n)/f(n) as n approaches infinity is a nonzero constant, then f(n) is a function of the same order of magnitude as T(n). We write T(n) = O(f(n)), and call O(f(n)) the asymptotic time complexity of the algorithm.
Different time frequencies may share the same asymptotic time complexity O(f(n)). For example, T(n) = n^2+3n+4 and T(n) = 4n^2+2n+1 have different frequencies but the same time complexity, O(n^2).
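The claim can be checked directly from the limit characterization of "same order of magnitude":

```latex
\lim_{n\to\infty}\frac{n^2+3n+4}{n^2}=1,
\qquad
\lim_{n\to\infty}\frac{4n^2+2n+1}{n^2}=4.
```

Both limits are nonzero constants, so both functions are O(n^2).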
To summarize the two concepts, drawing on textbooks and material on the Web:
T(n): statement frequency, also called time frequency.
O(f(n)): asymptotic time complexity.
The former, T(n), is the running cost of an algorithm as a function of the size n of the problem it solves, while the latter, O(f(n)), gives the order of growth of that cost as the problem size tends to infinity. When we evaluate the time performance of an algorithm, the main criterion is its asymptotic time complexity O(f(n)), so in analysis the two are often not distinguished: the asymptotic time complexity T(n) = O(f(n)) is abbreviated to simply the time complexity, where f(n) is usually the frequency of the most frequently executed statement in the algorithm.
Note: the frequency of the statements in an algorithm depends not only on the problem size but also on the values of the elements in the input instance. We always consider the time complexity in the worst case, which guarantees that the algorithm never runs longer than that bound.
The common time complexities, in ascending order of magnitude: constant order O(1), logarithmic order O(log2 n), linear order O(n), linearithmic order O(n log2 n), quadratic order O(n^2), cubic order O(n^3), k-th power order O(n^k), and exponential order O(2^n).
Here is an exercise that may help in understanding the concept:
1. Let three functions f, g, h be given by f(n) = 100n^3 + n^2 + 1000, g(n) = 25n^3 + 5000n^2, h(n) = n^1.5 + 5000n·lg n.
Determine whether the following relationships hold:
(1) f(n) = O(g(n))
(2) g(n) = O(f(n))
(3) h(n) = O(n^1.5)
(4) h(n) = O(n·lg n)
Recall the notation for asymptotic time complexity, T(n) = O(f(n)). The "O" here is a mathematical symbol with a strict definition: if T(n) and f(n) are two functions defined on the set of positive integers, then T(n) = O(f(n)) means there exist positive constants C and n0 such that 0 <= T(n) <= C·f(n) whenever n >= n0 (this is the definition given in the textbook). Informally, the ratio of the two functions stays bounded by a constant as the integer argument n tends to infinity.
(1) Holds. The highest-order term of both functions is n^3, so as n→∞ the ratio of the two functions tends to a nonzero constant, and the relationship holds.
(2) Holds, by the same reasoning.
(3) Holds, by the same reasoning.
(4) Does not hold. Since n^1.5 grows faster than n·lg n as n→∞, the ratio of h(n) to n·lg n is unbounded rather than a constant, so the relationship does not hold.
With the concept in hand, we turn to finding the time complexity of an algorithm.
From the definition, to obtain the time complexity O(f(n)) we need the frequency f(n) of the most frequently executed statement, and to identify that statement we generally need the overall statement frequency T(n) of the algorithm. The usual path is T(n) → f(n) → O(f(n)). Sometimes the most frequently executed statement can be found directly, in which case we compute f(n) and write down O(f(n)) immediately. The exceptional case is when the statement frequency T(n) is difficult to compute directly.
Here are some examples with detailed explanations.

O(1)
Example 1: temp=i; i=j; j=temp;
Each of the three statements above has frequency 1, and the execution time of this fragment is a constant independent of the problem size n. The time complexity of such an algorithm is constant order, written T(n) = O(1). If an algorithm's execution time does not grow with the problem size n, then even if it contains thousands of statements its execution time is just a (possibly large) constant, and its time complexity is O(1).
Example 2:
x=91; y=100;
while (y>0)
    if (x>100)
        { x=x-10; y--; }
    else
        x++;
Answer: T(n) = O(1). This fragment looks a little scary: it loops about 1,100 times in total. But does n appear anywhere? It does not. The running time of this fragment is independent of n; even if it looped for ten thousand years we would not care, because it is still a constant-order function.
O(n)
Example 1:
i=1; k=0;
while (i<n)
{
    k=k+10*i; i++;
}
Solution: T(n) = n-1, so T(n) = O(n); the running time of this function grows linearly.
Example 2:
a=0; b=1;              ①
for (i=1;i<=n;i++)     ②
{
    s=a+b;             ③
    b=a;               ④
    a=s;               ⑤
}
Solution: the two initializing assignments in statement ① together contribute frequency 2;
the loop test in statement ② is evaluated n+1 times;
statements ③, ④, ⑤ each execute n times;
so T(n) = 2 + (n+1) + 3n = 4n+3 = O(n).
O(n^2)
Example 1:
sum=0;                   (1 time)
for (i=1;i<=n;i++)       (n times)
    for (j=1;j<=n;j++)   (n^2 times)
        sum++;           (n^2 times; this is the most frequent statement, so f(n)=n^2 and T(n)=O(n^2))
Solution: T(n) = 2n^2+n+1 = O(n^2).
Example 2:
for (i=1;i<n;i++)
{
    y=y+1;                   ①
    for (j=0;j<=(2*n);j++)
        x++;                 ②
}
Solution: the frequency of statement ① is n-1;
the frequency of statement ② is (n-1)*(2n+1) = 2n^2-n-1;
f(n) = 2n^2-n-1 + (n-1) = 2n^2-2;
so the time complexity of this fragment is T(n) = O(n^2).
O(n^3)
Example 1:
for (i=0;i<n;i++)
{
    for (j=0;j<i;j++)
    {
        for (k=0;k<j;k++)
            x=x+2;   (the most frequent statement; its frequency is on the order of n^3, so T(n)=O(n^3))
    }
}
Solution: when i=m and j=k, the innermost loop runs k times. For i=m, j ranges over 0, 1, ..., m-1, so for that value of i the innermost statement runs 0+1+...+(m-1) = m(m-1)/2 times. Summing over i from 0 to n-1, the innermost statement runs a total of n(n-1)(n-2)/6 times, so the time complexity is O(n^3).
O(log2 n) (This is the exceptional case: T(n) is hard to compute directly, but a small trick lets us find O(f(n)).)
Example 1:
i=1;              ①
while (i<=n)
    i=i*2;        ②
Solution: the frequency of statement ① is 1.
Let the frequency of statement ② be f(n). Then 2^f(n) <= n, so f(n) <= log2 n.
Taking the maximum, f(n) = log2 n,
so T(n) = O(log2 n).
The common complexities are related as follows:
O(c) < O(log2 n) < O(n) < O(n log2 n) < O(n^2) < O(n^3) < O(2^n) < O(3^n) < O(n!)
where c is a constant. If an algorithm's complexity is O(c), O(log2 n), O(n), or O(n log2 n), its time efficiency is high; if it is O(2^n), O(3^n), or O(n!), then even a slightly larger n makes the algorithm unusable; the orders in between are passable.
1. An algorithm is (B).
A. a program  B. a description of problem-solving steps  C. something satisfying the five basic properties  D. both A and C
Analysis: a program need not satisfy finiteness (consider an infinite loop, such as an operating system's main loop), while an algorithm must be finite. An algorithm is a description of the steps for solving a problem, and a program is the concrete implementation of an algorithm on a computer.
2. Saying that the time complexity of an algorithm is O(n^2) indicates that the algorithm's (C).
A. problem size is n^2  B. execution time equals n^2
C. execution time is proportional to n^2  D. problem size is proportional to n^2
Analysis: time complexity O(n^2) means the algorithm's execution time satisfies T(n) <= c·n^2 (for some proportionality constant c), i.e. T(n) = O(n^2). The time complexity T(n) is a function of the problem size n; the problem size itself is still n, not n^2.
3. The time complexity of the following algorithm is (D).
void Fun(int n) {
    int i=1;
    while (i<=n)
        i=i*2;
}
A. O(n)  B. O(n^2)  C. O(n log2 n)  D. O(log2 n)
Analysis: the basic operation is i=i*2. Let it execute t(n) times; then 2^t(n) <= n, so t(n) <= log2 n, giving T(n) = O(log2 n).
4. (2011 unified computer-science entrance exam)
Let n be a non-negative integer describing the problem size. The time complexity of the program fragment below is (A).
x=2;
while (x<n/2)
    x=2*x;
A. O(log2 n)  B. O(n)  C. O(n log2 n)  D. O(n^2)
Analysis: the most frequently executed statement in the fragment is x=2*x. Suppose it executes t times in total; x starts at 2 = 2^1, so after t executions x = 2^(t+1), and the loop stops when x >= n/2. Thus 2^(t+1) ≈ n/2, so t ≈ log2(n/2) - 1 = log2 n - 2, giving T(n) = O(log2 n).
5. (2012 unified computer-science entrance exam)
An algorithm for computing the factorial of an integer n (n >= 0) is given below; its time complexity is (B).
int fact(int n) {
    if (n<=1) return 1;
    return n*fact(n-1);
}
A. O(log2 n)  B. O(n)  C. O(n log2 n)  D. O(n^2)
Analysis: this is recursive code for the factorial n!, i.e. n*(n-1)*...*1, which performs n-1 multiplications in total, so T(n) = O(n).
6. The time complexity of the following algorithm is (O(n^(1/3))).
void Fun(int n) {
    int i=0;
    while (i*i*i<=n)
        i++;
}
Analysis: the basic operation is i++; it executes until i^3 > n, i.e. about n^(1/3) times, so T(n) = O(n^(1/3)).
7. In the program fragment
for (i=n-1;i>1;i--)
    for (j=1;j<i;j++)
        if (a[j]>a[j+1])
            swap a[j] and a[j+1];
where n is a positive integer, the frequency of the last line in the worst case is (D).
A. O(n)  B. O(n log n)  C. O(n^3)  D. O(n^2)
Analysis: when every pair of adjacent elements is out of order, the last line executes on every pass of the inner loop. Its frequency is then the total number of inner-loop iterations: the sum of (i-1) for i from 2 to n-1, i.e. 1+2+...+(n-2) = (n-1)(n-2)/2 = O(n^2).
8. The number of times the underlined statement (m++) in the following algorithm executes is (A).
int m=0, i, j;
for (i=1;i<=n;i++)
    for (j=1;j<=2*i;j++)
        m++;
A. n(n+1)  B. n  C. n+1  D. n^2
Analysis: for each i, the inner loop executes 2i times, so the total is the sum of 2i for i = 1..n, which is 2·n(n+1)/2 = n(n+1).
9. The erroneous statements among the following are (B).
Ⅰ. An algorithm working in place means it requires no additional auxiliary space.
Ⅱ. For the same size n, an algorithm of complexity O(n) always takes less execution time than one of complexity O(2^n).
Ⅲ. Time complexity refers to the worst case, an upper bound for estimating the execution time of an algorithm.
Ⅳ. For the same algorithm, the higher-level the implementation language, the lower the execution efficiency.
A. Ⅰ  B. Ⅰ, Ⅱ  C. Ⅰ, Ⅳ  D. Ⅲ
Analysis: Ⅰ is wrong: working in place means the auxiliary space the algorithm requires is a constant, not zero. Ⅱ is wrong: complexity describes asymptotic growth, not the concrete execution time of a specific program for a specific value of n; because of hidden constant factors, an O(n) algorithm is not necessarily faster than an O(2^n) algorithm for a given n. Ⅲ is correct: time complexity is taken in the worst case, guaranteeing the algorithm runs no longer than that bound. Ⅳ is correct; it restates the textbook's own wording.
II. Comprehensive application questions
2. Analyze the following program fragments and determine the time complexity of each.
Program fragment ①:
i=1; k=0;
while (i<n-1)
{
    k=k+10*i;
    i++;
}
Analysis: the basic statement is k=k+10*i, which executes n-2 times in total, so T(n) = O(n).
Program fragment ②:
y=0;
while ((y+1)*(y+1)<=n)
    y=y+1;
Analysis: let the loop body execute t(n) times. Each iteration increases y by 1, so on exit t(n) = y. The loop condition gives (t(n))^2 <= n, hence t(n) <= n^(1/2) and T(n) = O(n^(1/2)).
Program fragment ③:
for (i=1;i<=n;i++)
    for (j=1;j<=i;j++)
        for (k=1;k<=j;k++)
            x++;
Analysis: the basic statement is x++. Its frequency is the sum of j over j = 1..i, summed over i = 1..n, i.e. the sum of i(i+1)/2 for i = 1..n, which equals n(n+1)(n+2)/6. Therefore T(n) = O(n^3).
Program fragment ④:
for (i=0;i<n;i++)
    for (j=0;j<m;j++)
        a[i][j]=0;
Analysis: a[i][j]=0 is the basic statement; the inner loop executes m times per outer iteration and the outer loop executes n times, for m*n executions in total, so T(m,n) = O(m*n).