(1) Time frequency
The exact time an algorithm takes to execute cannot be determined theoretically; it has to be measured by running it on a machine. But we cannot, and do not need to, time every algorithm — we only need to know which algorithm takes more time and which takes less. The time an algorithm spends is proportional to the number of times its statements are executed: the algorithm that executes more statements takes more time. The number of times the statements in an algorithm are executed is called the statement frequency or time frequency, denoted T(n).
(2) Time complexity
In the time frequency just defined, n is called the size of the problem, and as n changes, the time frequency T(n) changes with it. But sometimes we want to know what pattern T(n) follows as n changes. For this purpose, we introduce the concept of time complexity.
In general, the number of times the basic operations of an algorithm are repeated is a function of the problem size n, denoted T(n). If there is an auxiliary function f(n) such that, as n approaches infinity, the limit of T(n)/f(n) is a nonzero constant, then f(n) is a function of the same order of magnitude as T(n). This is written T(n) = O(f(n)), and O(f(n)) is called the asymptotic time complexity of the algorithm, or simply its time complexity.
Second, common time complexities of algorithms:
O(1): the algorithm runs in constant time
O(n): the algorithm is linear
O(log2n): e.g. the binary search algorithm
O(n^2): the simple array-sorting algorithms, such as straight insertion sort
O(n^3): multiplying two n-by-n matrices
O(2^n): an algorithm that generates all subsets of an n-element set
O(n!): an algorithm that generates all permutations of n elements
Better <---------------------------< Worse
O(1) < O(log2n) < O(n) < O(n^2) < O(2^n)
In ascending order of magnitude, the time complexities are: constant order O(1), logarithmic order O(log2n), linear order O(n), linearithmic order O(nlog2n), square order O(n^2), cubic order O(n^3), ..., k-th power order O(n^k), and exponential order O(2^n).
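As a concrete sketch of the logarithmic order in the list above, here is a minimal binary search in C (the function name and test data are our own illustration, not taken from the original): each step halves the range under consideration, so at most about log2n + 1 comparisons are made on a sorted array of n elements.

```c
#include <assert.h>

/* Binary search over a sorted int array.
 * Returns the index of key, or -1 if key is absent. */
int binary_search(const int *a, int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) / 2 */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* key is in the upper half */
        else
            hi = mid - 1;               /* key is in the lower half */
    }
    return -1;
}
```

Doubling n adds only one more iteration, which is exactly the O(log2n) behavior described above.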
Third, how to calculate time complexity (worked examples)
Definition: if the size of a problem is n, the time an algorithm needs to solve it is T(n), a function of n, called the "time complexity" of the algorithm.
As the input n grows, the limiting behavior of T(n) is called the asymptotic time complexity of the algorithm.
We usually use big-O notation to express the time complexity of an algorithm. Note that big-O only asserts an upper bound: by definition, if f(n) = O(n), then f(n) = O(n^2) clearly holds as well — it gives you an upper bound, but not necessarily the tightest one. By convention, though, we state the tightest bound we can.
In addition, a problem itself has an inherent complexity; if the complexity of an algorithm matches the lower bound of the problem's complexity, the algorithm is said to be optimal for that problem.
"Big O notation": The basic parameter used in this description is N, the size of the problem instance, the function of expressing complexity or running time as N. Here the "O" denotes the magnitude (order), for example "binary retrieval is O (Logn)", that is, it needs "to retrieve an array of size n by logn steps" notation O (f (n)) indicates that when n increases, the run time will grow at a rate proportional to f (n).
This kind of asymptotic estimate is very valuable for the theoretical analysis and rough comparison of algorithms, but in practice the constant factors can make a difference. For example, an O(n^2) algorithm with low overhead may run faster than an O(nlogn) algorithm with high overhead when n is small. Of course, once n is large enough, the algorithm with the slower-growing function will always win.
O(1)
temp = i;  i = j;  j = temp;   /* swap the contents of i and j */
Each of the three statements above has frequency 1, and the running time of this fragment is a constant independent of the problem size n. The time complexity of such an algorithm is of constant order, written T(n) = O(1). If an algorithm's running time does not grow with the problem size n — even if the algorithm contains thousands of statements — its running time is just a (possibly large) constant, and its time complexity is O(1).
O(n^2)
2.1.
sum = 0;                       (once)
for (i = 1; i <= n; i++)       (n times)
    for (j = 1; j <= n; j++)   (n^2 times)
        sum++;                 (n^2 times)
Solution: T(n) = 2n^2 + n + 1 = O(n^2)
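The tally above can be checked mechanically. This instrumented version of example 2.1 (the counter and function name are our own) bumps a counter once per counted statement execution and returns exactly 2n^2 + n + 1:

```c
#include <assert.h>

/* Instrumented example 2.1: 1 (sum = 0) + n (outer loop body)
 * + n^2 (inner loop body) + n^2 (sum++) = 2n^2 + n + 1. */
long count_2_1(long n)
{
    long sum, i, j, count = 0;
    sum = 0;                          count++;   /* once      */
    for (i = 1; i <= n; i++) {        count++;   /* n times   */
        for (j = 1; j <= n; j++) {    count++;   /* n^2 times */
            sum++;                    count++;   /* n^2 times */
        }
    }
    return count;
}
```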
2.2.
for (i = 1; i < n; i++)
{
    y = y + 1;                      ①
    for (j = 0; j <= (2 * n); j++)
        x++;                        ②
}
Solution: the frequency of statement ① is n-1.
The frequency of statement ② is (n-1)*(2n+1) = 2n^2-n-1.
So f(n) = 2n^2-n-1 + (n-1) = 2n^2-2.
The time complexity of the program is T(n) = O(n^2).
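The same kind of instrumented check works for example 2.2 (names ours): statement ① runs n-1 times and statement ② runs (n-1)(2n+1) times, for a total of 2n^2 - 2:

```c
#include <assert.h>

/* Instrumented example 2.2: total counted frequency is
 * (n-1) + (n-1)(2n+1) = 2n^2 - 2. */
long count_2_2(long n)
{
    long x = 0, y = 0, i, j, count = 0;
    for (i = 1; i < n; i++) {
        y = y + 1;                    count++;   /* (1): n-1 times          */
        for (j = 0; j <= 2 * n; j++) {
            x++;                      count++;   /* (2): (n-1)(2n+1) times  */
        }
    }
    return count;
}
```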
O(n)
2.3.
a = 0;
b = 1;                      ①
for (i = 1; i <= n; i++)    ②
{
    s = a + b;              ③
    b = a;                  ④
    a = s;                  ⑤
}
Solution: frequency of statement ①: 2,
frequency of statement ②: n,
frequency of statement ③: n,
frequency of statement ④: n,
frequency of statement ⑤: n.
Since i runs from 1 to n, the loop body executes n times, so T(n) = 2 + n + 3n = 4n + 2 = O(n).
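An instrumented check of example 2.3 (names ours) confirms the count: with i running from 1 to n, the three body statements each execute n times, giving 2 + n + 3n = 4n + 2, which is still O(n):

```c
#include <assert.h>

/* Instrumented example 2.3: total counted frequency is 4n + 2. */
long count_2_3(long n)
{
    long a, b, s, i, count = 0;
    a = 0;  b = 1;                count += 2;   /* statements (1): twice  */
    for (i = 1; i <= n; i++) {    count++;      /* loop header: n times   */
        s = a + b;                count++;      /* (3): n times           */
        b = a;                    count++;      /* (4): n times           */
        a = s;                    count++;      /* (5): n times           */
    }
    return count;
}
```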
O(log2n)
2.4.
i = 1;           ①
while (i <= n)
    i = i * 2;   ②
Solution: the frequency of statement ① is 1.
Let the frequency of statement ② be f(n); then 2^f(n) <= n, so f(n) <= log2n.
Taking the maximum, f(n) = log2n, so
T(n) = O(log2n).
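A quick instrumented check of example 2.4 (name ours): because i doubles on every pass, the loop body runs floor(log2(n)) + 1 times for n >= 1, i.e. O(log2n) times:

```c
#include <assert.h>

/* Instrumented example 2.4: counts how often the body of the
 * doubling loop executes. */
long count_2_4(long n)
{
    long i = 1, count = 0;
    while (i <= n) {
        i = i * 2;      /* i takes the values 1, 2, 4, 8, ... */
        count++;
    }
    return count;
}
```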
O(n^3)
2.5.
for (i = 0; i < n; i++)
{
    for (j = 0; j < i; j++)
    {
        for (k = 0; k < j; k++)
            x = x + 2;
    }
}
Solution: when i = m and j = k, the innermost loop runs k times. For a given i = m, j takes the values 0, 1, ..., m-1, so the innermost statement executes 0 + 1 + ... + (m-1) = m(m-1)/2 times for that m. Summing over i from 0 to n-1, the innermost statement executes n(n-1)(n-2)/6 times in total, so the time complexity is O(n^3).
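Counting the innermost statement of example 2.5 directly (names ours) confirms the cubic growth; the exact count is n(n-1)(n-2)/6:

```c
#include <assert.h>

/* Instrumented example 2.5: counts executions of the innermost
 * statement of the triple loop. */
long count_2_5(long n)
{
    long count = 0, i, j, k;
    for (i = 0; i < n; i++)
        for (j = 0; j < i; j++)
            for (k = 0; k < j; k++)
                count++;      /* stands in for x = x + 2 */
    return count;
}
```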
We should also distinguish between the worst-case behavior and the expected behavior of an algorithm. For example, the worst-case running time of quicksort is O(n^2), but its expected time is O(nlogn). By choosing the pivot carefully each time, the probability of hitting the quadratic (O(n^2)) case can be reduced to almost zero. In practice, a well-implemented quicksort can generally be expected to run in O(nlogn) time.
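To illustrate, here is one way such a quicksort might be sketched in C (this implementation, using a middle-element pivot with a Hoare-style partition, is our own illustration, not taken from the original). The middle pivot avoids the classic O(n^2) worst case on already-sorted input; a random pivot would make the quadratic case even less likely.

```c
#include <assert.h>

/* Quicksort of a[lo..hi] (inclusive) with a middle-element pivot. */
void quick_sort(int *a, int lo, int hi)
{
    if (lo >= hi)
        return;
    int pivot = a[lo + (hi - lo) / 2];
    int i = lo, j = hi;
    while (i <= j) {                  /* Hoare-style partition */
        while (a[i] < pivot) i++;
        while (a[j] > pivot) j--;
        if (i <= j) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++; j--;
        }
    }
    quick_sort(a, lo, j);             /* left part  */
    quick_sort(a, i, hi);             /* right part */
}
```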
Here are some common cases:
Accessing a single element of an array is a constant-time, O(1), operation. An algorithm that discards half of the remaining data at each step, such as binary search, usually takes O(logn) time. Comparing two n-character strings with strcmp takes O(n) time. The conventional matrix multiplication algorithm is O(n^3), because each of the n^2 elements of the result is computed with n multiplications and additions.
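A minimal sketch of that conventional matrix multiplication in C (row-major flat arrays, names ours): the three nested loops over i, j, and k make the n * n^2 = n^3 multiply-adds explicit.

```c
#include <assert.h>

/* c = a * b, where a, b, c are n-by-n matrices stored row-major
 * in flat arrays: element (i, j) lives at index i*n + j. */
void mat_mul(int n, const int *a, const int *b, int *c)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {   /* n^2 result elements ...   */
            int s = 0;
            for (int k = 0; k < n; k++) /* ... each needing n terms  */
                s += a[i * n + k] * b[k * n + j];
            c[i * n + j] = s;
        }
}
```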
Exponential-time algorithms usually arise from the need to enumerate all possible results. For example, a set of n elements has 2^n subsets, so an algorithm that must examine every subset will be O(2^n). Exponential algorithms are generally too expensive unless n is very small, because adding a single element to the problem doubles the running time. Unfortunately, there are many problems (such as the famous traveling salesman problem) for which all algorithms found so far are exponential. If we do run into such a problem, we should usually substitute an algorithm that finds an approximately optimal result.
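A common way to visit all 2^n subsets is to let each bit of an n-bit counter say whether an element is included. This sketch (names ours) only counts the subsets, but even that already takes time proportional to 2^n:

```c
#include <assert.h>

/* Enumerate all subsets of an n-element set via bitmasks: each of
 * the 2^n values of `mask` encodes one subset (bit i set means
 * "element i is included"). Requires 0 <= n < 62. */
long count_subsets(int n)
{
    long count = 0;
    for (long mask = 0; mask < (1L << n); mask++)
        count++;                      /* one iteration per subset */
    return count;
}
```

Adding one element doubles the number of masks, which is exactly the doubling of running time described above.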
Among the various algorithms, if the number of statement executions is a constant, the time complexity is O(1). Also, two algorithms with different time frequencies may have the same time complexity: for example, T(n) = n^2 + 3n + 4 and T(n) = 4n^2 + 2n + 1 have different frequencies, but both have time complexity O(n^2).
In ascending order of magnitude, the common time complexities are:
constant order O(1), logarithmic order O(log2n), linear order O(n),
linearithmic order O(nlog2n), square order O(n^2), cubic order O(n^3), ...,
k-th power order O(n^k), and exponential order O(2^n). As the problem size n grows, the time complexities above grow ever faster and the corresponding algorithms become ever less efficient.
2. Complexity of space
Like time complexity, space complexity is a measure of the amount of storage an algorithm requires while it executes on a computer. It is written:
S(n) = O(f(n))
What we are measuring here is the auxiliary storage required in addition to the normal memory overhead of the program itself.
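To make the distinction concrete, here is a small contrast (our own example, not from the original): two functions computing the same sum 1 + 2 + ... + n, one with O(n) auxiliary space on the call stack and one with O(1):

```c
#include <assert.h>

/* Recursive version: n pending calls are alive at the deepest point,
 * so the auxiliary (stack) space is S(n) = O(n). */
long sum_recursive(long n)
{
    if (n == 0)
        return 0;                 /* each pending call holds a stack frame */
    return n + sum_recursive(n - 1);
}

/* Iterative version: a fixed number of variables regardless of n,
 * so the auxiliary space is S(n) = O(1). */
long sum_iterative(long n)
{
    long s = 0;
    for (long i = 1; i <= n; i++)
        s += i;                   /* constant extra storage */
    return s;
}
```

Both have time complexity O(n); only their space complexity differs.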