How do you calculate the complexity of an algorithm, namely its time complexity and its space complexity?

Source: Internet
Author: User

I. Time Complexity
1. Concept
The time complexity is determined by the term in the expression for the total number of operations that is most affected by the growth of n, with coefficients dropped. For example, suppose the total number of operations is

    T(n) = a*2^n + b*n^3 + c*n^2 + d*n*lg(n) + e*n + f

When a != 0, the time complexity is O(2^n); when a = 0 and b != 0, it is O(n^3); when b = 0 and c != 0, it is O(n^2); and so on.

Examples:

(1) The body executes n*n times, so this is of course O(n^2):

    for(i=1; i<=n; i++)
        for(j=1; j<=n; j++)
            s++;

(2) The body executes n+(n-1)+(n-2)+...+1 ≈ n^2/2 times; since time complexity ignores coefficients, this is also O(n^2):

    for(i=1; i<=n; i++)
        for(j=i; j<=n; j++)
            s++;

(3) The body executes 1+2+3+...+n ≈ n^2/2 times, so of course this is also O(n^2):

    for(i=1; i<=n; i++)
        for(j=1; j<=i; j++)
            s++;

(4) The loop executes n-1 ≈ n times, so this is O(n):

    i=1; k=0;
    while(i<=n-1) {
        k += 10*i;
        i++;
    }

(5) The innermost statement executes 1^2+2^2+3^2+...+n^2 = n(n+1)(2n+1)/6 times (a formula worth remembering), which is ≈ n^3/3; disregarding the coefficient, this is naturally O(n^3):

    for(i=1; i<=n; i++)
        for(j=1; j<=i; j++)
            for(k=1; k<=j; k++)
                x=x+1;

In addition, when dealing with time complexity, log2(n) is equivalent to lg(n) (log base 10), because by the change-of-base formula log_a(b) = log_c(b)/log_c(a) we have log2(n) = log2(10)*lg(n); ignoring the constant coefficient, the two are of course equivalent.
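To make the counting concrete, here is a small self-contained C program (a sketch added for illustration, not part of the original article) that counts how often the basic operation s++ in example (3) executes and compares the count with the closed form n(n+1)/2:

    #include <stdio.h>

    int main(void) {
        for (long n = 10; n <= 10000; n *= 10) {
            long s = 0;
            for (long i = 1; i <= n; i++)
                for (long j = 1; j <= i; j++)
                    s++;                      /* the basic operation being counted */
            printf("n=%6ld  count=%12ld  n(n+1)/2=%12ld\n",
                   n, s, n * (n + 1) / 2);
        }
        return 0;
    }

For each n the two printed values agree, and the count grows roughly quadratically, which is exactly the O(n^2) behavior derived above.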
2. Calculation method
(1) The time an algorithm takes to execute cannot be computed theoretically; it can only be known by running the algorithm on a machine and measuring it. But we cannot, and need not, test every algorithm; we only need to know which algorithm takes more time and which takes less. The time an algorithm costs is proportional to the number of times its statements are executed: whichever algorithm executes more statements takes more time. The number of times the statements in an algorithm are executed is called the statement frequency or time frequency, written T(n).
(2) In general, the number of times the algorithm's basic operation is repeated is some function f(n) of the problem size n,
so the time complexity of the algorithm is written T(n) = O(f(n)). As the problem size n increases, the growth rate of the algorithm's execution time is proportional to the growth rate of f(n); therefore, the slower f(n) grows, the lower the algorithm's time complexity and the higher its efficiency. To calculate the time complexity, first identify the algorithm's basic operation, then determine from the surrounding statements how many times it is executed, giving T(n); then find a function f(n) of the same order of magnitude as T(n) (the usual orders of magnitude, in increasing order, are: 1, log2(n), n, n*log2(n), n^2, n^3, 2^n, n!). If the limit of T(n)/f(n) is a constant c, then the time complexity is T(n) = O(f(n)).

3. The common time complexities, arranged in increasing order of magnitude, are: constant order O(1), logarithmic order O(log2(n)), linear order O(n), linearithmic order O(n*log2(n)), square order O(n^2),
cubic order O(n^3), ..., k-th power order O(n^k), and exponential order O(2^n). Among them:
(1) O(n), O(n^2), O(n^3), ..., O(n^k) are polynomial-order time complexities, called first-order, second-order, ..., k-th-order time complexity;
(2) O(2^n) is exponential-order time complexity, which is impractical;
(3) the logarithmic order O(log2(n)) and the linearithmic order O(n*log2(n)) are, apart from the constant order, the most efficient.

Example. Consider the following algorithm:

    for(i=1; i<=n; ++i) {
        for(j=1; j<=n; ++j) {
            c[i][j]=0;                    // this basic operation executes n^2 times
            for(k=1; k<=n; ++k)
                c[i][j]+=a[i][k]*b[k][j]; // this basic operation executes n^3 times
        }
    }

Here T(n) = n^2 + n^3. By the list of orders of magnitude above, n^3 is of the same order as T(n), so take f(n) = n^3. The limit of T(n)/f(n) is then the constant 1, so the time complexity of this algorithm is T(n) = O(n^3).
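For reference, here is a runnable version of this example (a sketch: the 0-based indexing, the fixed size N, and the test matrices are illustrative assumptions, not part of the original):

    #include <stdio.h>

    #define N 3

    /* Multiply two N x N matrices. The zeroing statement runs N^2 times and
     * the accumulation statement runs N^3 times, so T(N) = N^2 + N^3 = O(N^3). */
    void matmul(const double a[N][N], const double b[N][N], double c[N][N]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                c[i][j] = 0;                       /* executes N^2 times */
                for (int k = 0; k < N; k++)
                    c[i][j] += a[i][k] * b[k][j];  /* executes N^3 times */
            }
    }

    int main(void) {
        double a[N][N] = {{1,0,0},{0,1,0},{0,0,1}};   /* identity matrix */
        double b[N][N] = {{1,2,3},{4,5,6},{7,8,9}};
        double c[N][N];
        matmul(a, b, c);                              /* c should equal b */
        for (int i = 0; i < N; i++)
            printf("%g %g %g\n", c[i][0], c[i][1], c[i][2]);
        return 0;
    }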
4. Definition
If the size of a problem is n, then the time T(n) an algorithm needs to solve it is a function of n, called the "time complexity" of the algorithm. The limiting behavior of the time complexity as the input n grows is called the asymptotic time complexity of the algorithm. We usually use big-O notation to express the time complexity of an algorithm. Note that big O only asserts an upper bound: by definition, if f(n) = O(n), then clearly f(n) = O(n^2) also holds; the notation gives an upper bound, not necessarily the least upper bound, although people customarily quote the tightest bound. In addition, a problem itself has an inherent complexity; if the complexity of some algorithm reaches the lower bound of the problem's complexity, that algorithm is called an optimal algorithm.

"Big-O notation": the basic parameter in this kind of description is n, the size of the problem instance, and complexity or running time is expressed as a function of n. The "O" here denotes order of magnitude: for example, "binary search is O(log n)" means that it "needs about log n steps to search an array of size n". The notation O(f(n)) says that as n increases, the running time grows at a rate at most proportional to f(n).

This kind of asymptotic estimate is very valuable for the theoretical analysis and rough comparison of algorithms, but in practice the details can make a difference. For example, an O(n^2) algorithm with low overhead may run faster than an O(n*log n) algorithm with high overhead when n is small. Of course, once n is large enough, the algorithm whose bounding function rises more slowly is bound to run faster.
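The following timing sketch illustrates the point. The insertion_sort helper, the array size N, and the repetition count REPS are illustrative assumptions, and the actual outcome depends entirely on the machine, the compiler, and the C library's qsort:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* O(n^2) insertion sort with a very small constant overhead. */
    static void insertion_sort(int *a, int n) {
        for (int i = 1; i < n; i++) {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; }
            a[j + 1] = key;
        }
    }

    /* Comparison callback for the C library's qsort. */
    static int cmp_int(const void *p, const void *q) {
        int x = *(const int *)p, y = *(const int *)q;
        return (x > y) - (x < y);
    }

    int main(void) {
        enum { N = 32, REPS = 200000 };   /* small arrays, many repetitions */
        static int data[N], work[N];
        for (int i = 0; i < N; i++) data[i] = rand();

        clock_t t0 = clock();
        for (int r = 0; r < REPS; r++) {
            memcpy(work, data, sizeof data);
            insertion_sort(work, N);
        }
        clock_t t1 = clock();
        for (int r = 0; r < REPS; r++) {
            memcpy(work, data, sizeof data);
            qsort(work, N, sizeof(int), cmp_int);
        }
        clock_t t2 = clock();

        printf("insertion sort: %.3f s   qsort: %.3f s   (n=%d)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, N);
        return 0;
    }

On many systems the O(n^2) insertion sort wins at this small n, because qsort pays for a function-pointer call on every comparison; at large n the O(n*log n) algorithm inevitably overtakes it.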
O(1)
Exchange the contents of i and j:

    temp=i;
    i=j;
    j=temp;

The frequency of each of the three statements above is 1; the execution time of this fragment is a constant independent of the problem size n, so its time complexity is constant order, written T(n) = O(1). If an algorithm's execution time does not grow as the problem size n increases, then even if the algorithm contains thousands of statements, its execution time is merely a large constant, and the time complexity of such an algorithm is O(1).

O(n^2)

1)
    sum=0;                    (1 time)
    for(i=1; i<=n; i++)       (n times)
        for(j=1; j<=n; j++)   (n^2 times)
            sum++;            (n^2 times)
Solution: T(n) = 2n^2 + n + 1 = O(n^2)

2)
    for(i=1; i<n; i++) {
        y=y+1;                (frequency: n-1)
        for(j=0; j<=2*n; j++)
            x++;              (frequency: (n-1)*(2n+1) = 2n^2-n-1)
    }
Solution: f(n) = 2n^2-n-1 + (n-1) = 2n^2-2, so the time complexity of this program is T(n) = O(n^2).

O(n)

3)
    a=0; b=1;                 (frequency: 2)
    for(i=1; i<=n; i++)       (frequency: n)
    {
        s=a+b;                (frequency: n-1)
        b=a;                  (frequency: n-1)
        a=s;                  (frequency: n-1)
    }

Solution: T(n) = 2 + n + 3(n-1) = 4n-1 = O(n).

O(log2(n))
4)
    i=1;                      (frequency: 1)
    while(i<=n)
        i=i*2;                (frequency: f(n))
Solution: 2^f(n) <= n, so f(n) <= log2(n); taking the maximum value f(n) = log2(n) gives T(n) = O(log2(n)).

O(n^3)

5)

    for(i=0; i<n; i++) {
        for(j=0; j<i; j++) {
            for(k=0; k<j; k++)
                x=x+2;
        }
    }
Solution: when i=m and j=k, the innermost loop runs k times. For i=m, j can take the values 0, 1, ..., m-1, so the innermost loop is executed a total of 0+1+...+(m-1) = (m-1)m/2 times. As i runs from 0 to n, the loops therefore execute (1-1)*1/2 + (2-1)*2/2 + ... + (n-1)n/2 = n(n+1)(n-1)/6 times in total, so the time complexity is O(n^3).

We should also distinguish between the worst-case behavior of an algorithm and its expected behavior. For example, the worst-case running time of quicksort is O(n^2),
but its expected running time is O(n*log n). By carefully choosing the pivot value each time, we can drive the probability of the quadratic case (i.e., the O(n^2) case) down to almost zero. In practice, a well-implemented quicksort generally runs in O(n*log n) time.
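A minimal sketch of such an implementation follows; the random pivot choice is what makes the O(n^2) case extremely unlikely on any input. The function name and the hole-filling partition style are illustrative choices, not the article's own code:

    #include <stdlib.h>

    /* Sort a[lo..hi] in place. Expected time O(n log n); the random pivot
     * makes the O(n^2) worst case vanishingly improbable. */
    static void quicksort(int *a, int lo, int hi) {
        if (lo >= hi) return;
        int r = lo + rand() % (hi - lo + 1);       /* random pivot index */
        int tmp = a[lo]; a[lo] = a[r]; a[r] = tmp; /* move pivot to the front */
        int pivot = a[lo], i = lo, j = hi;
        while (i < j) {
            while (i < j && a[j] >= pivot) j--;    /* find a small key on the right */
            a[i] = a[j];
            while (i < j && a[i] <= pivot) i++;    /* find a large key on the left */
            a[j] = a[i];
        }
        a[i] = pivot;                              /* pivot lands in final position */
        quicksort(a, lo, i - 1);
        quicksort(a, i + 1, hi);
    }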
Here are some common uses of the notation. Accessing a single element of an array is a constant-time operation, that is, an O(1) operation. An algorithm that can eliminate half of the remaining data elements at each step, as binary search does, usually takes O(log n) time. Comparing two strings of n characters with strcmp takes O(n) time.
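A sketch of binary search over a sorted array, showing the "discard half at each step" behavior (binary_search is an illustrative helper, not a library routine):

    /* Return the index of key in the sorted array a[0..n-1], or -1 if absent.
     * Each comparison halves the remaining range, so at most about log2(n)
     * probes are made: O(log n) time, O(1) space. */
    static int binary_search(const int *a, int n, int key) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo+hi)/2 */
            if (a[mid] == key)
                return mid;
            else if (a[mid] < key)
                lo = mid + 1;               /* discard the left half */
            else
                hi = mid - 1;               /* discard the right half */
        }
        return -1;
    }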
The conventional matrix multiplication algorithm is O(n^3), because each of the n^2 elements of the result is computed by multiplying and adding n pairs of elements.
Exponential-time algorithms usually arise from the need to enumerate all possible results. For example, a set of n elements has 2^n subsets, so an algorithm that must examine every subset will be O(2^n). Exponential algorithms are generally too expensive unless n is very small, because adding a single element to the problem doubles the running time. Unfortunately, there are many problems (such as the famous "traveling salesman problem") for which the only algorithms found so far are exponential in time. When we do run into such a situation, we usually have to settle for an algorithm that looks for an approximately best result instead.
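To see where the 2^n comes from, the sketch below enumerates every subset of a small set by treating each n-bit value as a membership mask. The helper enumerate_subsets is illustrative, and it assumes n is smaller than the bit width of unsigned long:

    #include <stdio.h>

    /* Print all 2^n subsets of set[0..n-1]: one n-bit mask per subset.
     * Even with O(1) work per subset, the outer loop alone runs 2^n times. */
    static void enumerate_subsets(const int *set, int n) {
        for (unsigned long mask = 0; mask < (1UL << n); mask++) {
            printf("{ ");
            for (int i = 0; i < n; i++)
                if (mask & (1UL << i))      /* element i is in this subset */
                    printf("%d ", set[i]);
            printf("}\n");
        }
    }

    int main(void) {
        int set[] = {1, 2, 3};
        enumerate_subsets(set, 3);          /* prints all 2^3 = 8 subsets */
        return 0;
    }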
II. Space Complexity
Space complexity is a measure of how much storage space an algorithm temporarily occupies while it runs. The storage an algorithm occupies in the computer's memory consists of three parts: the space occupied by the program code itself, the space occupied by the input data, and the space occupied by auxiliary variables.
The storage occupied by the algorithm's input and output data is determined by the problem to be solved; it is passed in through the parameter table by the calling function and does not vary from algorithm to algorithm. The storage occupied by the algorithm itself is proportional to the length of its code; to compress this part, one must write a shorter algorithm. The temporary working storage, however, does vary with the algorithm. Some algorithms need only a few temporary work units, and that number does not change with the size of the problem; such algorithms are said to work "in place" and to be storage-efficient, like the algorithms introduced above. Other algorithms need a number of temporary work units that is related to the problem size n and grows as n grows; when n is large they occupy many storage units. The quicksort and merge sort algorithms introduced in Chapter 9 are of this kind.

Analyzing the storage occupied by an algorithm requires weighing all of these aspects together. A recursive algorithm is generally short and occupies little storage for its own code, but at run time it needs an extra stack and thus occupies more temporary work units; written in non-recursive form, it may be longer and occupy more code storage, but at run time it will probably need fewer temporary storage units.

The space complexity of an algorithm counts only the storage allocated for local variables during the run, which consists of two parts: the storage allocated for the formal parameters in the parameter table, and the storage allocated for the local variables defined in the function body. For a recursive algorithm, the space complexity is the size of the stack space used by the recursion, which equals the temporary storage allocated for one call multiplied by the number of calls (that is, the number of recursive calls plus 1, the extra 1 standing for the initial non-recursive call).

The space complexity of an algorithm is likewise given as an order of magnitude. If it is a constant, i.e. it does not change with the size n of the data being processed, it is written O(1); if it is proportional to the base-2 logarithm of n, it is written O(log2(n)); if it is linearly proportional to n, it is written O(n). If a parameter is an array, the algorithm only needs to be allocated enough space to store one address pointer transmitted by the caller, i.e. one machine word; if a parameter is a reference, it only needs enough space to store one address, which holds the address of the corresponding argument variable so that the argument can be accessed automatically by the system.

For an algorithm, time complexity and space complexity often influence each other. Pursuing a better time complexity may make the space complexity worse, i.e. cause the algorithm to occupy more storage; conversely, pursuing a better space complexity may make the time complexity worse and lengthen the running time. Moreover, all the performance aspects of an algorithm constrain one another to some degree. Therefore, when designing an algorithm (especially a large one), its performance, its frequency of use, the size of the data it processes, the characteristics of the language in which it is described, and the machine environment in which it will run should all be considered together, in order to arrive at a good algorithm.
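The recursive/non-recursive contrast above can be seen in a small sketch (factorial is used purely for illustration, and overflow for large n is ignored):

    /* Recursive form: short code, but n stack frames are live at the
     * deepest point of the recursion, so the space complexity is O(n). */
    static long fact_recursive(int n) {
        if (n <= 1) return 1;
        return n * fact_recursive(n - 1);
    }

    /* Non-recursive form: slightly longer code, but only a fixed number
     * of variables are used, so the extra space is O(1) ("in place"). */
    static long fact_iterative(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++)
            result *= i;
        return result;
    }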
Space complexity is the additional storage a program needs while it runs, and it is also expressed with O() notation: for example, the time complexity of insertion sort is O(n^2) while its space complexity is O(1), and a typical recursive algorithm has O(n) space complexity, because each level of recursion must store its return information.

The merits of an algorithm are measured mainly from two aspects: its execution time and the storage space it requires. Measuring execution time does not mean computing the absolute time the algorithm takes, because an algorithm takes different amounts of time on different machines, and even on the same computer its execution time varies from moment to moment with the occupancy of the machine's resources. Therefore, the time complexity of an algorithm is measured by the number of times its basic operation is executed, called the amount of computation. The number of executions of the basic operation is generally related to the problem size, and T(n) is used to denote it. When evaluating the time complexity of an algorithm, small differences between the execution counts of two algorithms are not taken into account; only the essential differences between the algorithms matter. For this the O() notation is introduced, under which, for example, T1(n) = 2n = O(n) and T2(n) = n+1 = O(n). A function f(n) is O(g(n)) if there exist positive constants c and m such that f(n) < c*g(n) is satisfied for all n > m.
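Written out in LaTeX, the definition in the last sentence, together with a quick check of the T2(n) = n+1 example above, reads:

    f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, m > 0 :\ f(n) < c \cdot g(n) \quad \text{for all } n > m

    \text{Example: } T_2(n) = n + 1 < 2n \text{ for all } n > 1,\ \text{so taking } c = 2,\ m = 1 \text{ shows } T_2(n) = O(n).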
