Time complexity and space complexity of computer algorithms


First, a few concepts. One is time complexity, and the other is asymptotic time complexity. The former is the time an algorithm consumes, expressed as a function of the problem size n; the latter is the order of magnitude of that function as the problem size tends to infinity.

When we evaluate the time performance of an algorithm, the main criterion is its asymptotic time complexity. Therefore, in algorithm analysis the two are usually not distinguished: the asymptotic time complexity T(n) = O(f(n)) is simply called the time complexity, where f(n) is generally the frequency of the most frequently executed statement in the algorithm.

In addition, the frequency of the statements in an algorithm depends not only on the problem size but also on the values of the elements in the input instance. But we usually consider the time complexity in the worst case, to guarantee that the algorithm never runs longer than that bound.

The common time complexities, in ascending order of magnitude, are: constant order O(1), logarithmic order O(log2 n), linear order O(n), linear logarithmic order O(n log2 n), quadratic order O(n^2), cubic order O(n^3), ..., k-th power order O(n^k), and exponential order O(2^n).

1. Big-O notation


Definition


Suppose the time complexity of a program is represented by a function T(n). Consider the following lookup algorithm:

int seqsearch (int a[], const int n, const int x)
{
    int i = 0;
    for (; i < n && a[i] != x; i++)   /* test i < n first to avoid reading past the array */
        ;
    if (i == n) return -1;            /* x not found */
    else return i;                    /* index of the first element equal to x */
}
This program compares the input value x sequentially against the elements of the array until it finds an element equal to it.
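As a quick illustration, here is a minimal driver sketch (the array contents are hypothetical, chosen only to show the two possible outcomes):

#include <stdio.h>

int main(void)
{
    int a[] = {4, 8, 15, 16, 23, 42};
    printf("%d\n", seqsearch(a, 6, 16));  /* prints 3: x found at index 3 */
    printf("%d\n", seqsearch(a, 6, 7));   /* prints -1: x is not in the array */
    return 0;
}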

If x is the first element, one comparison is needed; if it is the second, two comparisons; ...; if it is the nth, n comparisons. For an array of n elements, if each element is equally likely to be the one sought, the average number of comparisons for a successful search is:

f(n) = (1/n)(n + (n-1) + (n-2) + ... + 1) = (n+1)/2 = O(n)

This is the original motivation for the legendary big-O notation.

Expressing it with big-O
To fully analyze an algorithm, you need to consider its time cost in the worst case, in the best case, and in the average case. For the worst case, the common formulation of big-O (note: "common formulation") is: T(n) = O(f(n)) if and only if there exist a positive constant c and a positive integer n0 such that T(n) <= c*f(n) for all n >= n0; O(f(n)) is then called the asymptotic time complexity of the algorithm. This is essentially the notion of a limit from the first chapter of a calculus course. Here f(n) = (n+1)/2, so c*f(n) is a linear function: graphically, if the function T(n) lies below c*f(n) from some point on, then T(n) = O(f(n)).
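For the search above this is easy to check concretely: with T(n) = (n+1)/2 and f(n) = n, choosing c = 1 and n0 = 1 gives (n+1)/2 <= n for all n >= 1, so (n+1)/2 = O(n).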

For a logarithmic-order algorithm, for example, we write its big-O complexity as O(log2 n).


Rules
1) Addition rule
T(n, m) = T1(n) + T2(m) = O(max(f(n), g(m)))
2) Multiplication rule
T(n, m) = T1(n) * T2(m) = O(f(n) * g(m))
3) A special case
There is a special case in big-O notation: if T1(n) = O(c), where c is an arbitrary constant unrelated to n, and T2(n) = O(f(n)), then
T(n) = T1(n) * T2(n) = O(c*f(n)) = O(f(n)).

In other words, in big-O notation, any nonzero positive constant is of the same order of magnitude, written O(1).
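For example, if T1(n) = O(100), say a loop body that performs 100 constant-time operations, and T2(n) = O(n^2), then by this rule T(n) = T1(n) * T2(n) = O(100 * n^2) = O(n^2); the constant factor 100 does not change the order of magnitude.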
4) A rule of thumb
The following ordering of complexities holds:
c < log2 n < n < n*log2 n < n^2 < n^3 < 2^n < 3^n < n!
where c is a constant. If the complexity of an algorithm is c, log2 n, n, or n*log2 n, its time efficiency is high; if it is 2^n, 3^n, or n!, then even a slightly larger n brings the algorithm to a standstill; the orders in between are passable.
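To make the gaps concrete, at n = 16 (taking c = 1) these functions evaluate to: c = 1, log2 n = 4, n = 16, n*log2 n = 64, n^2 = 256, n^3 = 4096, 2^n = 65536, 3^n = 43046721, and n! ≈ 2.1 * 10^13.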

1) Basic knowledge points: the complexity of a program segment without loops is constant, a single loop gives O(n), and a two-layer nested loop gives O(n^2), as the sketch below shows (I use ^2 to denote the square, and likewise ^3 the cube);
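A minimal C sketch of the three cases (the function names are hypothetical, for illustration only):

/* No loop: a fixed number of operations, O(1). */
int constant_work(int n) { return 2 * n + 1; }

/* One loop of n iterations: the body executes n times, O(n). */
long one_loop(int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += i;               /* executes n times */
    return sum;
}

/* Two nested loops of n iterations each: the body executes n*n times, O(n^2). */
long two_loops(int n)
{
    long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            count++;            /* executes n * n times */
    return count;
}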

2) Time complexities for two-dimensional matrices: standard deviation, residuals, information entropy, fft2, dwt2, dct2. Standard deviation and residuals are roughly O(n) in the number of elements; fft2 is O(n log n), and dwt2 is likely O(n log n) as well; information entropy first requires estimating probabilities; the dct2 process is the same as in JPEG: as in JPEG, the image is divided into 8*8 sub-blocks and each block X is processed as Y = T*X*T', Z = Y.*mask (see the sketch below);
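To see why the block-wise DCT stays linear in the number of pixels, here is a rough C sketch of the per-block step Y = T*X*T' (the function name and signature are hypothetical; the 8*8 transform matrix T is assumed to be given). Each block costs a fixed number of operations, so an image of n pixels costs O(n) overall:

/* One 8*8 block of the 2-D DCT: Y = T * X * T', done as two matrix
 * products. The cost is a fixed 2 * 8^3 multiply-adds, i.e. O(1) per
 * block, hence O(n) over all n pixels of the image. */
void dct_block8(double T[8][8], double X[8][8], double Y[8][8])
{
    double TX[8][8] = {{0}};
    for (int i = 0; i < 8; i++)              /* TX = T * X */
        for (int j = 0; j < 8; j++)
            for (int k = 0; k < 8; k++)
                TX[i][j] += T[i][k] * X[k][j];
    for (int i = 0; i < 8; i++)              /* Y = TX * T' */
        for (int j = 0; j < 8; j++) {
            Y[i][j] = 0.0;
            for (int k = 0; k < 8; k++)
                Y[i][j] += TX[i][k] * T[j][k];   /* T'[k][j] == T[j][k] */
        }
}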

3) Examples:

1. Let three functions f, g, h be given by f(n) = 100n^3 + n^2 + 1000, g(n) = 25n^3 + 5000n^2, h(n) = n^1.5 + 5000 n lg n.

Please determine if the following relationships are true:

(1) f(n) = O(g(n))

(2) g(n) = O(f(n))

(3) h(n) = O(n^1.5)

(4) h(n) = O(n lg n)

Here we review the representation of asymptotic time complexity, T(n) = O(f(n)). The "O" here is a mathematical symbol with a strict definition: if T(n) and f(n) are two functions defined on the set of positive integers, then T(n) = O(f(n)) means that there exist positive constants c and n0 such that 0 <= T(n) <= c*f(n) whenever n >= n0. In plainer words, the ratio of the two functions tends to a nonzero constant as the integer argument n tends to infinity. Seen this way, the questions are easy to work out.

(1) holds. The highest-order term of both functions is n^3, so as n → ∞ the ratio of the two functions tends to a constant, and the relationship holds.

(2) holds, by the same reasoning.

(3) holds, by the same reasoning.

(4) does not hold. Since n^1.5 grows faster than n lg n as n → ∞, the ratio of h(n) to n lg n tends to infinity rather than a constant, so the relationship does not hold.

2. Let n be a positive integer. Using big-O notation, express the execution time of each of the following program segments as a function of n.

(1) i = 1; k = 0;
while (i < n)
{
    k = k + 10 * i;
    i++;
}

Solution: T(n) = n - 1, so T(n) = O(n); this function grows on the linear order.

(2) x = n;   /* n > 1 */
y = 0;
while (x >= (y + 1) * (y + 1))
    y++;

Solution: T(n) = n^(1/2), so T(n) = O(n^(1/2)). Starting from y = 0, the loop runs while (y + 1)^2 <= n, so it executes about n^(1/2) times; this function grows on the square-root order.

(3) x = 91; y = 100;
while (y > 0)
    if (x > 100)
    {
        x = x - 10;
        y--;
    }
    else
        x++;

Answer: T(n) = O(1). This program looks a little scary: it loops about 1,100 times in total (each decrement of y is preceded by ten increments of x, and y must fall from 100 to 0). But do we see an n anywhere? No. The running time of this program is independent of n; even if it looped for ten thousand years we would not care. It is just a constant-order function.

The same problem can be solved by different algorithms, and the quality of an algorithm affects the efficiency of the algorithm and even of the whole program. The purpose of algorithm analysis is to select a suitable algorithm and to improve existing ones. An algorithm is evaluated mainly in terms of its time complexity and space complexity.

1. Time complexity

(1) Time frequency
The time an algorithm takes to execute cannot be computed theoretically; it can only be found by running the algorithm on a machine. But we cannot, and need not, test every algorithm on a machine; we only need to know which algorithm takes more time and which takes less. The time an algorithm takes is proportional to the number of times its statements are executed: whichever algorithm executes more statements takes more time. The number of times the statements in an algorithm are executed is called the statement frequency or time frequency, written T(n).

(2) Time complexity
In the time frequency just defined, n is called the problem size, and as n changes, the time frequency T(n) changes with it. But sometimes we want to know what law this change follows; for this we introduce the concept of time complexity.

In general, the number of times the basic operations of an algorithm are repeated is a function of the problem size n, written T(n). If there is an auxiliary function f(n) such that, as n approaches infinity, the limit of T(n)/f(n) is a nonzero constant, then f(n) is a function of the same order of magnitude as T(n). We write this as T(n) = O(f(n)) and call O(f(n)) the asymptotic time complexity of the algorithm, or simply its time complexity.

Among the various algorithms, if an algorithm's statements are executed a constant number of times, the time complexity is O(1). Furthermore, algorithms with different time frequencies may have the same time complexity: T(n) = n^2 + 3n + 4 and T(n) = 4n^2 + 2n + 1 have different frequencies, but both have time complexity O(n^2), since in each case T(n)/n^2 tends to a nonzero constant (1 and 4, respectively).

In ascending order of magnitude, the common time complexities are: constant order O(1), logarithmic order O(log2 n), linear order O(n), linear logarithmic order O(n log2 n), quadratic order O(n^2), cubic order O(n^3), ..., k-th power order O(n^k), and exponential order O(2^n). As the problem size n grows, the time complexity grows and the efficiency of the algorithm falls.

2. Space complexity
Like time complexity, space complexity is a measure of the storage space an algorithm requires while executing in a computer. It is written as:
S(n) = O(f(n))

We generally count the auxiliary storage units required beyond the normal memory overhead. The discussion parallels that of time complexity and is not expanded here.
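A short C sketch of the difference (hypothetical helper functions): reversing an array in place needs only one temporary cell, S(n) = O(1), while reversing into a fresh copy needs n extra cells, S(n) = O(n):

#include <stdlib.h>

/* In-place reversal: a single temporary variable, so S(n) = O(1). */
void reverse_in_place(int a[], int n)
{
    for (int i = 0, j = n - 1; i < j; i++, j--) {
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}

/* Reversal into a new array: n auxiliary cells, so S(n) = O(n). */
int *reverse_copy(const int a[], int n)
{
    int *b = malloc(n * sizeof *b);   /* caller must free(b) */
    for (int i = 0; i < n; i++)
        b[i] = a[n - 1 - i];
    return b;
}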

(3) Evaluating an algorithm's time performance by its asymptotic time complexity
The time performance of an algorithm is evaluated mainly by the order of magnitude of its time complexity, i.e., by its asymptotic time complexity.
"Example 3." 7 "There are two algorithms A1 and A2 solve the same problem, the time complexity is T1 (n) =100n2,t2 (n) =5n3.
(1) When the input amount is n<20, there is T1 (n) >t2 (n), which takes less time.
(2) with the increase of the problem size n, the time cost of the two algorithms is also increased with the 5N3/100N2=N/20. That is, when the problem scale is large, the algorithm A1 is more effective than the algorithm A2.
Their asymptotic time complexity O (n2) and O (N3) Evaluate the temporal quality of the two algorithms on a macroscopic scale. In the algorithm analysis, the time complexity and the asymptotic time complexity of the algorithm are often not distinguished, but the asymptotic time complexity T (n) =o (f (n)) is referred to as the time complexity, and the F (n) is usually the most frequent statement frequency in the algorithm.
"Example 3." The time complexity of the 8 "algorithm matrixmultiply is generally t (n) =o (N3), and F (n) =n3 is the frequency of the statement (5) in the algorithm. The following example shows how to find the time complexity of the algorithm.

"Example 3." 9 "Swap the contents of I and J.
Temp=i;
I=j;
J=temp;

Each of the three statements above has a frequency of 1, and the execution time of the segment is a constant independent of the problem size n, so the time complexity of the algorithm is constant order, written T(n) = O(1).
If an algorithm's execution time does not grow with the problem size n, then even if the algorithm contains thousands of statements its execution time is merely a larger constant, and the time complexity of such an algorithm is still O(1).

"Example 3." One of the 10 "variable counts.
(1) x=0;y=0;
(2) for (k-1;k<=n;k++)
(3) x + +;
(4) for (i=1;i<=n;i++)
(5) for (j=1;j<=n;j++)
(6) y++;

In general, for a for-loop we only count the executions of the statements in the loop body, ignoring the step increment, the termination test, control transfer, and other overhead of the loop statement itself. Therefore, the most frequently executed statement in the segment above is (6), with frequency f(n) = n^2, so the time complexity of the segment is T(n) = O(n^2).
When several loop statements are nested, the time complexity of the algorithm is determined by the frequency f(n) of the innermost statement of the most deeply nested loop.

"Example 3." 11 "Variable count of two.
(1) X=1;
(2) for (i=1;i<=n;i++)
(3) for (j=1;j<=i;j++)
(4) for (k=1;k<=j;k++)
(5) x + +;

The most frequently executed statement in this segment is (5). The number of iterations of the inner loops is not directly related to the problem size n; it depends on the loop variables of the enclosing loops, while the iteration count of the outermost loop is directly related to n. So the number of executions of statement (5) is analyzed from the inner loops outward:

The frequency of statement (5) is f(n) = Σ(i=1..n) Σ(j=1..i) Σ(k=1..j) 1 = n(n+1)(n+2)/6, so the time complexity of the program segment is T(n) = O(n^3/6 + lower-order terms) = O(n^3).
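As a sanity check, a small test harness (hypothetical, not part of the original example) can count the increments directly and compare them against n(n+1)(n+2)/6:

#include <stdio.h>

int main(void)
{
    for (int n = 1; n <= 10; n++) {
        long x = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i; j++)
                for (int k = 1; k <= j; k++)
                    x++;                      /* frequency of statement (5) */
        printf("n=%2d  count=%ld  n(n+1)(n+2)/6=%ld\n",
               n, x, (long)n * (n + 1) * (n + 2) / 6);
    }
    return 0;
}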

(4) The time complexity of an algorithm depends not only on the problem size but also on the initial state of the input instance.
"Example 3." 12 The algorithm for finding the given value K in the value a[0..n-1 is roughly as follows:
(1) i = n - 1;
(2) while (i >= 0 && (a[i] != k))
(3)     i--;
(4) return i;

The frequency of statement (3) in this algorithm is related not only to the problem size n but also to the values of the elements of a and of k in the input instance:
① if a contains no element equal to k, the frequency of statement (3) is f(n) = n;
② if the last element of a equals k, the frequency of statement (3) is the constant 0.

(5) Worst-case time complexity and average time complexity
The time complexity in the worst case is called the worst-case time complexity. In general, unless otherwise stated, the time complexity under discussion is the worst-case time complexity.
The reason is that the worst-case time complexity is an upper bound on the algorithm's running time over every input instance, which guarantees that the algorithm never runs longer than that bound.
"Example 3." 19 "Find Algorithm" Example 1 8 "in the worst case the time complexity is T (n) =0 (n), which means that for any input instance, the algorithm's run time cannot be greater than 0 (n)."
The average time complexity is the expected running time of the algorithm when all possible input instances occur with equal probability.
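For example, in the sequential search at the start of this article, if each of the n elements is equally likely to be the one sought, the expected number of comparisons is (n+1)/2, so the average time complexity is also O(n).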
The common time complexities, in ascending order of magnitude, are: constant order O(1), logarithmic order O(log2 n), linear order O(n), linear logarithmic order O(n log2 n), quadratic order O(n^2), cubic order O(n^3), ..., k-th power order O(n^k), and exponential order O(2^n). Obviously, an algorithm with exponential time complexity O(2^n) is extremely inefficient and becomes unusable as soon as n grows slightly larger.
Similar to the discussion of time complexity, the space complexity S(n) of an algorithm is defined as the storage space the algorithm consumes; it too is a function of the problem size n. Asymptotic space complexity is also often referred to simply as space complexity. Together, the time complexity and space complexity of an algorithm are called the complexity of the algorithm.
