Time and Space Complexity

Source: Internet
Author: User

Reference post: http://blog.csdn.net/xiaoxiaopengbo/article/details/51583386

1. Time frequency: the time an algorithm takes to execute. In theory this cannot be calculated; it has to be measured by running the algorithm on a machine. In practice, however, we do not need the exact time, only to know which algorithm takes more time and which takes less. The time an algorithm takes is proportional to the number of times its statements are executed (??? what if one statement is time-consuming and another statement is not?), so the more executions, the more time it takes.

The number of times the statements in an algorithm are executed is called the statement frequency or time frequency, written T(n). (So the number of executions is used to reflect the time??)

2. Time complexity: in the time frequency above, n is called the problem size, and as n changes, the time frequency T(n) changes with it. Sometimes, though, we want to know the law by which T(n) changes as n grows. For this we introduce the concept of time complexity. In general, the number of times the basic operations in an algorithm are repeated is a function of the problem size n, written T(n). If there is an auxiliary function f(n) such that T(n)/f(n) tends to a nonzero constant as n approaches infinity, then f(n) is said to be of the same order of magnitude as T(n), written T(n) = O(f(n)), and O(f(n)) is called the asymptotic time complexity of the algorithm, abbreviated to time complexity. (So T(n) = O(f(n)) is not a true equality? O(f(n)) is the time complexity, but T(n) is not the time complexity? Isn't T(n) the time frequency?? Why is it written O(f(n)) and not c·f(n)??)

3. T(n) = O(f(n)) means there exists a constant c such that, as n tends to infinity, T(n) ≤ c·f(n) always holds. Simply put, T(n) grows at most as fast as f(n) when n tends to infinity; that is, as n approaches positive infinity, c·f(n) is an upper bound on T(n). Although nothing is prescribed about f(n), it is usually taken to be the simplest possible function. For example, O(2n²+n+1) = O(3n²+n+3) = O(7n²+n) = O(n²); normally just writing O(n²) is enough. Notice that the big-O notation already hides a constant c, so f(n) generally carries no coefficient. If T(n) is pictured as a tree, O(f(n)) expresses only its trunk: we care only about the trunk and discard all the other details.
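The limit definition above can be checked numerically. In this minimal sketch (the class and method names are hypothetical), T(n) = 2n² + n + 1 and the candidate f(n) = n²; the ratio T(n)/f(n) settles toward the hidden constant c = 2, which is exactly why the coefficient is dropped and we write O(n²):

```java
public class BigOLimit {
    static double t(double n) { return 2 * n * n + n + 1; } // T(n) = 2n^2 + n + 1
    static double f(double n) { return n * n; }             // candidate f(n) = n^2

    public static void main(String[] args) {
        // As n grows, the lower-order terms n and 1 become negligible
        // and the ratio approaches the constant 2.
        for (double n : new double[] {10, 1000, 100000}) {
            System.out.println("n = " + n + ", T(n)/f(n) = " + t(n) / f(n));
        }
    }
}
```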

Among different algorithms, if the number of statement executions is a constant, the time complexity is O(1). Also, algorithms with different time frequencies can have the same time complexity: for example, T(n) = n²+3n+4 and T(n) = 4n²+2n+1 differ in frequency but share the time complexity O(n²). In increasing order of magnitude, the common time complexities are: constant order O(1), logarithmic order O(log₂n), linear order O(n), linear-logarithmic order O(n·log₂n), square order O(n²), cubic order O(n³), ..., k-th power order O(n^k), and exponential order O(2^n). As the problem size n grows, these time complexities increase ever faster and the corresponding algorithms become less efficient.

As this shows, we should prefer polynomial-order O(n^k) algorithms over exponential-order ones.

Common algorithm time complexities, ordered from small to large: O(1) < O(log₂n) < O(n) < O(n·log₂n) < O(n²) < O(n³) < ... < O(2^n) < O(n!)
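The ordering above can be made concrete by evaluating each growth function at one size. A small sketch (hypothetical class name), using n = 16 so that log₂n is a round number:

```java
public class GrowthOrders {
    public static void main(String[] args) {
        int n = 16;
        double log2n = Math.log(n) / Math.log(2); // ≈ 4

        // Each value dwarfs the one before it, matching
        // O(log2 n) < O(n) < O(n log2 n) < O(n^2) < O(n^3) < O(2^n)
        System.out.println("log2 n   ≈ " + log2n);
        System.out.println("n        = " + n);
        System.out.println("n log2 n ≈ " + n * log2n);
        System.out.println("n^2      = " + (double) n * n);
        System.out.println("n^3      = " + (double) n * n * n);
        System.out.println("2^n      = " + Math.pow(2, n));
    }
}
```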

⑴ Find the basic statement in the algorithm;

The statement executed most often in the algorithm is the basic statement; it is usually the body of the innermost loop.

⑵ Calculate the order of magnitude of the number of executions of the basic statement;

Only the order of magnitude of the basic statement's execution count needs to be calculated: as long as the highest power in the function counting the executions is correct, all coefficients and lower-order terms can be ignored. This simplifies the analysis and focuses attention on the most important thing: the growth rate.

⑶ Express the algorithm's time performance in big-O notation.

Put the order of magnitude of the basic statement's execution count into big-O notation.

If the algorithm contains nested loops, the basic statement is usually the innermost loop body; if the algorithm contains parallel (sequential) loops, the time complexities of the parallel loops are added. For example:

[Java]
    for (i = 1; i <= n; i++)
        x++;
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            x++;

The time complexity of the first for loop is O(n) and the time complexity of the second for loop is O(n²), so the time complexity of the whole algorithm is O(n + n²) = O(n²).
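The addition rule can be verified by literally counting how many times the basic statement x++ runs. A minimal sketch (the class and method names are hypothetical) wrapping the two loops above:

```java
public class LoopCount {
    // Counts executions of the basic statement x++ for a given n.
    static long countBasicOps(int n) {
        long x = 0;
        for (int i = 1; i <= n; i++)       // first loop: n executions, O(n)
            x++;
        for (int i = 1; i <= n; i++)       // nested loops: n*n executions, O(n^2)
            for (int j = 1; j <= n; j++)
                x++;
        return x;                          // T(n) = n + n^2
    }

    public static void main(String[] args) {
        System.out.println(countBasicOps(10)); // 110 = 10 + 100
    }
}
```

For n = 10 the count is 110, and for n = 100 it is 10100: the n² term dominates, which is why O(n + n²) collapses to O(n²).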

O(1) means the number of basic statement executions is a constant; in general, as long as the algorithm contains no loop statement, its time complexity is O(1). O(log₂n), O(n), O(n·log₂n), O(n²), and O(n³) are called polynomial time, while O(2^n) and O(n!) are called exponential time. Computer scientists generally regard the former (algorithms with polynomial time complexity) as effective algorithms, and such problems are called P (polynomial) problems, while the latter (algorithms with exponential time complexity) are associated with NP (non-deterministic polynomial) problems. (Isn't this the question Ishigami keeps pondering in "The Devotion of Suspect X"? P = NP...)

(5) Time complexity and space complexity of common algorithms

A rule of thumb, where c is a constant: if an algorithm's complexity is c, log₂n, n, or n·log₂n, its time efficiency is high; if it is 2^n, 3^n, or n!, then even a slightly larger n will bring the algorithm to a standstill; the orders in between are passable.
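How quickly an exponential algorithm "cannot move" can be made concrete with a back-of-the-envelope calculation. A small sketch (hypothetical class name, and the assumed machine speed of 10⁹ steps per second is an illustration, not a measurement):

```java
public class Feasibility {
    public static void main(String[] args) {
        // An O(2^n) algorithm at n = 60 needs about 1.15e18 basic steps.
        double steps = Math.pow(2, 60);

        // Assume a machine doing 1e9 steps per second.
        double seconds = steps / 1e9;
        double years = seconds / (365.0 * 24 * 3600);

        // Roughly 36 years for a single run at n = 60.
        System.out.println("2^60 steps take about " + years + " years");
    }
}
```

By contrast, an O(n²) algorithm at n = 60 needs only 3600 steps, which is why "a slightly larger n" is fatal only to the exponential orders.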

Time complexity analysis is a very important topic. Every programmer should master its concept and basic method, be good at exploring its essence at the mathematical level, and understand its meaning accurately.

2. The space complexity of an algorithm

Similar to the discussion of time complexity, the space complexity S(n) of an algorithm is defined as the storage space the algorithm consumes; it is also a function of the problem size n. Asymptotic space complexity is often referred to simply as space complexity.
Space complexity is a measure of the amount of storage space an algorithm temporarily occupies while running. (Is it "temporarily occupies" or "consumes"? Presumably "occupies", covering the three aspects below??? A quick search suggests "temporarily occupies".) The storage space an algorithm occupies in computer memory includes three aspects: the space occupied by the algorithm itself, the space occupied by the algorithm's input/output data, and the space the algorithm temporarily occupies during operation. The space occupied by the input and output data is determined by the problem to be solved and is passed to the called function through its parameter list; it does not change with the algorithm. The space occupied by the algorithm itself is proportional to the length of the algorithm as written; to compress this space, a shorter algorithm must be written. The space an algorithm temporarily occupies while running varies from algorithm to algorithm. Some algorithms need only a small number of temporary working units that does not change with the problem size; such an algorithm is called "in-place" and is memory-saving, like the algorithms described in this section. Other algorithms need a number of temporary working units that depends on the problem size n and grows as n grows; when n is large they occupy many storage units, as with the quicksort and mergesort algorithms in Chapter 9.
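The in-place distinction can be illustrated with array reversal. In this minimal sketch (class and method names are hypothetical), the first method uses one temporary variable regardless of n, so it is in-place with O(1) auxiliary space; the second allocates a whole new array, so its auxiliary space grows as O(n):

```java
public class SpaceDemo {
    // In-place reversal: only one temp variable, O(1) auxiliary space.
    static void reverseInPlace(int[] a) {
        for (int i = 0, j = a.length - 1; i < j; i++, j--) {
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    // Reversal via a copy: allocates a second array of length n,
    // so the auxiliary space is O(n).
    static int[] reverseWithCopy(int[] a) {
        int[] b = new int[a.length];
        for (int i = 0; i < a.length; i++)
            b[a.length - 1 - i] = a[i];
        return b;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        reverseInPlace(a);
        System.out.println(java.util.Arrays.toString(a)); // [4, 3, 2, 1]
    }
}
```

Both run in O(n) time; they differ only in the temporary working space, which is exactly the quantity space complexity measures.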

If an algorithm's space complexity is a constant, that is, it does not change with the size n of the data being processed, it is written O(1); when an algorithm's space complexity is proportional to the base-2 logarithm of n, it is written O(log₂n); when an algorithm's space complexity is linearly proportional to n, it is written O(n). If a parameter is an array, only enough space to store an address pointer passed from the argument needs to be allocated for it, i.e. one machine word; if a formal parameter is a reference, only enough space to store the address of the corresponding argument variable needs to be allocated, so that the system can automatically access the argument variable through it.
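In Java the same point holds: an array argument is passed as a single reference value, not copied element by element, so the call itself costs O(1) space no matter how large the array is. A small sketch (hypothetical class and method names):

```java
public class ParamSpace {
    // The parameter 'a' is one reference (one machine word), not a copy
    // of the array, so passing it costs O(1) space.
    static void setFirst(int[] a, int v) {
        a[0] = v; // mutates the caller's array through the shared reference
    }

    public static void main(String[] args) {
        int[] data = new int[1000]; // the 1000 elements live in one place only
        setFirst(data, 42);
        System.out.println(data[0]); // 42 — proves no copy was made
    }
}
```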

