The time complexity and space complexity of algorithms [algorithm technology]


1. Time Complexity

Time complexity is the basic measure of an algorithm's efficiency. In many algorithm tutorials, however, the treatment of time complexity is stiff and hard to understand, which makes it difficult to actually use when evaluating algorithms in practice.

"Big talk Data Structure" a book at the beginning also for the algorithm time complexity is explained. The explanation here is very clear, concise and easy to understand. Following through the "Big talk data Structure" reading notes, the reason for the book by some simple examples and explanations to explain the algorithm's time complexity and its calculation method.

First, let's start from the basic definition and understand what "the time complexity of an algorithm" is. "Big Talk Data Structure" defines it as follows:

"The total number of times an algorithm's statements are executed, T(n), is a function of the problem size n. We analyze how T(n) changes with n and determine the order of magnitude of T(n). The time complexity of the algorithm, which is its time measure, is written T(n) = O(f(n)). It means that as the problem size n grows, the growth rate of the algorithm's execution time is the same as the growth rate of f(n). This is called the asymptotic time complexity of the algorithm, or simply its time complexity, where f(n) is some function of the problem size n."

The definition alone is hard to digest, so let's illustrate it with a simple example. Compute 1 + 2 + 3 + 4 + ... + 100 = ? Everyone has met this kind of problem. Here we implement the most straightforward algorithm for it in C.

int sum = 0, n = 100;              // executed 1 time
for (int i = 1; i <= n; i++) {     // executed n + 1 times
    sum += i;                      // executed n times
}
printf("sum = %d", sum);           // executed 1 time

From the comments attached to the code we can see how many times each line is executed. The sum of these execution counts can be taken as the time the algorithm needs to produce its result. So for the above algorithm that computes 1 + 2 + 3 + 4 + ... + 100 = ?, the total number of statement executions is:

1 + (n + 1) + n + 1 = 2n + 3
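As a quick sanity check (an addition of ours, not from the book), the loop can be instrumented with a counter to confirm the 2n + 3 total empirically; body is a helper variable introduced purely for illustration:

#include <stdio.h>

int main(void) {
    int sum = 0, n = 100;             // executed 1 time
    long body = 0;                    // helper counter, for illustration only

    for (int i = 1; i <= n; i++) {    // the loop test runs n + 1 times
        sum += i;                     // the loop body runs n times
        body++;
    }

    // body ran n times; with n + 1 loop tests and the 2 statements outside
    // the loop, the total is n + (n + 1) + 2 = 2n + 3 = 203 here.
    printf("sum = %d, body = %ld, total = %ld\n", sum, body, 2 * body + 3);
    return 0;
}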

Now suppose n grows: this time we are not computing 1 + 2 + 3 + 4 + ... + 100 = ? but 1 + 2 + 3 + 4 + ... + n = ?, where n is a very large number. The total number of executions (the time required) of the above algorithm increases as n increases, while the statements outside the for loop are unaffected by n (they always execute exactly once). So we can roughly record the total number of executions of the above algorithm as:

2n, or simply n

So we have the time complexity of our algorithm for computing 1 + 2 + 3 + 4 + ... + 100 = ?, and we write it as:

O(n)

For the same problem, the solution is usually not unique. Take 1 + 2 + 3 + 4 + ... + 100 = ? as an example: there are many other algorithms for it. Let's look at how the mathematician Gauss solved this problem (presumably everyone is familiar with the story).

sum = 1 + 2 + 3 + 4 + ... + 100
sum = 100 + 99 + 98 + 97 + ... + 1
sum + sum = 2 * sum = 101 + 101 + 101 + ... + 101    (exactly 100 copies of 101)
sum = (100 * 101) / 2 = 5050
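Gauss's trick works for any n, not just 100: pairing the series 1 + 2 + ... + n with its reverse gives n copies of (n + 1), so 1 + 2 + 3 + ... + n = n * (n + 1) / 2. This general formula is what the code below computes.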

Let's translate this solution into C code as well:

int n = 100, sum = 0;        // executed 1 time
sum = n * (n + 1) / 2;       // executed 1 time
printf("sum = %d", sum);     // executed 1 time

So, for the very same problem 1 + 2 + 3 + 4 + ... + 100 = ?, this different algorithm gives a time complexity of:

O(3), which is conventionally written O(1); the reason follows below.

Intuitively, it is not hard to see that in terms of efficiency O(3) < O(n), so the Gaussian algorithm is faster and better (whether it is the best is another matter).

Writing an algorithm's time complexity with a capital O like this is professionally known as "Big O notation". Summing up the examples above, we can now give the method for calculating an algorithm's time complexity (its Big O order).

Steps to derive the Big O order:

1. Replace all additive constants in the run-time count with the constant 1.

2. In the modified count function, keep only the highest-order term.

3. If the highest-order term exists and its coefficient is not 1, remove that coefficient.
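As a check, apply these steps to our first summation algorithm, whose count was 2n + 3: step 1 replaces the additive constant 3 with 1, giving 2n + 1; step 2 keeps only the highest-order term, 2n; step 3 removes the coefficient 2, leaving n. The Big O order is therefore O(n), exactly as we wrote earlier.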

In the following example we derive an algorithm's time complexity using the Big O method given above. Take a look at the code below, again written in C, with the execution counts explained in the comments.

int n = 100000;                            // executed 1 time
for (int i = 0; i < n; i++) {              // executed n + 1 times
    for (int j = 0; j < n; j++) {          // executed n * (n + 1) times
        printf("i = %d, j = %d\n", i, j);  // executed n * n times
    }
}
for (int i = 0; i < n; i++) {              // executed n + 1 times
    printf("i = %d", i);                   // executed n times
}
printf("Done");                            // executed 1 time

Strictly speaking, the code above hardly qualifies as an algorithm; after all, it is rather pointless (an algorithm exists to solve a problem). But regardless of what problem this "algorithm" solves, let's see how its Big O order is derived, starting with its total number of executions:

Total number of executions = 1 + (n + 1) + n(n + 1) + n*n + (n + 1) + 1 = 2n^2 + 3n + 4 (here n^2 denotes n squared)

Follow the derivation steps above. Step one, "replace all additive constants in the run-time count with the constant 1", and the calculation becomes:

Total number of executions = 2n^2 + 3n + 1

Step two, "in the modified count function, keep only the highest-order term". The highest-order term here is the n^2 term, so the calculation becomes:

Total number of executions = 2n^2

Step three, "if the highest-order term exists and its coefficient is not 1, remove that coefficient". The n^2 term has coefficient 2, not 1, so we remove it, and the calculation becomes:

Total number of executions = n^2

So finally we get the time complexity of the above code, written as: O(n^2).

At this point we have briefly described what "the time complexity of an algorithm" is and how to derive its Big O notation. Of course, deriving the Big O order of all kinds of algorithms quickly and accurately in real work still takes a great deal of practice; after all, practice makes perfect. To finish, here are the common time complexities ranked from most efficient to least, to give an intuitive feel for algorithmic efficiency:

O(1) constant order < O(log n) logarithmic order < O(n) linear order < O(n log n) < O(n^2) square order < O(n^3) < { O(2^n) < O(n!) < O(n^n) }

I enclose the last three in curly braces to make a point: if the Big O order you derive for an algorithm you are designing turns out to be one of those inside the braces, give up on that algorithm and look for a new one. The ones in the braces take an enormous amount of time; even when n is fairly small, their cost is ridiculously large, and they are basically unusable.
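To make that concrete (numbers of our own, easy to verify): for n = 20, n^2 is just 400, while 2^n = 1,048,576 and n! = 2,432,902,008,176,640,000, roughly 2.4 x 10^18. The gap only widens as n grows.

As for the logarithmic order near the efficient end of the ranking, it typically comes from loops whose counter is multiplied rather than incremented on each pass. A minimal sketch of our own (not from the book; n is chosen arbitrarily):

#include <stdio.h>

int main(void) {
    int n = 1000000;                    // problem size, arbitrary for the demo
    int steps = 0;
    for (int i = 1; i < n; i *= 2)      // i doubles each pass instead of i++
        steps++;
    printf("steps = %d\n", steps);      // floor(log2(n)) + 1 = 20 for this n
    return 0;
}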

2. Space Complexity

Space complexity is a measure of how much storage space an algorithm temporarily occupies while it runs, written S(n) = O(f(n)).

For example, the time complexity of direct insertion sort is O(n^2), while its space complexity is O(1). A recursive algorithm generally has O(n) space complexity, because each level of recursion must store its return information. The quality of an algorithm is mainly measured along two dimensions: its execution time and the storage space it requires.
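To ground the insertion-sort claim, here is a minimal sketch of direct insertion sort in C (our illustration; the text above only cites its complexities). However large n is, the extra storage is just the scalars key, i, and j, hence O(1) space, while the two nested passes over the data give O(n^2) time.

void insertion_sort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        int key = a[i];                  // element being inserted this round
        int j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];             // shift larger elements one slot right
            j--;
        }
        a[j + 1] = key;                  // drop the element into its place
    }
}

Calling insertion_sort(a, n) sorts a[0..n-1] in ascending order using only those few extra variables.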

Similar to the discussion of time complexity, the space complexity S(n) of an algorithm is defined as the storage space the algorithm consumes, and it too is a function of the problem size n.

Asymptotic space complexity is also often referred to simply as space complexity.

The storage space an algorithm occupies in computer memory consists of three parts: the space occupied by the algorithm itself, the space occupied by the algorithm's input and output data, and the space the algorithm temporarily occupies while running.

The space occupied by an algorithm's input and output data is determined by the problem being solved and is passed in from the calling function through the parameter list; it does not vary with the algorithm. The space occupied by the algorithm itself is proportional to how long the algorithm is as written; to compress this space, you must write a shorter algorithm.

The space an algorithm temporarily occupies while running varies from algorithm to algorithm. Some algorithms need only a few temporary working units, and the number does not change with the size of the problem; such an algorithm is called "in-place" and is the memory-saving kind, like the algorithms described in this section.
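A small illustration of our own (not from the book): reversing an array in place needs only two indices and one temporary, regardless of n, so its extra space is O(1).

void reverse_in_place(int a[], int n) {
    int lo = 0, hi = n - 1;
    while (lo < hi) {
        int tmp = a[lo];   // the only temporary storage, independent of n
        a[lo] = a[hi];
        a[hi] = tmp;
        lo++;
        hi--;
    }
}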

Other algorithms need a number of temporary working units that depends on the problem size n: it increases as n increases, and when n is large they occupy many storage units. The quicksort and merge sort algorithms described in Chapter 9 are of this kind.

Analyzing the storage space an algorithm occupies requires weighing all of this together. A recursive algorithm is generally short and occupies little storage itself, but at run time it needs an extra stack and thus occupies many temporary working units. Written non-recursively, the algorithm may well be longer and occupy more storage itself, but at run time it will probably need fewer storage units.

The space complexity of an algorithm considers only the storage allocated for local variables during the run, which consists of two parts: the space allocated for the formal parameters in the parameter list and the space allocated for the local variables defined in the function body.

If an algorithm is recursive, its space complexity is the size of the stack space the recursion uses, which equals the temporary space allocated for a single call multiplied by the number of calls (that is, the number of recursive calls plus 1, where the 1 stands for the initial non-recursive call).
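For instance (a sketch we add here), a recursive sum of 1..n allocates one stack frame per call; with n recursive calls plus the initial one, the stack grows to n + 1 frames, so the space complexity is O(n), whereas the loop version at the start of this article stays at O(1).

int recursive_sum(int n) {
    if (n == 0)
        return 0;                     // base case: recursion stops here
    return n + recursive_sum(n - 1);  // one extra stack frame per level
}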

The space complexity of an algorithm is usually given as an order of magnitude. If an algorithm's space complexity is a constant, i.e. it does not change with the size n of the data being processed, it is written O(1); when it is proportional to the base-2 logarithm of n, it is written O(log2 n); when it is linearly proportional to n, it is written O(n).

If a parameter is an array, the function only needs to allocate space for one address pointer passed in from the argument, i.e. one machine word; if a parameter is a reference, it likewise only needs space for one address, which stores the address of the corresponding argument variable so that the system can automatically access that variable through it.
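The array case is easy to see in C (a small demonstration of our own): inside the function, the array parameter has decayed to a pointer, so it occupies one machine word no matter how long the array is. Compilers may warn about the sizeof below, which is exactly the point.

#include <stdio.h>

void probe(int a[]) {
    // a has decayed to int*: sizeof reports one pointer, not the whole array
    printf("inside:  %zu bytes\n", sizeof(a));   // e.g. 8 on a 64-bit system
}

int main(void) {
    int big[1000];
    printf("outside: %zu bytes\n", sizeof(big)); // 4000 bytes if int is 4 bytes
    probe(big);
    return 0;
}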

3. Comparison of Time and Space Complexity

For a given algorithm, its time complexity and space complexity often influence each other.

When you pursue a better time complexity, the space complexity may suffer, i.e. the algorithm may occupy more storage space; conversely, when you pursue a better space complexity, the time complexity may suffer, i.e. the algorithm may take longer to run.

Moreover, all the performance aspects of an algorithm influence one another to some degree. Therefore, when designing an algorithm (especially a large one), you should weigh its overall performance, how frequently it will be used, the size of the data it processes, the characteristics of the language used to describe it, and the machine environment it runs in, among other factors, to arrive at a good algorithm. Together, the time complexity and space complexity of an algorithm are called the complexity of the algorithm.
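A classic illustration of this trade-off (our example, not the book's) is computing Fibonacci numbers: the naive recursion needs little storage beyond the call stack but takes O(2^n) time, while caching computed values in a table spends O(n) space to bring the time down to O(n).

// Naive: O(2^n) time, O(n) stack space.
long fib_naive(int n) {
    return n < 2 ? n : fib_naive(n - 1) + fib_naive(n - 2);
}

// Memoized: O(n) time, O(n) table space -- space traded for time.
// memo[] must be pre-filled with -1; memo[i] caches fib(i) once computed.
long fib_memo(int n, long memo[]) {
    if (memo[n] != -1)
        return memo[n];               // reuse the cached value
    memo[n] = n < 2 ? n : fib_memo(n - 1, memo) + fib_memo(n - 2, memo);
    return memo[n];
}

Usage sketch: declare long memo[91], set every entry to -1, then fib_memo(90, memo) returns the 90th Fibonacci number in about 90 calls instead of an astronomical number of them (assuming a 64-bit long).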
