Step-by-Step learning of time complexity and space complexity.


 

Before we begin

We all know that the same problem can usually be solved by several different algorithms. Although the algorithm is not unique, the problem itself is. So some people may ask: what is the standard for telling a good algorithm from a bad one? The answer lies in two aspects: time efficiency and storage.

 

People are always greedy. Whenever we do something, we hope to get the maximum return for the least time, energy, or money. The same applies to algorithms: the best solution is the one that takes the least time and the least storage, so a good algorithm should have high time efficiency and low storage requirements. "Time efficiency" here refers to the algorithm's execution time: among different algorithms that solve the same problem, the shorter the execution time, the higher the efficiency, and the longer the execution time, the lower the efficiency. "Storage" refers to the storage space required to execute the algorithm, mainly the memory the program occupies while it runs.

 

Time Complexity

Let's first talk about time efficiency, which here means the execution time of an algorithm. Fast and slow are relative concepts, so when it comes to algorithms, what measure should we use for the time efficiency (execution time) of an algorithm?

 

At first, we might come up with a post-hoc statistical method, which I call the "after-the-fact" approach. As the name suggests, for a given problem we write out each candidate algorithm in advance, collect a pile of test data, run the programs on a computer, and then compare their running times, judging each algorithm's time efficiency from the results. This kind of judgment does measure the time we care about in daily life, but it is not a useful metric for us, because the result also depends on the machine it runs on, the programming language used, the compiler, and so on. What we need instead is a metric that does not depend on the machine or the programming language. Such a metric helps us judge the merits of the algorithm itself and can be used to compare different concrete implementations of an algorithm.

 

Earlier generations of scientists found that when we try to describe an algorithm with an execution-time measure that is independent of any specific program or computer, what matters is determining the number of steps the algorithm requires. If we treat each step in an algorithm as a basic unit of measurement, the execution time of the algorithm can be regarded as the total number of steps needed to solve the problem. However, because algorithms execute in different ways, choosing that basic unit of measurement is a tricky question.

 

Let's look at a simple summation function:

 

def get_sum(n):
    sum = 0
    for i in range(1, n + 1):
        sum += i
    return sum

print(get_sum(10))

 

Looking carefully at the code above, we find that the number of times the assignment statement that accumulates the sum is executed is a good basic counting unit. In the get_sum function above, the number of assignment statements is 1 (sum = 0) plus n (the number of times sum += i is executed).

 

We generally use a function called T to represent the total number of assignment statements. For example, the case above can be written as T(n) = n + 1. Here n usually refers to the "size of the data", so the equation can be read as: "solving a problem of size n takes n + 1 steps, and the time required is T(n)".
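
To make this concrete, here is a small sketch (my own addition, not from the original article) that counts the assignment statements executed by get_sum and confirms that T(n) = n + 1:

def get_sum_counted(n):
    steps = 0       # counts the assignment statements described in the article
    total = 0
    steps += 1      # total = 0 is one assignment
    for i in range(1, n + 1):
        total += i
        steps += 1  # each total += i is one assignment
    return total, steps

for n in (10, 100, 1000):
    _, steps = get_sum_counted(n)
    print(n, steps)  # prints 10 11, 100 101, 1000 1001, i.e. T(n) = n + 1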

 

n can be 10, 100, 1,000, or an even larger number. We all know that solving a large-scale problem takes more time than solving a small-scale one, so our next goal is clear: figure out "how the running time of the program changes as the problem size grows".

 

Our scientific predecessors thought about this analysis more deeply and discovered that the influence of a limited number of operations on T(n) is not as important as the operations that dominate. In other words, "as the data size grows, some part of the T(n) function masks the influence of the other parts". In the end it is this dominant part that is used to compare functions, and this is where the familiar big O notation makes its debut.

 

Big O notation

The "order of magnitude" function is used to describe the fastest growing part of the T (n) function when the scale N increases. This order of magnitude function is usually represented by "large O, record as O (f (n )). It provides an approximate value of the actual number of steps in the calculation process. function f (n) is a simplified representation of the dominant part of the original function T (n.

 

In the summation example above, T(n) = n + 1. As n grows, the constant 1 becomes negligible for the final result. If we want an approximation of T(n), all we need to do is drop the 1 and simply say that the running time is O(n). To be clear, this is not saying that the 1 is unimportant, but that when n grows very large, the approximation obtained by dropping it is still very accurate.

 

As another example, suppose an algorithm has T(n) = 2n^2 + 2n + 1000. When n is 10 or 20, the constant 1000 seems to play the decisive role in T(n). But what if n is 1,000, 10,000, or larger? Then n^2 plays the major role. In fact, when n is very large, the last two terms become irrelevant to the final result. As in the summation example, when n gets big enough we can ignore the other terms and focus on 2n^2 as the approximation of T(n). Likewise, the factor 2 matters less and less as n grows, so it too can be ignored. At this point we say that the order of magnitude of T(n) is f(n) = n^2, that is, O(n^2).
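
As a rough numerical illustration (my own addition, using the T(n) = 2n^2 + 2n + 1000 from above), we can watch the dominant term take over as n grows:

def T(n):
    return 2 * n ** 2 + 2 * n + 1000

for n in (10, 100, 1000, 10000):
    ratio = (2 * n ** 2) / T(n)
    print(n, T(n), round(ratio, 4))
    # the ratio climbs from about 0.16 at n = 10 to about 0.9999 at
    # n = 10000, so 2n^2 alone becomes an ever better approximation of T(n)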

 

Best, worst, and average

Although the previous two examples do not show it, we should note that sometimes the running time of an algorithm depends on the "specific data", not just the "size of the problem". For such algorithms we divide their execution into the "best case", the "worst case", and the "average case".

 

A particular dataset may make the algorithm run extremely well; that is the "best case". Another dataset may make the same algorithm run extremely poorly; that is the "worst case". In most cases, however, the execution of the algorithm falls somewhere between these two extremes; that is the "average case". You must understand the differences between these situations and not be led astray by the extremes.

 

The "best case" has little value, because it provides no useful information; it only reflects the most optimistic, ideal situation and has no reference value. The "average case" is a comprehensive evaluation of an algorithm because it fully reflects the algorithm's nature, but on the other hand this measure comes with no guarantee: not every run will finish within it. The "worst case", by contrast, provides a guarantee that the running time will never be exceeded. Generally, the time complexity we calculate is the worst-case time complexity, for the same reason that we usually plan for the worst.
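
Linear search is a classic illustration of these three cases (my own example, not from the original article): the target may sit at the very front, be missing entirely, or lie somewhere in the middle on average.

def linear_search(items, target):
    # Best case: target is items[0], one comparison, O(1)
    # Worst case: target is last or absent, n comparisons, O(n)
    # Average case: about n / 2 comparisons, which is still O(n)
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(linear_search(data, 3))   # best case: found at index 0
print(linear_search(data, 7))   # worst case: scans all 8 elements, returns -1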

 

In our later study of algorithms we will encounter all kinds of order-of-magnitude functions. The most common ones are: constant O(1), logarithmic O(log n), linear O(n), log-linear O(n log n), quadratic O(n^2), cubic O(n^3), and exponential O(2^n).

To determine which of these functions dominates T(n), we need to compare them as n increases (the original article shows a growth-rate chart from Google Images here).

 

 

From such a comparison we can see that when n is small the functions are hard to tell apart and it is hard to say which one dominates, but as n grows the differences become obvious and it is clear who the boss is:

 

O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n)

 

Let's analyze a few of the order-of-magnitude functions mentioned above:

 

1. Constant Functions

 

n = 100                 # executed once
sum = (1 + n) * n / 2   # executed once
print(sum)              # executed once

 

For the program above, f(n) = 3. Some readers may conclude that the time complexity is O(f(n)) = O(3). In fact this is wrong; the time complexity of this code is O(1). This result can be hard to grasp for beginners. To see why, copy the line sum = (1 + n) * n / 2 several times and look again:

 

n = 100                 # executed once
sum = (1 + n) * n / 2   # executed once
sum = (1 + n) * n / 2   # executed once
sum = (1 + n) * n / 2   # executed once
sum = (1 + n) * n / 2   # executed once
sum = (1 + n) * n / 2   # executed once
sum = (1 + n) * n / 2   # executed once
print(sum)              # executed once

 

For this version, f(n) = 8. In fact, no matter how large n is, the only difference between the two pieces of code above is running three statements versus running eight; neither depends on the data size n at all. An algorithm whose execution time is a constant is said to have O(1) time complexity. Regardless of what the constant is, we write it as O(1), not O(3) or O(8).

 

2. Logarithmic Functions

 

cnt = 1
while cnt < n:
    cnt *= 2  # O(1)

 

The time complexity of the program above is O(log n). How is that calculated? It is actually quite simple: the code keeps multiplying cnt by 2 until it is greater than or equal to n. Suppose that takes x multiplications; then 2^x = n, that is, x = log2(n), so the time complexity of this loop is O(log n).
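
A classic algorithm with this O(log n) behaviour is binary search: every comparison halves the remaining search range, so a sorted list of n elements needs roughly log2(n) comparisons. A minimal sketch (my own example, not from the original article):

def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1    # discard the lower half
        else:
            high = mid - 1   # discard the upper half
    return -1

# Each iteration halves the search range, so the loop runs O(log n) times.
print(binary_search([1, 3, 5, 7, 9, 11], 7))  # -> 3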

 

Finally, let's take a look at the example below. With this code, we will detail how we can analyze the time complexity in detail:

 

a = 1
b = 2
c = 3
for i in range(n):
    for j in range(n):
        x = i * i
        y = j * j
        z = i * j
for k in range(n):
    u = a * k + b
    v = c * c
d = 4

 

The code above is meaningless and not even runnable as written (n is undefined); I only wrote it to show how you can analyze code in the future, so don't worry about what it does.

 

The code above can be divided into four parts. Part 1 is the three assignment statements for a, b, and c, executed 3 times. Part 2 contributes 3n^2, because it is a doubly nested loop containing the three assignment statements for x, y, and z, each executed n^2 times. Part 3 contributes 2n, because that loop contains two assignment statements, each executed n times. The last part is the constant 1, just the single assignment to d. So T(n) = 3 + 3n^2 + 2n + 1 = 3n^2 + 2n + 4. Looking at the exponents, we naturally find that the n^2 term dominates: as n grows, the other two terms can be ignored, so the order of magnitude of this code snippet is O(n^2).

 

Space complexity

By analogy with the discussion of time complexity, the space complexity of an algorithm refers to the storage space the algorithm consumes. The formula is S(n) = O(f(n)), where n is again the data size and f(n) is a function describing the storage space occupied as n varies.

 

In general, when our program runs on a machine, besides storing the program itself and its input data, it also needs "storage units" for the data it manipulates. If the space occupied by the input data depends only on the problem itself and is independent of the algorithm, then we only need to analyze the "auxiliary units" the algorithm occupies during its execution. If the auxiliary space needed is a constant, the space complexity is O(1).
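
As a small illustration (my own sketch, not from the original article), compare a function that needs only a constant amount of auxiliary space with one whose auxiliary space grows with the input:

def total(values):
    # O(1) auxiliary space: only the single accumulator `s` is created
    s = 0
    for v in values:
        s += v
    return s

def prefix_sums(values):
    # O(n) auxiliary space: the result list grows with the input size
    result = []
    running = 0
    for v in values:
        running += v
        result.append(running)
    return result

print(total([1, 2, 3, 4]))        # -> 10
print(prefix_sums([1, 2, 3, 4]))  # -> [1, 3, 6, 10]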

 

In fact, that is about all there is to the concept of space complexity. Because today's hardware offers comparatively large amounts of storage, we rarely contort an algorithm just to shave off a little space; far more effort goes into optimizing time complexity. That is why "trading space for time" has become the norm in everyday coding. For example, when computing the Fibonacci sequence we can simply recurse with the formula, or we can compute and save many results first and then look them up directly when needed; the latter is a typical case of trading space for time. As for which of the two is better, the great Marx tells us to "analyze specific problems specifically".
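
The Fibonacci example the author mentions might look roughly like the sketch below (my own code, with functools.lru_cache standing in for the "compute and save results first" idea): the plain recursive version recomputes the same subproblems again and again, while the cached version spends O(n) extra space to bring the time down to O(n).

from functools import lru_cache

def fib_recursive(n):
    # plain recursion: recomputes subproblems, roughly O(2^n) time
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    # space-for-time: results are memoized, O(n) time and O(n) extra space
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

print(fib_recursive(20))  # -> 6765, but slows down sharply for larger n
print(fib_cached(20))     # -> 6765, each value is computed only once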

 

Wrapping up

If you read the article above carefully, you will notice that I did not start by telling you directly how to compute time complexity. Instead I went step by step: from how the problem arises, to the "after-the-fact" timing approach, then to T(n), and finally to O(n). There are two reasons for this. First, so that you know where big O comes from; understanding the origin is often a great help to your later study. Second, to make the article less boring. Throwing a pile of conceptual terms at people tends to scare them off at first sight; guiding them gradually makes the ideas easier to accept.

 

Many people go from college to work, write a lot of code, and still cannot estimate time complexity. It is not that they cannot learn it; they just never really pay attention to it. You may think that computers are updated so quickly and CPUs keep getting faster, so you don't need to worry about such small things. In fact, that is too naive. A simple example: suppose there are two computers and yours is 100 times faster than mine. For the same problem I carefully work out an O(n) solution while you lazily use the brute-force O(n^2) one. Once n grows even a little, say past 100,000, my machine performs on the order of 10^5 basic steps while yours performs on the order of 10^10; even with the 100x faster machine, your program is still roughly a thousand times slower. Who is faster now?

 

So when you write an algorithm in the future, learn to estimate your code with time complexity, and then ask whether there is a more efficient way to improve it. As long as you do, I believe your code will get better and better, and your head will get balder and balder. (runs away)

 

One last point: don't expect to fully understand algorithm complexity after skimming a single article. It takes deliberate practice. For example, when you see a program, consciously estimate its complexity; when you are about to write code, also think about whether there is a better way to optimize it. With conscious practice it will gradually become second nature. This article used a few small examples and a rough estimation method; later I will continue to write articles on data structures and algorithms, along with some concrete practice problems, and analyze their time complexity with you.

 

You are welcome to follow the public account "Python space", an account devoted to original technical content focused on Python programming. It pushes articles on Python basics and advanced topics, data analysis, crawler practice, and data structures and algorithms every day, and occasionally shares resources. I look forward to hearing from you.

 

 
