What is the difference between common time complexities such as O(N), O(log N), O(N log N), and O(N^2)? The following figure gives an intuitive picture:
As the figure shows, the differences between the common time complexities are enormous: going from O(N log N) to O(N), or from O(N) to O(log N), is a huge leap in performance.
From another perspective, programs whose time complexity grows faster than O(N^2) or O(N^3) are practically unusable. According to Wikipedia, the most powerful CPU can execute roughly 42.8 billion instructions per second (about 4.28*10^10). For an O(2^N) program with N = 100, there are 2^100 ≈ 1.26765*10^30 instructions to execute, which would keep that CPU busy for about a trillion years (10^12 years).
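As a quick sanity check on that estimate, here is a minimal sketch of the arithmetic, assuming the roughly 4.28*10^10 instructions-per-second rate quoted above:

```python
# Rough estimate: how long would 2**100 instructions take on a CPU that
# executes about 4.28e10 instructions per second (the rate quoted above)?
instructions = 2 ** 100            # ~1.27e30 instructions for N = 100
per_second = 4.28e10               # assumed instruction rate
seconds = instructions / per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.2e} years")        # on the order of 1e12 (a trillion) years
```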
An algorithm's running time is usually proportional to one of the following functions:
1 (constant)
Most instructions in most programs are executed once or at most a few times. If all the instructions of a program have this property, we say the program runs in constant time.
log log N (double logarithmic)
It can be regarded as essentially constant: even if N is huge, taking the logarithm twice makes the value very small.
log N (logarithmic)
If a program's running time is logarithmic, the program gets only slightly slower as N grows. This running time typically appears in programs that break a big problem into a series of smaller ones, cutting the problem size by some constant fraction at each step. Within the range we care about, the running time can be regarded as less than some large constant. The base of the logarithm affects this constant, but not by much: when N = 1,000, log N is 3 if the base is 10 and about 10 if the base is 2; when N = 1,000,000, log N is only twice those values. Whenever N doubles, log N grows only by a constant; log N itself doubles only when N grows to N^2.
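Binary search is a classic example of this kind of halving; here is a minimal sketch in Python (the function name and sample list are only illustrative):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the search range, so the loop runs O(log N) times.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```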
N (linear)
If a program runs in linear time, it typically does a small amount of processing on each input element. When N = 1,000,000, the running time is on the same order of magnitude. When N doubles, the running time roughly doubles. For an algorithm that must examine all N inputs (or produce N outputs), this is the best we can hope for.
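For instance, finding the largest element requires touching every item exactly once; a minimal sketch (the names are illustrative):

```python
def find_max(values):
    """Scan the list once, doing constant work per element: O(N) overall."""
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

print(find_max([4, 17, 2, 9]))  # 17
```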
N log N (linearithmic)
The running time N log N typically arises in algorithms that break a problem into smaller subproblems, solve each subproblem independently, and then combine the solutions. Lacking a better description, we simply say that such an algorithm runs in "N log N" time. When N = 1,000,000, N log N is about 20,000,000. When N doubles, the running time slightly more than doubles.
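Merge sort is the textbook instance of this divide-and-conquer pattern; below is a minimal sketch (the function name and test list are only illustrative):

```python
def merge_sort(items):
    """Split, sort each half recursively, then merge: O(N log N) overall."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```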
N^2 (quadratic)
If an algorithm runs in quadratic time, it is practical only for relatively small problems. Quadratic running time typically appears in algorithms that process all pairs of input data items (often as a double nested loop in the program). When N = 1,000, the running time is about 1,000,000; when N doubles, the running time quadruples.
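A typical shape is a double loop over all pairs; here is a minimal sketch (counting pairs with a given sum is just an illustrative task):

```python
def count_pairs_with_sum(values, target):
    """Examine every pair (i, j) with i < j: N*(N-1)/2 pairs, i.e. O(N^2)."""
    count = 0
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] + values[j] == target:
                count += 1
    return count

print(count_pairs_with_sum([1, 2, 3, 4, 5], 6))  # 2 -> (1, 5) and (2, 4)
```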
N^3 (cubic)
Similarly, an algorithm that processes all triples of input data items (often as a triple nested loop) usually runs in cubic time and is practical only for small problems. When N = 100, the running time is about 1,000,000; when N doubles, the running time increases eightfold.
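A classic example is the brute-force 3-sum problem, which checks every triple; a minimal sketch (the function name and sample data are only illustrative):

```python
def count_triples_summing_to_zero(values):
    """Examine every triple (i, j, k) with i < j < k: O(N^3) work."""
    n = len(values)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if values[i] + values[j] + values[k] == 0:
                    count += 1
    return count

print(count_triples_summing_to_zero([30, -40, -20, -10, 40, 0, 10, 5]))  # 4
```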
2^N (exponential)
An algorithm whose running time is exponential is rarely usable in practice, even though such an algorithm is often the most direct solution to the problem. When N = 20, the running time is about 1,000,000; when N doubles, the running time is squared!
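Enumerating all subsets of a set is a typical source of exponential running time; a minimal sketch (the function name is illustrative):

```python
def all_subsets(items):
    """Generate every subset of items: 2^N subsets, so O(2^N) time and space."""
    subsets = [[]]
    for x in items:
        # Each element doubles the number of subsets built so far.
        subsets += [s + [x] for s in subsets]
    return subsets

print(len(all_subsets([1, 2, 3])))  # 8 = 2**3
```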