I. Definition of algorithm time complexity
In algorithm analysis, the total number of statement executions, T(n), is a function of the problem size n. We analyze how T(n) varies with n and determine the order of magnitude of T(n). The time complexity of an algorithm, which is its time measure, is written T(n) = O(f(n)). It means that as the problem size n grows, the growth rate of the algorithm's execution time is the same as the growth rate of f(n). This is called the asymptotic time complexity of the algorithm, or time complexity for short, where f(n) is some function of the problem size n.
This notation uses a capital O() to describe the algorithm's time complexity, so we call it big O notation.
II. Deriving the Big O order
1. Replace all additive constants in the running time with the constant 1.
2. In the modified run-count function, keep only the highest-order term.
3. If the highest-order term exists and its coefficient is not 1, remove the coefficient. The result is the Big O order.
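As a quick illustration (the run count here is invented for the example, not taken from the code below): suppose an algorithm's run-count function is f(n) = 2n^2 + 3n + 4. Rule 1 replaces the additive constant 4 with 1; rule 2 keeps only the highest-order term 2n^2; rule 3 removes the coefficient 2, leaving n^2. So the time complexity is O(n^2).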
III. Examples of derivation
1. Constant order
Let's start with the sequential structure. The following algorithm uses Gauss's formula to compute the sum 1 + 2 + ... + n.
int sum = 0, n = 100;   /* executes once */
sum = (1 + n) * n / 2;  /* executes once */
printf("%d", sum);      /* executes once */
The run-count function of this algorithm is f(n) = 3. Following our Big O derivation method, the first step is to change the constant term 3 to 1. When we try to keep the highest-order term, we find there is no highest-order term at all, so the time complexity of this algorithm is O(1).
Furthermore, imagine the statement sum = (1 + n) * n / 2; were repeated 10 times: the run count would be 12 instead of 3, but the difference does not matter. The execution time is constant regardless of the problem size (how large n is), so we say the algorithm has O(1) time complexity, also known as constant order. For a branching structure, whether the condition is true or false, the number of executions is constant and does not change with n, so a simple branching structure (one not inside a loop) also has O(1) time complexity.
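For instance, a minimal sketch of such a branch:
if (n % 2 == 0)             /* the test executes once */
    printf("n is even\n");  /* exactly one statement runs either way */
else
    printf("n is odd\n");
/* A constant number of steps no matter how large n is: O(1). */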
2. Linear order
Loop structures make the analysis more involved. To determine the order of an algorithm, we often need to count how many times a particular statement or set of statements runs. The key to analyzing an algorithm's complexity, therefore, is to analyze how its loop structures run.
In the following code, the time complexity of the loop is O(n), because the code in the loop body needs to execute n times.
int i;
for (i = 0; i < n; i++) {
    /* program steps with time complexity O(1) */
}
3. Logarithmic order
The following code:
int count = 1;
while (count < n) {
    count = count * 2;
    /* program steps with time complexity O(1) */
}
Since count is multiplied by 2 on every pass, each iteration brings it closer to n; the loop exits once enough 2s have been multiplied together to reach or exceed n. From 2^x = n we get x = log2(n), so the time complexity of this loop is O(log n). (For example, with n = 16 the loop body runs for count = 1, 2, 4, 8, that is, 4 = log2(16) times.)
4. Square order
The following example is a nested loop; its inner loop is exactly the one we just analyzed, with time complexity O(n).
int i, j;
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        /* program steps with time complexity O(1) */
    }
}
The outer loop simply runs the inner statements, which have time complexity O(n), another n times. So the time complexity of this code is O(n^2).
If the number of iterations of the outer loop changes to m, the time complexity becomes O(m*n).
So we can conclude that the time complexity of a loop equals the complexity of the loop body multiplied by the number of times the loop runs, as the sketch below illustrates.
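A minimal sketch of the m-by-n case (the function name is illustrative):
void mn_loop(int m, int n)
{
    int i, j;
    for (i = 0; i < m; i++) {      /* outer loop runs m times */
        for (j = 0; j < n; j++) {  /* O(n) loop body */
            /* program steps with time complexity O(1) */
        }
    }
}                                  /* total: O(m*n) */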
So what is the time complexity of the following nested loop?
int i, j;
for (i = 0; i < n; i++) {
    for (j = i; j < n; j++) {  /* note j = i rather than j = 0 */
        /* program steps with time complexity O(1) */
    }
}
When i = 0, the inner loop executes n times; when i = 1, it executes n - 1 times; ...; when i = n - 1, it executes once. So the total number of executions is:
n + (n - 1) + (n - 2) + ... + 1 = n(n + 1)/2 = n^2/2 + n/2
Applying our Big O derivation method: first, there is no additive constant to consider; second, keep only the highest-order term, n^2/2; third, remove the coefficient of that term, namely the 1/2. The time complexity of this code is therefore O(n^2).
This example also teaches a lesson: understanding the Big O derivation itself is not hard; the hard part is summing the series of operation counts, which tests your mathematical knowledge and skill.
5. Cubic order
The following example is a triply nested loop.
int i, j, k;
for (i = 1; i <= n; i++)
    for (j = 1; j <= i; j++)      /* both inner bounds depend on i */
        for (k = 1; k <= i; k++) {
            /* program steps with time complexity O(1) */
        }
Here the loop body runs 1^2 + 2^2 + 3^2 + ... + n^2 = n(n + 1)(2n + 1)/6 times. By the Big O derivation method above, the time complexity is O(n^3).
IV. Common time complexities
The commonly encountered time complexities, ordered from smallest to largest, are:
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!)
We have already covered O(1) constant order, O(log n) logarithmic order, O(n) linear order, O(n^2) square order, and so on. For orders like O(n^3), even a moderately large n produces an unrealistic running time. The exponential order O(2^n) and the factorial order O(n!) are worse still: unless n is very small, even n = 100 gives a nightmarish running time. Algorithms with such impractical time complexities are generally not worth discussing.
V. Worst case and average case
Suppose we search for a number in an array of n random numbers. In the best case the first element is the one we want, and the time complexity of the algorithm is O(1); but the number might equally well sit in the last position, in which case the time complexity is O(n). This is the worst case.
The worst-case running time is a guarantee: the running time will never exceed it. In applications this is the most important requirement; usually, unless specified otherwise, the running time we refer to is the worst-case running time.
The average running time looks at the problem from a probability standpoint: the number is equally likely to be in any position, so on average the target element is found after n/2 comparisons. The average running time is the most meaningful of all cases, because it is the expected running time, which is what we hope to see when we actually run the code. In reality, however, the average running time is hard to obtain through analysis and is usually estimated by running the program on a certain amount of experimental data. In general, in the absence of special instructions, time complexity refers to the worst-case time complexity.
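A minimal sketch of the search just described (the function name is illustrative):
/* Sequential search for key in a[0..n-1]; returns its index, or -1.
   Best case: key is at index 0, one comparison, O(1).
   Worst case: key is last or absent, n comparisons, O(n).
   Average case: about n/2 comparisons if every position is equally likely. */
int sequential_search(const int a[], int n, int key)
{
    int i;
    for (i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}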
VI. Algorithm space complexity
When writing code, we can trade space for time. For example, to determine whether a year is a leap year, one approach is to spend some effort writing an algorithm; being an algorithm, it must compute the result every time it is given a year. Another approach is to create an array of 2050 elements in advance (a little more than the actual number of years) and store every year's answer at the corresponding index: 1 if the year is a leap year, 0 if it is not. Then judging whether a year is a leap year becomes the problem of looking up one element of this array. The computation is minimized, but we need to store those 2050 zeros and ones on disk or in memory. This is the trick of buying computation time with storage overhead. Which approach is better really depends on where you use it.
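A minimal sketch of the two approaches (the table size 2050 follows the text; function names are illustrative):
#include <stdio.h>

/* Compute on demand: every query repeats the arithmetic. */
int is_leap_calc(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

/* Trade space for time: precompute every answer once,
   then each query is a single array lookup. */
char leap_table[2050];  /* leap_table[y] is 1 iff year y is a leap year */

void build_leap_table(void)
{
    int y;
    for (y = 0; y < 2050; y++)
        leap_table[y] = (char)is_leap_calc(y);
}

int main(void)
{
    build_leap_table();
    printf("2024 -> %d\n", leap_table[2024]);  /* prints 1: leap year */
    printf("2023 -> %d\n", leap_table[2023]);  /* prints 0: not a leap year */
    return 0;
}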
The space complexity of an algorithm is obtained by calculating the storage space the algorithm requires. The formula is S(n) = O(f(n)), where n is the problem size and f(n) is a function, in terms of n, of the storage space the statements occupy.
In general, when a program runs on a machine, besides storing its own instructions, constants, variables, and input data, it also needs storage units for operating on the data. If the space occupied by the input data depends only on the problem itself and is independent of the algorithm, then we only need to analyze the auxiliary storage units the algorithm requires during execution. If the auxiliary space required by the algorithm is a constant relative to the amount of input data, the algorithm is said to work in place, and its space complexity is O(1).
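For example, a sketch of an in-place algorithm (the function name is illustrative): reversing an array needs only a constant number of auxiliary variables, so its space complexity is O(1).
/* Reverses a[0..n-1] in place. The auxiliary space is three variables
   (i, j, tmp) regardless of n, so the space complexity is O(1). */
void reverse_in_place(int a[], int n)
{
    int i, j, tmp;
    for (i = 0, j = n - 1; i < j; i++, j--) {
        tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}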
In general, we use "time complexity" for running-time requirements and "space complexity" for space requirements. When "complexity" is used without a qualifier, it usually means time complexity.
VII. Some calculation rules
1. Addition rule
T(n, m) = T1(n) + T2(m) = O(max{f(n), g(m)})
2. Multiplication rule
T(n, m) = T1(n) * T2(m) = O(f(n) * g(m))
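A small sketch of both rules in action (the function name is illustrative):
/* Two independent phases run one after the other: by the addition rule,
   O(n) + O(n^2) = O(max{n, n^2}) = O(n^2).
   The nested loop itself follows the multiplication rule:
   O(n) iterations times an O(n) body = O(n^2). */
long phases(int n)
{
    long count = 0;
    int i, j;
    for (i = 0; i < n; i++)       /* phase 1: O(n) */
        count++;
    for (i = 0; i < n; i++)       /* phase 2: O(n^2) */
        for (j = 0; j < n; j++)
            count++;
    return count;                 /* n + n*n statements executed */
}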
3. A rule of thumb
The relationship between complexity and time efficiency:
c (constant) < log n < n < n*log n < n^2 < n^3 < 2^n < 3^n < n!
|------------- good --------------|--- average ---|---- poor ----|
VIII. Time and space complexity of common algorithms