I. Why learn algorithms?
First, a simple comparison between two algorithms for the same problem: compute sum = 1 + 2 + 3 + ... + (n-1) + n. Input: an integer n; output: the sum.
Solution one: a for loop

function sum(n) {
    var s = 0;                      // executes 1 time
    for (var i = 1; i <= n; i++) {  // initialization executes 1 time
        s += i;                     // executes n times
    }
    return s;                       // executes 1 time
}
Solution two: a closed-form formula

function sum(n) {
    return n * (n + 1) / 2;  // executes 1 time
}
Clearly solution two is superior to solution one, because it needs far fewer operations. We judge the quality of an algorithm primarily by its time complexity and space complexity, and secondarily by readability and maintainability. So let's look at how to calculate time complexity and space complexity.
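To check that the two approaches agree, here is a minimal side-by-side sketch. Note the closed form for 1 + 2 + ... + n is n(n+1)/2; the function names are chosen here just to keep the two versions distinct.

```javascript
// Solution one: loop, O(n) time
function sumLoop(n) {
  var s = 0;
  for (var i = 1; i <= n; i++) {
    s += i;
  }
  return s;
}

// Solution two: closed-form formula, O(1) time
function sumFormula(n) {
  return n * (n + 1) / 2;
}

console.log(sumLoop(100));    // 5050
console.log(sumFormula(100)); // 5050
```

Both return the same value for any n, but the loop's work grows with n while the formula's does not.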
II. Calculating time complexity
Deriving the big-O order from the run count:
1. Replace all additive constants in the run count with the constant 1 (so a constant order counts as O(1)).
2. In the modified run-count function, keep only the highest-order term.
3. If the highest-order term exists and its coefficient is not 1, remove that coefficient.
Run count of solution one: f1(n) = 1 + 1 + n + 1 = n + 3. Its time complexity is written T(n) = O(f1(n)): dropping the constant from n + 3 leaves n, so T(n) = O(n), called "linear order".
Run count of solution two: f2(n) = 1. Its time complexity is T(n) = O(1), called "constant order".
The conclusion: as n grows, solution one's run count grows with it, while solution two always runs the same single step.
Logarithmic order:

function count(n) {
    var c = 1;       // executes 1 time
    while (c < n) {
        c = c * 2;   // executes log2(n) times
    }
    return c;
}

In other words: the run count is the number of times 2 must be doubled before the result reaches n. The time complexity is written O(log n), called the logarithmic order.
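A quick way to see the log2(n) behavior is to count the doublings explicitly (the `doublings` name and `steps` counter are added here for illustration):

```javascript
// Count how many doublings it takes for c to reach n
function doublings(n) {
  var c = 1;
  var steps = 0;
  while (c < n) {
    c = c * 2;
    steps++;
  }
  return steps;
}

console.log(doublings(8));    // 3  (2^3 = 8)
console.log(doublings(1024)); // 10 (2^10 = 1024)
```

Doubling n only adds one more iteration, which is why logarithmic algorithms scale so well.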
Square order:

function num(n) {
    var count = 0;
    for (var i = 0; i < n; i++) {      // outer loop executes n times
        for (var j = i; j < n; j++) {  // inner loop executes n - i times
            count++;
        }
    }
    return count;
}

The total run count of the code above is n + (n-1) + (n-2) + ... + 1 = n²/2 + n/2. Applying the big-O derivation rules, drop the lower-order term n/2 and the coefficient 1/2, giving a time complexity of O(n²).
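The count formula can be checked directly: for each n, the nested loops above should produce exactly n²/2 + n/2 inner-loop executions.

```javascript
// Nested loops: the inner body runs n + (n-1) + ... + 1 times
function num(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    for (var j = i; j < n; j++) {
      count++;
    }
  }
  return count;
}

console.log(num(10)); // 55, which equals 10*10/2 + 10/2
```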
Summary:
There are many orders of complexity; the ones discussed above are the common ones. Ranked from least to most time-consuming, the common orders are:
O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!) < O(nⁿ)
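Evaluating each order at a single size (say n = 10) gives a feel for the gaps, though the ranking is really about growth as n increases, not any one value. The `growth` helper below is added here purely for illustration.

```javascript
// Evaluate each common complexity order at a given n
function growth(n) {
  return {
    constant:     1,                  // O(1)
    logarithmic:  Math.log2(n),       // O(log n)
    linear:       n,                  // O(n)
    linearithmic: n * Math.log2(n),   // O(n log n)
    quadratic:    n * n,              // O(n^2)
    cubic:        n * n * n,          // O(n^3)
    exponential:  Math.pow(2, n)      // O(2^n)
  };
}

console.log(growth(10));
// quadratic: 100, cubic: 1000, exponential: 1024, ...
```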
Extension: worst-case time complexity
Example: given an array arr of n random numbers, find a specified number in arr. That number might appear in the first position, in which case the lookup is O(1); it might appear in the last position, in which case it is O(n). Probabilistically, the average lookup takes about n/2 comparisons.
Worst-case time complexity can be understood literally: it is the longest the algorithm can possibly take; the running time will never exceed it and the situation will never be worse. Unless stated otherwise, the time complexity we calculate is the worst-case time complexity.
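The lookup described above is a plain linear search; a minimal sketch (the `linearSearch` name is chosen here for illustration):

```javascript
// Linear search: best case O(1) (target first), worst case O(n) (target last or absent)
function linearSearch(arr, target) {
  for (var i = 0; i < arr.length; i++) {
    if (arr[i] === target) {
      return i; // found: return the index of the first match
    }
  }
  return -1; // not found: the worst case, n comparisons
}

console.log(linearSearch([4, 8, 15, 16], 8));  // 1
console.log(linearSearch([4, 8, 15, 16], 99)); // -1
```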
III. Space complexity of algorithms
The space complexity of an algorithm does not measure the total space it actually occupies, but the amount of auxiliary (extra) space the algorithm needs as a function of the problem size n. It is written S(n) = O(f(n)), the order of magnitude of the space consumed. If the auxiliary space the algorithm needs is constant relative to the input size n, its space complexity is O(1). In practice, we usually use time complexity as the primary measure when optimizing an algorithm.
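A contrast between O(1) and O(n) auxiliary space, using array reversal as an illustrative case (this example is not from the original text):

```javascript
// O(1) auxiliary space: reverse in place, only one temp variable regardless of n
function reverseInPlace(arr) {
  for (var i = 0, j = arr.length - 1; i < j; i++, j--) {
    var tmp = arr[i];
    arr[i] = arr[j];
    arr[j] = tmp;
  }
  return arr;
}

// O(n) auxiliary space: build a new array of the same length as the input
function reverseCopy(arr) {
  var result = [];
  for (var i = arr.length - 1; i >= 0; i--) {
    result.push(arr[i]);
  }
  return result;
}

console.log(reverseInPlace([1, 2, 3])); // [3, 2, 1]
console.log(reverseCopy([1, 2, 3]));    // [3, 2, 1]
```

Both are O(n) in time; they differ only in how much extra memory they allocate.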