Comparison of various sorting algorithms
Various common sorting algorithms:

| Category | Method | Time (best) | Time (average) | Time (worst) | Space (auxiliary) | Stability | Complexity |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Insertion sort | Direct insertion | O(n) | O(n²) | O(n²) | O(1) | Stable | Simple |
| Insertion sort | Shell sort | O(n) | O(n^1.3) | O(n²) | O(1) | Unstable | Complex |
| Selection sort | Direct selection | O(n²) | O(n²) | O(n²) | O(1) | Unstable | Simple |
| Selection sort | Heap sort | O(nlog₂n) | O(nlog₂n) | O(nlog₂n) | O(1) | Unstable | Complex |
| Exchange sort | Bubble sort | O(n) | O(n²) | O(n²) | O(1) | Stable | Simple |
| Exchange sort | Quick sort | O(nlog₂n) | O(nlog₂n) | O(n²) | O(log₂n)~O(n) | Unstable | Complex |
| Merge sort | Merge sort | O(nlog₂n) | O(nlog₂n) | O(nlog₂n) | O(n) | Stable | Complex |
| Radix sort | Radix sort | O(d(r+n)) | O(d(r+n)) | O(d(r+n)) | O(rd+n) | Stable | Complex |
Bubble sort:
1. Bubble sort trades time for space; it performs well when n is small.
2. The worst case is input in reverse order (sorting an ascending sequence into descending order, or the reverse); the worst-case figure O(n²) only indicates the order of growth of the number of operations.
3. The best case is data that is already in order, where the complexity drops to O(n). A sketch showing how the best case is achieved follows below.
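The O(n) best case relies on noticing that a full pass made no swaps. A minimal sketch of that early-exit flag (an illustrative implementation, not code from the original article):

```python
def bubble_sort(a):
    """Sort the list a in place; stable, O(1) auxiliary space."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):       # bubble the largest item rightward
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                  # already ordered: one O(n) pass, done
            break

data = [5, 1, 4, 2, 8]
bubble_sort(data)
print(data)  # [1, 2, 4, 5, 8]
```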
Quick sort:
1. It performs well when n is large; quick sort uses comparatively more memory, and the memory grows as n grows, but it is an efficient though unstable sorting algorithm.
2. In the extreme case where every partition leaves one element on one side and n−1 on the other, the time complexity degrades to O(n²).
3. The best case is a pivot that splits the sequence evenly every time, giving O(nlog₂n). A sketch follows below.
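A minimal quick sort sketch (illustrative, not code from the original article), using the first element as the pivot; that choice is exactly what produces the one-versus-(n−1) split, and hence O(n²), on input that is already sorted:

```python
def quick_sort(a, lo=0, hi=None):
    """In-place quick sort using the 'pit-filling' partition scheme."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[lo]                        # first element as pivot
    i, j = lo, hi
    while i < j:
        while i < j and a[j] >= pivot:   # scan from the right
            j -= 1
        a[i] = a[j]
        while i < j and a[i] <= pivot:   # scan from the left
            i += 1
        a[j] = a[i]
    a[i] = pivot                         # pivot lands in its final slot
    quick_sort(a, lo, i - 1)             # sort both partitions
    quick_sort(a, i + 1, hi)

data = [3, 7, 1, 4, 1, 5]
quick_sort(data)
print(data)  # [1, 1, 3, 4, 5, 7]
```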
Merge sort: it performs well when n is large; merge sort uses comparatively more memory, and the memory grows as n grows, but it is an efficient and stable sorting algorithm. A sketch follows below.
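A minimal merge sort sketch (illustrative): the merged list is the auxiliary storage behind the O(n) space figure, and comparing with <= is what keeps equal keys in their original order, i.e. what makes the sort stable:

```python
def merge_sort(a):
    """Return a new sorted list; O(nlog2(n)) time in every case."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    merged = []                          # auxiliary table for this merge
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal keys stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])              # append whichever half remains
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```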
|
Key to the radix sort figures: r is the radix of the keys, d is the key length (the number of digit positions), and n is the number of keys.
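A minimal least-significant-digit-first radix sort sketch for non-negative integers (illustrative; note that this bucket-list version uses O(r+n) auxiliary space per pass, whereas the table's O(rd+n) figure comes from the classic linked-queue formulation):

```python
def radix_sort(a, r=10):
    """d distribution/collection passes, each O(r + n): O(d(r + n)) total."""
    if not a:
        return a
    d, m = 1, max(a)                     # d = base-r digits in the largest key
    while m >= r:
        d += 1
        m //= r
    base = 1
    for _ in range(d):                   # one pass per digit position
        buckets = [[] for _ in range(r)]
        for key in a:                    # distribute: O(n)
            buckets[(key // base) % r].append(key)
        a = [key for b in buckets for key in b]  # collect: O(r + n)
        base *= r
    return a

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```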
Notes:
1. Merge sort allocates an auxiliary table of the same length as the table being sorted on each recursive call; although the recursion depth is O(log₂n), each call releases its auxiliary space when it returns, so the total auxiliary space is O(n) rather than O(nlog₂n).
2. Quick sort's space complexity is O(log₂n) (the recursion stack) only in the typical case; in the worst case it is clearly O(n). The expected space can be kept at O(log₂n) by randomizing the pivot selection.
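Randomizing the pivot bounds the stack only in expectation. A deterministic O(log₂n) bound is also possible by always recursing into the smaller partition and looping on the larger one, a technique beyond what note 2 describes; a minimal sketch (the hypothetical partition helper repeats the pit-filling scheme from the quick sort sketch earlier):

```python
def partition(a, lo, hi):
    """Pit-filling partition; returns the pivot's final index."""
    pivot = a[lo]
    i, j = lo, hi
    while i < j:
        while i < j and a[j] >= pivot:
            j -= 1
        a[i] = a[j]
        while i < j and a[i] <= pivot:
            i += 1
        a[j] = a[i]
    a[i] = pivot
    return i

def quick_sort_bounded(a, lo=0, hi=None):
    """Recurse only into the smaller partition and loop on the larger
    one, so the recursion depth never exceeds O(log2(n))."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:              # left side smaller: recurse there
            quick_sort_bounded(a, lo, p - 1)
            lo = p + 1                   # continue with the larger right side
        else:                            # right side smaller: recurse there
            quick_sort_bounded(a, p + 1, hi)
            hi = p - 1                   # continue with the larger left side
```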
Related concepts:
1. Time complexity
Time complexity can be thought of as the total number of basic operations performed on the data being sorted; it reflects how the operation count grows as n changes.
Common time complexities, from lowest to highest order: constant O(1), logarithmic O(log₂n), linear O(n), linearithmic O(nlog₂n), and quadratic O(n²).
Time complexity O(1): when the number of statements an algorithm executes is a constant independent of n, its time complexity is O(1).
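To make "total number of operations" concrete, here are three hypothetical functions (illustrative, not from the original article) whose statement counts are constant, linear, and quadratic in n; the first two even compute the same sum at very different cost:

```python
def sum_constant(n):
    return n * (n + 1) // 2          # one expression regardless of n: O(1)

def sum_linear(n):
    total = 0
    for i in range(1, n + 1):        # the loop body runs n times: O(n)
        total += i
    return total

def pairs_quadratic(n):
    count = 0
    for i in range(n):               # nested loops: the inner statement
        for j in range(n):           # runs n * n times: O(n^2)
            count += 1
    return count

print(sum_constant(100), sum_linear(100))  # 5050 5050
print(pairs_quadratic(100))                # 10000
```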
2. Space complexity
Space complexity measures the amount of storage an algorithm requires while it executes inside the computer; like time complexity, it is a function of the problem size n.
Space complexity O(1): when the space an algorithm needs is a constant, i.e. it does not change with the size n of the data being processed, it is written O(1).
Space complexity O(log₂n): when the space an algorithm needs is proportional to the base-2 logarithm of n, it is written O(log₂n). (Recall that if a^x = n, then x = log_a n.)
Space complexity O(n): when the space an algorithm needs grows linearly with n, it is written O(n).
Time complexity of various sorting algorithms