Reposted from: http://blog.csdn.net/xiexievv/article/details/45795719
The sorting algorithms used in computer science are usually categorized by:
- Computational time complexity (worst, average, and best case) in terms of the size of the list, n. Good behaviour is O(n log n) and bad behaviour is O(n²); the ideal behaviour for a sort is O(n). Sorting algorithms that rely only on an abstract key-comparison operation always need at least O(n log n) comparisons on average.
- Memory usage (and use of other computer resources).
- Stability: a stable sorting algorithm preserves the relative order of records with equal keys. That is, if records R and S have equal keys and R appears before S in the original list, a stable sort keeps R before S in the sorted list.
- The underlying method: insertion, exchange, selection, merging, and so on.
Stability
Stability is not an issue when equal elements are indistinguishable, as with plain integers. Suppose, however, that the following pairs of numbers are to be sorted by their first component.
(4, 1) (3, 1) (3, 7) (5, 6)
In this situation two different results are possible: one preserves the relative order of records with equal keys, and one does not:
(3, 1) (3, 7) (4, 1) (5, 6) (order preserved)
(3, 7) (3, 1) (4, 1) (5, 6) (order changed)
An unstable sorting algorithm may change the relative order of records with equal keys; a stable sorting algorithm never does. An unstable algorithm can always be implemented so that it becomes stable. One way is to artificially extend the key comparison: when two objects compare equal on the original key, the comparison (for example, adding a second criterion such as the second component above) is decided by the position of the entries in the original input, used as a final tiebreaker. Keep in mind, however, that this usually costs additional space.
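The key-extension trick above can be shown in a few lines of Python. This is only an illustrative sketch (the data and names are mine, not from the original post; Python's built-in `sorted` happens to be stable already, so the second variant exists only to demonstrate the index-as-tiebreaker idea):

```python
# Minimal sketch: sorting pairs by their first component, and forcing stability
# by extending the key with the original position.
pairs = [(4, 1), (3, 1), (3, 7), (5, 6)]

# Python's sorted() is stable, so sorting by the first component alone already
# keeps (3, 1) before (3, 7):
stable = sorted(pairs, key=lambda p: p[0])

# The same order can be forced even with an unstable algorithm by appending the
# original index as a secondary key, as described above:
forced_stable = [p for _, p in sorted(
    ((i, p) for i, p in enumerate(pairs)),
    key=lambda ip: (ip[1][0], ip[0])      # (first component, original position)
)]

print(stable)          # [(3, 1), (3, 7), (4, 1), (5, 6)]
print(forced_stable)   # same order, at the cost of one extra index per record
```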
List of sorting algorithms
In the lists below, n is the number of records to be sorted and k is the number of distinct key values.
Stable sorts
- Bubble sort - O(n²)
- Cocktail sort - O(n²)
- Insertion sort - O(n²)
- Bucket sort - O(n); requires O(k) extra space
- Counting sort - O(n + k); requires O(n + k) extra space (a sketch follows this list)
- Merge sort - O(n log n); requires O(n) extra space
- In-place merge sort - O(n²)
- Binary tree sort - O(n log n) expected time, O(n²) worst case; requires O(n) extra space
- Pigeonhole sort - O(n + k); requires O(k) extra space
- Radix sort - O(n·k); requires O(n) extra space
- Gnome sort - O(n²)
- Library sort - O(n log n) expected time; requires (1+ε)n extra space
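As a concrete illustration of one entry above, here is a minimal counting sort sketch in Python (my own code, assuming the keys are non-negative integers smaller than k):

```python
# Illustrative counting sort: O(n + k) time, O(n + k) extra space, and stable
# because equal keys are written to the output in their original order.
def counting_sort(a, k):
    """Sort non-negative integers in a, all smaller than k."""
    count = [0] * k
    for x in a:                      # count occurrences of each key
        count[x] += 1
    for i in range(1, k):            # prefix sums: count[x] = number of
        count[i] += count[i - 1]     # elements <= x
    out = [0] * len(a)
    for x in reversed(a):            # scan backwards to keep equal keys stable
        count[x] -= 1
        out[count[x]] = x
    return out

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], 10))  # [1, 1, 2, 3, 4, 5, 6, 9]
```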
Unstable sorts
- Selection sort - O(n²)
- Shell sort - O(n log² n) with the best known gap sequence
- Clover sort - O(n) expected time, O(n²/2) worst case
- Comb sort - O(n log n)
- Heap sort - O(n log n) (a sketch follows this list)
- Smoothsort - O(n log n)
- Quicksort - O(n log n) expected time, O(n²) worst case; generally regarded as the fastest known sort for large, randomly ordered lists
- Introsort - O(n log n)
- Patience sorting - O(n log n + k) worst-case time; requires O(n + k) extra space; also finds the longest increasing subsequence
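For illustration, a minimal heap sort sketch in Python. It uses the standard-library `heapq` module rather than the classic in-place max-heap formulation, so unlike the textbook version it needs O(n) extra space:

```python
# Illustrative heap sort via a min-heap: O(n log n) time, unstable because heap
# operations can reorder equal keys.
import heapq

def heap_sort(a):
    heap = list(a)                  # copy so the input is left untouched
    heapq.heapify(heap)             # build a min-heap in O(n)
    return [heapq.heappop(heap)     # n pops, O(log n) each
            for _ in range(len(heap))]

print(heap_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```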
Impractical sorting algorithms
- Bogosort - O(n × n!) expected time; in the worst case the expected running time is unbounded (a throwaway sketch follows this list)
- Stupid sort - O(n³); the recursive version requires O(n²) extra memory
- Bead sort - O(n) or O(√n), but requires specialized hardware
- Pancake sorting - O(n), but requires specialized hardware
- Stooge sort - a simple algorithm, but it takes about O(n^2.7) time
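Bogosort at least has the virtue of being short. A throwaway sketch (mine, not from the original post) that nobody should run on more than a handful of elements:

```python
# Throwaway bogosort sketch: shuffle until sorted. Expected O(n * n!) time.
import random

def bogo_sort(a):
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        random.shuffle(a)           # try a random permutation and hope
    return a

print(bogo_sort([3, 1, 2]))  # [1, 2, 3], eventually
```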
Average time complexity
Listed from highest to lowest average time complexity:
- Bubble sort O(n²)
- Selection sort O(n²)
- Insertion sort O(n²)
- Shell sort O(n^1.25)
- Heap sort O(n log n)
- Merge sort O(n log n)
- Quicksort O(n log n)
- Radix sort O(n)
Note: although quicksort degrades to the speed of selection sort when the input is completely reversed, from a probabilistic point of view, and without any special tuning of the algorithm, quicksort is faster than heap sort.
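A minimal randomized quicksort sketch in Python makes the probabilistic argument concrete: choosing the pivot at random makes the O(n²) case (for example, an already reversed input with a fixed first-element pivot) extremely unlikely. This version is my own illustration and is not in-place, so it trades the O(log n) space of the table below for clarity:

```python
# Illustrative randomized quicksort: expected O(n log n) time.
import random

def quick_sort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                   # random pivot avoids bad patterns
    smaller = [x for x in a if x < pivot]      # "small numbers"
    equal   = [x for x in a if x == pivot]     # pivot elements
    larger  = [x for x in a if x > pivot]      # "large numbers"
    return quick_sort(smaller) + equal + quick_sort(larger)

print(quick_sort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]
```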
| Name | Data structure | Stability | Time (average) | Time (worst) | Space complexity | Description |
|------|----------------|-----------|----------------|--------------|------------------|-------------|
| Bubble sort | Array | Stable | O(n²) | O(n²) | O(1) | (Unordered region, ordered region). Swap the largest element of the unordered region to the front of the ordered region. |
| Selection sort | Array | Unstable | O(n²) | O(n²) | O(1) | (Ordered region, unordered region). Append the smallest element of the unordered region to the end of the ordered region. For arrays: many comparisons, few swaps. |
| Selection sort | Linked list | Stable | O(n²) | O(n²) | O(1) | As above. |
| Insertion sort | Array, linked list | Stable | O(n²) | O(n²) | O(1) | (Ordered region, unordered region). Insert the first element of the unordered region into its proper place in the ordered region. For arrays: few comparisons, many moves. |
| Heap sort | Array | Unstable | O(n log n) | O(n log n) | O(1) | (Max-heap, ordered region). Move the root from the top of the heap to the front of the ordered region, then restore the heap. |
| Merge sort | Array | Stable | O(n log n) | O(n log n) | O(n), plus O(log n) if not bottom-up | Split the data into two halves and repeatedly move the smaller of the two front elements to the end of the new segment. Can be done top-down or bottom-up. |
| Merge sort | Linked list | Stable | O(n log n) | O(n log n) | O(1) | As above. |
| Quicksort | Array | Unstable | O(n log n) | O(n²) | O(log n) | (Small numbers, pivot element, large numbers). |
| Shell sort | Array | Unstable | O(n^1.25) | O(n log² n) with the best known gaps | O(1) | Each pass performs insertion sort at a predetermined gap; the gap shrinks from pass to pass, and the final gap must be 1. |
| Counting sort | Array, linked list | Stable | O(n + m) | O(n + m) | O(n + m) | Count the number of elements less than or equal to each element's value, i; that element is then placed at index i (i ≥ 0) of the output array. |
| Bucket sort | Array, linked list | Stable | O(n) | O(n) | O(m) | Put each element with value i into bucket i, then pour the buckets out in order. |
| Radix sort | Array, linked list | Stable | O(k·n) | O(k·n) | O(n) | A multi-key sorting algorithm; each pass can be implemented with bucket sort (a sketch follows the notes below). |
- All results are for sorting in ascending order.
- k represents the number of "digits" in a value.
- n represents the size of the data set.
- m represents the maximum value of the data minus the minimum value.
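A minimal LSD radix sort sketch in Python, implementing each pass with buckets as the table describes (my own code, assuming non-negative integer keys and decimal digits):

```python
# Illustrative LSD radix sort: k stable bucket passes, one per decimal digit,
# O(k * n) overall for non-negative integers.
def radix_sort(a, base=10):
    if not a:
        return a
    passes = len(str(max(a)))        # k = number of digits in the largest value
    for p in range(passes):
        buckets = [[] for _ in range(base)]
        for x in a:                  # distribute by the p-th digit (stable)
            buckets[(x // base ** p) % base].append(x)
        a = [x for bucket in buckets for x in bucket]   # collect buckets in order
    return a

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```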
Complexity of sorting algorithms