Sorting algorithms can be divided into internal and external sorting. Internal sorting keeps all the records to be sorted in memory, while external sorting is used when the data set is too large to fit in memory at once, so records must be accessed in external storage during the sort.
Common internal sorting algorithms are: insertion sort, Shell sort, selection sort, bubble sort, merge sort, quick sort, heap sort, radix sort, and so on.
Algorithm one: Insertion sort
Insertion sort is the simplest and most intuitive sorting algorithm. It works by building an ordered sequence: for each unsorted element, it scans the already-sorted sequence from back to front to find the appropriate position and inserts the element there.
Algorithm steps:
1) Treat the first element of the sequence as a sorted sequence, and the second through last elements as the unsorted sequence.
2) Scan the unsorted sequence from beginning to end, inserting each scanned element into the appropriate position in the sorted sequence. (If the element to be inserted equals an element already in the sorted sequence, insert it after the equal element.)
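The two steps above can be sketched in Python (a minimal illustrative implementation, not code from the original article):

```python
def insertion_sort(arr):
    """Sort arr in place using insertion sort and return it."""
    for i in range(1, len(arr)):          # arr[:i] is already sorted
        key = arr[i]
        j = i - 1
        # Scan the sorted prefix from back to front, shifting larger
        # elements one slot right; stop at the first element <= key,
        # so equal elements keep their relative order (stable sort).
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
```

Note the `>` (not `>=`) in the inner loop: it is what places an equal element after its duplicate, as step 2 requires, and keeps the sort stable.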
Algorithm two: Shell sort
Shell sort, also called the diminishing increment sort, is a more efficient improved version of insertion sort. However, Shell sort is not a stable sorting algorithm.
Shell sort proposes its improvement based on the following two properties of insertion sort:
Insertion sort is efficient on data that is already almost sorted, where it can approach linear running time;
But insertion sort is generally inefficient, because it can only move data one position at a time.
The basic idea of Shell sort is: first divide the whole sequence of records into several subsequences and insertion-sort each one; then, once the records are "basically ordered", insertion-sort the entire sequence in one final pass.
Algorithm steps:
1) Choose an increment sequence t1, t2, ..., tk, where ti > tj for i < j, and tk = 1;
2) Sort the sequence in k passes, one pass per increment in the series;
3) In each pass, using the corresponding increment ti, split the sequence to be sorted into several subsequences of length m, and insertion-sort each sub-list directly. Only when the increment factor is 1 is the whole sequence treated as a single list, whose length is the length of the entire sequence.
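A minimal Python sketch of these steps, assuming the simple halving increment sequence n/2, n/4, ..., 1 (the article does not prescribe a particular increment series):

```python
def shell_sort(arr):
    """Sort arr in place using Shell sort with gaps n//2, n//4, ..., 1."""
    gap = len(arr) // 2
    while gap > 0:
        # Gapped insertion sort: each interleaved subsequence
        # arr[i], arr[i+gap], arr[i+2*gap], ... is insertion-sorted.
        for i in range(gap, len(arr)):
            key = arr[i]
            j = i
            while j >= gap and arr[j - gap] > key:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = key
        gap //= 2   # next, smaller increment; the final pass uses gap 1
    return arr
```

The last iteration (gap = 1) is an ordinary insertion sort, but by then the data is "basically ordered", which is exactly the case where insertion sort is fast.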
Algorithm three: Selection sort
Selection sort is also a simple and intuitive sorting algorithm.
Algorithm steps:
1) First find the smallest (or largest) element in the unsorted sequence and place it at the start of the sorted sequence;
2) Then continue to find the smallest (or largest) element among the remaining unsorted elements and place it at the end of the sorted sequence;
3) Repeat step 2 until all elements are sorted.
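The steps above can be sketched as follows (an illustrative Python version, sorting ascending by repeatedly selecting the minimum):

```python
def selection_sort(arr):
    """Sort arr in place by repeatedly selecting the minimum of the unsorted tail."""
    for i in range(len(arr)):
        min_idx = i
        # Find the smallest element in the unsorted suffix arr[i:]
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Place it at the end of the sorted prefix
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```

The long-distance swap is why selection sort is not stable: it can carry an element past an equal one.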
Algorithm four: Bubble sort
Bubble sort is also a simple and intuitive sorting algorithm. It repeatedly walks through the sequence to be sorted, comparing two adjacent elements at a time and swapping them if they are in the wrong order. These passes are repeated until no more swaps are needed, which means the sequence is sorted. The algorithm gets its name from the way smaller elements slowly "bubble" up to the top of the sequence through the swaps.
Algorithm steps:
1) Compare adjacent elements. If the first is larger than the second, swap them.
2) Do the same for every pair of adjacent elements, from the first pair at the beginning to the last pair at the end. After this pass, the last element is the largest.
3) Repeat the above steps for all elements except the last one.
4) Keep repeating over fewer and fewer elements each time, until there is no pair of numbers left to compare.
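A minimal Python sketch of these steps, including the early exit the description implies ("repeated until no more need to be exchanged"):

```python
def bubble_sort(arr):
    """Sort arr in place with bubble sort, stopping early once a pass makes no swaps."""
    n = len(arr)
    for end in range(n - 1, 0, -1):   # the sorted tail grows by one each pass
        swapped = False
        for i in range(end):
            if arr[i] > arr[i + 1]:   # adjacent pair in the wrong order
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        if not swapped:               # no swaps in this pass: already sorted
            break
    return arr
```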
Algorithm five: Merge sort
Merge sort is an efficient sorting algorithm built on the merge operation. It is a very typical application of divide and conquer.
Algorithm steps:
1) Allocate space equal to the combined length of the two sorted sequences, to hold the merged sequence;
2) Set two pointers, each initially at the start of one of the sorted sequences;
3) Compare the elements the two pointers point to, place the smaller one into the merge space, and advance that pointer;
4) Repeat step 3 until one pointer reaches the end of its sequence;
5) Copy all remaining elements of the other sequence directly to the end of the merged sequence.
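The merge steps above, plus the recursive split, can be sketched in Python (a minimal illustration, not code from the original article):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list (steps 1-5 above)."""
    merged = []                       # step 1: space for the merged sequence
    i = j = 0                         # step 2: one pointer per input sequence
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # step 3: take the smaller element;
            merged.append(left[i])    # <= keeps the sort stable
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])           # step 5: copy whichever tail remains
    merged.extend(right[j:])
    return merged

def merge_sort(arr):
    """Sort by splitting in half, sorting each half, and merging."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))
```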
Algorithm six: Quick sort
Quicksort is a sorting algorithm developed by Tony Hoare. On average, sorting n items takes O(n log n) comparisons. In the worst case O(n²) comparisons are required, but this situation is uncommon. In practice, quicksort is usually significantly faster than other O(n log n) algorithms, because its inner loop can be implemented efficiently on most architectures.
Quicksort uses a divide-and-conquer strategy to split a list into two sub-lists.
Algorithm steps:
1) Pick an element from the sequence, called the "pivot";
2) Reorder the sequence so that all elements smaller than the pivot come before it and all elements larger than the pivot come after it (equal elements can go to either side). After this pass, the pivot is in its final position. This is called the partition operation;
3) Recursively sort the sub-sequence of elements smaller than the pivot and the sub-sequence of elements larger than the pivot.
In the base case of the recursion, the sequence has size zero or one and is therefore already sorted. Although the algorithm keeps recursing, it always terminates, because each iteration puts at least one element into its final position.
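A compact Python sketch of these steps, choosing the middle element as the pivot (the pivot choice and the non-in-place list-comprehension partition are illustrative simplifications, not part of the description above):

```python
def quick_sort(arr):
    """Return a new sorted list; a simple, non-in-place quicksort sketch."""
    if len(arr) <= 1:                 # base case: size 0 or 1 is already sorted
        return arr
    pivot = arr[len(arr) // 2]        # step 1: pick a pivot (middle element here)
    smaller = [x for x in arr if x < pivot]   # step 2: partition around the pivot
    equal   = [x for x in arr if x == pivot]
    larger  = [x for x in arr if x > pivot]
    # step 3: recursively sort both sides; equal elements are already placed
    return quick_sort(smaller) + equal + quick_sort(larger)
```

Production implementations partition in place (e.g. Hoare or Lomuto partitioning) to avoid the extra allocations this sketch makes.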
Algorithm seven: Heap sort
Heapsort is a sorting algorithm designed around the heap data structure. A heap is a structure that approximates a complete binary tree while satisfying the heap property: the key of each child node is always less than (or greater than) that of its parent node.
The average time complexity of heap sort is O(n log n).
Algorithm steps:
1) Build a heap H[0..n-1];
2) Swap the heap head (the maximum) with the heap tail;
3) Reduce the size of the heap by 1 and call shift_down(0) to sift the new top element down to its proper position;
4) Repeat from step 2 until the size of the heap is 1.
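The steps above can be sketched in Python with a max-heap; the helper name `shift_down` follows step 3 (a minimal illustration, not code from the original article):

```python
def shift_down(arr, start, end):
    """Sift arr[start] down within arr[start..end] to restore the max-heap property."""
    root = start
    while 2 * root + 1 <= end:                    # while root has at least one child
        child = 2 * root + 1
        if child + 1 <= end and arr[child + 1] > arr[child]:
            child += 1                            # pick the larger child
        if arr[child] > arr[root]:
            arr[root], arr[child] = arr[child], arr[root]
            root = child                          # continue sifting down
        else:
            break                                 # heap property restored

def heap_sort(arr):
    n = len(arr)
    # Step 1: build a max-heap by sifting down every non-leaf node
    for start in range(n // 2 - 1, -1, -1):
        shift_down(arr, start, n - 1)
    # Steps 2-4: repeatedly move the maximum to the end and shrink the heap
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]       # step 2: swap head and tail
        shift_down(arr, 0, end - 1)               # step 3: re-sift the new top
    return arr
```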
Algorithm eight: Radix sort
Radix sort is a non-comparative integer sorting algorithm. It works by cutting the numbers into digits and comparing the numbers digit by digit. Because strings (such as names or dates) and floating-point numbers in particular formats can also be expressed as integers, radix sort is not limited to integers.
Before describing radix sort, let's briefly describe bucket sort:
Algorithm idea: divide the array into a finite number of buckets, then sort each bucket individually (possibly with a different sorting algorithm, or by applying bucket sort recursively). Bucket sort is a generalization of pigeonhole sort. When the values in the array to be sorted are uniformly distributed, bucket sort runs in linear time (Θ(n)). Bucket sort is not a comparison sort, so it is not subject to the O(n log n) lower bound.
Simply put, the data is grouped into buckets, and then the contents of each bucket are sorted.
For example, to sort an array A[1..n] of n integers in the range [1..1000]:
First, set up buckets each covering a range of size 10. Specifically, bucket B[1] stores the integers in [1..10], B[2] stores the integers in (10..20], ..., and B[i] stores the integers in ((i-1)*10, i*10], for i = 1, 2, ..., 100. There are 100 buckets in total.
Then, scan A[1..n] from beginning to end, putting each A[i] into its corresponding bucket B[j]. Next, sort the numbers inside each of the 100 buckets; bubble sort, selection sort, or even quicksort will do — in general, any sorting method works.
Finally, output the numbers in each bucket in order, from the first bucket to the last and from small to large within each bucket, yielding a sequence in which all the numbers are sorted.
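The example above can be sketched in Python. The parameters mirror the worked example (values in [1..1000], 100 buckets of width 10), and Python's built-in `sorted` stands in for "any sorting method" inside each bucket:

```python
def bucket_sort(arr, num_buckets=100, lo=1, hi=1000):
    """Sort integers in [lo..hi] using fixed-width buckets, as in the example."""
    width = (hi - lo + 1) // num_buckets          # each bucket spans 10 values here
    buckets = [[] for _ in range(num_buckets)]
    for x in arr:                                  # scatter: A[i] goes into B[j]
        j = min((x - lo) // width, num_buckets - 1)
        buckets[j].append(x)
    result = []
    for b in buckets:                              # gather: output buckets in order,
        result.extend(sorted(b))                   # each bucket sorted internally
    return result
```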
Suppose there are n numbers and m buckets. If the numbers are uniformly distributed, each bucket holds n/m numbers on average. If the numbers within each bucket are sorted with quicksort, the complexity of the whole algorithm is
O(n + m * (n/m) * log(n/m)) = O(n + n log n - n log m)
As seen above, when m approaches n, the complexity of bucket sort approaches O(n).
Of course, this complexity analysis assumes the n input numbers are uniformly distributed. That is a strong assumption, and in practice the results are not as good. If all the numbers fall into the same bucket, bucket sort degenerates into an ordinary sort.
Most of the sorting algorithms above have time complexity O(n²), and some have time complexity O(n log n). Bucket sort, however, can achieve O(n) time. But bucket sort has the following drawbacks:
1) First, its space complexity is higher and extra overhead is required. The sort needs space for two arrays: one for the array to be sorted, and one for the so-called buckets. For example, if the values to be sorted range from 0 to m-1, then m buckets are needed, and the bucket array takes at least m space.
2) Second, the elements to be sorted must fall within a known, bounded range.
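Radix sort itself can then be sketched as repeated stable bucketing by each digit, from least significant to most significant (an LSD-style illustration for non-negative integers, not code from the original article):

```python
def radix_sort(arr):
    """LSD radix sort for non-negative integers, bucketing by decimal digit."""
    if not arr:
        return arr
    exp = 1                                        # current digit: 1s, 10s, 100s, ...
    while max(arr) // exp > 0:
        buckets = [[] for _ in range(10)]          # one bucket per digit 0-9
        for x in arr:
            buckets[(x // exp) % 10].append(x)     # stable: preserves earlier order
        arr = [x for b in buckets for x in b]      # gather buckets back in order
        exp *= 10
    return arr
```

Because each digit pass is stable, the ordering established by earlier (less significant) digits is preserved, which is what makes the digit-by-digit approach correct.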
Summary
The stability, time complexity, and space complexity of the various sorts are summarized below.
On time complexity:
(1) Quadratic-order (O(n²)) sorts
The simple sorts: direct insertion, direct selection, and bubble sort;
(2) Linearithmic-order (O(n log₂ n)) sorts
Quick sort, heap sort, and merge sort;
(3) O(n^(1+ε)) sorts, where ε is a constant between 0 and 1
Shell sort;
(4) Linear-order (O(n)) sorts
Radix sort; in addition, bucket sort and bin sort.
On stability:
Stable sorting algorithms: bubble sort, insertion sort, merge sort, and radix sort.
Unstable sorting algorithms: selection sort, quicksort, Shell sort, heap sort.