Sorting algorithms can be divided into internal sorting and external sorting. In internal sorting, all records are sorted in memory; in external sorting, the data set is too large to fit in memory at once, so external storage must be accessed during sorting.

Common internal sorting algorithms include insertion sort, Shell sort, selection sort, bubble sort, merge sort, quicksort, heap sort, and radix sort.

This article introduces these eight sorting algorithms in turn.

**Algorithm 1: Insertion sort**

Insertion sort is the simplest and most intuitive sorting algorithm. It works by building an ordered sequence: for each unsorted element, it scans the sorted sequence from back to front, finds the appropriate position, and inserts the element there.

Algorithm steps:

1) Treat the first element of the sequence to be sorted as an ordered sequence, and the second through last elements as an unordered sequence.

2) Scan the unordered sequence from start to end, inserting each element into its proper position in the ordered sequence. (If the element to be inserted is equal to an element already in the ordered sequence, insert it after that element; this keeps the sort stable.)
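The steps above can be sketched in Python as follows (an illustrative implementation, not code from the original article):

```python
def insertion_sort(arr):
    """Sort arr in place by inserting each element into the sorted prefix."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements of the sorted prefix that are greater than key one
        # position to the right. Equal elements are not moved, so equal keys
        # keep their relative order and the sort is stable.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
```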

**Algorithm 2: Shell sort**

Shell sort, also known as the diminishing increment sort, is a more efficient improvement of insertion sort. However, Shell sort is not a stable sorting algorithm.

Shell sort improves on insertion sort by exploiting two of its properties:

Insertion sort is highly efficient on data that is almost sorted, where it approaches linear time.

In general, however, insertion sort is inefficient, because it can only move data one position at a time.

The basic idea of Shell sort is: first divide the whole sequence of records into several subsequences and insertion-sort each of them; then, once the records in the whole sequence are in "basic order", insertion-sort all records in one final pass.

Algorithm steps:

1) Choose an increment sequence t1, t2, ..., tk, where ti > tj for i < j, and tk = 1;

2) Sort the sequence k times, once for each increment in the sequence;

3) In each pass, based on the corresponding increment ti, split the sequence to be sorted into several subsequences of length m and insertion-sort each subsequence directly. When the increment factor is 1, the whole sequence is treated as a single table whose length is the length of the whole sequence.
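A Python sketch of these steps, using the simple halving gap sequence n/2, n/4, ..., 1 (one common choice of increment sequence; the original article does not prescribe a specific one):

```python
def shell_sort(arr):
    """Shell sort using the gap sequence n//2, n//4, ..., 1."""
    n = len(arr)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: each subsequence of elements gap apart
        # is insertion-sorted in place.
        for i in range(gap, n):
            key = arr[i]
            j = i
            while j >= gap and arr[j - gap] > key:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = key
        gap //= 2  # next, smaller increment; the final pass uses gap = 1
    return arr
```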

**Algorithm 3: Selection sort**

Selection sort is another simple and intuitive sorting algorithm.

Algorithm steps:

1) First, find the smallest (or largest) element in the unsorted sequence and store it at the start of the sorted sequence.

2) Then find the smallest (or largest) element among the remaining unsorted elements and append it to the end of the sorted sequence.

3) Repeat step 2 until all elements are sorted.
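These three steps can be sketched in Python like so (an illustrative implementation sorting in ascending order):

```python
def selection_sort(arr):
    """Sort arr in place by repeatedly selecting the minimum element."""
    n = len(arr)
    for i in range(n - 1):
        # Step 1/2: find the smallest element in the unsorted part arr[i:].
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Append it to the end of the sorted prefix by swapping.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```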

**Algorithm 4: Bubble sort**

Bubble sort is also a simple and intuitive sorting algorithm. It repeatedly walks through the sequence to be sorted, compares two adjacent elements at a time, and swaps them if they are in the wrong order. The passes are repeated until no more swaps are needed, which means the sequence is sorted. The algorithm gets its name because smaller elements slowly "bubble" to the top of the sequence through these swaps.

Algorithm steps:

1) Compare adjacent elements. If the first is larger than the second, swap them.

2) Perform the same comparison on each pair of adjacent elements, from the first pair to the last. After this pass, the last element is the largest.

3) Repeat the preceding steps for all elements except the last one.

4) Continue repeating for fewer and fewer elements until no pair needs comparing.
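A Python sketch of these passes, including the early exit when a pass makes no swaps (the "no need for exchange" condition mentioned above):

```python
def bubble_sort(arr):
    """Sort arr in place by repeatedly bubbling the largest element up."""
    n = len(arr)
    for end in range(n - 1, 0, -1):
        swapped = False
        # One pass over the unsorted prefix arr[:end + 1].
        for i in range(end):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        if not swapped:
            break  # no swaps in a full pass: the sequence is sorted
    return arr
```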

**Algorithm 5: Merge sort**

Merge sort is an efficient sorting algorithm based on the merge operation, and a very typical application of divide and conquer.

Algorithm steps:

1. Allocate a space whose size is the sum of the two sorted sequences; this space holds the merged sequence.

2. Set two pointers, initially at the start positions of the two sorted sequences.

3. Compare the elements the two pointers point to, put the smaller one into the merged space, and move that pointer to the next position.

4. Repeat step 3 until one pointer reaches the end of its sequence.

5. Copy all remaining elements of the other sequence directly to the end of the merged sequence.
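The merge steps above, together with the recursive split, can be sketched in Python (an illustrative implementation, not the article's original code):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list (steps 1-5 above)."""
    result = []          # step 1: space for the merged sequence
    i = j = 0            # step 2: one pointer per input sequence
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps equal elements stable
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # Step 5: copy whatever remains of the non-exhausted sequence.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def merge_sort(arr):
    """Divide and conquer: split in half, sort each half, then merge."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))
```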


**Algorithm 6: Quicksort**

Quicksort is a sorting algorithm developed by Tony Hoare. On average, sorting n items requires O(n log n) comparisons. In the worst case it requires O(n²) comparisons, but this case is uncommon. In practice, quicksort is usually significantly faster than other O(n log n) algorithms, because its inner loop can be implemented efficiently on most architectures.

Quicksort uses the divide-and-conquer strategy to split a list into two sub-lists.

Algorithm steps:

1. Pick an element from the sequence, called the pivot.

2. Reorder the sequence so that all elements smaller than the pivot come before it and all elements larger than the pivot come after it (equal elements can go to either side). After this partition, the pivot is in its final position. This is called the partition operation.

3. Recursively sort the sub-list of elements smaller than the pivot and the sub-list of elements larger than the pivot.

The base case of the recursion is a sub-list of zero or one element, which is already sorted. Although the algorithm keeps recursing, it always terminates, because each iteration puts at least one element in its final position.
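A compact Python sketch of these steps, using the middle element as the pivot (one of many possible pivot choices; this non-in-place version trades memory for clarity):

```python
def quick_sort(arr):
    """Recursive quicksort; the middle element serves as the pivot."""
    if len(arr) <= 1:
        return arr  # base case: zero or one element is already sorted
    pivot = arr[len(arr) // 2]
    # Partition into elements smaller than, equal to, and larger than
    # the pivot (equal elements are kept together in the middle).
    smaller = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    larger = [x for x in arr if x > pivot]
    return quick_sort(smaller) + equal + quick_sort(larger)
```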


**Algorithm 7: Heap sort**

Heapsort is a sorting algorithm designed around the heap data structure. A heap is a structure that approximates a complete binary tree while satisfying the heap property: the key of each child node is always smaller than (or greater than) that of its parent node.

The average time complexity of heap sort is O(n log n).

Algorithm steps:

1) Build a heap H[0..n-1];

2) Swap the first (maximum) and last elements of the heap;

3) Reduce the heap size by 1 and call shift_down(0) to sift the new top element down to its proper position;

4) Repeat step 2 until the heap size is 1.
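The four steps above, using a max-heap, can be sketched in Python like so (an illustrative in-place implementation; `sift_down` plays the role of the shift_down mentioned in step 3):

```python
def heap_sort(arr):
    """Sort arr in place using a max-heap."""
    n = len(arr)

    def sift_down(root, end):
        # Move arr[root] down until the max-heap property holds in arr[:end].
        while 2 * root + 1 < end:
            child = 2 * root + 1
            # Pick the larger of the two children, if a right child exists.
            if child + 1 < end and arr[child + 1] > arr[child]:
                child += 1
            if arr[root] >= arr[child]:
                return
            arr[root], arr[child] = arr[child], arr[root]
            root = child

    # Step 1: build the heap, sifting down from the last internal node.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n)
    # Steps 2-4: swap the maximum to the end, shrink the heap, re-heapify.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(0, end)
    return arr
```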


**Algorithm 8: Radix sort**

Radix sort is a non-comparative integer sorting algorithm. Its principle is to split each integer into digits and compare the numbers digit by digit. Since strings (such as names or dates) and floating-point numbers in specific formats can also be represented as integers, radix sort is not limited to integers.

Before discussing radix sort, we briefly introduce bucket sort:

Algorithm idea: divide the array into a limited number of buckets, then sort each bucket individually (possibly with another sorting algorithm, or by applying bucket sort recursively). Bucket sort is a generalization of pigeonhole sort. When the values in the array to be sorted are evenly distributed, bucket sort runs in linear time Θ(n). Because bucket sort is not a comparison sort, it is not subject to the O(n log n) lower bound.

Simply put, data is partitioned into buckets and sorted within each bucket.

For example, suppose you want to sort n integers A[1..n] in the range [1..1000].

First, set the bucket size to 10. Specifically, let set B[1] store the integers in [1..10], set B[2] store the integers in (10..20], ..., and set B[i] store the integers in ((i-1)*10, i*10], for i = 1..100. There are 100 buckets in total.

Then, scan A[1..n] from start to end and put each A[i] into the corresponding bucket B[j]. Sort the numbers in each of the 100 buckets; here you can use bubble sort, selection sort, or even quicksort. In general, any sorting method works.

Finally, output the numbers in each bucket in order, from small to large within each bucket. This yields the fully sorted sequence.
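The example just described can be sketched in Python (an illustrative implementation; the function name and default parameters mirror the 100-bucket example above and are chosen for this sketch):

```python
def bucket_sort(arr, num_buckets=100, lo=1, hi=1000):
    """Bucket sort for integers in [lo, hi], using num_buckets buckets."""
    width = (hi - lo + 1) / num_buckets
    buckets = [[] for _ in range(num_buckets)]
    # Scan the input and drop each value into its bucket.
    for x in arr:
        idx = min(int((x - lo) / width), num_buckets - 1)
        buckets[idx].append(x)
    # Sort each bucket individually (any inner sort works), then
    # concatenate the buckets in order.
    result = []
    for b in buckets:
        result.extend(sorted(b))
    return result
```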

Suppose there are n numbers and m buckets. If the numbers are evenly distributed, each bucket holds n/m numbers on average. The complexity of the entire algorithm is

O(n + m * (n/m) * log(n/m)) = O(n + n log n - n log m)

From this formula, as m approaches n, the complexity of bucket sort approaches O(n).

Of course, this complexity analysis assumes the n numbers are evenly distributed. That is a strong assumption, and results in practice are often not as good. In the extreme case where all numbers fall into the same bucket, the sort degenerates.

As mentioned above, most sorting algorithms have O(n²) time complexity, and some have O(n log n). Bucket sort, however, can achieve O(n) time complexity. Its drawbacks are:

1) High space complexity: extra space is needed for two arrays, one to hold the data to be sorted and one for the buckets. For example, if the values to be sorted range from 0 to m-1, then m buckets are needed, and the bucket array alone requires at least m space.

2) The elements to be sorted must lie within a known, limited range.
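Returning to radix sort itself: a least-significant-digit (LSD) implementation distributes numbers into ten buckets per decimal digit, relying on the stability of each pass (an illustrative sketch for non-negative integers):

```python
def radix_sort(arr):
    """LSD radix sort for non-negative integers, one decimal digit per pass."""
    if not arr:
        return arr
    max_val = max(arr)
    exp = 1  # 1 -> ones digit, 10 -> tens digit, 100 -> hundreds digit, ...
    while max_val // exp > 0:
        buckets = [[] for _ in range(10)]
        for x in arr:
            buckets[(x // exp) % 10].append(x)
        # Concatenating buckets in order 0..9 preserves the order of the
        # previous pass, which is what makes LSD radix sort correct.
        arr = [x for b in buckets for x in b]
        exp *= 10
    return arr
```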

**Summary**

The time complexity, space complexity, and stability of the various sorting algorithms are summarized as follows:

Time Complexity:

(1) Quadratic-order (O(n²)) sorts

The simple sorts: direct insertion, direct selection, and bubble sort;

(2) Linearithmic-order (O(n log n)) sorts

Quicksort, heap sort, and merge sort;

(3) O(n^(1+ε)) sorts, where ε is a constant between 0 and 1

Shell sort;

(4) Linear-order (O(n)) sorts

Radix sort, as well as bucket and bin sort.

Stability:

Stable sorting algorithms: bubble sort, insertion sort, merge sort, and radix sort.

Unstable sorting algorithms: selection sort, quicksort, Shell sort, and heap sort.