Sorting Algorithms

Sorting by Counting

Comparison Counting

Assume we are sorting an array A[1..n], using an auxiliary array count[1..n] to compute the position where each key should end up:

    1. Set count[1..n] = 0;
    2. for (i = N; i > 1; i--)
           for (j = i - 1; j >= 1; j--)
               if (A[j] > A[i]) count[j]++;
               else count[i]++;

For example, for the array 2 3 4 1 5:

    after the first pass,  count is 0 0 0 0 4
    after the second pass, count is 1 1 1 0 4
    after the third pass,  count is 1 1 3 0 4
    after the fourth pass, count is 1 2 3 0 4

In the end, count[i] is the number of keys smaller than A[i], so A[i] belongs at position count[i] + 1. The time complexity is clearly O(n^2) and the space complexity O(n), so it is a poor algorithm overall, but note that it never moves a record while counting.

Distribution Counting

This method improves on the one above. Assume the minimum and maximum values of array A are min and max, and use an array count[min..max] to compute where each key should appear:

    1. Set count[min..max] = 0;
    2. for (i = 1; i <= N; i++) count[A[i]] += 1;
    3. for (i = min + 1; i <= max; i++) count[i] = count[i] + count[i-1];

This method needs no comparisons at all; it is somewhat like the Radix Sorting described later.

Sorting by Insertion

The basic idea of insertion sorting is the way we pick up playing cards: when card i arrives, the i - 1 cards already in hand are in order, so we find the right place among them and insert card i there.

Straight Insertion

    for (i = 2; i <= N; i++) {
        temp = A[i];
        for (j = i - 1; j >= 1; j--) {
            if (A[j] > temp) A[j + 1] = A[j];
            else break;
        }
        A[j + 1] = temp;
    }

The worst case for this algorithm is a reverse-ordered input, where the number of comparisons and moves is 1 + 2 + ...
+ (N - 1) = N(N - 1)/2, i.e. O(N^2). On average, inserting A[i] takes about i/2 comparisons and moves, for a total of roughly N(N - 1)/4, about N^2/4. The algorithm performs well when N is small or the array is already nearly ordered, which is why many sorting implementations use it to finish off small or almost-sorted subarrays.

Binary Insertion and Two-way Insertion

When inserting the i-th key, the first i - 1 keys are already ordered, so we can find the insertion point by binary search: for i = 64, compare with A[32]; if A[32] is greater than the new key, compare with A[16], otherwise with A[48], and so on. Only about log2(N) comparisons are needed to locate the insertion point. However, even after finding it, we still have to shift all subsequent records one place to make room for the key being inserted, so the overall time complexity is not reduced.

Shell Sort

A sorting method that moves records only one position at a time needs O(N^2) time on average, so to improve efficiency, each comparison must be able to move a record over a longer distance, not just one position. Shell sort is an algorithm of this kind, called a diminishing increment sort. For example, with 16 records we first form 8 groups (A1, A9), (A2, A10), ..., (A8, A16) and sort each group; then 4 groups (A1, A5, A9, A13), ..., (A4, A8, A12, A16) and sort each; then 2 groups; and finally sort the whole array. Each group is sorted by straight insertion. The increment sequence can be chosen to fit the data, but it must end with 1: for example 8 4 2 1, or 7 5 3 1.
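To make the straight insertion algorithm above concrete, here is a runnable sketch in Python (the function name and 0-based indexing are my own; the pseudocode in this article is 1-based):

```python
def insertion_sort(a):
    """Straight insertion sort: grow a sorted prefix, inserting a[i] into it."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot to the right until the place for `key` opens up.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```

On a reverse-ordered input of length N this performs the full N(N - 1)/2 comparisons and moves discussed above; on a nearly sorted input the inner loop exits almost immediately, which is why the method is so good at finishing off almost-ordered data.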
    // h is the current increment (gap)
    ShellSort(array A, int N, int h) {
        for (i = h + 1; i <= N; i++) {
            temp = A[i];
            for (j = i - h; j >= 1 && A[j] > temp; j -= h)
                A[j + h] = A[j];
            A[j + h] = temp;
        }
    }

The routine is called once for each increment, from the largest down to 1.

Sorting by Exchanging

The basic idea of exchange sorting is to swap pairs of records that are out of order until no such pair remains in the sequence. Straight insertion can itself be viewed as an exchange sort: A[i] is repeatedly compared with its neighbor and swapped until it reaches a suitable position, so there is no strict boundary between insertion, exchange, and selection sorting. The exchange family includes bubble sort, merge exchange (merge sort), partition exchange (quicksort), and radix exchange (radix sort).

Bubble Sort

The most obvious method: compare A1 with A2 and swap if A1 is larger; then compare A2 with A3, A3 with A4, and so on. After one pass the largest element is at the right end; repeat on the remaining N - 1 elements:

    for (i = 1; i < N; i++) {
        for (j = 1; j <= N - i; j++) {
            if (A[j] > A[j + 1])
                exchange(A[j], A[j + 1]);
        }
    }

This is the simplest implementation, and it clearly leaves room for improvement. We need not always run N - 1 passes: if a pass performs no exchange, the sequence is already ordered and we can stop. Nor must the inner loop always run to N - i: everything beyond the position of the last exchange is already in order, so that position can serve as the bound for the next pass.

    exchange_bound = N - 1;
    for (i = 1; i < N; i++) {
        last_exchange = 0;
        for (j = 1; j <= exchange_bound; j++) {
            if (A[j] > A[j + 1]) {
                exchange(A[j], A[j + 1]);
                last_exchange = j;
            }
        }
        if (last_exchange == 0) break;   /* no exchange: already sorted */
        exchange_bound = last_exchange;
    }

Quick Sorting

The basic idea of quicksort is partitioning: select a value from the sequence to be sorted, then divide the sequence around that value.
Records smaller than the pivot go to its left, records greater than it go to its right, and the two halves are then partitioned further in the same way.

    Partition(array, low, high) {
        if (array == NULL) return -1;
        if (low >= high) return low;
        temp = array[low];            /* pivot */
        i = low; j = high;
        while (true) {
            while (array[j] >= temp && i < j) j--;
            while (array[i] <= temp && i < j) i++;
            if (i < j)
                exchange(array[i], array[j]);
            else {
                exchange(array[low], array[j]);
                return j;
            }
        }
    }

Another partitioning method scans from one end only:

    Partition(array, low, high) {
        if (array == NULL) return -1;
        if (low >= high) return low;
        temp = array[high];           /* pivot */
        i = low - 1;
        for (j = low; j < high; j++) {
            if (array[j] <= temp) {
                i++;
                exchange(array[i], array[j]);
            }
        }
        exchange(array[i + 1], array[high]);
        return i + 1;
    }

Radix Exchange Sorting

Radix exchange sorting sorts on the binary representation of each key. Unlike ordinary comparison, it examines one binary bit at a time, testing whether it is 0 or 1; the structure resembles quicksort. The process is:

1. Partition the sequence on the highest bit: keys whose bit is 0 go to the left, keys whose bit is 1 to the right.
2. Partition the left (highest bit 0) subsequence on the next bit as 0 or 1, and likewise the right subsequence, continuing down to the last bit, at which point the sequence is ordered.

    RadixPartition(array, low, high, bit) {
        if (array == NULL) return -1;
        if (low >= high) return low;
        i = low; j = high;
        while (true) {
            while ((array[j] & bit) && i < j) j--;
            while (!(array[i] & bit) && i < j) i++;
            if (i < j)
                exchange(array[i], array[j]);
            else
                return j;
        }
    }

    RadixSort(array, low, high, bit) {
        if (array == NULL || low >= high) return;
        mid = RadixPartition(array, low, high, bit);
        bit = bit >> 1;
        RadixSort(array, low, mid, bit);
        RadixSort(array, mid + 1, high, bit);
    }

The time complexity of radix exchange sorting is of the same order as quicksort's, though it is said to be slightly faster in practice.

Sorting by Selection

Another important family of sorting methods is selection: repeatedly select the next record until the final ordered sequence is obtained.
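Before moving on to selection, the partition-exchange idea above can be shown as a short runnable Python sketch. This uses the single-ended (one-way scan) partition variant with the last element as pivot; the function names and 0-based bounds are my own:

```python
def partition(a, low, high):
    """One-way scan: everything <= the pivot is collected left of the split point."""
    pivot = a[high]
    i = low - 1
    for j in range(low, high):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    # Put the pivot between the two halves and return its final index.
    a[i + 1], a[high] = a[high], a[i + 1]
    return i + 1

def quicksort(a, low=0, high=None):
    """Partition, then recurse on the left and right halves."""
    if high is None:
        high = len(a) - 1
    if low < high:
        mid = partition(a, low, high)
        quicksort(a, low, mid - 1)
        quicksort(a, mid + 1, high)
    return a
```

Choosing a fixed end element as pivot keeps the sketch simple, but it degrades to O(N^2) on already-sorted input; practical implementations pick the pivot more carefully.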
The simplest form is:

1. Find the smallest key in the sequence, output it, and set its slot to infinity.
2. Repeat step 1 until all N records have been selected.

Selection sorting requires all of the input to be present before it can start, but every record it outputs is already in its final position; the other methods described above never yield an ordered sequence in the middle of sorting, only when sorting is entirely finished. Each pass of the method above costs N - 1 comparisons to find one minimum, and it needs N extra slots to hold the output. We can clearly improve it: instead of overwriting the minimum with infinity, simply swap it into its proper place in the original array.

Straight Selection Sort

    StraightSelectionSort(array, low, high) {
        for (i = low; i < high; i++) {
            iMin = i;
            for (j = i + 1; j <= high; j++)
                if (array[j] < array[iMin]) iMin = j;
            exchange(array[i], array[iMin]);
        }
    }

Tree Selection

The basic idea of tree selection is to build a binary tree of pairwise winners, the same way a champion is selected in a tournament. For the leaves 1 3 6 4:

          6
        3   6
       1 3 6 4

After building the tree, we can output the root, which holds the maximum. We then set its leaf to negative infinity and recompute only along its path to obtain the next largest key, which takes just log2(N) comparisons (here 0 stands in for negative infinity):

          4
        3   4
       1 3 0 4

But this algorithm needs extra space to store the tree, and it uses negative infinity to replace the output elements. Is there a way around these problems?

Heapsort

Heap sort stores the tree directly in the array. For node i, the left and right children are 2i and 2i + 1, and its parent is i/2 rounded down. If A[i] <= A[2i] and A[i] <= A[2i+1] for every node, we call the array a small-top heap, that is, the root is the smallest element (with >= throughout, a big-top heap, whose root is the largest).
If we can convert an array into a heap, we can apply the tree-selection idea to it in place and extract an ordered sequence from the top. The heap sort algorithm: first convert the array into a (big-top) heap; then swap the top element into its final place at the end of the array, and re-adjust the remaining elements into a new heap, repeating until every element has been placed.

    HeapAdjust(array, i, N) {
        l_child = 2 * i;
        r_child = 2 * i + 1;
        largest = i;
        if (l_child <= N && array[l_child] > array[i]) largest = l_child;
        if (r_child <= N && array[r_child] > array[largest]) largest = r_child;
        if (largest != i) {
            exchange(array[largest], array[i]);
            HeapAdjust(array, largest, N);
        }
    }

    HeapSort(array, N) {
        for (i = N / 2; i >= 1; i--)
            HeapAdjust(array, i, N);
        for (i = N; i > 1; i--) {
            exchange(array[1], array[i]);
            HeapAdjust(array, 1, i - 1);
        }
    }

Its time complexity is O(N log2 N), and it is very efficient for large N; when N is small, however, it does not perform well, because building the initial heap is relatively expensive, while each later adjustment costs at most h comparisons, where h is the height of the tree. Compared with quicksort: in the worst case quicksort loses to heap sort, since heap sort's worst case differs little from its average, but on average quicksort is considerably better. Heap sort does exhibit the interesting "largest in, first out" behavior, which makes it a natural way to implement a priority queue. For example, when an operating system schedules processes by priority, a process's priority may keep changing, yet the departing process is always the one with the highest priority. To implement a priority queue, insertion and priority-modification operations must be added to the operations above.

Sorting by Merging

Merging combines two or more ordered arrays into a single ordered array.
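As an aside on the priority-queue use of heaps mentioned above: Python's standard heapq module maintains exactly such a structure (a small-top heap), so the entry with the smallest key, here read as the highest priority, always leaves first. The process names below are invented for illustration:

```python
import heapq

# A small-top heap of (priority, process) pairs; a lower number means higher priority.
ready = []
heapq.heappush(ready, (2, "editor"))
heapq.heappush(ready, (0, "scheduler"))
heapq.heappush(ready, (1, "compiler"))

# heappop always removes the smallest key: the highest-priority process departs first.
order = [heapq.heappop(ready)[1] for _ in range(len(ready))]
# order == ["scheduler", "compiler", "editor"]
```

Each push and pop costs O(log N), matching the per-adjustment cost of HeapAdjust above.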
The two-way merge algorithm:

    Merge(arraya, m, arrayb, n) {
        i = 1; j = 1; k = 1;
        while (i <= m && j <= n) {
            if (arraya[i] < arrayb[j])
                newArray[k++] = arraya[i++];
            else
                newArray[k++] = arrayb[j++];
        }
        while (i <= m) newArray[k++] = arraya[i++];
        while (j <= n) newArray[k++] = arrayb[j++];
        return newArray;
    }

Clearly, the algorithm above needs at most m + n - 1 comparisons. To sort with it, we keep splitting the array into smaller arrays, sort the small arrays, and merge them back into larger ones until the whole array has been merged. We can also bring in straight insertion to sort the small arrays first and then merge upward.

Sorting by Distribution

Radix Sorting

Radix sorting sorts a sequence that has several keys. Take playing cards, which have two keys, suit and face value: radix sorting first distributes the cards into piles by the lower-priority key, collects those piles back into a single deck, then distributes that deck into new piles by the next higher-priority key, and so on until every key has been processed, at which point the deck is ordered. For example, the array 329 457 657 839 436 720 355 can be viewed as having three keys: the hundreds, tens, and units digits, with the units digit having the lowest priority and the hundreds digit the highest.

Distributing by the units digit:
    piles  0: 720   5: 355   6: 436   7: 457 657   9: 329 839
    collected: 720 355 436 457 657 329 839
Distributing by the tens digit:
    piles  2: 720 329   3: 436 839   5: 355 457 657
    collected: 720 329 436 839 355 457 657
Distributing by the hundreds digit:
    piles  3: 329 355   4: 436 457   6: 657   7: 720   8: 839
    collected: 329 355 436 457 657 720 839

The sequence is now ordered, and no comparison was needed anywhere in the process; each number is simply appended to its pile.
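The three distribution passes above can be sketched as a least-significant-digit radix sort in Python (the ten piles and the fixed three-digit keys are assumptions matching this example):

```python
def radix_sort(nums, digits=3):
    """LSD radix sort: distribute into ten piles per digit, then recollect."""
    for d in range(digits):                     # units, then tens, then hundreds
        piles = [[] for _ in range(10)]
        for x in nums:
            piles[(x // 10 ** d) % 10].append(x)
        # Recollecting the piles in order needs no comparisons; appending keeps
        # equal digits in their previous order, which is what makes LSD work.
        nums = [x for pile in piles for x in pile]
    return nums
```

Running it on the article's example, radix_sort([329, 457, 657, 839, 436, 720, 355]) reproduces the three collected sequences shown above and ends with the sorted array.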
Suppose there are N numbers and M piles. Naively, each pile needs at least N slots to guarantee it can hold every number that might land in it, for a total of (M + 1) * N extra space, an overhead that is unacceptable in any case. The improved algorithm needs only 2N slots plus M counters. The idea is that a first scan counts how many records fall into each pile, so we then know exactly how much space to allocate for each pile to hold its records. The basic radix sorting algorithm is therefore: move the records into the auxiliary area keyed on the lowest-priority digit, then move them back from the auxiliary area to the original area keyed on the next-higher digit, and so on, until every key has been processed. Each move takes three steps: count, allocate space, and move.

References:
The Art of Computer Programming, Vol. 3: Sorting and Searching
Introduction to Algorithms