A Summary of 10 Classic Sorting Algorithms

Source: Internet
Author: User
Tags: sorts

Objective

Searching and sorting algorithms are the entry point to algorithm study, and their classic ideas reappear in many other algorithms. Because their implementations are short, they are widely applied, so interviewers frequently ask about sorting algorithms and related problems. Still, however much the questions vary, the essence stays the same: as long as you are familiar with the ideas, applying them flexibly is not difficult. The most commonly tested are quick sort and merge sort, and interviewers often ask candidates to write these two on the spot, so their code must be at your fingertips. There are also insertion sort, bubble sort, heap sort, radix sort, bucket sort, and so on; interviewers may ask you to compare their strengths and weaknesses, the ideas behind each, and their typical usage scenarios, and to analyze their time and space complexity. Search and sorting questions usually open an interview, and if they are answered poorly, the interviewer may well lose interest in continuing. So to make a good start, master the ideas and characteristics of the common sorting algorithms and be able to write their code fluently.

Next we will analyze the common sorting algorithms and their usage scenarios. Owing to limited space, detailed demonstrations and illustrations of some algorithms are left for the reader to look up.

Bubble Sort

Bubble sort is one of the simplest sorts. Its general idea is to move small numbers toward the front by repeatedly comparing and swapping adjacent elements. The process resembles a bubble rising, hence the name. For example, bubble-sort the unordered sequence 5,3,8,6,4. Bubbling starts from the back: 4 is compared with 6 and swapped to the front, and the sequence becomes 5,3,8,4,6. In the same vein 4 and 8 are swapped, giving 5,3,4,8,6; 3 and 4 are not swapped. Then 5 and 3 are swapped, giving 3,5,4,8,6. This pass of bubbling is over, and the smallest number, 3, is at the front. Bubbling the rest of the sequence in turn yields an ordered sequence. The time complexity of bubble sort is O(n^2).

Implementation code:

/**
 * @Description: <p>Bubble sort implementation</p>
 * @author Wang Xu
 * @time 2016-3-3 8:54:27 PM
 */
public class BubbleSort {

    public static void bubbleSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        for (int i = 0; i < arr.length - 1; i++) {
            // bubble from the back: after this pass, the smallest remaining
            // element has risen to index i
            for (int j = arr.length - 1; j > i; j--) {
                if (arr[j] < arr[j - 1]) {
                    swap(arr, j - 1, j);
                }
            }
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
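For a quick check, here is a minimal usage sketch (not from the original article; the demo class name is invented here) that sorts the 5,3,8,6,4 sequence from the example. The same main-method pattern works for each of the implementations below.

import java.util.Arrays;

// Hypothetical demo class, for illustration only.
public class BubbleSortDemo {
    public static void main(String[] args) {
        int[] arr = {5, 3, 8, 6, 4};
        BubbleSort.bubbleSort(arr);
        System.out.println(Arrays.toString(arr));   // expected: [3, 4, 5, 6, 8]
    }
}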
Selection Sort

The idea of selection sort is somewhat like bubble sort: each pass places the smallest remaining element at the front. But the process differs. Bubble sort works through adjacent comparisons and swaps, while selection sort selects from the whole remaining sequence. For example, selection-sort the unordered sequence 5,3,8,6,4. The first pass selects the minimum among the elements from 5 onward and swaps it with 5; that is, 3 and 5 are exchanged, and the sequence becomes 3,5,8,6,4. Selecting and swapping over the remaining subsequence in turn eventually yields an ordered sequence. Selection sort can in fact be seen as an optimization of bubble sort, since the goal is the same, but selection sort swaps only after the minimum has been determined, greatly reducing the number of swaps. Its time complexity is also O(n^2).

Implementation code:

/**
 * @Description: <p>Simple selection sort implementation</p>
 * @author Wang Xu
 * @time 2016-3-3 9:13:35 PM
 */
public class SelectSort {

    public static void selectSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        int minIndex = 0;
        for (int i = 0; i < arr.length - 1; i++) {   // only n-1 passes are needed
            minIndex = i;
            for (int j = i + 1; j < arr.length; j++) {   // start from i+1: minIndex already defaults to i
                if (arr[j] < arr[minIndex]) {
                    minIndex = j;
                }
            }
            if (minIndex != i) {   // a smaller value was found, so swap
                swap(arr, i, minIndex);
            }
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
Insertion Sort

Insertion sort does not work by swapping positions; instead, it compares to find the appropriate place to insert each element. We have all played cards, especially with a large hand. While being dealt, you may organize your cards; how do you do it with many cards? You take each new card and insert it into a suitable position. That principle is exactly insertion sort. For example, insertion-sort the unordered sequence 5,3,8,6,4. First assume the first number's position is correct; when holding your first card, there is nothing to organize. Then 3 is inserted in front of 5, shifting 5 back one position, giving 3,5,8,6,4. Organizing cards works the same way. Then 8 stays put; 6 is inserted in front of 8, shifting 8 one position; 4 is inserted in front of 5, shifting everything from 5 onward backwards. Note that when inserting a number, the numbers before it are guaranteed to already be in order. The time complexity of simple insertion sort is also O(n^2).

Implementation code:

/**
 * @Description: <p>Simple insertion sort implementation</p>
 * @author Wang Xu
 * @time 2016-3-3 9:38:55 PM
 */
public class InsertSort {

    public static void insertSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        for (int i = 1; i < arr.length; i++) {   // assume the first element is in place
            int j = i;
            int target = arr[i];   // the element to be inserted
            // shift larger elements one position to the right
            while (j > 0 && target < arr[j - 1]) {
                arr[j] = arr[j - 1];
                j--;
            }
            // insert
            arr[j] = target;
        }
    }
}
Quick Sort

Quick sort has a lofty name, and in practice it is indeed among the best-performing sorting algorithms. Although it sounds high-end, its idea actually derives from bubble sort: bubble sort moves the smallest element to the top through adjacent comparisons and swaps, while quick sort compares and exchanges small and large numbers across the sequence, so that not only do small numbers bubble to the top, large numbers also sink to the bottom.

For example, quick-sort the unordered sequence 5,3,8,6,4. The idea: the right pointer scans for a number smaller than the pivot, the left pointer scans for a number larger than the pivot, and the two are exchanged.

5,3,8,6,4: use 5 as the pivot for comparison; the goal is to move everything smaller than 5 to its left and everything larger than 5 to its right.

5,3,8,6,4: first set two pointers i and j at the two ends; the j pointer scans first (think about why? This is answered below). It stops at 4, which is smaller than 5. Then i scans and stops at 8, which is larger than 5. Swap the elements at i and j.

5,3,4,6,8: the j pointer continues scanning; when j reaches 4 the two pointers meet and scanning stops. Then 4 is exchanged with the pivot.

4,3,5,6,8: one partition achieves the goal of having everything smaller than 5 on the left and everything larger than 5 on the right. Then the left and right subsequences are sorted recursively, finally yielding an ordered sequence.

One question remains: why must the j pointer move first? First of all, this is not absolute; it depends on the position of the pivot. When the two pointers finally meet, the pivot is swapped to the meeting position. We usually select the first number as the pivot, so the pivot is on the left; the number at the final meeting point is exchanged with the pivot, and that number must therefore be smaller than the pivot. Hence the j pointer moves first, to find a number smaller than the pivot. For instance, on 5,3,8,6,4, if i moved first it would stop at 8 and the pointers could meet at 6; swapping the pivot there would put 6, a number larger than the pivot, on the pivot's left.

Quick sort is unstable (the long-distance swaps can carry two equal keys across each other), and its average time complexity is O(n log n).

Implementation code:

/**
 * @Description: <p>Quick sort implementation</p>
 * @author Wang Xu
 * @time 2016-3-3 5:07:29 PM
 */
public class QuickSort {

    // one partition pass
    public static int partition(int[] arr, int left, int right) {
        int pivotKey = arr[left];
        int pivotPointer = left;
        while (left < right) {
            while (left < right && arr[right] >= pivotKey)
                right--;
            while (left < right && arr[left] <= pivotKey)
                left++;
            swap(arr, left, right);   // swap the larger value to the right, the smaller to the left
        }
        swap(arr, pivotPointer, left);   // finally, swap the pivot into the middle
        return left;
    }

    public static void quickSort(int[] arr, int left, int right) {
        if (left >= right)
            return;
        int pivotPos = partition(arr, left, right);
        quickSort(arr, left, pivotPos - 1);
        quickSort(arr, pivotPos + 1, right);
    }

    public static void sort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        quickSort(arr, 0, arr.length - 1);
    }

    public static void swap(int[] arr, int left, int right) {
        int temp = arr[left];
        arr[left] = arr[right];
        arr[right] = temp;
    }
}

In fact, the code above can be further optimized. The pivot is already saved in pivotKey, so there is no need for a temp variable on every swap; while the two pointers move inward, the values can simply overwrite each other in turn. This reduces both the space used and the number of assignment operations. The optimized code follows:

/**
 * @Description: <p>Quick sort implementation (optimized partition)</p>
 * @author Wang Xu
 * @time 2016-3-3 5:07:29 PM
 */
public class QuickSort {

    /**
     * One partition pass.
     * @param arr
     * @param left
     * @param right
     * @return the final position of the pivot
     */
    public static int partition(int[] arr, int left, int right) {
        int pivotKey = arr[left];
        while (left < right) {
            while (left < right && arr[right] >= pivotKey)
                right--;
            arr[left] = arr[right];   // overwrite: move the smaller value to the left
            while (left < right && arr[left] <= pivotKey)
                left++;
            arr[right] = arr[left];   // overwrite: move the larger value to the right
        }
        arr[left] = pivotKey;   // finally, drop the pivot into the middle
        return left;
    }

    /**
     * Recursively partition and sort the subsequences.
     * @param arr
     * @param left
     * @param right
     */
    public static void quickSort(int[] arr, int left, int right) {
        if (left >= right)
            return;
        int pivotPos = partition(arr, left, right);
        quickSort(arr, left, pivotPos - 1);
        quickSort(arr, pivotPos + 1, right);
    }

    public static void sort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        quickSort(arr, 0, arr.length - 1);
    }
}

To summarize the idea of quick sort: bubbling + halving + recursive division. Let it sink in slowly...

Heap Sort

Heap sort implements selection sort using a heap; its idea is the same as simple selection sort. The following takes the big-top (max-) heap as an example. Note: to sort in ascending order, use a big-top heap rather than a small-top heap, because the heap-top element is swapped to the tail of the sequence.

First, implementing heap sort requires solving two problems:

1. How do we build a heap from an unordered sequence of keys?

2. After outputting the heap-top element, how do we adjust the remaining elements into a new heap?

For the first problem: a linear array can represent a heap, and building a heap from the initial unordered sequence requires adjusting from bottom to top, starting at the first non-leaf element, until the whole array is one heap.

For the second problem, how is the heap adjusted? First, swap the heap-top and last elements. Then, because the left and right subtrees of the new top each still satisfy the heap property and only the top element may violate it, repeatedly swap the current node with the larger of its two children (for a big-top heap), continuing down until a leaf. This top-to-leaf adjustment is called sifting (percolating down).
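Before the code, it may help to pin down the index arithmetic that sifting relies on. The following is a minimal illustrative sketch (HeapIndexDemo is a name invented here), assuming the 0-based array layout used in the implementation below, where the children of node i sit at 2i+1 and 2i+2. Note that the "n/2-th element" in the next paragraph counts from 1, which corresponds to index n/2 - 1 in 0-based code.

// Hypothetical demo, for illustration only: 0-based heap index arithmetic.
public class HeapIndexDemo {
    public static void main(String[] args) {
        int n = 8;   // length of the example sequence used further below
        System.out.println("last non-leaf index: " + (n / 2 - 1));   // prints 3
        for (int i = 0; i <= n / 2 - 1; i++) {
            int left = 2 * i + 1;
            int right = 2 * i + 2;
            System.out.println("node " + i + " -> children at " + left
                    + (right < n ? " and " + right : " only"));
        }
    }
}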

The process of building a heap from an unordered sequence is a process of repeated sifting. If the sequence is viewed as a complete binary tree, the last non-terminal node is the n/2-th element (counting from 1), so sifting can start from there. For example:

Heap-sorting the sequence 49,38,65,97,76,13,27,49: the initial heap is built and then repeatedly adjusted as described above (the original illustration of the build-and-adjust process is omitted here).

Implementation code:

/**
 * @Description: <p>Heap sort implementation, taking the big-top heap as the example.</p>
 * @author Wang Xu
 * @time 2016-3-4 9:26:02 AM
 */
public class HeapSort {

    /**
     * Heap sift-down: except possibly at start, the range start..end already
     * satisfies the big-top heap property; after adjusting, start..end is a
     * big-top heap.
     * @param arr   array to adjust
     * @param start start index
     * @param end   end index
     */
    public static void heapAdjust(int[] arr, int start, int end) {
        int temp = arr[start];
        for (int i = 2 * start + 1; i <= end; i = 2 * i + 1) {
            // the children of node i are at 2*i+1 and 2*i+2; pick the larger child
            if (i < end && arr[i] < arr[i + 1]) {
                i++;
            }
            if (temp >= arr[i]) {
                break;   // the heap property already holds below this point
            }
            arr[start] = arr[i];   // move the child up
            start = i;             // continue sifting from the child's position
        }
        arr[start] = temp;   // drop temp into its final position
    }

    public static void heapSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        // build the big-top heap (starting at arr.length/2 is safe: that index
        // is already a leaf, and sifting a leaf is a no-op)
        for (int i = arr.length / 2; i >= 0; i--) {
            heapAdjust(arr, i, arr.length - 1);
        }
        // repeatedly swap the heap top to the tail and re-adjust the rest
        for (int i = arr.length - 1; i >= 0; i--) {
            swap(arr, 0, i);
            heapAdjust(arr, 0, i - 1);
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
Shell Sort

Shell sort is an efficient variant of insertion sort, also known as diminishing increment sort. In simple insertion sort, if the input is already in order the time complexity is O(n), and if the sequence is basically ordered, direct insertion sort is very efficient. Shell sort takes advantage of this feature. The basic idea: divide the whole sequence of records into several subsequences and direct-insertion-sort each; once the records of the whole sequence are basically ordered, perform one final direct insertion sort over all records.

For example (the original illustration is omitted): the first pass sorts with an increment of 5 and the second pass with an increment of 3. As the sorting process shows, the characteristic of Shell sort is that the subsequences are not formed by simple contiguous segmentation; rather, each subsequence consists of records separated by some increment. Because in the first passes each record's key is compared only with keys in the same subsequence, records with smaller keys move forward not step by step but in jumps, so that by the last pass the whole sequence is basically ordered and only a few comparisons and moves are needed. Shell sort is therefore more efficient than direct insertion sort.

The analysis of Shell sort is complicated; its time complexity is a function of the increment sequence and touches on some unsolved mathematical problems. But on the basis of extensive experiments, when n is within a certain range the time complexity can reach O(n^1.3).

Implementation code:

/**
 * @Description: <p>Shell sort implementation</p>
 * @author Wang Xu
 * @time 2016-3-3 10:53:55 PM
 */
public class ShellSort {

    /**
     * One insertion pass of Shell sort.
     * @param arr array to sort
     * @param d   increment (gap)
     */
    public static void shellInsert(int[] arr, int d) {
        for (int i = d; i < arr.length; i++) {
            int j = i - d;
            int temp = arr[i];   // record the value to insert
            while (j >= 0 && arr[j] > temp) {   // scan backwards for its position
                arr[j + d] = arr[j];   // shift back by one gap
                j -= d;
            }
            if (j != i - d)   // some larger element was shifted
                arr[j + d] = temp;
        }
    }

    public static void shellSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        int d = arr.length / 2;
        while (d >= 1) {
            shellInsert(arr, d);
            d /= 2;
        }
    }
}
Merge Sort

Merge sort takes a different approach. It uses the idea of recursive division, which makes it easier to understand: recursively divide into subproblems, then merge the results. The sequence to be sorted is viewed as two ordered subsequences, which are merged; each subsequence is in turn viewed as two ordered subsequences, and so on. Looked at bottom-up, elements are first merged in pairs, then the two-element runs are merged four by four... eventually forming one ordered sequence. The space complexity is O(n) and the time complexity is O(n log n).

For example, merge-sorting 5,3,8,6,4 bottom-up: merge pairs to get (3,5), (6,8), (4); merge again to get (3,5,6,8), (4); one final merge gives 3,4,5,6,8.

Implementation code:

/**
 * @Description: <p>Merge sort implementation</p>
 * @author Wang Xu
 * @time 2016-3-4 8:14:20 AM
 */
public class MergeSort {

    public static void mergeSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        mSort(arr, 0, arr.length - 1);
    }

    /**
     * Recursive division.
     * @param arr   array to sort
     * @param left  left index
     * @param right right index
     */
    public static void mSort(int[] arr, int left, int right) {
        if (left >= right)
            return;
        int mid = (left + right) / 2;
        mSort(arr, left, mid);          // recursively sort the left half
        mSort(arr, mid + 1, right);     // recursively sort the right half
        merge(arr, left, mid, right);   // merge the two ordered halves
    }

    /**
     * Merge the two ordered runs [left, mid] and [mid+1, right].
     * @param arr   array containing the runs
     * @param left  left index
     * @param mid   middle index
     * @param right right index
     */
    public static void merge(int[] arr, int left, int mid, int right) {
        int[] temp = new int[right - left + 1];   // auxiliary array
        int i = left;
        int j = mid + 1;
        int k = 0;
        while (i <= mid && j <= right) {
            if (arr[i] <= arr[j]) {
                temp[k++] = arr[i++];
            } else {
                temp[k++] = arr[j++];
            }
        }
        while (i <= mid) {
            temp[k++] = arr[i++];
        }
        while (j <= right) {
            temp[k++] = arr[j++];
        }
        for (int p = 0; p < temp.length; p++) {
            arr[left + p] = temp[p];
        }
    }
}
Counting Sort

If an interviewer asks you to write a sorting algorithm with O(n) time complexity, do not blurt out: that's impossible! Although the lower bound for comparison-based sorting is O(n log n) (see the appendix), there are sorts with linear time complexity, under a precondition: the numbers to be sorted must be integers within a certain range, and counting sort needs extra auxiliary space. The basic idea: use each value to be sorted as a subscript into a counting array and tally how many times it occurs; then output the subscripts in order, each repeated by its count, to obtain the ordered sequence. For example, for the input 2,5,3,0,2,3 the counts are one 0, two 2s, two 3s, and one 5, so the output is 0,2,2,3,3,5.

Implementation code:

import java.util.Arrays;

/**
 * @Description: <p>Counting sort implementation</p>
 * @author Wang Xu
 * @time 2016-3-4 4:52:02 PM
 */
public class CountSort {

    public static void countSort(int[] arr) {   // assumes non-negative integers
        if (arr == null || arr.length == 0)
            return;
        int max = max(arr);
        int[] count = new int[max + 1];
        Arrays.fill(count, 0);   // new int[] is already zero-filled; kept for clarity
        for (int i = 0; i < arr.length; i++) {
            count[arr[i]]++;   // tally each value
        }
        int k = 0;
        for (int i = 0; i <= max; i++) {   // write the values back in order
            for (int j = 0; j < count[i]; j++) {
                arr[k++] = i;
            }
        }
    }

    public static int max(int[] arr) {
        int max = Integer.MIN_VALUE;
        for (int ele : arr) {
            if (ele > max)
                max = ele;
        }
        return max;
    }
}
Bucket Sort

Bucket sort is an improvement and generalization of counting sort, although much material online confuses the two. In fact, bucket sort is considerably more complicated than counting sort.

The analysis and explanation of bucket sort below draw on (with changes) this article: http://hxraid.iteye.com/blog/647759

The basic idea of bucket sort:

Suppose there is a sequence of n keys to be sorted, K[1..n]. First divide this sequence into m subranges (buckets). Then, based on some mapping function, map each key k of the sequence to bucket i (that is, to subscript i of the bucket array B), so that the key becomes an element of B[i] (each bucket B[i] holds a subsequence of roughly n/m keys). Next, sort all the elements within each bucket B[i] (quick sort can be used). Finally, enumerate the entire contents of B[0]..B[m-1] in order to obtain the ordered sequence. Here bIndex = f(key), where bIndex is the subscript into the bucket array B (that is, the bIndex-th bucket) and key is a key of the sequence to be sorted. The key to bucket sort's efficiency is the mapping function, which must satisfy: if k1 < k2, then f(k1) <= f(k2). In other words, the smallest datum in B[i] must be greater than the largest datum in B[i-1]. Clearly, the choice of mapping function is closely tied to the characteristics of the data itself.

For example:

Suppose the sequence to be sorted is K = {49, 38, 35, 97, 76, 73, 27, 49}, with all data between 0 and 99. We set up 10 buckets and define the mapping function f(k) = k/10. The first key, 49, is placed in bucket 4 (49/10 = 4). All the remaining keys are placed into buckets in turn, and each non-empty bucket is quick-sorted. Outputting the data of each B[i] in order yields the ordered sequence.

Bucket sort analysis:

Bucket sort uses the mapping function to eliminate almost all of the comparison work. Computing the f(k) values plays the same role as partitioning in quick sort or forming subsequences in Shell sort and merge sort: the large volume of data is pre-divided into basically ordered blocks (buckets). Afterwards, only the small amount of data inside each bucket needs a comparison-based sort.

The time complexity of bucket-sorting n keys consists of two parts:

(1) Looping to compute the bucket mapping function for each key, which is O(n).

(2) Sorting the data within each bucket using a comparison-based sorting algorithm, with time complexity ∑ O(n_i * log n_i), where n_i is the amount of data in the i-th bucket.

Clearly, part (2) is the determinant of bucket sort's performance. Minimizing the amount of data per bucket is the only way to improve efficiency (because the best average time complexity achievable by comparison-based sorting is O(n log n)). Therefore, we should try to do the following two things:

(1) The mapping function f(k) should distribute the n data items evenly across the m buckets, so that each bucket holds about n/m items.

(2) Increase the number of buckets as much as possible. In the extreme case each bucket receives only one item, which completely avoids comparison-based sorting within buckets. Of course, this is not easy to achieve: with huge amounts of data, such an f(k) would make the bucket array enormous and waste space badly. So this is a tradeoff between time cost and space cost.

For n data items and m buckets, with an average of n/m items per bucket, the average time complexity of bucket sort is:

O(n) + O(m * (n/m) * log(n/m)) = O(n + n*(log n - log m)) = O(n + n*log n - n*log m)

When n = m, that is, in the extreme case of one item per bucket, bucket sort reaches its best efficiency of O(n).

Summary: the average time complexity of bucket sort is linear, O(n + c), where c = n*(log n - log m). The closer the number of buckets m is to the number of items n, the higher the efficiency, with the best time complexity reaching O(n). Of course, the space complexity of bucket sort is O(n + m); if the input data is massive and the number of buckets is also huge, the space cost is steep. In addition, bucket sort is stable, provided the sort used within each bucket is stable.

Implementation code:

import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

/**
 * @Description: <p>Bucket sort implementation</p>
 * @author Wang Xu
 * @time 2016-3-4 7:39:31 PM
 */
public class BucketSort {

    public static void bucketSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        int bucketNums = 10;   // 10 buckets by default; assumes keys in [0, 100)
        List<List<Integer>> buckets = new ArrayList<List<Integer>>();   // the bucket array
        for (int i = 0; i < bucketNums; i++) {
            buckets.add(new LinkedList<Integer>());   // a linked list suits each bucket
        }
        // distribute the keys into buckets
        for (int i = 0; i < arr.length; i++) {
            buckets.get(f(arr[i])).add(arr[i]);
        }
        // sort each non-empty bucket
        for (int i = 0; i < buckets.size(); i++) {
            if (!buckets.get(i).isEmpty()) {
                Collections.sort(buckets.get(i));   // sort within the bucket
            }
        }
        // write the ordered buckets back into the array
        int k = 0;
        for (List<Integer> bucket : buckets) {
            for (int ele : bucket) {
                arr[k++] = ele;
            }
        }
    }

    /**
     * Mapping function: key -> bucket index.
     * @param x
     * @return
     */
    public static int f(int x) {
        return x / 10;
    }
}
Radix Sort

Radix sort differs from the sorts above in that it needs no comparisons between record keys. Radix sort is a method that sorts single-logical-key records by means of multi-key sorting. Multi-key sorting means there are multiple keys with different priorities. For example, ranking grades: if two people have the same total, the one with the higher Chinese score goes first; if the Chinese scores are also the same, the one with the higher math score goes first, and so on. For numbers, the ones, tens, and hundreds digits are keys of different priorities; to sort in ascending order, the ones, tens, and hundreds are taken in increasing priority. Radix sort is realized through repeated distribution and collection, distributing and collecting on the lower-priority key first, then moving up.

For example, to sort a set of three-digit numbers, first distribute them into buckets by the ones digit and collect, then by the tens digit, and finally by the hundreds digit; after the last collection the sequence is ordered (the original illustration is omitted).

Implementation code:

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

/**
 * @Description: <p>Radix sort implementation</p>
 * @author Wang Xu
 * @time 2016-3-4 8:29:52 PM
 */
public class RadixSort {

    public static void radixSort(int[] arr) {   // assumes non-negative integers
        if (arr == null || arr.length == 0)
            return;
        int maxBit = getMaxBit(arr);
        for (int i = 1; i <= maxBit; i++) {
            List<List<Integer>> buf = distribute(arr, i);   // distribute
            collect(arr, buf);                              // collect
        }
    }

    /**
     * Distribute: place each element into the bucket given by its i-th digit.
     * @param arr  array to distribute
     * @param iBit which digit to distribute on (1 = ones, 2 = tens, ...)
     * @return the filled buckets
     */
    public static List<List<Integer>> distribute(int[] arr, int iBit) {
        List<List<Integer>> buf = new ArrayList<List<Integer>>();
        for (int j = 0; j < 10; j++) {
            buf.add(new LinkedList<Integer>());
        }
        for (int i = 0; i < arr.length; i++) {
            buf.get(getNBit(arr[i], iBit)).add(arr[i]);
        }
        return buf;
    }

    /**
     * Collect: gather the distributed elements back into arr, bucket by bucket.
     * @param arr array to collect into
     * @param buf the buckets
     */
    public static void collect(int[] arr, List<List<Integer>> buf) {
        int k = 0;
        for (List<Integer> bucket : buf) {
            for (int ele : bucket) {
                arr[k++] = ele;
            }
        }
    }

    /**
     * Get the maximum number of digits over all elements.
     * @param arr
     * @return
     */
    public static int getMaxBit(int[] arr) {
        int max = Integer.MIN_VALUE;
        for (int ele : arr) {
            int len = (ele + "").length();
            if (len > max)
                max = len;
        }
        return max;
    }

    /**
     * Get the n-th digit of x (1 = ones); 0 if x has fewer than n digits.
     * @param x
     * @param n
     * @return
     */
    public static int getNBit(int x, int n) {
        String sx = x + "";
        if (sx.length() < n)
            return 0;
        else
            return sx.charAt(sx.length() - n) - '0';
    }
}
Summary

In the introduction and analysis above we covered three simple sorts, bubble sort, selection sort, and insertion sort, together with their more efficient variants: quick sort, heap sort, and Shell sort. We then analyzed merge sort, based on the divide-and-conquer idea of recursion, and three linear sorts: counting sort, bucket sort, and radix sort. We can see that a sorting algorithm is either simple, or it achieves efficiency by exploiting the characteristics of a simple sort, or it trades space for time in particular situations. No sorting method is fixed; choose, and even combine, them according to the specific requirements and scenario to achieve efficiency and stability. There is no best sort, only the most suitable sort.

Below is a summary of each sorting algorithm's usage scenarios and applicable occasions.

1. In average time, quick sort is the most efficient, but its worst-case time performance is inferior to heap sort and merge sort. Comparing the latter two: merge sort uses less time when n is large but needs more auxiliary space.

2. The simple sorts mentioned above are bubble sort, insertion sort, and simple selection sort (Shell sort excluded). Among them direct insertion sort is the simplest; when the sequence is basically ordered or n is small, direct insertion is a good method, which is why it is often used in combination with other methods such as quick sort and merge sort.

3. The time complexity of radix sort can also be written as O(d*n), so it is best suited to sequences with a large n and small keys. If the keys are large and most records in the sequence have distinct highest-order keys, the sequence can also be split by the highest key into several small subsequences, which are then sorted with direct insertion.

4. Comparing stability: radix sort is a stable internal sorting method, and among the O(n^2) simple sorts, bubble sort and insertion sort are stable as well (simple selection sort, as implemented above with swapping, is not). The sorts with better time performance, such as quick sort, heap sort, and Shell sort, are all unstable. Stability must be weighed against the specific requirements.

5. Most of the implementations above use a linear array storage structure. Some sorts do better on other structures; insertion sort, for example, works well on a linked list, which saves the time spent shifting elements. The concrete storage structure differs across implementation versions.
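For quick reference, the table below (added here as a summary; the figures follow the analyses above and the usual textbook values) collects the properties of the sorts discussed:

Algorithm      | Average time   | Worst time  | Extra space | Stable
---------------+----------------+-------------+-------------+-------
Bubble sort    | O(n^2)         | O(n^2)      | O(1)        | yes
Selection sort | O(n^2)         | O(n^2)      | O(1)        | no
Insertion sort | O(n^2)         | O(n^2)      | O(1)        | yes
Shell sort     | about O(n^1.3) | O(n^2)      | O(1)        | no
Quick sort     | O(n log n)     | O(n^2)      | O(log n)    | no
Heap sort      | O(n log n)     | O(n log n)  | O(1)        | no
Merge sort     | O(n log n)     | O(n log n)  | O(n)        | yes
Counting sort  | O(n + k)       | O(n + k)    | O(k)        | yes
Bucket sort    | O(n + c)       | O(n^2)      | O(n + m)    | yes*
Radix sort     | O(d * n)       | O(d * n)    | O(n + r)    | yes

Here k is the value range, m the number of buckets, c = n*(log n - log m), d the number of digits, and r the radix (10 for decimal digits). * Bucket sort is stable only if the sort used inside each bucket is stable.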

Appendix: proof that comparison-based sorting has a minimum time bound of O(n log n):

The lower bound for comparison-based sorting is proved with a decision tree: the height of the decision tree is Ω(n lg n), which yields the lower bound on comparison sorting.

First, the decision tree. A decision tree is a binary tree in which each node represents the set of orderings of the elements that are still possible given the comparisons made so far, and each comparison outcome corresponds to an edge of the tree. Some binary-tree facts: if T is a binary tree of depth d, then T has at most 2^d leaves; equivalently, a binary tree with L leaves has depth at least log L. A decision tree that sorts n elements must have n! leaves (because n numbers admit n! different orderings), so the depth of the decision tree is at least log(n!), that is, at least log(n!) comparisons are needed. And

log(n!) = log n + log(n-1) + log(n-2) + ... + log 2 + log 1
        >= log n + log(n-1) + log(n-2) + ... + log(n/2)
        >= (n/2) log(n/2)
        = (n/2) log n - n/2
        = O(n log n)

So the minimum time complexity of any comparison-only sorting algorithm is O(n log n).

Resources:

    • "Data Structure" Min 聯繫 authoring
    • Bucket sort analysis: http://hxraid.iteye.com/blog/647759
    • Analysis and introduction of some sorting algorithms: http://www.cnblogs.com/weixliu/archive/2012/12/23/2829671.html
