Eight sorting algorithms

Source: Internet
Author: User
Tags: repost

Contents

    • 1. Direct insertion sort (Straight Insertion Sort)
    • 2. Shell sort (Shell Sort)
    • 3. Direct selection sort (Straight Selection Sort)
    • 4. Heap sort (Heap Sort)
    • 5. Bubble sort (Bubble Sort)
    • 6. Quick sort (Quick Sort)
    • 7. Merge sort (Merge Sort)
    • 8. Bucket sort (Bucket Sort) / Radix sort (Radix Sort)
    • 9. Performance comparison of the sorting algorithms

Sorting falls into internal and external sorting. In internal sorting, the records are sorted entirely in memory; in external sorting, the data set is too large to hold all the records at once, so external storage must be accessed during the sort. The eight sorting algorithms discussed here are all internal sorts.

Architecture diagram of the sorting algorithm family:

1. Direct insertion sort (Straight Insertion Sort)

  Basic idea: treat the sequence as an ordered part (initially containing just one element) followed by an unordered part, and insert the elements of the unordered part one by one into their proper positions in the ordered part, yielding the final sorted sequence.

  Algorithm Flow:

1) Initially, a[0] forms the ordered region and the unordered region is a[1, ..., n-1], so i=1;

2) Insert a[i] into the current ordered region a[0, ..., i-1];

3) i++, and repeat step 2) until i=n-1 has been processed; the sort is then done.

  Time complexity: O (n^2).

  Example: initial unordered sequence: 49, 38, 65, 97, 76, 13, 27, 49

  Note: if an element equal to the one being inserted is encountered, the element being inserted is placed behind the equal element. The relative order of equal elements is therefore unchanged: the order they had in the original unordered sequence is the order they have in the sorted sequence, so insertion sort is stable.

C++ implementation:

// Direct insertion sort, version 1
void StraightInsertionSort1(int a[], int n)
{
    int i, j;
    for (i = 1; i < n; i++)
    {
        // Find the position where a[i] should be inserted
        for (j = 0; j < i; j++)
            if (a[i] < a[j])
                break;
        // Insert, shifting the remaining elements back by one
        if (j != i)
        {
            int temp = a[i];
            for (int k = i - 1; k >= j; k--)
                a[k + 1] = a[k];
            a[j] = temp;
        }
    }
    PrintDataArray(a, n);
}

Two simplified versions follow; the third is recommended.

// Direct insertion sort, version 2: search and shift in the same pass
void StraightInsertionSort2(int a[], int n)
{
    int i, j;
    for (i = 1; i < n; i++)
        if (a[i] < a[i - 1])
        {
            int temp = a[i];
            for (j = i - 1; j >= 0 && a[j] > temp; j--)
                a[j + 1] = a[j];
            a[j + 1] = temp;
        }
    PrintDataArray(a, n);
}

// Direct insertion sort, version 3: replace the shifting of version 2 with
// swaps (each comparison involves only two adjacent elements)
void StraightInsertionSort3(int a[], int n)
{
    for (int i = 1; i < n; i++)
        for (int j = i - 1; j >= 0 && a[j] > a[j + 1]; j--)
            Swap(a[j], a[j + 1]);
    PrintDataArray(a, n);
}
2. Shell sort (Shell Sort)

Shell sort was proposed by D.L. Shell in 1959 and is a substantial improvement over direct insertion sort. Shell sort is also called diminishing increment sort.

Basic idea: first split the full record sequence into several subsequences and direct-insertion-sort each of them; once the whole sequence is "basically ordered", direct-insertion-sort all the records in one final pass.

Algorithm Flow:

  1) Select an increment sequence t1, t2, ..., tk, where ti > tj for i < j, and tk = 1;

2) According to the number of increments k, perform k passes of sorting over the sequence;

3) In each pass, according to the current increment ti, split the sequence into subsequences whose elements are ti apart, and direct-insertion-sort each sub-table. Only when the increment factor is 1 is the whole sequence treated as a single table, whose length is the length of the entire sequence.

  Time complexity: O(n^(1+e)), where 0 < e < 1; Shell sort is highly efficient when the elements are basically ordered. Shell sort is an unstable sorting algorithm.

  Example of a Shell sort:

  C++ implementation:

// Shell sort
void ShellSort(int a[], int n)
{
    int i, j, gap;
    // Group the elements by gap
    for (gap = n / 2; gap > 0; gap /= 2)
        // Direct insertion sort within each group
        for (i = gap; i < n; i++)
            for (j = i - gap; j >= 0 && a[j] > a[j + gap]; j -= gap)
                Swap(a[j], a[j + gap]);
    PrintDataArray(a, n);
}

As the source code shows, Shell sort is direct insertion sort with a grouping strategy added on top.

3. Direct selection sort (Straight Selection Sort)

Basic idea: among the numbers to be sorted, select the minimum (or maximum) and exchange it with the number in the 1st position; then find the smallest (or largest) of the remaining numbers and exchange it with the number in the 2nd position; and so on, until the (n-1)-th element (the penultimate number) has been compared with the n-th element (the last number).

  Algorithm Flow:

1) Initially, the whole array is the unordered region a[0, ..., n-1], so i=0;

2) In the unordered region a[i, ..., n-1], select the minimum element and exchange it with a[i]; after the exchange, a[0, ..., i] is the ordered region;

3) i++, and repeat 2) until i=n-1; the sort is done.

  Time complexity: O(n^2). Direct selection sort is an unstable sorting algorithm.

Example of direct selection sort:

  C++ implementation:

// Direct selection sort
void StraightSelectionSort(int a[], int n)
{
    int i, j, minIndex;
    for (i = 0; i < n; i++)
    {
        minIndex = i;
        for (j = i + 1; j < n; j++)
            if (a[j] < a[minIndex])
                minIndex = j;
        Swap(a[i], a[minIndex]);
    }
    PrintDataArray(a, n);
}
4. Heap sort (Heap Sort)

Heap sort is a tree-based selection sort and an effective improvement on direct selection sort.

  The heap is defined as follows: a sequence of n elements (k1, k2, ..., kn) is called a heap if and only if it satisfies

    ki <= k2i and ki <= k2i+1 (or ki >= k2i and ki >= k2i+1), for i = 1, 2, ..., n/2.

From this definition, the heap-top element (the first element) must be the smallest item (in a small-top heap).
If a heap is stored in a one-dimensional array, it corresponds to a complete binary tree in which the value of every non-leaf node is no greater than (or no less than) the values of its children, and the value of the root node (the heap top) is the smallest (or largest). For example:

(a) Big-top heap sequence: (96, 83, 27, 38, 11, 09)

(b) Small-top heap sequence: (12, 36, 24, 85, 47, 30, 53, 91)

Basic idea: view the sequence of n numbers to be sorted as a sequentially stored complete binary tree (a binary tree stored in a one-dimensional array), and adjust it into a heap. Output the heap-top element to obtain the smallest (or largest) of the n elements: the root of the heap holds the minimum (or maximum). Then re-adjust the first (n-1) elements into a heap and output the heap top, obtaining the second smallest (or second largest) element. Repeat until the heap has only two nodes; exchange them, and an ordered sequence of n nodes results. This process is called heap sort.

  Time complexity: O(n·log n). Heap sort is an unstable sorting algorithm.

  Implementing heap sort therefore requires solving two problems:
1. How do we build a heap from n unordered numbers?
2. After outputting the heap-top element, how do we adjust the remaining n-1 elements into a new heap?

Let's start with the second question: after the heap top has been output, how do we rebuild a heap from the remaining n-1 elements?
How to adjust a small-top heap:

1) A heap of m elements outputs its top, leaving m-1 elements. Moving the bottom element to the top (exchanging the last element with the heap top) destroys the heap, but only because the root node no longer satisfies the heap property.

2) Exchange the root node with the smaller of its left and right children.

3) If it was exchanged with the left child and the left subtree's heap is now destroyed, i.e. the left subtree's root no longer satisfies the heap property, repeat step 2) on the left subtree.

4) If it was exchanged with the right child and the right subtree's heap is now destroyed, i.e. the right subtree's root no longer satisfies the heap property, repeat step 2) on the right subtree.

5) Continue these exchange operations on whichever subtree violates the heap property until a leaf node is reached; the heap is then rebuilt.

This adjustment process from the root node down to a leaf node is called sifting.

  Now the first question: how do we build the initial heap from n unordered elements?
Building the heap: building a heap from the initial sequence is a process of sifting over and over again.

1) In a complete binary tree of n nodes, the last node is a child of node n/2 (1-based), so node n/2 is the last non-leaf node.

2) Sifting starts with the subtree rooted at node n/2, turning that subtree into a heap.

3) After that, each preceding node's subtree is sifted in turn, making it a heap, until the root node is reached.

Initial heap-building process for the unordered sequence (49, 38, 65, 97, 76, 13, 27, 49):


C++ implementation:

// Heap sort, problem 2: how to adjust a heap
void HeapAdjusting(int a[], int root, int n)
{
    int temp = a[root];
    int child = 2 * root + 1;  // position of the left child
    while (child < n)
    {
        // Find the smaller of the two children
        if (child + 1 < n && a[child + 1] < a[child])
            child++;
        // If the smaller child is less than the parent, move the child up
        // and continue adjusting from the child's old position
        if (a[root] > a[child])
        {
            a[root] = a[child];
            root = child;
            child = 2 * root + 1;
        }
        else
            break;
    }
    // Place the original root value into its final position
    a[root] = temp;
}

// Heap sort, problem 1: how to build the initial heap
void HeapBuilding(int a[], int n)
{
    // Adjust from the last parent position, (n-1)/2, back to the root
    for (int i = (n - 1) / 2; i >= 0; i--)
        HeapAdjusting(a, i, n);
}

// Heap sort
void HeapSort(int a[], int n)
{
    // Build the initial heap
    HeapBuilding(a, n);
    // Starting from the last node, swap and re-adjust
    for (int i = n - 1; i > 0; i--)
    {
        // Swap the heap top with the last element of the current heap
        Swap(a[0], a[i]);
        // Re-adjust the shrunken heap after each swap
        HeapAdjusting(a, 0, i);
    }
}

A few more words on the power of heap sort: the heap can be viewed both as an algorithm and as a data structure, and it comes in small-top (min) and big-top (max) variants. A typical problem closely tied to the heap data structure is the top-k problem we often encounter.

Let's look at a big data top-k example:

Example: a search engine logs every query string a user submits, one per retrieval, each 1-255 bytes long. Suppose there are currently 10 million records (the query strings repeat heavily: although the total is 10 million, there are no more than 3 million distinct ones after deduplication; the more a query string repeats, the more users issued it and the more popular it is). Find the 10 most popular query strings, using no more than 1 GB of memory.

First of all, we know that this is a typical top-k problem.

The first tool to reach for when doing statistics over big data is a hash_map. So the first step is to traverse all 10 million queries and build a hash_map of about 3 million entries, where the key is a query and the value is the number of times that query occurs.

Once the hash_map is built, the next question is how to find the 10 hottest queries among the 3 million, which calls for a sorting algorithm. The simplest and most direct method is a full comparison sort, whose best time complexity is O(n·log n). Or, exploiting that the problem asks only for the top k, we can build an array from the first k queries and sort it, then iterate through all the remaining queries: whenever a query's count exceeds the smallest in the array, remove that smallest entry, insert the new query, and re-adjust the array order. Continuing this traversal gives a worst-case complexity of O(n·k).

But the top-k search can be optimized further with a small-top heap (min-heap). Following the analysis above, can we think of a data structure that supports both fast lookup and fast movement of elements? Yes: the heap.

Concretely: the heap top is the smallest number in the whole heap. To traverse n numbers, first store the first k of them in a small-top heap and assume they are the k largest we are looking for, with x1 > x2 > ... > xmin (the heap top). Then traverse the remaining (n-k) numbers, comparing each against the heap top: if the current xi is greater than the heap-top element xmin, put xi into the heap and update the whole heap, which costs O(log k); if xi < xmin, leave the heap unchanged. The complexity of the whole process is O(k) + O((n-k)·log k) = O(n·log k).
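The min-heap top-k procedure above can be sketched in C++ with std::priority_queue. This is an illustrative sketch, not code from the original article; the function name TopK and its signature are my own:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// Illustrative sketch: keep the k largest values seen so far in a
// min-heap; the heap top is always the smallest of those k values.
std::vector<int> TopK(const std::vector<int>& data, int k)
{
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap;
    for (int x : data)
    {
        if ((int)heap.size() < k)
            heap.push(x);          // the first k elements go straight in
        else if (x > heap.top())
        {
            heap.pop();            // x beats the current k-th largest:
            heap.push(x);          // O(log k) replace-and-sift update
        }
    }
    std::vector<int> result;       // drain the heap: ascending order
    while (!heap.empty()) { result.push_back(heap.top()); heap.pop(); }
    return result;
}
```

Because the heap never holds more than k elements, the extra memory stays O(k) even in the 10-million-query scenario above.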


The idea is consistent with the algorithm above; we merely replace the array with a min-heap as the data structure, cutting the cost of finding the element to replace from O(k) to O(log k). With the heap, the final time complexity drops to O(n·log k), a substantial improvement over the array-based O(n·k) method.

5. Bubble sort (Bubble Sort)

Basic idea: over the range of numbers not yet sorted, compare and adjust adjacent pairs in turn from top to bottom, letting larger numbers sink and smaller numbers rise. That is, whenever a comparison of two adjacent numbers finds them in the wrong order, exchange them. The effect of each pass is to sink the largest remaining element into place.

  Algorithm Flow:

1) Compare adjacent elements; if the earlier one is greater than the later one, exchange them. After one traversal from element 0 to element n-1 of the array, the largest element has sunk to position n-1;

2) Repeat step 1) on the remaining unsorted prefix, incrementing i each pass, until i=n-1.

  Time complexity: O(n^2). Bubble sort is a stable sorting algorithm (equal adjacent elements are never exchanged).

Example of bubble sort:

  C++ implementation:

// Bubble sort
void BubbleSort(int a[], int n)
{
    int i, j;
    for (i = 0; i < n; i++)
        // j runs from 1 up to n-i-1; each pass sinks the largest
        // remaining element to the end of the unsorted region
        for (j = 1; j < n - i; j++)
            if (a[j] < a[j - 1])
                Swap(a[j - 1], a[j]);
    PrintDataArray(a, n);
}
6. Quick sort (Quick Sort)

Basic idea: the quick sort algorithm is based on divide and conquer.

1) Take one number from the sequence as the pivot (base number);

2) Partition the sequence around the pivot, with everything less than the pivot on its left and everything greater on its right;

3) Repeat the partition operation on each interval until every interval contains only one number.

  Algorithm flow: (recursion + pit filling)

1) Set i=l, j=r; dig out the pivot to form the first pit at a[i];

2) Move j backward (j--) to find a number smaller than the pivot; when found, dig out this number a[j] to fill the previous pit a[i];

3) Move i forward (i++) to find a number larger than the pivot; when found, dig out this number a[i] to fill the previous pit a[j];

4) Repeat 2) and 3) until i=j, then fill the pivot into a[i].

  Time complexity: O(n·log n) on average; but if the initial sequence is already basically ordered, quick sort degenerates to O(n^2), behaving like bubble sort.

Example of quick sort:

(a) one partition pass:

(b) the whole sorting process:

  C++ implementation:

// Quick sort
void QuickSort(int a[], int l, int r)
{
    if (l < r)
    {
        int i = l, j = r, temp = a[i];  // a[i] is the first "pit"
        while (i < j)
        {
            // From right to left, find an element smaller than the pivot
            while (i < j && a[j] >= temp)
                j--;
            if (i < j)
                a[i++] = a[j];
            // From left to right, find an element larger than the pivot
            while (i < j && a[i] < temp)
                i++;
            if (i < j)
                a[j--] = a[i];
        }
        // Fill the pivot into the last pit
        a[i] = temp;
        // Recurse on both halves (divide and conquer)
        QuickSort(a, l, i - 1);
        QuickSort(a, i + 1, r);
    }
}
7. Merge sort (Merge Sort)

Basic idea: merge sort combines two (or more) ordered tables into a new ordered table. The sequence to be sorted is divided into subsequences, each of which is ordered; the ordered subsequences are then merged into one fully ordered sequence.

  Algorithm flow: (recursion + merging two ordered sequences into one ordered sequence)

  Time complexity: O(n·log n). Merge sort is a stable sorting algorithm.

Merge sort example:

  C++ implementation:

// Merge two ordered subsequences into one ordered sequence
void MergeArr(int a[], int first, int mid, int last, int temp[])
{
    int i = first, j = mid + 1;
    int m = mid, n = last;
    int k = 0;
    // Merge sequences a[first..mid] and a[mid+1..last] by comparison
    while (i <= m && j <= n)
    {
        if (a[i] < a[j])
            temp[k++] = a[i++];
        else
            temp[k++] = a[j++];
    }
    // Copy the remaining elements of whichever sequence is left
    while (i <= m)
        temp[k++] = a[i++];
    while (j <= n)
        temp[k++] = a[j++];
    for (i = 0; i < k; i++)
        a[first + i] = temp[i];
}

// Merge sort
void MergeSort(int a[], int first, int last, int temp[])
{
    if (first < last)
    {
        int mid = (first + last) / 2;
        MergeSort(a, first, mid, temp);
        MergeSort(a, mid + 1, last, temp);
        MergeArr(a, first, mid, last, temp);
    }
}
8. Bucket sort (Bucket Sort) / Radix sort (Radix Sort)

Before covering radix sort, let's first look at bucket sort:

Basic idea: partition the sequence into a finite number of buckets, then sort each bucket individually (possibly with a different sorting algorithm, or by applying bucket sort recursively). Bucket sort is a generalization of pigeonhole sort. When the values in the array to be sorted are evenly distributed, bucket sort runs in linear time O(n). Bucket sort is not a comparison sort, so it is not bound by the O(n·log n) lower bound.
Simply put: group the data into buckets, then sort the contents of each bucket.

For example, to sort an array A[1..N] of n integers in the range [1..1000]:

First, set up buckets, each covering a range of size 10: bucket B[1] stores the integers in [1..10], B[2] stores those in (10..20], ..., B[i] stores those in ((i-1)*10, i*10], for i = 1, 2, ..., 100, giving 100 buckets in total.

Then, scan A[1..N] from start to finish, placing each A[i] into its bucket B[j]. Sort the numbers inside each of the 100 buckets; bubble, selection, or even quick sort will do, and in general any sorting method may be used.

Finally, output the numbers in each bucket in order, and within each bucket from small to large, yielding a sequence in which all the numbers are ordered.
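The three steps above can be sketched as follows. This is an illustrative sketch, not code from the original article; the bucket width of 10, the range [1..1000], and the use of std::sort inside each bucket are assumptions for brevity (any per-bucket sort would do):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative sketch: bucket sort for integers in [1..1000],
// using 100 buckets of width 10.
void BucketSort(std::vector<int>& a)
{
    const int kBuckets = 100, kWidth = 10;
    std::vector<std::vector<int>> b(kBuckets);
    // Step 1: scan the array and drop each value into its bucket;
    // value v (1..1000) goes to bucket (v-1)/10
    for (int x : a)
        b[(x - 1) / kWidth].push_back(x);
    // Step 2: sort inside each bucket (std::sort stands in for
    // whatever per-bucket sort is chosen)
    for (auto& bucket : b)
        std::sort(bucket.begin(), bucket.end());
    // Step 3: output the buckets in order
    a.clear();
    for (const auto& bucket : b)
        a.insert(a.end(), bucket.begin(), bucket.end());
}
```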

Suppose there are n numbers and m buckets. If the numbers are evenly distributed, each bucket holds n/m numbers on average. If the numbers in each bucket are sorted with quick sort, the complexity of the whole algorithm is O(n + m*(n/m)*log(n/m)) = O(n + n*log n - n*log m).

As m approaches n, the complexity of bucket sort approaches O(n).

  Of course, the complexity calculation above assumes that the n input numbers are evenly distributed. That is a strong assumption, and in practice the effect is not always so good: if all the numbers fall into the same bucket, bucket sort degenerates into an ordinary sort.

Of the sorting algorithms above, most have time complexity O(n^2) and some O(n·log n), whereas bucket sort can achieve O(n). The downsides of bucket sort are:

1) First, higher space complexity: extra overhead is needed. Sorting occupies the space of two arrays: one for the array to be sorted and one for the buckets. For example, if the values to be sorted range from 0 to m-1, then m buckets are needed and the bucket array takes at least m space.

2) Second, the elements to be sorted must lie within a known, limited range.

Bucket sort is a kind of distribution sort. Distribution sorts do not compare keys; instead they require knowing specific properties of the sequence to be sorted in advance.

The basic idea of distribution sorting: in effect, many rounds of bucket sort.

Radix sort does not compare keywords during sorting; it uses repeated "allocate" and "collect" passes instead, and its time complexity can reach linear order: O(n).

Instance:

The 52 playing cards in a deck can be ordered on two keys, suit and face value, with the relations:
Suit: clubs < diamonds < hearts < spades
Face value: 2 < 3 < 4 < 5 < 6 < 7 < 8 < 9 < 10 < J < Q < K < A

  If the cards are sorted in ascending order by suit and then face value, an ordered sequence is obtained.

That is, for two cards of different suits, the card with the lower suit is smaller regardless of face value; only when the suits are the same is the order determined by the face value. This is multi-key sorting.

To obtain this sorted result, we discuss two methods.
Method 1: sort by suit first, splitting the deck into 4 groups: clubs, diamonds, hearts, spades. Then sort each group by face value, and finally concatenate the 4 groups.
Method 2: first set up 13 numbered groups (for face values 2, 3, ..., A) and deal the cards into the corresponding group by face value, giving 13 piles. Then set up 4 groups by suit (clubs, diamonds, hearts, spades); take the cards from the face-value-2 group and deal them into the corresponding suit group, then the cards from the face-value-3 group, and so on, so that each of the 4 suit groups ends up ordered by face value; finally, concatenate the 4 suit groups in order.

Suppose each record in the sequence of n elements carries d keys {k1, k2, ..., kd}. Sorting the sequence on the keys {k1, k2, ..., kd} means that any two records r[i] and r[j] (1 <= i < j <= n) satisfy the lexicographic order

    (k1[i], k2[i], ..., kd[i]) <= (k1[j], k2[j], ..., kd[j]),

where k1 is called the most significant key and kd the least significant key.

Two multi-key sorting methods:

Multi-key sorting can proceed in two directions: from the most significant key to the least significant, or from the least significant key to the most significant.

  Most significant digit first, the MSD method:

1) First group by k1, splitting the sequence into subsequences; records in the same subsequence have equal k1.

2) Split each group into subgroups by k2, and keep grouping by the successive keys until the subgroups are sorted by the least significant key kd.

3) Concatenate the groups to obtain an ordered sequence. Sorting the playing cards by suit and then face value (method 1) is the MSD method.

  Least significant digit first, the LSD method:

1) Sort on kd first, then on kd-1, repeating until the sequence has been sorted on k1.

2) Finally, concatenate the subsequences to obtain an ordered sequence. Sorting the playing cards by face value first and then suit (method 2) is the LSD method.

The basic idea of chained radix sort based on the LSD method:

The idea of "multi-keyword sorting" can implement "single-keyword sorting": a single numeric or character key can be viewed as a multi-key made up of its individual digits or characters, which can then be sorted by the allocate-collect method. This is called radix sort, where the number of possible values of each digit or character is called the radix. For example, the radix of a playing card's suit is 4 and the radix of its face value is 13. Cards may be arranged by suit first or by face value first. Arranging by suit: first deal the cards into 4 piles in suit order (allocate), then stack the piles together in that order (collect); then deal into 13 piles in face-value order (allocate), and stack them together in that order (collect). After two rounds of allocation and collection, the cards are in order.

Radix sort: sort by the lowest digit first and collect; then sort by the next higher digit and collect; and so on, up to the highest digit. Sometimes the attributes have priorities: sort by the low-priority attribute first, then by the high-priority one, so that the final order ranks records by the high-priority key first, and records with equal high-priority keys by the low-priority key. Radix sort is based on separate sorting and separate collecting, so it is stable.
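The low-digit-first allocate-and-collect process can be sketched for non-negative base-10 integers. This is an illustrative sketch, not code from the original article; the function name RadixSortLSD and the choice of vectors as buckets are my own:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative sketch: LSD radix sort for non-negative integers, base 10.
// Each pass distributes the numbers into 10 "buckets" by one digit
// (allocate) and reads them back in bucket order (collect); because each
// pass is stable, the sequence is sorted after the highest-digit pass.
void RadixSortLSD(std::vector<int>& a)
{
    if (a.empty()) return;
    int maxVal = *std::max_element(a.begin(), a.end());
    for (long long factor = 1; maxVal / factor > 0; factor *= 10)
    {
        std::vector<std::vector<int>> buckets(10);
        for (int x : a)                        // allocate by current digit
            buckets[(x / factor) % 10].push_back(x);
        a.clear();
        for (const auto& b : buckets)          // collect in digit order
            a.insert(a.end(), b.begin(), b.end());
    }
}
```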

9. Performance comparison of the sorting algorithms

  1) Summary of the stability, time complexity, and space complexity of each sorting algorithm:

Erratum: the space complexity of quick sort in the table above should be O(log2 n).

  Why is the space complexity of quick sort O(log2 n) ~ O(n)?

Quick sort's implementation relies on a stack for the recursion. The recursion depth of the stack is O(log2 n) on average, but when the whole sequence is already ordered, the stack depth reaches O(n).

Comparing the growth of the time-complexity functions:

2) In terms of time complexity:

(1) Quadratic order (O(n^2))
The simple sorts: direct insertion, direct selection, and bubble sort;
(2) Linearithmic order (O(n·log n))
Quick sort, heap sort, and merge sort;
(3) O(n^(1+e)) order, where e is a constant between 0 and 1

Shell sort;

(4) Linear order (O(n))
Radix sort, as well as bucket and bin sort.

Notes:

  (1) When the original table is ordered or basically ordered, direct insertion sort and bubble sort greatly reduce the number of comparisons and record moves, and the time complexity can drop to O(n);

(2) Quick sort is the opposite: when the original table is basically ordered, it degenerates toward bubble sort and its time complexity rises to O(n^2);

(3) Whether the original table is ordered has little effect on the time complexity of simple selection sort, heap sort, merge sort, and radix sort.

3) Stability: a sorting algorithm is stable if, whenever the sequence to be sorted contains multiple records with the same key, the relative order of those records is unchanged after sorting; if the relative order changes, the algorithm is unstable.

Benefit of stability: if a sorting algorithm is stable, then after sorting by one key and then sorting by another, the result of the first sort can serve the ordering on the second key. Radix sort works exactly this way: sort by the low digit first, then successively by the higher digits; elements equal in a higher digit keep their order from the lower digits. In addition, a stable algorithm can avoid redundant comparisons.
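This multi-key benefit can be shown in a short sketch (illustrative, not from the original article; the (suit, face) integer encoding and the name SortBySuitThenFace are hypothetical): sorting by the secondary key first and then stable-sorting by the primary key yields an order on (primary, secondary).

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Illustrative sketch: multi-key sort via a stable sort.
// Each card is a (suit, face) pair; suit is the primary key.
void SortBySuitThenFace(std::vector<std::pair<int, int>>& cards)
{
    // First sort on the secondary key (face value)
    std::sort(cards.begin(), cards.end(),
              [](const std::pair<int, int>& x, const std::pair<int, int>& y)
              { return x.second < y.second; });
    // Then stable-sort on the primary key (suit); cards with equal suits
    // keep the face-value order established by the previous pass
    std::stable_sort(cards.begin(), cards.end(),
                     [](const std::pair<int, int>& x, const std::pair<int, int>& y)
                     { return x.first < y.first; });
}
```

Had the second pass used an unstable sort, equal-suit cards could come out in any face-value order, losing the work of the first pass.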

  Stable sorting algorithms: bubble sort, insertion sort, merge sort, and radix sort.

Unstable sorting algorithms: selection sort, quick sort, Shell sort, and heap sort.

4) Guidelines for choosing a sorting algorithm:

Every sorting algorithm has advantages and disadvantages, so in practice one must choose according to the circumstances, or even combine several methods.

Criteria for choosing a sorting algorithm:

Many factors influence sorting, and the algorithm with the lowest average time complexity is not necessarily optimal; conversely, an algorithm with higher average time complexity may suit some particular situation better. Readability should also be considered, to ease software maintenance. In general, four things should be weighed:

(1) the number n of records to be sorted;

(2) the size of each record's data, i.e. the amount of information other than the keys;

(3) the structure and distribution of the keys;

(4) the requirements on sorting stability.

Let n be the number of elements to be sorted.

(1) When n is large, use a sort with time complexity O(n·log n): quick sort, heap sort, or merge sort.

Quick sort: currently considered the best comparison-based internal sorting method; when the keys to be sorted are randomly distributed, quick sort has the shortest average time;

Heap sort: requires only constant auxiliary space; suitable when memory is tight and stability is not required;

Merge sort: involves a certain amount of data movement, so it can be combined with insertion sort: first use insertion sort to obtain ordered runs of a certain length, then merge them, which improves efficiency.

(2) When n is large, memory space allows, and stability is required: merge sort.

(3) When n is small, direct insertion sort or direct selection sort can be used.

Direct insertion sort: when the elements are nearly ordered, it greatly reduces the number of comparisons and record moves;

Direct selection sort: when stability is not required, direct selection sort may be chosen.

(4) The traditional bubble sort is generally not used, or at least not used directly.

(5) Radix sort
It is a stable sorting algorithm, but it has some limitations:
    1. the keys must be decomposable;
    2. it works best when the keys have few digits and are densely distributed;
    3. for numbers, unsigned values are best; otherwise the mapping becomes more complex, though one can first sort the positives and negatives separately.
