Selection sort, quick sort, Shell sort, and heap sort are not stable sorting algorithms;
bubble sort, insertion sort, merge sort, and radix sort are stable sorting algorithms.
Bubble sort:
This is the most primitive, best-known, and slowest algorithm. It gets its name because its operation looks like bubbling. Its complexity is O(n^2). When the data is already in order, no exchange takes place, and with an early-exit check the cost drops to O(n).
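To make that best case concrete, here is a minimal bubble sort sketch (my own illustrative code, not from the original article) with the early-exit flag that yields O(n) on already ordered input:

// Bubble sort sketch with an early-exit flag. On input that is already in
// order the inner loop never swaps, the flag stays false, and the sort
// stops after a single O(n) pass.
void BubbleSort(int* pData, int count)
{
    for (int i = 0; i < count - 1; i++)
    {
        bool swapped = false;
        for (int j = 0; j < count - 1 - i; j++)
        {
            if (pData[j] > pData[j + 1])   // strict >, so equal elements are never swapped
            {
                int tmp = pData[j];
                pData[j] = pData[j + 1];
                pData[j + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped)   // no exchange in this pass: the data is ordered, stop early
            break;
    }
}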
Direct insertion sort: O(n^2)
Selection sort: O(n^2)
Quick sort: average time complexity O(log2(n)*n), the best of all the internal sorting methods listed here, and usually the best choice in practice.
Merge sort: O(log2(n)*n)
Heap sort: O(log2(n)*n)
Shell sort: roughly O(n^1.2)
I will not walk through the algorithm's behavior here, because it is very simple; let us analyze its complexity directly:
First of all, let's consider the ideal situation.
1. The size of the array is a power of 2, so that it can always be halved exactly. Assume it is 2 to the power k, that is, k = log2(n).
2. Each time, the value we select as the pivot is exactly the median, so the array is split into two equal halves.
The first level of recursion loops n times; the second level loops 2*(n/2) times in total, and so on.
So we get n + 2*(n/2) + 4*(n/4) + ... + n*(n/n) = n + n + n + ... + n = k*n = log2(n)*n.
So the algorithm complexity is O(log2(n)*n).
Any other situation can only be worse than this. The worst case is that the pivot selected each time is the minimum or maximum value; the sort then degenerates into something like the exchange (bubble) method, and because recursion is used, the situation is even worse. But how likely do you think that is to happen? You really do not have to worry about it. Practice has proved that in most cases quick sort is the best.
If you are worried about this problem, you can use heap sort, whose O(log2(n)*n) bound holds even in the worst case, but which is usually slower than quick sort (because of the cost of re-organizing the heap).
These past days, while writing, I have repeatedly run into the question of the stability of the common sorting algorithms, often as a multiple-choice question. For classmates who, like me, are not sure, it is not a question that can be answered off the top of one's head; of course, if you memorized from the data-structures book which algorithms are stable and which are not, it should be easy to handle.
This article is intended for people who cannot remember this, or who want to really understand why an algorithm is stable or unstable.
First of all, we should know what the stability of a sorting algorithm means. Informally, it guarantees that two equal elements keep the same relative order after sorting as before sorting. Formalized a little: if a[i] == a[j] and a[i] is originally in front of a[j], then after sorting a[i] is still in front of a[j].
Second, the benefit of stability. If the sorting algorithm is stable, then when sorting first by one key and then by another, the result of the first sort can be reused by the second. Radix sort works exactly this way: sort by the low digit first, then successively by the higher digits; elements whose high digits are equal keep the order established by the low-digit passes. In addition, if the sorting algorithm is stable, the number of element exchanges may be smaller for comparison-based algorithms (a personal impression, not verified).
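As a small sketch of this multi-key idea (my own illustrative code; the Record struct and its field names are assumptions, not from the article), using std::stable_sort from <algorithm>:

#include <algorithm>
#include <iostream>
#include <vector>

struct Record { int high, low; };   // two sort keys, illustrative names

int main()
{
    std::vector<Record> v = { {2, 1}, {1, 2}, {2, 0}, {1, 1} };
    // Sort by the low-order key first...
    std::stable_sort(v.begin(), v.end(),
        [](const Record& a, const Record& b) { return a.low < b.low; });
    // ...then stably by the high-order key; records with equal high keys
    // keep the low-key order established by the first pass.
    std::stable_sort(v.begin(), v.end(),
        [](const Record& a, const Record& b) { return a.high < b.high; });
    for (const Record& r : v)
        std::cout << r.high << "," << r.low << "  ";   // prints: 1,1  1,2  2,0  2,1
    return 0;
}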
Back to the topic: let us now analyze the stability of the common sorting algorithms, giving a simple reason for each.
(1) Bubble sort
Bubble sort moves small elements forward (or large elements backward). The comparison is between two adjacent elements, and the exchange also happens between those two elements. So, if two elements are equal, surely you will not be bored enough to swap them; and if two equal elements are not adjacent, then even after pairwise exchanges bring them next to each other, they still will not be exchanged. The relative order of equal elements therefore does not change, so bubble sort is a stable sorting algorithm.
(2) Selection sort
Selection sort chooses the smallest remaining element for each position: select the smallest for the first position, select the second smallest among the remaining elements for the second position, and so on, up to the (n-1)th element; the nth element needs no selection, because only the largest one is left. Now, if in one pass the current element is swapped with a smaller element that appears behind some element equal to the current one, the swap destroys stability. That sounds convoluted, so take an example: in the sequence 5 8 5 2 9, the first pass swaps the first element 5 with 2, so the relative order of the two 5s in the original sequence is destroyed. Selection sort is therefore not a stable sorting algorithm.
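A small demonstration of that example (my own illustrative code; with plain ints the two 5s are indistinguishable in the output, so the comments track which one moved):

#include <iostream>

// Plain selection sort; the long-distance swap is what breaks stability.
void SelectSort(int* pData, int count)
{
    for (int i = 0; i < count - 1; i++)
    {
        int min = i;
        for (int j = i + 1; j < count; j++)
            if (pData[j] < pData[min])
                min = j;
        int tmp = pData[i];      // this swap can carry an element past its equal twin
        pData[i] = pData[min];
        pData[min] = tmp;
    }
}

int main()
{
    int a[] = { 5, 8, 5, 2, 9 };   // the example from the text
    SelectSort(a, 5);
    // The first pass swaps the leading 5 with 2, landing it *behind* the
    // other 5, so the two 5s end up in reversed relative order.
    for (int i = 0; i < 5; i++)
        std::cout << a[i] << " ";   // prints: 2 5 5 8 9
    return 0;
}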
(3) Insertion sort
Insertion sort inserts one element at a time into an already ordered small sequence. Of course, at the start this ordered sequence has only 1 element, the first one. The comparison begins from the end of the ordered sequence: the element to be inserted is compared with the largest element already in order; if it is larger, it is placed directly behind it, otherwise the scan continues until the insertion position is found. If an element equal to the one being inserted is encountered, the element being inserted is placed behind the equal element. So the relative order of equal elements does not change: the order they had in the original unordered sequence is the order they have after sorting, and insertion sort is stable.
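A sketch of that backward scan (my own illustrative code); the strict > in the inner loop is exactly what makes the algorithm stable:

// Insertion sort sketch. Scanning backward with a strict > comparison means
// the new element comes to rest *behind* any equal element, preserving
// the relative order of equal elements.
void InsertSort(int* pData, int count)
{
    for (int i = 1; i < count; i++)
    {
        int key = pData[i];
        int j = i - 1;
        while (j >= 0 && pData[j] > key)   // > (not >=): stop at an equal element
        {
            pData[j + 1] = pData[j];
            j--;
        }
        pData[j + 1] = key;   // inserted after any equal neighbours
    }
}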
(4) Quick Sort
Quick sort scans in two directions. The left index i moves right while A[i] <= A[center_index], where center_index is the array subscript of the pivot element, usually taken as the 0th element of the segment. The right index j moves left while A[j] > A[center_index]. When i and j have both stopped and i <= j, exchange A[i] and A[j], and repeat the process until i > j. Then exchange A[j] and A[center_index], completing one pass of quick sort. When the pivot and A[j] are exchanged, it is quite possible to disturb the order of earlier equal elements. For example, in the sequence 5 3 3 4 3 8 9 10 11, exchanging the pivot 5 with the 3 in the 5th position (subscripts starting from 1) disturbs the relative order of the element 3. So quick sort is an unstable sorting algorithm; the instability happens at the moment the pivot and A[j] are exchanged.
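Here is a minimal sketch of one partition pass in the style just described (pivot taken as the 0th element; my own illustrative code), run on the example sequence:

#include <iostream>

// One partition pass: i scans right while a[i] <= pivot, j scans left
// while a[j] > pivot; the final swap of a[0] with a[j] is where
// stability breaks.
void PartitionOnce(int* a, int n)
{
    int pivot = a[0];
    int i = 1, j = n - 1;
    while (true)
    {
        while (i < n && a[i] <= pivot) i++;
        while (j > 0 && a[j] > pivot) j--;
        if (i >= j) break;
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
    int t = a[0]; a[0] = a[j]; a[j] = t;   // the pivot lands at position j
}

int main()
{
    int a[] = { 5, 3, 3, 4, 3, 8, 9, 10, 11 };   // the example from the text
    PartitionOnce(a, 9);
    // The pivot 5 is swapped with the third 3 (a[4]), which jumps that 3
    // in front of the other two 3s.
    for (int i = 0; i < 9; i++)
        std::cout << a[i] << " ";   // prints: 3 3 3 4 5 8 9 10 11
    return 0;
}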
(5) Merge sort
Merge sort recursively splits the sequence into short sequences; the recursion bottoms out when a short sequence has only 1 element (considered directly ordered) or 2 elements (1 comparison and possibly 1 exchange). The ordered subsequences are then merged into longer ordered sequences, over and over, until the original sequence is entirely ordered. Observe that with 1 element nothing is exchanged, and with 2 equal elements nobody deliberately exchanges them, so the base cases do not break stability. During the merging of short ordered sequences, stability is not compromised either: when the two current elements are equal, we can take the element from the earlier (left) sequence first, keeping it in front in the result. Merge sort is therefore also a stable sorting algorithm.
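A sketch of the merge step (my own illustrative code, assuming int keys); the <= comparison is the one that guarantees stability:

#include <vector>

// Merge two adjacent ordered runs a[left..mid] and a[mid+1..right].
// The <= takes the element from the *left* run on ties, which is exactly
// what keeps merge sort stable.
void Merge(std::vector<int>& a, int left, int mid, int right)
{
    std::vector<int> tmp;
    tmp.reserve(right - left + 1);
    int i = left, j = mid + 1;
    while (i <= mid && j <= right)
    {
        if (a[i] <= a[j])            // on equal keys, prefer the earlier element
            tmp.push_back(a[i++]);
        else
            tmp.push_back(a[j++]);
    }
    while (i <= mid)   tmp.push_back(a[i++]);   // copy any leftovers
    while (j <= right) tmp.push_back(a[j++]);
    for (int k = 0; k < (int)tmp.size(); k++)   // write the merged run back
        a[left + k] = tmp[k];
}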
(6) Radix sort
Radix sort sorts by the low digit first, then collects; then sorts by the next higher digit and collects; and so on up to the highest digit. Sometimes the attributes have priorities: sort by the low-priority attribute first, then by the high-priority one; the final order is then higher priority first, with elements of equal high priority ordered by their low priority. Radix sort is built from separate stable sorting passes and separate collecting passes, so it is a stable sorting algorithm.
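A sketch of an LSD (lowest-digit-first) radix sort for non-negative integers in base 10 (my own illustrative code); each digit pass is a counting sort, which is stable:

#include <vector>

// LSD radix sort, base 10. Because every digit pass is stable, elements
// with equal digits keep the order established by the earlier (lower) passes.
void RadixSort(std::vector<int>& a)
{
    int maxVal = 0;
    for (size_t i = 0; i < a.size(); i++)
        if (a[i] > maxVal) maxVal = a[i];
    for (int exp = 1; maxVal / exp > 0; exp *= 10)   // one pass per digit
    {
        std::vector<int> output(a.size());
        int count[10] = { 0 };
        for (size_t i = 0; i < a.size(); i++)        // histogram of this digit
            count[(a[i] / exp) % 10]++;
        for (int d = 1; d < 10; d++)                 // prefix sums: end positions
            count[d] += count[d - 1];
        for (int i = (int)a.size() - 1; i >= 0; i--) // backward pass keeps it stable
            output[--count[(a[i] / exp) % 10]] = a[i];
        a = output;                                  // "collect" for the next pass
    }
}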
(7) Shell sort
Shell sort performs insertion sorts on elements taken at different step sizes. At the beginning the elements are very disordered and the step is large, so each insertion sort involves few elements and is fast; later the elements are basically ordered and the step is small, and insertion sort is very efficient on nearly ordered sequences. So the time complexity of Shell sort is better than O(n^2). Multiple insertion passes are made, and although we know a single insertion sort is stable and does not change the relative order of equal elements, in different passes equal elements may each move within their own insertion chain, so in the end their relative order can be disturbed, and Shell sort is unstable.
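A sketch of Shell sort with the simple gap-halving sequence (my own illustrative code; other gap sequences, such as Knuth's, are also common):

// Shell sort sketch. Each pass is an insertion sort over elements 'gap'
// apart; equal elements sitting in different gap chains can leapfrog each
// other between passes, which is why the algorithm is unstable.
void ShellSort(int* pData, int count)
{
    for (int gap = count / 2; gap > 0; gap /= 2)
    {
        for (int i = gap; i < count; i++)
        {
            int key = pData[i];
            int j = i - gap;
            while (j >= 0 && pData[j] > key)
            {
                pData[j + gap] = pData[j];
                j -= gap;
            }
            pData[j + gap] = key;
        }
    }
}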
(8) Heap sort
We know that in the heap structure the children of node i are nodes 2*i and 2*i+1; a max-heap requires each parent to be greater than or equal to its 2 children, and a min-heap requires each parent to be less than or equal to its 2 children. For a sequence of n elements, heap sort selects the largest (max-heap) or smallest (min-heap) among a parent and its children, a group of at most 3 values, starting from node n/2; the selection among those 3 elements of course does not break stability. But when elements are selected for parents n/2-1, n/2-2, ... 1, stability can be broken: it is possible that the swap at node n/2 moves an element behind, while node n/2-1 does not swap its own equal element, and then the order between the 2 identical elements is destroyed. So heap sort is not a stable sorting algorithm.
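A heap sort sketch (my own illustrative code, using a 0-based array, so the children of node i are 2*i+1 and 2*i+2); the long-distance swaps in SiftDown and the top-to-end exchange are where equal elements can change relative order:

// Restore the max-heap property for the subtree rooted at 'start',
// considering elements up to index 'end'.
void SiftDown(int* pData, int start, int end)
{
    int root = start;
    while (2 * root + 1 <= end)
    {
        int child = 2 * root + 1;
        if (child + 1 <= end && pData[child + 1] > pData[child])
            child++;                       // pick the larger child
        if (pData[root] >= pData[child])
            return;                        // heap property already holds
        int tmp = pData[root];             // this long-distance swap is what
        pData[root] = pData[child];        // can reorder equal elements
        pData[child] = tmp;
        root = child;
    }
}

void HeapSort(int* pData, int count)
{
    for (int i = count / 2 - 1; i >= 0; i--)    // build the max-heap
        SiftDown(pData, i, count - 1);
    for (int end = count - 1; end > 0; end--)
    {
        int tmp = pData[0];                     // move the current maximum to the end
        pData[0] = pData[end];
        pData[end] = tmp;
        SiftDown(pData, 0, end - 1);            // rebuild the heap over the prefix
    }
}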
1 Quick Sort (QuickSort)
Quick sort is an in-place, divide-and-conquer, massively recursive sorting algorithm. Essentially, it is the in-place version of merge sort. Quick sort consists of the four steps below (a full template implementation is given at the end of this article).
(1) If the sequence has no more than 1 element, return directly.
(2) Generally, select the leftmost value of the sequence as the pivot.
(3) Partition the sequence into 2 parts, one with values larger than the pivot and the other with values smaller than the pivot.
(4) Recursively sort the sequences on both sides.
Quick sort is faster than most sorting algorithms. Although in some special cases we can write algorithms that beat it, in general there is nothing faster. Quick sort is recursive, so it is not a good choice on machines with very limited memory.
2 Merge Sort (MergeSort)
Merge sort first splits the sequence to be sorted: from 1 group into 2, from 2 into 4, and so on until each group has 1 element. The groups are then sorted and merged back together until the whole sequence is restored in sorted order. Merge sort is slightly faster than heap sort, but requires more memory, because it needs an extra array.
3 Heap Sort (HeapSort)
Heap sort is suitable for situations with a very large volume of data (millions of records).
Heap sort does not need a lot of recursion or multiple temporary arrays, which makes it appropriate for sequences with an enormous data volume, say more than a million records; quick sort and merge sort are both designed around recursion, and when the data volume is very large they may produce stack overflow errors.
Heap sort builds all the data into a heap, with the largest element at the top. It then exchanges the top element with the last element of the sequence, rebuilds the heap over the remaining elements, exchanges again, and continues in this way until all the data is sorted.
4 Shell Sort (ShellSort)
Shell sort divides the data into groups, sorts each group first, and then performs one insertion sort over all the elements, to reduce the number of exchanges and moves. Its average efficiency is O(n log n). The choice of grouping has an important effect on the algorithm; the grouping method of D. E. Knuth is now widely used.
Shell sort is about 5 times faster than bubble sort and roughly twice as fast as insertion sort, but much slower than QuickSort, MergeSort, and HeapSort. Still, it is relatively simple, and it suits occasions where the data volume is under about 5000 and speed is not critical. It is quite good for repeatedly sorting sequences with a smallish amount of data.
5 Insertion Sort (InsertSort)
Insertion sort works by inserting the values of the sequence, one by one, into an already sorted sequence until the end of the sequence is reached. It is an improvement over bubble sort, roughly twice as fast. It is generally not worth using when the data volume is greater than about 1000, or for repeatedly sorting sequences of more than about 200 items.
6 Bubble Sort (BubbleSort)
Bubble sort is the slowest sorting algorithm, and in practical applications it is the least efficient. It compares each element of the array pass after pass, so that the larger values sink and the smaller values bubble up. It is an O(n^2) algorithm.
7 Exchange Sort (ExchangeSort) and Selection Sort (SelectSort)
Both of these are exchange-style sorting algorithms with O(n^2) efficiency. In practical applications they occupy basically the same position as bubble sort. They are only early stages in the development of sorting algorithms, and are rarely used in practice.
8 Radix Sort (RadixSort)
Radix sort takes a different path from the usual sorting algorithms. It is a comparatively novel algorithm, but it can only be used to sort integers. If we want to apply the same approach to floating-point numbers, we must understand the storage format of floating-point numbers, map them to integers in a special way, and then map them back; this is a very troublesome affair, so it is not used much. And, most importantly, this algorithm also requires more storage space.
9 Summary
Here is a general table that summarizes the characteristics of all of our common sorting algorithms.
Sorting method | Average time   | Worst case     | Stability | Extra space | Notes
Bubble         | O(n^2)         | O(n^2)         | Stable    | O(1)        | Better when n is small
Exchange       | O(n^2)         | O(n^2)         | Unstable  | O(1)        | Better when n is small
Selection      | O(n^2)         | O(n^2)         | Unstable  | O(1)        | Better when n is small
Insertion      | O(n^2)         | O(n^2)         | Stable    | O(1)        | Better when mostly sorted already
Radix          | O(n*log_R(B))  | O(n*log_R(B))  | Stable    | O(n)        | B is the range of key values (digits 0-9), R is the radix (ones, tens, hundreds)
Shell          | O(n*log(n))    | O(n^s), 1<s<2  | Unstable  | O(1)        | s depends on the chosen grouping
Quick          | O(n*log(n))    | O(n^2)         | Unstable  | O(log(n))   | Better when n is large (recursion stack)
Merge          | O(n*log(n))    | O(n*log(n))    | Stable    | O(n)        | Better when n is large (needs an extra array)
Heap           | O(n*log(n))    | O(n*log(n))    | Unstable  | O(1)        | Better when n is large
The following is a generic sort based on a template.
I do not think this program needs further analysis; just read through it, and if anything is unclear you can ask about it on the forum.
MyData.h file
///////////////////////////////////////////////////////
class CMyData
{
public:
    CMyData(int index, const char* strData);
    CMyData(const CMyData& srcData);   // deep copy, needed because we own a buffer
    CMyData();
    virtual ~CMyData();

    int m_iIndex;
    int GetDataSize() const { return m_iDataSize; }
    const char* GetData() const { return m_strDataMember; }

    // The operators are overloaded here:
    CMyData& operator=(const CMyData& srcData);
    bool operator<(const CMyData& data) const;
    bool operator>(const CMyData& data) const;

private:
    char* m_strDataMember;
    int m_iDataSize;
};
////////////////////////////////////////////////////////
MyData.cpp file
////////////////////////////////////////////////////////
#include <cstring>   // strlen, strcpy
#include "MyData.h"

CMyData::CMyData():
    m_iIndex(0),
    m_iDataSize(0),
    m_strDataMember(NULL)
{
}

CMyData::~CMyData()
{
    if (m_strDataMember != NULL)
        delete[] m_strDataMember;
    m_strDataMember = NULL;
}

CMyData::CMyData(int index, const char* strData):
    m_iIndex(index),
    m_iDataSize(0),
    m_strDataMember(NULL)
{
    m_iDataSize = (int)strlen(strData);
    m_strDataMember = new char[m_iDataSize + 1];
    strcpy(m_strDataMember, strData);
}

CMyData::CMyData(const CMyData& srcData):
    m_iIndex(srcData.m_iIndex),
    m_iDataSize(srcData.m_iDataSize),
    m_strDataMember(NULL)
{
    m_strDataMember = new char[m_iDataSize + 1];
    strcpy(m_strDataMember, srcData.m_strDataMember ? srcData.m_strDataMember : "");
}

CMyData& CMyData::operator=(const CMyData& srcData)
{
    if (this == &srcData)        // guard against self-assignment
        return *this;
    delete[] m_strDataMember;    // release the old buffer to avoid a leak
    m_iIndex = srcData.m_iIndex;
    m_iDataSize = srcData.GetDataSize();
    m_strDataMember = new char[m_iDataSize + 1];
    strcpy(m_strDataMember, srcData.GetData() ? srcData.GetData() : "");
    return *this;
}

bool CMyData::operator<(const CMyData& data) const
{
    return m_iIndex < data.m_iIndex;
}

bool CMyData::operator>(const CMyData& data) const
{
    return m_iIndex > data.m_iIndex;
}
///////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////
Main program section
#include <iostream>
#include "MyData.h"
using namespace std;

template <class T>
void Run(T* pData, int left, int right)
{
    int i = left;
    int j = right;
    T middle, iTemp;
    // The comparisons below call our overloaded operator functions
    middle = pData[(left + right) / 2];   // take the middle value as the pivot
    do {
        while (pData[i] < middle && i < right)   // scan from the left for a value not less than the pivot
            i++;
        while (pData[j] > middle && j > left)    // scan from the right for a value not greater than the pivot
            j--;
        if (i <= j)   // found a pair of values to exchange
        {
            iTemp = pData[i];
            pData[i] = pData[j];
            pData[j] = iTemp;
            i++;
            j--;
        }
    } while (i <= j);   // stop once the two scans cross (one pass is complete)

    if (left < j)    // the left part still has values: recurse into it
        Run(pData, left, j);
    if (right > i)   // the right part still has values: recurse into it
        Run(pData, i, right);
}

template <class T>
void QuickSort(T* pData, int count)
{
    Run(pData, 0, count - 1);
}

int main()
{
    CMyData data[] = {
        CMyData(8, "xulion"),
        CMyData(7, "Sanzoo"),
        CMyData(6, "Wangjun"),
        CMyData(5, "vckbase"),
        CMyData(4, "jacky2000"),
        CMyData(3, "cwally"),
        CMyData(2, "Vcuser"),
        CMyData(1, "Isdong")
    };
    QuickSort(data, 8);
    for (int i = 0; i < 8; i++)
        cout << data[i].m_iIndex << " " << data[i].GetData() << "\n";
    cout << "\n";
    return 0;
}