A summary and comparison of the eight common sorting algorithms.
1. Quick Sort (QuickSort)
Quick sort is a large-scale recursive algorithm that partitions the data and sorts each part separately; essentially, it is an in-place counterpart of merge sort. Quick sort can be described by the following four steps:
(1) If the sequence contains no more than one data item, return.
(2) Generally, take the leftmost value of the sequence as the pivot.
(3) Partition the sequence into two parts: one whose values are greater than the pivot and one whose values are smaller.
(4) Recursively sort the sequences on both sides.
Quick sort is faster than most sorting algorithms. Although faster algorithms can be written for some special cases, in the general case nothing beats it. Quick sort is recursive, however, so it is not a good choice for machines with very limited memory.
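A minimal sketch of these four steps in C++ (assuming the leftmost value as the pivot and a simple in-place partition; the identifiers here are illustrative, not from any particular library):

#include <iostream>
using namespace std;

// Recursive quick sort over pData[left..right].
void QuickSort(int* pData, int left, int right)
{
    if (left >= right)                    // step 1: at most one item, return
        return;
    int pivot = pData[left];              // step 2: leftmost value as pivot
    int i = left, j = right;
    while (i < j)                         // step 3: partition around the pivot
    {
        while (i < j && pData[j] >= pivot) j--;
        pData[i] = pData[j];              // move a smaller value to the left side
        while (i < j && pData[i] <= pivot) i++;
        pData[j] = pData[i];              // move a larger value to the right side
    }
    pData[i] = pivot;                     // the pivot lands in its final position
    QuickSort(pData, left, i - 1);        // step 4: recurse on both sides
    QuickSort(pData, i + 1, right);
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    QuickSort(data, 0, 6);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}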
2. Merge Sort (MergeSort)
Merge sort first breaks the sequence to be sorted down, dividing 1 group into 2, 2 into 4, and so on until each group holds a single element and is therefore sorted. It then merges the groups back together in order until all the data is sorted. Merge sort is a little faster than heap sort, but it requires more memory than heap sort because it needs an additional array.
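A sketch of this split-then-merge scheme, with the additional array mentioned above passed in as pTemp (names are illustrative):

#include <iostream>
using namespace std;

// Merge the two sorted halves pData[left..mid] and pData[mid+1..right]
// through the extra array pTemp, then copy the result back.
void Merge(int* pData, int* pTemp, int left, int mid, int right)
{
    int i = left, j = mid + 1, k = left;
    while (i <= mid && j <= right)
        pTemp[k++] = (pData[i] <= pData[j]) ? pData[i++] : pData[j++];
    while (i <= mid)   pTemp[k++] = pData[i++];
    while (j <= right) pTemp[k++] = pData[j++];
    for (k = left; k <= right; k++)
        pData[k] = pTemp[k];
}

// Split down to single elements, then merge back up.
void MergeSort(int* pData, int* pTemp, int left, int right)
{
    if (left >= right)
        return;
    int mid = (left + right) / 2;
    MergeSort(pData, pTemp, left, mid);
    MergeSort(pData, pTemp, mid + 1, right);
    Merge(pData, pTemp, left, mid, right);
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    int temp[7];                          // the additional array merge sort needs
    MergeSort(data, temp, 0, 6);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}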
3. Heap Sort (HeapSort)
Heap sort is suitable for scenarios with very large amounts of data (millions of records).
Heap sort requires neither deep recursion nor large multi-dimensional temporary arrays, which makes it a good fit for sequences with a very large amount of data, say upwards of millions of records. Quick sort and merge sort are designed with recursion, and when the data volume is very large a stack overflow error may occur.
Heap sort first builds all the data into a heap, with the largest value at the heap top. It then swaps the value at the top with the last value in the sequence, rebuilds the heap over the remaining elements, and swaps again, repeating until all the data is sorted.
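A sketch of this build-then-swap procedure (a max-heap kept directly in the array, with an iterative sift-down so no recursion or temporary arrays are needed; names are illustrative):

#include <iostream>
using namespace std;

// Sift pData[i] down until the subtree rooted at i is a max-heap of size n.
void SiftDown(int* pData, int n, int i)
{
    while (2 * i + 1 < n)
    {
        int child = 2 * i + 1;
        if (child + 1 < n && pData[child + 1] > pData[child])
            child++;                      // pick the larger child
        if (pData[i] >= pData[child])
            break;                        // heap property restored
        int t = pData[i]; pData[i] = pData[child]; pData[child] = t;
        i = child;
    }
}

void HeapSort(int* pData, int n)
{
    // Build the heap: the largest value rises to the heap top pData[0].
    for (int i = n / 2 - 1; i >= 0; i--)
        SiftDown(pData, n, i);
    // Swap the top with the last item, shrink the heap, and rebuild.
    for (int last = n - 1; last > 0; last--)
    {
        int t = pData[0]; pData[0] = pData[last]; pData[last] = t;
        SiftDown(pData, last, 0);
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    HeapSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}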
4. Shell Sort (ShellSort)
Shell sort divides the data into different groups, sorts each group first, and then performs a final insertion sort over all the elements, which reduces the number of exchanges and moves. Its average efficiency is O(n log n). How the groups are chosen has an important impact on the algorithm; here we use the D. E. Knuth gap sequence.
Shell sort is about five times faster than bubble sort and roughly twice as fast as insertion sort, but it is much slower than QuickSort, MergeSort, and HeapSort. On the other hand, it is relatively simple and a good fit when the data volume is below about 5000 and speed is not critical. It is very good for repeatedly sorting sequences with small amounts of data.
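A sketch of Shell sort using the D. E. Knuth gap sequence h = 3h + 1 (1, 4, 13, 40, ...); each pass is simply an insertion sort over elements h apart:

#include <iostream>
using namespace std;

void ShellSort(int* pData, int n)
{
    int h = 1;
    while (h < n / 3)                     // largest Knuth gap for this n
        h = 3 * h + 1;
    for (; h >= 1; h /= 3)                // shrink the gap: ..., 13, 4, 1
    {
        // insertion sort within each group of elements h apart
        for (int i = h; i < n; i++)
        {
            int value = pData[i];
            int j = i;
            while (j >= h && pData[j - h] > value)
            {
                pData[j] = pData[j - h];  // shift larger group members right
                j -= h;
            }
            pData[j] = value;
        }
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    ShellSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}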
5. Insertion Sort (InsertSort)
Insertion sort works by inserting each value of the sequence into an already sorted prefix, until the end of the sequence is reached. It is an improvement on bubble sort and about twice as fast. Generally, it is not worth using once the data exceeds about 1000 items, or for repeatedly sorting sequences of more than about 200 items.
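A sketch of insertion sort as just described, shifting larger values right and dropping each value into the sorted prefix:

#include <iostream>
using namespace std;

void InsertSort(int* pData, int n)
{
    for (int i = 1; i < n; i++)           // pData[0..i-1] is already sorted
    {
        int value = pData[i];             // next value to insert
        int j = i - 1;
        while (j >= 0 && pData[j] > value)
        {
            pData[j + 1] = pData[j];      // shift larger values one slot right
            j--;
        }
        pData[j + 1] = value;             // insert into the sorted prefix
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    InsertSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}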
6. Bubble Sort (BubbleSort)
Bubble sort is the slowest sorting algorithm here, and in practice it is the least efficient. It compares the elements of the array pair by pair, causing large values to sink and small values to rise. It is an O(n²) algorithm.
7. Exchange Sort and Selection Sort (ExchangeSort, SelectSort)
Both are exchange-style sorting algorithms with O(n²) efficiency. In practice they occupy the same position as bubble sort. They were only an early stage in the development of sorting algorithms and are rarely used in practice.
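For reference, a sketch of selection sort, the more common of the two: each pass scans for the smallest remaining value and exchanges it into place.

#include <iostream>
using namespace std;

void SelectSort(int* pData, int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        int min = i;
        for (int j = i + 1; j < n; j++)   // find the smallest remaining value
            if (pData[j] < pData[min])
                min = j;
        if (min != i)                     // exchange it into position i
        {
            int t = pData[i]; pData[i] = pData[min]; pData[min] = t;
        }
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    SelectSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}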
8. Radix Sort (RadixSort)
Radix sort does not follow the same route as the general comparison-based algorithms. It is a novel algorithm, but it can only be used for sorting integers. To apply the same method to floating point numbers we would have to understand the floating point storage format and map each float to an integer in some special way and then back, which is very troublesome, so it is not used much. Most importantly, such an algorithm also requires a large amount of extra storage space.
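A sketch of one common form, least-significant-digit radix sort on non-negative integers with radix 10; note the extra O(n) array, which is the storage cost mentioned above:

#include <iostream>
using namespace std;

// Sorts non-negative integers with one counting pass per decimal digit:
// ones, tens, hundreds, ...
void RadixSort(int* pData, int n)
{
    int maxVal = pData[0];
    for (int i = 1; i < n; i++)
        if (pData[i] > maxVal) maxVal = pData[i];

    int* pTemp = new int[n];              // the large extra storage radix sort needs
    for (int exp = 1; maxVal / exp > 0; exp *= 10)
    {
        int count[10] = {0};
        for (int i = 0; i < n; i++)
            count[(pData[i] / exp) % 10]++;
        for (int d = 1; d < 10; d++)
            count[d] += count[d - 1];     // prefix sums give final positions
        for (int i = n - 1; i >= 0; i--)  // walking backwards keeps it stable
            pTemp[--count[(pData[i] / exp) % 10]] = pData[i];
        for (int i = 0; i < n; i++)
            pData[i] = pTemp[i];
    }
    delete[] pTemp;
}

int main()
{
    int data[] = {170, 45, 75, 90, 802, 24, 2, 66};
    RadixSort(data, 8);
    for (int i = 0; i < 8; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}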
9. Summary
The following table roughly summarizes the features of all of the common sorting algorithms above.
Sorting Method | Average Time | Worst Case        | Stability | Extra Space | Remarks
Bubble         | O(n²)        | O(n²)             | Stable    | O(1)        | better when n is small
Exchange       | O(n²)        | O(n²)             | Unstable  | O(1)        | better when n is small
Select         | O(n²)        | O(n²)             | Unstable  | O(1)        | better when n is small
Insert         | O(n²)        | O(n²)             | Stable    | O(1)        | better when most items are already sorted
Radix          | O(log_r d)   | O(log_r d)        | Stable    | O(n)        | d is the number of key values (0-9), r is the radix (ones, tens, hundreds)
Shell          | O(n log n)   | O(n^s), 1 < s < 2 | Unstable  | O(1)        | s is the chosen gap factor
Quick          | O(n log n)   | O(n²)             | Unstable  | O(log n)    | better when n is large
Merge          | O(n log n)   | O(n log n)        | Stable    | O(n)        | better when n is large
Heap           | O(n log n)   | O(n log n)        | Unstable  | O(1)        | better when n is large
Which of the eight sorting algorithms sorts by comparing the sizes of adjacent elements and exchanging them?
It is bubble sort. To review: if the record sequence is initially in "forward" (already sorted) order, the bubble sort process needs only a single pass with n-1 comparisons; conversely, if the record sequence is initially in "backward" (reverse) order, n(n-1)/2 comparisons and record moves are required. The total time complexity of bubble sort is therefore O(n²).
Summary of the various sorts
I. Simple sorting algorithms
The programs are relatively simple, so no comments are added. All of them are complete running code, verified in my VC environment. Since they involve nothing from MFC or Windows, they should compile without problems under Borland C++ as well. A trace of the running process is given after each piece of code to help with understanding.
1. Bubble sort:
This is the most primitive, best known, and slowest algorithm. It gets its name because its operation resembles bubbling:
#include <iostream>
using namespace std;

void BubbleSort(int* pData, int Count)
{
    int iTemp;
    int flag = 1;                         // did the previous pass swap anything?
    for (int i = 1; i < Count && flag; i++)
    {
        flag = 0;                         // reset once per pass, not per comparison
        for (int j = Count - 1; j >= i; j--)
        {
            if (pData[j] < pData[j - 1])
            {
                iTemp = pData[j - 1];     // swap: the smaller value bubbles up
                pData[j - 1] = pData[j];
                pData[j] = iTemp;
                flag = 1;
            }
        }
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    BubbleSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}
Reverse order (worst case), illustrated on the four values 10, 9, 8, 7:
First pass: 10,9,8,7 -> 10,9,7,8 -> 10,7,9,8 -> 7,10,9,8 (3 swaps)
Second pass: 7,10,9,8 -> 7,10,8,9 -> 7,8,10,9 (2 swaps)
Third pass: 7,8,10,9 -> 7,8,9,10 (1 swap)
Comparisons: 6
Swaps: 6
Another input, 8, 10, 7, 9:
First pass: 8,10,7,9 -> 8,10,7,9 -> 8,7,10,9 -> 7,8,10,9 (2 swaps)
Second pass: 7,8,10,9 -> 7,8,9,10 -> 7,8,9,10 (1 swap)
Third pass: 7,8,9,10 (0 swaps)
Comparisons: 6
Swaps: 3
We have given the program above; now let us analyze it. The parts that dominate the algorithm's performance are the loops and the swaps: the more of each, the worse the performance. From the program we can see that (ignoring the early-exit flag) the number of comparisons is fixed at 1 + 2 + ... + (n-1), which by the arithmetic-series formula is 1/2*(n-1)*n. Recall the definition of the O notation:
If there exist a constant K and a starting point n0 such that f(n) <= K*g(n) whenever n >= n0, then f(n) = O(g(n)). Now look at 1/2*(n-1)*n: taking K = 1/2, n0 = 1, and g(n) = n*n, we have 1/2*(n-1)*n <= 1/2*n*n = K*g(n). So f(n) = O(g(n)) = O(n²), and the loop complexity of our program is O(n²). Next consider the swaps. The traces following the program show that the two cases share the same number of comparisons but differ in the number of swaps, because the swap count depends heavily on how ordered the data source already is. When the data is in reverse order, a swap occurs on every comparison, so the swap count is also O(n²). When the data is already in order, no swaps occur at all. Disordered data lies somewhere in between. It is precisely by...