Here is a summary of what the textbooks say.
1. Time complexity (average case)
Insertion sort: O(N^2);
Shell sort: O(N^2); with Hibbard's increments the average is about O(N^(7/6)) (a sketch of this increment sequence follows after this list);
Heap sort: O(N log N) (the heap has to be re-sifted after every extraction, so it performs relatively many comparisons; to avoid extra memory, each extracted element is placed at the head of the array for ascending order, or at the tail for descending order);
Merge sort: O(N log N) (every pass copies the data into an auxiliary array, so it uses extra memory and is less commonly used for in-memory sorting; for very large data sets, however, external sorting of data in files is built on exactly this idea);
Quick sort: O(N log N) (quick sort can be optimized, mainly in how the data is partitioned, i.e. how the so-called pivot element is chosen; a common choice is the median of the left, right, and middle elements, sketched after this list);
Bucket sort: O(N) (it consumes extra memory and is only suitable for certain particular scenarios).
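As an aside to the Shell sort entry above, here is a minimal sketch of the variant that uses Hibbard's increment sequence 1, 3, 7, ..., 2^k - 1. The function name shellSortHibbard and the choice of std::vector<int> are assumptions made for this example, not taken from the original notes.

#include <vector>
using std::vector;

// Shell sort with Hibbard's increments: 1, 3, 7, ..., 2^k - 1.
static void shellSortHibbard(vector<int>& a) {
    int n = (int)a.size();
    // Start from the largest Hibbard gap that is still below n.
    int gap = 1;
    while (gap * 2 + 1 < n) gap = gap * 2 + 1;
    // 2^k - 1 -> 2^(k-1) - 1 via integer division by 2.
    for (; gap >= 1; gap /= 2) {
        // One gapped insertion-sort pass for the current gap.
        for (int i = gap; i < n; ++i) {
            int tmp = a[i];
            int j = i;
            for (; j >= gap && a[j - gap] > tmp; j -= gap)
                a[j] = a[j - gap];
            a[j] = tmp;
        }
    }
}

Each pass is an insertion sort over elements that are gap positions apart; once the gap reaches 1, the final pass is an ordinary insertion sort on an almost-sorted array, which is cheap.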
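Likewise, a minimal sketch of the median-of-three pivot selection mentioned for quick sort: the left, middle, and right elements are ordered among themselves, and the median is used as the pivot, parked just before the right end during partitioning. The helper names medianOfThree and quickSort are assumptions for this example; it follows the common textbook partition scheme rather than any specific code from the notes.

#include <utility>
#include <vector>
using std::swap;
using std::vector;

// Median-of-three: order a[left], a[mid], a[right], then park the median
// (the pivot) just before the right end and return its value.
static int medianOfThree(vector<int>& a, int left, int right) {
    int mid = left + (right - left) / 2;
    if (a[mid] < a[left]) swap(a[mid], a[left]);
    if (a[right] < a[left]) swap(a[right], a[left]);
    if (a[right] < a[mid]) swap(a[right], a[mid]);
    swap(a[mid], a[right - 1]);
    return a[right - 1];
}

// Quick sort on a[left..right] with median-of-three pivot selection.
static void quickSort(vector<int>& a, int left, int right) {
    if (right - left < 2) {                        // 0, 1, or 2 elements
        if (right > left && a[right] < a[left]) swap(a[left], a[right]);
        return;
    }
    int pivot = medianOfThree(a, left, right);
    int i = left, j = right - 1;
    for (;;) {                                     // partition around the pivot
        while (a[++i] < pivot) {}
        while (pivot < a[--j]) {}
        if (i < j) swap(a[i], a[j]); else break;
    }
    swap(a[i], a[right - 1]);                      // put the pivot in its final place
    quickSort(a, left, i - 1);
    quickSort(a, i + 1, right);
}

Ordering the left, middle, and right elements before partitioning also leaves sentinels at both ends, which is why the two scanning loops need no extra bounds checks.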
2. Choosing the right sort
According to the measurements in the books (bucket sort excluded), for about 20 elements or fewer nothing beats insertion sort: the code is easy to understand, and its running time is the same as quick sort's.
Up to about 1000 elements, an optimized Shell sort is a match for an optimized quick sort, and below about 100 elements Shell sort is faster.
With more elements, quick sort is the better choice (a small size-based dispatch is sketched below).
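To make the size-based advice concrete, here is a small dispatch sketch: arrays of roughly 20 elements or fewer go to insertion sort, larger ones to a quick-sort-based routine, with std::sort standing in for an optimized quick sort. The names sortBySize, insertionSort, and CUTOFF are invented for this example, and the exact cutoff should be tuned by measurement.

#include <algorithm>
#include <vector>
using std::vector;

static const int CUTOFF = 20;   // rough threshold from the notes above; tune by measurement

// Plain insertion sort: simple code, very fast on tiny inputs.
static void insertionSort(vector<int>& a) {
    for (size_t i = 1; i < a.size(); ++i) {
        int tmp = a[i];
        size_t j = i;
        for (; j > 0 && a[j - 1] > tmp; --j)
            a[j] = a[j - 1];
        a[j] = tmp;
    }
}

// Dispatch by size: insertion sort for small inputs, a quick-sort-based
// routine (std::sort as a stand-in for an optimized quick sort) otherwise.
static void sortBySize(vector<int>& a) {
    if ((int)a.size() <= CUTOFF)
        insertionSort(a);
    else
        std::sort(a.begin(), a.end());
}

In practice the same cutoff idea is usually applied inside quick sort itself: recursive sub-ranges that fall below the cutoff are handed to insertion sort instead of being partitioned further.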
Comparison of several sorts