[Data structures] Sorting algorithms: Shell sort, merge sort, quicksort, and heapsort


Entering school in September and graduating in July brought my pleasant and rich college life at the Software Institute to an end. This series reviews the professional courses from four years of study; see: http://blog.csdn.net/xiaowei_cqu/article/details/7747205

Sorting algorithms are among the most common and fundamental algorithms. There are many sorting methods, such as insertion sort, selection sort, Shell sort, merge sort, quicksort, and heapsort. This experiment focuses on implementing Shell sort, merge sort, quicksort, and heapsort. Briefly: insertion sort takes the first entry of the unsorted part and inserts it at its proper position within the sorted part; selection sort is similar, but instead picks the smallest entry in the unsorted part and appends it to the end of the sorted part; Shell sort is a variant of insertion sort, an improvement on direct insertion sort.
The basic idea of Shell sort is: first take an increment smaller than count and divide the records in the table into increment groups; all records whose positions differ by a multiple of increment fall in the same group. Direct insertion sort (insertion_sort) is then applied within each group. The increment is then reduced and the records are regrouped and sorted again, until increment = 1, at which point all records form a single group and one final direct insertion sort is performed. [Related experiments] First, the Sortable_list table used here is derived from the class List. For convenience, we can overload a constructor of this kind:
template <class Record>
Sortable_list<Record>::Sortable_list(const Record A[], int size)
{
    for (int i = 0; i < size; i++) insert(i, A[i]);
}
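The gap-halving insertion sort described above can also be sketched on a plain std::vector, independent of the experiment's Sortable_list class (the names here are illustrative, not from the original source):

```cpp
#include <vector>

// Sketch of Shell sort with the increment = increment / 2 scheme.
void shell_sort(std::vector<int> &a) {
    int increment = static_cast<int>(a.size());
    do {
        increment /= 2;  // halve the gap each pass
        // Gapped insertion sort: each element is compared with the one
        // `increment` positions back, so every group of records spaced
        // `increment` apart is insertion-sorted in place.
        for (std::size_t i = increment; i < a.size(); ++i) {
            int current = a[i];
            std::size_t j = i;
            while (j >= static_cast<std::size_t>(increment) &&
                   a[j - increment] > current) {
                a[j] = a[j - increment];  // shift the larger entry up
                j -= increment;
            }
            a[j] = current;
        }
    } while (increment > 1);  // the last pass (increment == 1) is plain insertion sort
}
```

The final pass with increment = 1 is an ordinary insertion sort, but by then the earlier passes have left the table nearly ordered.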

1. Write the shell_sort() function. In it we first set increment = count; from the problem requirements we can see the relation increment = increment / 2 inside the loop. A loop then sorts the groups one by one, for which we add the sort_interval() function. Since direct insertion sort is used within each group, sort_interval() can be implemented as follows: define a temporary Sortable_list object temp to hold the records of each group, apply insertion_sort() to temp, and then write the function insertion_sort().
2. An important step in sorting a table is moving records to their proper positions, that is, exchanging them. Therefore, write the swap() function.
3. To output the sorting results, we also write a global function print_out, which is called through the List member function traverse(). The call is placed in swap(), so that each exchange (or move) counts as one sorting step.
The algorithm functions are as follows:

template <class Record>
void Sortable_list<Record>::shell_sort()
/* Post: The entries of the Sortable_list have been rearranged so that the keys
         in all the entries are sorted into nondecreasing order.
   Uses: sort_interval. */
{
    int increment;  // spacing of entries in sublist
    int start;      // starting point of sublist
    increment = count;
    do {
        increment = increment / 2;
        for (start = 0; start < increment; start++) {
            sort_interval(start, increment);  // modified insertion sort
            traverse(print_out);
            cout << endl;
        }
    } while (increment > 1);
}

template <class Record>
void Sortable_list<Record>::sort_interval(int start, int increment)
{
    Sortable_list<Record> temp;
    int j = 0;
    for (int i = start; i < size(); i = i + increment) {
        Record temp_record;
        retrieve(i, temp_record);
        temp.insert(j, temp_record);
        j++;
    }
    temp.insertion_sort();
    j = 0;
    for (int k = start; k < size(); k += increment) {
        Record temp_record;
        temp.retrieve(j, temp_record);
        replace(k, temp_record);
        j++;
    }
}

template <class Record>
void Sortable_list<Record>::insertion_sort()
/* Post: The entries of the Sortable_list have been rearranged so that the keys
         in all the entries are sorted into nondecreasing order.
   Uses: Methods for the class Record; the contiguous List. */
{
    int first_unsorted;  // position of first unsorted entry
    int position;        // searches sorted part of list
    Record current;      // holds the entry temporarily removed from list
    for (first_unsorted = 1; first_unsorted < count; first_unsorted++)
        if (entry[first_unsorted] < entry[first_unsorted - 1]) {
            position = first_unsorted;
            current = entry[first_unsorted];  // Pull unsorted entry out of the list.
            do {  // Shift all entries until the proper position is found.
                entry[position] = entry[position - 1];
                position--;  // position is empty
            } while (position > 0 && entry[position - 1] > current);
            entry[position] = current;
        }
}
// For other auxiliary functions, see the source file.
[Experimental results] Merge sort. Merging combines two sorted tables into one table.
The basic idea of the merge sort algorithm is: first divide the table into two tables (when the count is odd, let the left table have one more element than the right). Sort the two tables separately, then merge the two sorted tables. The idea of merging is like combining two sorted piles of cards into one pile, each time taking the smaller of the two top cards. [Related experiments]

1. The Sortable_list above is still used.
2. Following the merge sort idea, each sub-table is itself merge sorted, which can be implemented through recursion. Therefore, write the recursive function recursive_merge_sort(); to combine the sorted sub-tables, write the auxiliary function merge().
3. To output the result of each pass, add traverse(print_out) to merge(). // However, because of the recursive calls, many of the intermediate passes still cannot be seen.
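The card-pile merge described above can be sketched as a standalone two-pointer merge of two sorted vectors (a minimal illustration, separate from the class method that follows):

```cpp
#include <vector>

// Merge two sorted vectors by repeatedly taking the smaller front element,
// like combining two sorted piles of cards into one pile.
std::vector<int> merge_sorted(const std::vector<int> &a,
                              const std::vector<int> &b) {
    std::vector<int> out;
    out.reserve(a.size() + b.size());
    std::size_t i = 0, j = 0;
    while (i < a.size() && j < b.size())
        out.push_back(a[i] <= b[j] ? a[i++] : b[j++]);  // <= keeps the merge stable
    while (i < a.size()) out.push_back(a[i++]);         // drain the leftovers
    while (j < b.size()) out.push_back(b[j++]);
    return out;
}
```

Taking from the left pile on ties (the `<=`) is what makes merge sort stable.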

template <class Record>
void Sortable_list<Record>::merge(int low, int high)
{
    Record *tmp = new Record[high - low + 1];
    int index = 0;
    int index1 = low, mid = (low + high) / 2, index2 = mid + 1;
    while (index1 <= mid && index2 <= high) {
        if (entry[index1] < entry[index2])
            tmp[index++] = entry[index1++];
        else
            tmp[index++] = entry[index2++];
    }
    while (index1 <= mid) tmp[index++] = entry[index1++];
    while (index2 <= high) tmp[index++] = entry[index2++];
    for (index = low; index <= high; index++)
        entry[index] = tmp[index - low];
    delete[] tmp;
    traverse(print_out);
    cout << endl;
}

template <class Record>
void Sortable_list<Record>::recursive_merge_sort(int low, int high)
/* Post: The entries of the Sortable_list between indices low and high have been
         rearranged so that their keys are sorted into nondecreasing order.
   Uses: The contiguous List. */
{
    if (high > low) {
        recursive_merge_sort(low, (high + low) / 2);
        recursive_merge_sort((high + low) / 2 + 1, high);
        merge(low, high);
    }
}

template <class Record>
void Sortable_list<Record>::merge_sort()
/* Post: The entries of the Sortable_list have been rearranged so that their
         keys are sorted into nondecreasing order.
   Uses: The contiguous List. */
{
    recursive_merge_sort(0, size() - 1);
}
[Experiment results]

Quicksort is an improvement on bubble sort.
The basic idea of the quicksort algorithm is: in each pass, choose a pivot and split the table into two independent parts, so that all records in one part are smaller than the pivot and all records in the other are larger; each part is then sorted in the same way. [Related experiments] 1. The Sortable_list above is still used.
2. Following the idea of quicksort, each pass splits the table into two parts that are then quicksorted, so it can be implemented through recursion. For the recursive call we first write recursive_quick_sort(int low, int high), whose parameters give the range to sort. // quick_sort() itself is not written as the recursive function so that callers need not supply parameters. In addition, a partition() function is required to return the pivot position after each pass.
3. To output each pass, my first idea was to call traverse(print_out) in each recursion, but it did not give the expected result: print_out runs only after each recursive call returns, so apart from the first few passes, where the structure is visible, the output comes after sorting... So the output is again produced through the swap() function.

template <class Record>
int Sortable_list<Record>::partition(int low, int high)
/* Pre:  low and high are valid positions of the Sortable_list, with low <= high.
   Post: The center (or left-center) entry in the range between indices low and
         high of the Sortable_list has been chosen as a pivot. All entries of the
         Sortable_list between indices low and high, inclusive, have been
         rearranged so that those with keys less than the pivot come before the
         pivot and the remaining entries come after the pivot. The final
         position of the pivot is returned.
   Uses: swap(int i, int j); the contiguous List. */
{
    Record pivot;
    int i;           // used to scan through the list
    int last_small;  // position of the last key less than pivot
    swap(low, (low + high) / 2);
    pivot = entry[low];
    last_small = low;
    for (i = low + 1; i <= high; i++)
        // At the beginning of each iteration of this loop, we have the
        // following conditions:
        //   If low < j <= last_small, then entry[j].key < pivot.
        //   If last_small < j < i, then entry[j].key >= pivot.
        if (entry[i] < pivot) {
            last_small = last_small + 1;
            swap(last_small, i);  // Move large entry to right and small to left.
        }
    swap(low, last_small);  // Put the pivot into its proper position.
    return last_small;
}

template <class Record>
void Sortable_list<Record>::recursive_quick_sort(int low, int high)
/* Pre:  low and high are valid positions in the Sortable_list.
   Post: The entries of the Sortable_list have been rearranged so that their
         keys are sorted into nondecreasing order.
   Uses: The contiguous List, recursive_quick_sort, partition. */
{
    int pivot_position;
    if (low < high) {  // Otherwise, no sorting is needed.
        pivot_position = partition(low, high);
        recursive_quick_sort(low, pivot_position - 1);
        recursive_quick_sort(pivot_position + 1, high);
    }
}

[Experiment results]

Heapsort stores the records in the table as a max-heap (or min-heap), making it extremely easy to select the largest record. After each selection the heap is rebuilt; repeating this implements the sort. [Related experiments] 1. The Sortable_list above is still used.
2. Write the heap_sort() function. Following this line of thought, we should first build a heap, then take the top element of the heap, and then rebuild the heap from the remaining elements. So we need to write the build_heap() function and the insert_heap() function, which inserts elements into the heap one by one.
Finally, implement the heap_sort() function.
3. We call traverse(print_out) on every heap insertion to output each pass.

template <class Record>
void Sortable_list<Record>::insert_heap(const Record &current, int low, int high)
/* Pre:  The entries of the Sortable_list between indices low + 1 and high,
         inclusive, form a heap. The entry in position low will be discarded.
   Post: The entry current has been inserted into the Sortable_list and the
         entries rearranged so that the entries between indices low and high,
         inclusive, form a heap. */
{
    int large;  // position of child of entry[low] with the larger key
    large = 2 * low + 1;  // large is now the left child of low.
    while (large <= high) {
        if (large < high && entry[large] < entry[large + 1])
            large++;  // large is now the child of low with the largest key.
        if (current >= entry[large])
            break;  // current belongs in position low.
        else {  // Promote entry[large] and move down the tree.
            entry[low] = entry[large];
            low = large;
            large = 2 * low + 1;
        }
    }
    entry[low] = current;
    traverse(print_out);
    cout << endl;
}

template <class Record>
void Sortable_list<Record>::build_heap()
/* Post: The entries of the Sortable_list have been rearranged so that it
         becomes a heap.
   Uses: The contiguous List and insert_heap. */
{
    int low;
    for (low = count / 2 - 1; low >= 0; low--) {
        Record current = entry[low];
        insert_heap(current, low, count - 1);
    }
}

template <class Record>
void Sortable_list<Record>::heap_sort()
/* Post: The entries of the Sortable_list have been rearranged so that their
         keys are sorted into nondecreasing order.
   Uses: The contiguous List, build_heap, insert_heap. */
{
    Record current;     // temporary storage for moving entries
    int last_unsorted;  // Entries beyond last_unsorted have been sorted.
    build_heap();       // First phase: turn the list into a heap.
    for (last_unsorted = count - 1; last_unsorted > 0; last_unsorted--) {
        current = entry[last_unsorted];  // Extract the last entry from the list.
        entry[last_unsorted] = entry[0]; // Move top of heap to the end.
        insert_heap(current, 0, last_unsorted - 1);  // Restore the heap.
    }
}
// For other functions, see the source code.
[Experiment results] Result analysis: [Shell sort]

1. Shell sort is an improvement of direct insertion sort and greatly improves its efficiency.
Direct insertion sort moves a record only one position at a time, i.e. its increment is 1. In Shell sort the increment is large at the start, so there are many groups and each group has few records; direct insertion within each group is therefore fast. As the increment shrinks, there are fewer groups with more records each, but because the earlier large-increment passes have already brought the table close to sorted order, the later passes are also fast.
2. In the experiment, each sub-table is sorted by defining a new table and calling insert() on it directly. This is not very efficient and wastes space; it would be better to insertion sort the sub-tables in place.
3. The complexity of Shell sort is roughly O(n log2 n); when the increment d reaches 1, the final pass is basically direct insertion sort.
4. Shell sort is an unstable sorting algorithm; that is, the relative order of equal elements may change. [Merge sort]

1. It would be more reasonable to use a linked list in the implementation of merge sort, because merge() needs to define a new table; even if we allocate it dynamically and delete it to avoid wasting space, this work is still time-consuming. A linked list merges by relinking node pointers (Node<Node_entry> *next) without moving data. At the same time, using linked lists requires familiarity with pointers and is prone to errors; it is often clearer to draw a picture before coding.
2. Complexity of merge sort: O(n log2 n).
3. Merge sort is a stable sorting algorithm; that is, the relative order of equal elements will not change. [Quicksort]

1. Complexity: best case O(n log2 n), worst case O(n^2).
2. The worst case of quicksort depends on the pivot chosen in each partition. Basic quicksort selects the first element as the pivot, so when the array is already ordered, every partition degenerates to the worst case. A common optimization is randomization, i.e. selecting a random element as the pivot. The worst case is then still O(n^2), but it no longer depends on the input data, only on unlucky values from the random function.
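A minimal sketch of the randomized-pivot variant on a std::vector, using the same last_small partition scheme as the experiment's code (the names here are illustrative, not the class methods themselves):

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Partition with a randomly chosen pivot: swap a random element into the
// pivot position first, so the worst case no longer depends on input order.
int partition_random(std::vector<int> &a, int low, int high) {
    int r = low + std::rand() % (high - low + 1);  // random index in [low, high]
    std::swap(a[low], a[r]);                       // randomly chosen pivot
    int pivot = a[low], last_small = low;
    for (int i = low + 1; i <= high; ++i)
        if (a[i] < pivot) std::swap(a[++last_small], a[i]);
    std::swap(a[low], a[last_small]);              // pivot into its final place
    return last_small;
}

void quick_sort_random(std::vector<int> &a, int low, int high) {
    if (low < high) {
        int p = partition_random(a, low, high);
        quick_sort_random(a, low, p - 1);
        quick_sort_random(a, p + 1, high);
    }
}
```

With this change the expected running time is O(n log n) on every input, although any particular run can still be unlucky.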
3. Quicksort is an unstable sorting algorithm. [Heapsort]

1. A very important operation in implementing heapsort is building the heap.
To adjust the initial table into a max-heap, the subtree of the corresponding complete binary tree rooted at each node must be turned into a heap. Obviously, a tree with only one node is a heap. In a complete binary tree, all nodes numbered greater than n/2 are leaves, so the subtrees rooted at those nodes are already heaps. Thus we only need to take the nodes numbered n/2, ..., 1 in turn and turn the subtree rooted at each into a heap.
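This bottom-up construction can be sketched with 0-based indices (so the first non-leaf is n/2 - 1) on a std::vector; this is an illustrative sketch, not the experiment's class code:

```cpp
#include <vector>

// Sift entry a[low] down until a[low..high] satisfies the max-heap property.
void sift_down(std::vector<int> &a, int low, int high) {
    int current = a[low];
    int large = 2 * low + 1;              // left child of low
    while (large <= high) {
        if (large < high && a[large] < a[large + 1])
            ++large;                      // pick the larger of the two children
        if (current >= a[large]) break;   // heap property restored
        a[low] = a[large];                // promote the child and descend
        low = large;
        large = 2 * low + 1;
    }
    a[low] = current;
}

// Nodes past n/2 - 1 are leaves (already heaps), so sifting down from
// n/2 - 1 back to the root turns the whole array into a max-heap.
void build_heap(std::vector<int> &a) {
    int n = static_cast<int>(a.size());
    for (int low = n / 2 - 1; low >= 0; --low)
        sift_down(a, low, n - 1);
}
```

After build_heap the largest key sits at index 0, ready for the selection phase of heapsort.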
2. The running time of heapsort consists mainly of the cost of building the initial heap and of repeatedly rebuilding it. The worst-case time complexity of heapsort is O(n log2 n).
3. Heapsort is an unstable sorting algorithm.
4. Because building the initial heap requires a large number of comparisons, heapsort is not suitable for files with a small number of records.
Heapsort selects the largest element directly from the top of the heap, which is very similar to selection sort, but they differ, and heapsort performs better. In selection sort, choosing the smallest record in the table requires n - 1 comparisons, and choosing the record with the smallest key in the remaining table requires n - 2 more. In fact, many of those n - 2 comparisons were already made among the earlier n - 1, but since the results were not retained, the comparisons are repeated in the next pass. Heapsort keeps partial comparison results in a tree structure, reducing the number of comparisons. (Please credit the author and source when reprinting: http://blog.csdn.net/xiaowei_cqu. Commercial use is not allowed.)
