Merge sort runs in O(N log N) worst-case time, and the number of comparisons it uses is nearly optimal. It is a good example of a recursive algorithm.
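A quick sketch of why the bound holds (assuming the divide step takes constant time and the merge step takes linear time, as described below): the running time satisfies the recurrence

T(N) = 2T(N/2) + cN, with T(1) = c,

and unrolling it log2 N times gives T(N) = cN log2 N + cN = O(N log N).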
Merge sort also follows the divide-and-conquer paradigm. Intuitively, it operates as follows:
Divide: split the sequence of n elements to be sorted into two subsequences of n/2 elements each.
Conquer: sort the two subsequences recursively using merge sort.
Merge: merge the two sorted subsequences to produce the sorted answer.
When the sequence to be sorted has length 1, the recursion bottoms out; in that case no work is needed, because every sequence of length 1 is already sorted.
The key operation of the merge sort algorithm is merging the two sorted sequences in the combine step. We perform the merge by calling an auxiliary procedure Merge(A, p, q, r), where A is an array and p, q, r are array indices satisfying p ≤ q < r.
The idea of the Merge procedure is as follows: take the two sorted input subarrays A[p..q] and A[q+1..r] and a temporary output array B. Let i = p be the index into A[p..q], j = q+1 the index into A[q+1..r], and k = 0 the index into the temporary output array B. At each step the smaller of A[i] and A[j] is copied to the next position in B, and the corresponding counters advance one step. When one of the two input subarrays is exhausted, the remainder of the other is copied to B.
The code is as follows:
// Forward declaration of the merging routine.
void Merge(int*, int, int, int);

// Sorts A[p..r] (inclusive) recursively.
void MergeSort(int A[], int p, int r) {
    if (p < r) {
        int q = (p + r) / 2;        // split point
        MergeSort(A, p, q);         // sort the left half
        MergeSort(A, q + 1, r);     // sort the right half
        Merge(A, p, q, r);          // merge the two sorted halves
    }
}

// Merges the sorted subarrays A[p..q] and A[q+1..r] back into A[p..r].
void Merge(int A[], int p, int q, int r) {
    int* B = new int[r - p + 1];    // temporary output array
    int i = p, j = q + 1, k = 0;
    while (i <= q && j <= r) {      // copy the smaller front element
        if (A[i] <= A[j]) {
            B[k++] = A[i++];
        } else {
            B[k++] = A[j++];
        }
    }
    while (i <= q) {                // copy any remainder of the left half
        B[k++] = A[i++];
    }
    while (j <= r) {                // copy any remainder of the right half
        B[k++] = A[j++];
    }
    for (k = 0; k < r - p + 1; k++) // copy the merged result back into A
        A[p + k] = B[k];
    delete[] B;
}
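For completeness, a minimal driver for the routines above might look like this (the array contents are just an illustration):

#include <iostream>

int main() {
    int A[] = {5, 2, 4, 7, 1, 3, 2, 6};
    int n = sizeof(A) / sizeof(A[0]);
    MergeSort(A, 0, n - 1);          // sort the whole array A[0..n-1]
    for (int k = 0; k < n; k++)
        std::cout << A[k] << ' ';    // prints: 1 2 2 3 4 5 6 7
    std::cout << '\n';
    return 0;
}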
Although merge sort runs in O(N log N) time, it is seldom used for main-memory sorting. The major problem is that merging two sorted lists requires linear extra memory, and throughout the algorithm there is the additional work of copying data into the temporary array and back again; this copying slows down the sort considerably.
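One common way to reduce part of that overhead (a sketch only, not the code above; the names MergeWithBuffer, MergeSortHelper and MergeSortReuse are made up for illustration) is to allocate the temporary array once and reuse it at every level of the recursion, so the per-call new/delete disappears and only the copy-back remains:

#include <vector>

// Same merge logic as above, but writing into a caller-supplied buffer B.
void MergeWithBuffer(int A[], int B[], int p, int q, int r) {
    int i = p, j = q + 1, k = 0;
    while (i <= q && j <= r)
        B[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i <= q) B[k++] = A[i++];
    while (j <= r) B[k++] = A[j++];
    for (k = 0; k < r - p + 1; k++)
        A[p + k] = B[k];             // copy the merged run back into A
}

void MergeSortHelper(int A[], int B[], int p, int r) {
    if (p < r) {
        int q = (p + r) / 2;
        MergeSortHelper(A, B, p, q);
        MergeSortHelper(A, B, q + 1, r);
        MergeWithBuffer(A, B, p, q, r);
    }
}

// Entry point: the scratch buffer is allocated exactly once.
void MergeSortReuse(int A[], int n) {
    std::vector<int> B(n);
    MergeSortHelper(A, B.data(), 0, n - 1);
}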
Compared with the other O(N log N) sorts, the running time of merge sort depends heavily on the relative costs of comparing elements and moving elements in the array, and these costs depend on the programming language.
For example, in Java, when sorting generic objects an element comparison is expensive, but moving elements is cheap (only references are moved). Among all the popular sorting algorithms, merge sort uses the fewest comparisons. Therefore, in Java, merge sort is a good choice for general-purpose sorting; in fact, it is the algorithm used for generic sorting in the standard Java library.
On the other hand, in C++, in a generic sort, copying objects can be expensive when the objects are large, while comparing them is often relatively cheap, because the compiler can aggressively perform inline optimization when instantiating function templates.
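To make that trade-off concrete, here is a small illustrative sketch (the BigRecord struct, its 1 KB payload and the ByKey comparator are made up for the example): the comparison touches a single int and is easily inlined, while every copy or move drags the whole payload along. Note that std::stable_sort is typically implemented as a merge-sort variant and may use a temporary buffer, whereas std::sort is not stable but needs no extra array.

#include <algorithm>
#include <vector>

// Hypothetical large record: comparing costs one int compare, copying moves ~1 KB.
struct BigRecord {
    int key;
    char payload[1024];
};

// A trivial function object the compiler can inline when instantiating the sort template.
struct ByKey {
    bool operator()(const BigRecord& a, const BigRecord& b) const {
        return a.key < b.key;
    }
};

int main() {
    std::vector<BigRecord> v(1000);
    for (int i = 0; i < 1000; i++)
        v[i].key = 1000 - i;          // reverse-ordered keys
    std::stable_sort(v.begin(), v.end(), ByKey());
    return 0;
}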
"Classic algorithm" merge sort