Merge Sort & Quick Sort


Merge Sort

Merge sort is an efficient sorting algorithm based on the merge operation, and it is a classic application of the divide-and-conquer technique.

Ordered subsequences are merged to obtain a fully ordered sequence: first make each subsequence ordered, then make the sequence of subsequences ordered as a whole. Combining two ordered lists into one ordered list is called a two-way merge.

The merge process is:

Compare a[i] with a[j]. If a[i] ≤ a[j], copy the element a[i] from the first ordered list into r[k] and increment i and k by 1.

Otherwise, copy the element a[j] from the second ordered list into r[k] and increment j and k by 1. Repeat until one of the ordered lists is exhausted, then copy the remaining elements of the other list into r, from index k up to index t.

Merge sort is usually implemented recursively: split the interval [s, t] at its midpoint into two halves, sort the left subinterval, then sort the right subinterval, and finally combine the two ordered halves with a merge operation into the single ordered interval [s, t], as sketched below.
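As a minimal sketch of this recursive [s, t] scheme (the names msort_rec and merge_range are illustrative only; this is not the bottom-up implementation listed in the next section):

    /* Recursive merge sort over the interval [s, t]: a minimal sketch. */
    void merge_range(int a[], int tmp[], int s, int mid, int t)
    {
        int i = s, j = mid + 1, k = s;

        while (i <= mid && j <= t)              /* two-way merge */
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= mid)                        /* copy any remainder of the left half */
            tmp[k++] = a[i++];
        while (j <= t)                          /* copy any remainder of the right half */
            tmp[k++] = a[j++];
        for (i = s; i <= t; ++i)                /* write the merged result back into a[] */
            a[i] = tmp[i];
    }

    void msort_rec(int a[], int tmp[], int s, int t)
    {
        if (s < t) {
            int mid = s + (t - s) / 2;
            msort_rec(a, tmp, s, mid);          /* sort the left subinterval */
            msort_rec(a, tmp, mid + 1, t);      /* sort the right subinterval */
            merge_range(a, tmp, s, mid, t);     /* merge [s, mid] and [mid+1, t] */
        }
    }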

================

1. Merge sort

mergeSort.h:

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    void merge(int a[], int b[], int c[], int m, int n);
    void mergesort(int key[], int n);
    void wrt(int key[], int sz);

merge.c:

    /* Merge a[] of size m and b[] of size n into c[]. */
    #include "mergeSort.h"

    void merge(int a[], int b[], int c[], int m, int n)
    {
        int i = 0, j = 0, k = 0;

        while (i < m && j < n)              /* two-way merge */
            if (a[i] < b[j])
                c[k++] = a[i++];
            else
                c[k++] = b[j++];
        while (i < m)                       /* pick up any remainder of a[] */
            c[k++] = a[i++];
        while (j < n)                       /* pick up any remainder of b[] */
            c[k++] = b[j++];
    }

mergesort.c:

    /* mergesort: use merge() to sort an array of size n. */
    #include "mergeSort.h"

    void mergesort(int key[], int n)
    {
        int j, k, m, *w;

        for (m = 1; m < n; m *= 2)
            ;                               /* m becomes the smallest power of 2 >= n */
        if (n < m) {
            printf("Error: array size not a power of 2 - bye!\n");
            exit(1);
        }
        w = calloc(n, sizeof(int));         /* allocate workspace */
        assert(w != NULL);                  /* check that calloc() worked */
        for (k = 1; k < n; k *= 2) {
            for (j = 0; j < n - k; j += 2 * k)
                /* Merge two subarrays of key[] into a subarray of w[]. */
                merge(key + j, key + j + k, w + j, k, k);
            for (j = 0; j < n; ++j)
                key[j] = w[j];              /* write w back into key */
        }
        free(w);                            /* free the workspace */
    }

main.c:

    /* Test merge() and mergesort(). */
    #include "mergeSort.h"

    int main(void)
    {
        int sz, key[] = {  4,  3, 1, 67, 55, 8, 0, 4,
                          -5, 37, 7,  4,  2, 9, 1, 1 };

        sz = sizeof(key) / sizeof(int);     /* the number of elements in key[] */
        printf("Before mergesort:\n");
        wrt(key, sz);
        mergesort(key, sz);
        printf("After mergesort:\n");
        wrt(key, sz);
        return 0;
    }

wrt.c:

    /* Print the elements of an array. */
    #include "mergeSort.h"

    void wrt(int key[], int sz)
    {
        int i;

        for (i = 0; i < sz; ++i)
            printf("%4d%s", key[i], ((i < sz - 1) ? "" : "\n"));
    }
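Assuming the four source files above are compiled and linked together with any standard C compiler, the test program should print output along these lines (the 16 keys before sorting, then the same keys in nondecreasing order):

    Before mergesort:
       4   3   1  67  55   8   0   4  -5  37   7   4   2   9   1   1
    After mergesort:
      -5   0   1   1   1   2   3   4   4   4   7   8   9  37  55  67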

================

Quick Sort

Quick sort (quicksort) is an improvement on bubble sort.

Basic idea: split the data into two parts (divide and conquer).

Split the data to be sorted into two parts such that every element of one part is smaller than every element of the other, then recursively quick-sort each part in the same way. The whole process can be carried out recursively until the entire data set becomes an ordered sequence.

Quick sort diagram (figure omitted).

1. Let the array to be sorted be a[0] ... a[n-1]. Start by selecting one element (usually the first element of the array) as the key (pivot), then move every smaller number in front of it and every larger number behind it. This process is called one pass of quick sort.

2. Quick sort is not a stable sorting algorithm, which means that the relative order of equal values may change by the end of the algorithm.

================

2. Quick Sort

    /* quicksort: pointer version with macros. */
    #define SWAP(x, y)   { int t; t = x; x = y; y = t; }
    #define ORDER(x, y)  if (x > y) SWAP(x, y)
    #define O2(x, y)     ORDER(x, y)
    #define O3(x, y, z)  O2(x, y); O2(x, z); O2(y, z)

    typedef enum { yes, no } yes_no;

    static yes_no find_pivot(int *left, int *right, int *pivot_ptr);
    static int *partition(int *left, int *right, int pivot);

    /* Recursive implementation; the basic idea is divide and conquer.
       Call as quicksort(a, a + n - 1) to sort an array a[] of n elements. */
    void quicksort(int *left, int *right)
    {
        int *p, pivot;

        if (find_pivot(left, right, &pivot) == yes) {
            p = partition(left, right, pivot);
            quicksort(left, p - 1);         /* sort the elements below the pivot */
            quicksort(p, right);            /* sort the elements from the pivot up */
        }
    }

    static yes_no find_pivot(int *left, int *right, int *pivot_ptr)
    {
        int a, b, c, *p;

        a = *left;                          /* leftmost value */
        b = *(left + (right - left) / 2);   /* middle value */
        c = *right;                         /* rightmost value */

        O3(a, b, c);                        /* order a, b, c */
        if (a < b) {                        /* pick the larger of two distinct values */
            *pivot_ptr = b;
            return yes;
        }
        if (b < c) {
            *pivot_ptr = c;
            return yes;
        }
        for (p = left + 1; p <= right; ++p)
            if (*p != *left) {
                *pivot_ptr = (*p < *left) ? *left : *p;
                return yes;
            }
        return no;                          /* all elements have the same value */
    }

    /* The main work is done by the partition() function. */
    static int *partition(int *left, int *right, int pivot)
    {
        while (left <= right) {
            while (*left < pivot)           /* scan up for an element >= pivot */
                ++left;
            while (*right >= pivot)         /* scan down for an element < pivot */
                --right;
            if (left < right) {
                SWAP(*left, *right);        /* exchange the out-of-place pair */
                ++left;
                --right;
            }
        }
        return left;
    }

Quicksort is very efficient in practice; its average-case time complexity is O(n log n). A hypothetical test driver is sketched below.
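The listing above does not include a driver, so here is a minimal, hypothetical test program based on the quicksort(a, a + n - 1) calling convention noted in the comment above; the array contents are illustrative only, and it assumes the program is linked with the quicksort code:

    /* Hypothetical driver for the quicksort() routine above. */
    #include <stdio.h>

    void quicksort(int *left, int *right);   /* defined in the listing above */

    int main(void)
    {
        int i, a[] = { 9, -3, 7, 7, 0, 12, 5, 1 };
        int n = sizeof(a) / sizeof(int);

        quicksort(a, a + n - 1);              /* sort a[0] .. a[n-1] */
        for (i = 0; i < n; ++i)               /* print the sorted array */
            printf("%d%c", a[i], (i < n - 1) ? ' ' : '\n');
        return 0;
    }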

================

Ps: Recursion

Recursive algorithms are generally used to solve three kinds of problems:

(1) The data is defined recursively (e.g., the Fibonacci function).

(2) The solution of the problem is naturally expressed recursively. Even when the problem has no obvious recursive structure, a recursive solution can be simpler than an iterative one, as with the Tower of Hanoi problem.

(3) The data structure itself is defined recursively, such as binary trees and generalized lists; because of the inherent recursive nature of these structures, operations on them can be described recursively.

Disadvantages of recursion: compared with an ordinary loop, recursion generally runs less efficiently, so it should be avoided unless no better algorithm exists or the situation is particularly well suited to recursion. During recursive calls the system must save the return address, local variables, and so on for every level on a stack, so too many levels of recursion can cause a stack overflow.

A typical recursive problem is the Tower of Hanoi: there are three pegs A, B, and C, and on peg A there are n disks stacked from top to bottom in order of increasing size. All the disks must be moved from peg A to peg B under these rules: peg C may be used as temporary storage, only one disk may be moved at a time, and a larger disk may never be placed on top of a smaller one. Find the scheme with the fewest moves. A minimal sketch follows.
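As a small illustration of case (2), here is a minimal recursive sketch of the Tower of Hanoi move sequence (the function name hanoi and the peg parameters are assumptions made for this example):

    /* Tower of Hanoi: move n disks from peg 'from' to peg 'to', using peg 'via'
       as temporary storage. Moving n disks takes 2^n - 1 moves, the minimum. */
    #include <stdio.h>

    void hanoi(int n, char from, char to, char via)
    {
        if (n == 0)
            return;
        hanoi(n - 1, from, via, to);          /* park the top n-1 disks on the spare peg */
        printf("move disk %d: %c -> %c\n", n, from, to);
        hanoi(n - 1, via, to, from);          /* stack the n-1 disks onto the moved disk */
    }

    int main(void)
    {
        hanoi(3, 'A', 'B', 'C');              /* as in the problem: from A to B, via C */
        return 0;
    }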

================

Ps:

[ one sentence per day ]

There's a plan to make all of the.

[ an English song every day ]

"Call Me Maybe"-Carly Rae Jepsen

================

|-> GitHub: spongebob-github

|--> Copyright (c) Bing Ma.

