Data Structure: Summary of Various Sorting Algorithms [continued]

Source: Internet
Author: User

Summary of various sorting algorithms: 3. Exchange sorting [continued]

2. Quick sorting

Quick sort works by comparing keys and exchanging records so as to divide the sequence to be sorted into two parts around one chosen record (called the pivot record): every record in one part has a key greater than or equal to the pivot's key, and every record in the other part has a key less than the pivot's key. Splitting the sequence into two such parts around the pivot is called one division (partition). The resulting parts are divided again and again until the entire sequence is ordered by key.

If, after each division places the pivot element, the subsequence to its left has the same length as the subsequence to its right, then the next step sorts two subsequences each half as long; this is the ideal case!

 

[Algorithm]

// Pseudo code representation of one division of a quick sort:
int Partition1(Elem R[], int low, int high) {
    KeyType pivotkey = R[low].key;  // use the first record of the subtable as the pivot record
    while (low < high) {
        while (low < high && R[high].key >= pivotkey) --high;
        swap(R[low], R[high]);      // move the record smaller than the pivot to the low end
        while (low < high && R[low].key <= pivotkey) ++low;
        swap(R[low], R[high]);      // move the record larger than the pivot to the high end
    }
    return low;                     // the pivot's final position
}

 

It is easy to see that the pivot's intermediate positions during the adjustment do not matter. Therefore, to reduce the number of record moves, the pivot record should first be set aside; only after its final position has been found (when low = high) is the pivot record written into place.

Rewrite the "one-time division" algorithm as follows:

int Partition2(Elem R[], int low, int high) {
    R[0] = R[low];                  // set aside the first record of the subtable as the pivot record
    KeyType pivotkey = R[low].key;  // key of the pivot record
    while (low < high) {
        while (low < high && R[high].key >= pivotkey) --high;
        R[low] = R[high];           // move the record smaller than the pivot to the low end
        while (low < high && R[low].key <= pivotkey) ++low;
        R[high] = R[low];           // move the record larger than the pivot to the high end
    }
    R[low] = R[0];                  // put the pivot record into its final position
    return low;
}

// Recursive quick sorting algorithm:
void QSort(Elem R[], int low, int high) {
    // quick-sort the record sequence R[low..high]
    if (low < high) {
        int pivotloc = Partition2(R, low, high); // one division splits R[low..high] in two
        QSort(R, low, pivotloc - 1);             // recursively sort the left subsequence
        QSort(R, pivotloc + 1, high);            // recursively sort the right subsequence
    }
}
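As a concrete check, here is a minimal runnable sketch in C++ on plain int keys, following the same scheme as Partition2 (holding the pivot aside and filling holes from alternating ends). The names partitionInt and quickSortInt are mine, not from the text.

```cpp
// Sketch: quick sort on an int array, Partition2-style (hole filling).
int partitionInt(int r[], int low, int high) {
    int pivot = r[low];              // set the pivot record aside
    while (low < high) {
        while (low < high && r[high] >= pivot) --high;
        r[low] = r[high];            // move the smaller record to the low end
        while (low < high && r[low] <= pivot) ++low;
        r[high] = r[low];            // move the larger record to the high end
    }
    r[low] = pivot;                  // pivot drops into its final position
    return low;
}

void quickSortInt(int r[], int low, int high) {
    if (low < high) {
        int p = partitionInt(r, low, high); // one division
        quickSortInt(r, low, p - 1);        // sort the left part
        quickSortInt(r, p + 1, high);       // sort the right part
    }
}
```

Calling quickSortInt(a, 0, n - 1) sorts a[0..n-1] ascending in place.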

[Performance Analysis]

(1) Space efficiency: quick sort is recursive; the pointers and parameters of every level of recursive call must be saved on a stack. The number of recursion levels matches the depth of the corresponding binary tree of partitions, so the storage overhead is O(log2n) in the ideal case (the height of a balanced tree), and O(n) in the worst case, when the tree degenerates into a single chain.

(2) Time efficiency: for a sequence of n records, one division requires about n key comparisons, i.e. O(n) time. Let T(n) be the time required to quick-sort a sequence of n records. Ideally, each division splits the sequence into two subsequences of equal length:

 
T(n) ≤ cn + 2T(n/2)                          (c a constant)
     ≤ cn + 2(cn/2 + 2T(n/4)) = 2cn + 4T(n/4)
     ≤ 2cn + 4(cn/4 + 2T(n/8)) = 3cn + 8T(n/8)
     ...
     ≤ cn·log2n + nT(1) = O(nlog2n)

 

It can be proved that the average time complexity of quick sort is also O(nlog2n).

Worst case: each division yields only one nonempty subsequence, and the time complexity becomes O(n^2).

Quick sort is generally considered to have the best average performance among sorting methods of the same order O(nlog2n). However, if the initial sequence is already ordered or nearly ordered by key, quick sort degenerates to no better than bubble sort. As an improvement, the pivot record is usually chosen by the "median-of-three" rule: the median of the keys at the two endpoints and the midpoint of the interval being sorted is taken as the pivot.
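The "median-of-three" pivot choice can be sketched as follows, assuming the int-array setting used above (the helper name medianOfThreeToFront is mine, not from the text): it orders the three candidate keys and moves the median to the front, where the partition routine expects the pivot.

```cpp
#include <utility>

// Sketch: before partitioning r[low..high], put the median of
// r[low], r[mid], r[high] into r[low] so it becomes the pivot.
void medianOfThreeToFront(int r[], int low, int high) {
    int mid = low + (high - low) / 2;
    if (r[mid]  < r[low]) std::swap(r[mid],  r[low]);
    if (r[high] < r[low]) std::swap(r[high], r[low]);
    if (r[high] < r[mid]) std::swap(r[high], r[mid]);
    // now r[low] <= r[mid] <= r[high]
    std::swap(r[low], r[mid]);       // move the median to the front
}
```

On an already-sorted interval this picks the middle key as the pivot, so the division splits the interval evenly instead of degenerating.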

(3) Fast sorting is an unstable sorting method.

(4) Worst case: space complexity O(n), time complexity O(n^2)

Average case: space complexity O(log2n), time complexity O(nlog2n)

(5) Quick sort is well suited to cases where n is large.

 

4. Selection sorting

1. Simple selection sort

Simple selection sort is the simplest of the selection sorts. Suppose that, before the i-th pass, the sequence consists of an ordered subsequence R[1..i-1] followed by an unordered subsequence R[i..n], where every key in the ordered subsequence is smaller than every key in the unordered one. The i-th pass of simple selection sort then selects, from the n-i+1 records of the unordered subsequence R[i..n], the record with the smallest key and appends it to the ordered subsequence.

Operation method: first, find the record with the smallest key among all n records and exchange it with the 1st record; then select the record with the smallest key among the n-1 records starting from the 2nd record and exchange it with the 2nd record; in general, on the i-th pass, select the record with the smallest key among the n-i+1 records starting from the i-th record and exchange it with the i-th record, until the entire sequence is ordered by key.

 

[Algorithm]

// C++ code
int selectMinIndex(int *A, int index, int length) {
    int min = index;
    for (int i = index + 1; i != length; ++i) {
        if (A[i] < A[min]) min = i;   // remember the smallest element seen so far
    }
    return min;
}

void selectSort(int *A, int length) {
    for (int i = 0; i != length; ++i) {
        int k = selectMinIndex(A, i, length);
        if (k != i) {                 // swap the minimum into position i
            int temp = A[i];
            A[i] = A[k];
            A[k] = temp;
        }
    }
}

[Performance Analysis]

(1) space efficiency: Only one auxiliary unit is used, and the space complexity is O (1 ).

(2) Time efficiency: the best-case, worst-case, and average time complexity are all O(n^2), since the number of key comparisons does not depend on the initial order.

(3) Stability: textbooks disagree about the stability of simple selection sort. It is generally argued that if the minimum among equal keys is chosen by scanning from front to back (taking the earliest record), the selection step preserves the order of equal keys, whereas scanning from back to front does not. Note, however, that the long-distance swap itself can still reorder equal keys, so the swap-based version is usually classified as unstable.
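To make the stability question concrete, here is a small sketch (the function name and the (key, tag) encoding are mine, not from the text) that runs the swap-based selection sort on records with duplicate keys and reports the resulting tag order:

```cpp
#include <string>
#include <utility>
#include <vector>

// Sketch: the swap-based simple selection sort applied to (key, tag)
// records, so we can watch what happens to equal keys.
std::string selectSortTags(std::vector<std::pair<int, char>> r) {
    for (size_t i = 0; i + 1 < r.size(); ++i) {
        size_t min = i;
        for (size_t j = i + 1; j < r.size(); ++j)
            if (r[j].first < r[min].first) min = j;  // first minimum wins ties
        std::swap(r[i], r[min]);                     // long-distance swap
    }
    std::string tags;
    for (const auto &p : r) tags += p.second;        // collect tags in final order
    return tags;
}
```

Called on keys 2, 2, 1 tagged 'a', 'b', 'c', it returns "cba": the keys end up sorted, but the swap in the first pass moved 2(a) behind 2(b), which is exactly why the swap-based form is called unstable.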

 

2. Heap sorting

Heap sort is characterized by reusing, in later selection passes, the key comparison results obtained in earlier passes.

Heap definition: a sequence of keys {r1, r2, ..., rn} is a heap if and only if, for 1 <= i <= n/2,

ri <= r2i and ri <= r2i+1 (small-top heap), or
ri >= r2i and ri >= r2i+1 (large-top heap).

If we view this sequence as a complete binary tree, a heap is either an empty tree or a complete binary tree whose left and right subtrees are themselves heaps and whose root value, whenever a subtree is nonempty, is no greater than (or no less than) that subtree's root value.

Therefore, if the above series is a heap, r1 must be the minimum or maximum value in the series, which are called a small top heap or a large top heap respectively.

Heap sort uses this heap property to sort a record sequence. Given n elements to sort by key: first build the n elements into a heap by key and output the heap top, obtaining the element with the smallest (or largest) key among the n; then rebuild the heap from the remaining n-1 elements and output the heap top, obtaining the element with the next smallest (or next largest) key. Repeating this yields a sequence ordered by key; the whole process is called heap sort.

 

Therefore, two problems need to be solved to achieve heap sorting:

(1) how to build the sequence of n elements into a heap based on the key code.

Heap-building method: turning the initial sequence into a heap is a process of repeated sifting. In a complete binary tree of n nodes, the last node that has a child is node ⌊n/2⌋. First sift the subtree rooted at node ⌊n/2⌋ so that it becomes a heap; then sift the subtrees rooted at nodes ⌊n/2⌋-1, ⌊n/2⌋-2, ..., in turn, until the root node is processed.

(2) how, after the heap top element has been output, to adjust the remaining n-1 elements so that they again form a heap by key.

Adjustment method: given a heap of m elements, output the heap top; m-1 elements remain. Move the element at the bottom of the heap to the top; the heap is now broken only in that the root fails the heap property. Exchange the root with the smaller (or, for a large-top heap, the larger) of its two children. If it is exchanged with the left child, only the left subtree's root can now violate the heap property; if with the right child, only the right subtree's root can. Continue this exchange on whichever subtree violates the property until a leaf node is reached. This adjustment process from the root toward a leaf is called sifting (filtering).

 

[Algorithm]

The heap sorting algorithm is as follows:

void heapSort(Elem R[], int n) {
    // heap-sort the record sequence R[1..n]
    for (i = n / 2; i > 0; --i)     // build R[1..n] into a big-top heap
        HeapAdjust(R, i, n);
    for (i = n; i > 1; --i) {
        swap(R[1], R[i]);           // exchange the heap top with the last record of the unsorted subsequence R[1..i]
        HeapAdjust(R, 1, i - 1);    // re-adjust R[1..i-1] into a big-top heap
    }
}

 

The sifting algorithm is as follows. To turn R[s..m] into a big-top heap, the "filtering" must proceed downward along the child with the larger key.

void HeapAdjust(Elem R[], int s, int m) {
    /* Known: the keys of the records in R[s..m] satisfy the heap
       definition everywhere except possibly at R[s].key. This function
       adjusts R[s] so that R[s..m] becomes a big-top heap (with respect
       to the keys of the records in it). */
    rc = R[s];
    for (j = 2 * s; j <= m; j *= 2) {               // sift down along the child with the larger key
        if (j < m && R[j].key < R[j + 1].key) ++j;  // j is the subscript of the larger child
        if (rc.key >= R[j].key) break;              // rc should be inserted at position s
        R[s] = R[j];
        s = j;
    }
    R[s] = rc;                                      // insert
}

 

[Performance Analysis]

(1) space efficiency: Only one auxiliary unit is used, and the space complexity is O (1 ).

(2) time efficiency:

① For a heap of depth k, one "sift" requires at most 2(k-1) key comparisons;

② For n keys, building a heap of depth h = ⌊log2n⌋ + 1 requires at most 4n key comparisons;

③ The heap top is re-adjusted n-1 times, and the total number of key comparisons does not exceed

2(⌊log2(n-1)⌋ + ⌊log2(n-2)⌋ + ... + ⌊log2 2⌋) < 2n⌊log2n⌋

Therefore, both the average and the worst-case time complexity of heap sort are O(nlog2n).

(3) Heap sorting is an unstable sorting method.
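A runnable 0-based counterpart of the heapSort/HeapAdjust pseudocode above, on plain int keys (the names siftDown and heapSortInt are mine; in 0-based indexing the children of node s sit at 2s+1 and 2s+2 rather than 2s and 2s+1):

```cpp
// Sketch: big-top heap sort on a 0-based int array.
void siftDown(int r[], int s, int m) {
    int rc = r[s];                          // record being sifted
    for (int j = 2 * s + 1; j <= m; j = 2 * j + 1) {
        if (j < m && r[j] < r[j + 1]) ++j;  // j indexes the larger child
        if (rc >= r[j]) break;              // rc belongs at position s
        r[s] = r[j];
        s = j;
    }
    r[s] = rc;
}

void heapSortInt(int r[], int n) {
    for (int i = n / 2 - 1; i >= 0; --i)     // build r[0..n-1] into a heap
        siftDown(r, i, n - 1);
    for (int i = n - 1; i > 0; --i) {
        int t = r[0]; r[0] = r[i]; r[i] = t; // move the current maximum to the end
        siftDown(r, 0, i - 1);               // restore the heap on r[0..i-1]
    }
}
```

Calling heapSortInt(a, n) sorts a[0..n-1] ascending in place with O(1) extra space.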

 

5. Merge sorting: two-way merge sort

[Algorithm IDEA]

The basic idea of merging and sorting is to "merge" two or more ordered subsequences into an ordered sequence.

In internal sorting, two-way merge sort is usually used: pairs of adjacent ordered subsequences are merged into single ordered subsequences, doubling the length of the ordered runs on each pass, until the whole sequence is ordered.

The space complexity is O(n); the sort is stable; the time complexity is O(nlog2n).



[Algorithm]

// Pseudo code, not necessarily runnable as-is
void Merge(Elem SR[], Elem TR[], int i, int m, int n) {
    // merge the ordered SR[i..m] and SR[m+1..n] into the ordered TR[i..n]
    for (j = m + 1, k = i; i <= m && j <= n; ++k) {
        // copy the records of SR into TR from smallest to largest
        if (SR[i].key <= SR[j].key) TR[k] = SR[i++];
        else TR[k] = SR[j++];
    }
    if (i <= m) TR[k..n] = SR[i..m];  // copy the remaining SR[i..m] into TR
    if (j <= n) TR[k..n] = SR[j..n];  // copy the remaining SR[j..n] into TR
}

 

The merge sorting algorithm can be written either recursively or non-recursively; the two versions come from two different programming approaches.

Here we discuss only the recursive algorithm. This is a top-down analysis: if the two halves R[s..⌊(s+t)/2⌋] and R[⌊(s+t)/2⌋+1..t] of the unordered sequence R[s..t] are each ordered by key, the merge algorithm above easily combines them so that the entire record sequence is ordered. Therefore, each of the two halves should itself be sorted by two-way merging.

void MSort(Elem SR[], Elem TR1[], int s, int t) {
    if (s == t) TR1[s] = SR[s];
    else {
        m = (s + t) / 2;           // split SR[s..t] evenly into SR[s..m] and SR[m+1..t]
        MSort(SR, TR2, s, m);      // recursively sort SR[s..m] into the ordered TR2[s..m]
        MSort(SR, TR2, m + 1, t);  // recursively sort SR[m+1..t] into the ordered TR2[m+1..t]
        Merge(TR2, TR1, s, m, t);  // merge TR2[s..m] and TR2[m+1..t] into TR1[s..t]
    }
}

void MergeSort(Elem R[]) {
    // sort the record sequence R[1..n]
    MSort(R, R, 1, n);
}

 

[Performance Analysis]

(1) Space efficiency: an auxiliary array of the same size as the table is required, so the space complexity is O(n).

(2) Time efficiency: for a table of n elements, view the n elements as leaf nodes and each subtable produced by merging two subtables as their parent node; the merging process then corresponds to building a binary tree from the leaves up to the root. Hence the number of merge passes is approximately the height of this binary tree minus 1, i.e. about log2n, and each pass moves all n records, so the time complexity is O(nlog2n).

(3) Stability: Merge Sorting is a stable sorting method.
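For reference, a runnable two-way merge sort sketch in C++ (the names are mine; it uses a single auxiliary buffer of size n rather than the textbook's TR1/TR2 pair, which keeps the O(n) extra space but simplifies the recursion):

```cpp
#include <vector>

// Merge the ordered runs a[s..m] and a[m+1..t] via the buffer.
void mergeRuns(std::vector<int>& a, std::vector<int>& buf,
               int s, int m, int t) {
    int i = s, j = m + 1, k = s;
    while (i <= m && j <= t)                  // merge the two runs into buf
        buf[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= m) buf[k++] = a[i++];         // copy leftovers of a[s..m]
    while (j <= t) buf[k++] = a[j++];         // copy leftovers of a[m+1..t]
    for (k = s; k <= t; ++k) a[k] = buf[k];   // copy the merged run back
}

void mergeSort(std::vector<int>& a, std::vector<int>& buf, int s, int t) {
    if (s >= t) return;
    int m = (s + t) / 2;                      // split into a[s..m] and a[m+1..t]
    mergeSort(a, buf, s, m);
    mergeSort(a, buf, m + 1, t);
    mergeRuns(a, buf, s, m, t);
}

void mergeSort(std::vector<int>& a) {
    if (a.empty()) return;
    std::vector<int> buf(a.size());
    mergeSort(a, buf, 0, (int)a.size() - 1);
}
```

Because mergeRuns takes a[i] when a[i] <= a[j], equal keys keep their relative order, matching the stability claim above.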


Data structure in C: implementing various sorting algorithms

#include <iostream>
using namespace std;

void BiInsertsort(int r[], int n) // insertion sort (binary / half-interval)
{
    for (int i = 2; i <= n; i++)
    {
        if (r[i] < r[i - 1])
        {
            r[0] = r[i];               // set the sentinel
            int low = 1, high = i - 1; // binary search for the insertion point
            while (low <= high)
            {
                int mid = (low + high) / 2;
                if (r[0] < r[mid]) high = mid - 1;
                else low = mid + 1;
            }
            int j;
            for (j = i - 1; j > high; j--) r[j + 1] = r[j]; // shift records back
            r[j + 1] = r[0];
        }
    }
    for (int k = 1; k <= n; k++) cout << r[k] << " ";
    cout << "\n";
}

void ShellSort(int r[], int n) // Shell sort
{
    for (int d = n / 2; d >= 1; d = d / 2) // sort with decreasing increment d
    {
        for (int i = d + 1; i <= n; i++)
        {
            r[0] = r[i]; // save the record to be inserted
            int j;
            for (j = i - d; j > 0 && r[0] < r[j]; j = j - d)
                r[j + d] = r[j]; // move records d positions back
            r[j + d] = r[0];
        }
    }
    for (int i = 1; i <= n; i++) cout << r[i] << " ";
    cout << "\n";
}

void BubbleSort(int r[], int n) // bubble sort
{
    int temp, exchange, bound;
    exchange = n; // the first pass covers r[1] to r[n]
    while (exchange) // a pass runs only if the previous pass swapped something
    {
        bound = exchange;
        exchange = 0;
        for (int j = 1; j < bound; j++) // one bubbling pass
            if (r[j] > r[j + 1])
            {
                temp = r[j];
                r[j] = r[j + 1];
                r[j + 1] = temp;
                exchange = j;
            }
    }
    // ... (the remainder of the original listing is truncated in the source)
}

Summary: sorting algorithms fall into insertion sorts, exchange sorts, selection sorts, and merge sorts. Insertion sorts include direct insertion sort and Shell sort; exchange sorts include bubble sort and quick sort; selection sorts include direct selection sort and heap sort.
Among these, direct insertion sort, bubble sort, and direct selection sort have average time complexity O(n^2), while quick sort, heap sort, and merge sort have average time complexity O(nlog2n).



