This article introduces part two of implementing the eight classic sorting algorithms in Python. It may serve as a useful reference for interested readers.

Following the earlier post, Python implementation of the eight sorting algorithms, part 1, this article continues with Python implementations of the remaining four of the eight sorting algorithms: quick sort, heap sort, merge sort, and radix sort.

**5. Quick Sort**

Quick sort is generally considered to have the best average performance, O(n log2 n), among sorting methods of the same order of magnitude.

Algorithm idea:

Given unordered data a[1], a[2], ..., a[n] to be sorted in ascending order, first take one element a[x] as the pivot. Compare a[x] with the other elements and rearrange them so that a[x] ends up in position k, every element in a[1]..a[k-1] is less than a[x], and every element in a[k+1]..a[n] is greater than a[x]. Then apply the same divide-and-conquer strategy to quick-sort the two groups a[1]..a[k-1] and a[k+1]..a[n] separately.

Advantages: very fast; little data movement.

Disadvantages: unstable.

Python Code implementation:

```python
def quick_sort(lst):
    little = []
    pivot_list = []
    large = []
    # recursion exit
    if len(lst) <= 1:
        return lst
    else:
        # use the first value as the pivot
        pivot = lst[0]
        for i in lst:
            if i < pivot:
                # values smaller than the pivot go into the "little" list
                little.append(i)
            elif i > pivot:
                # values larger than the pivot go into the "large" list
                large.append(i)
            else:
                # values equal to the pivot stay with it
                pivot_list.append(i)
        # recursively quick-sort both partitions
        little = quick_sort(little)
        large = quick_sort(large)
        return little + pivot_list + large
```

The following code, a three-line quick sort from the second edition of the Python Cookbook, sorts with just a few lines of Python.

```python
#!/usr/bin/env python
# coding: utf-8
"""
File:   python-8sort.py
Date:   9/1/17 9:03 AM
Author: Lockey
Email:  lockey@123.com
Desc:   Python implements eight sorting algorithms
"""

lst = [65, 568, 9, 23, 4, 34, 65, 8, 6, 9]

def quick_sort(lst):
    if len(lst) <= 1:
        return lst
    else:
        pivot = lst[0]
        return quick_sort([x for x in lst[1:] if x < pivot]) + \
               [pivot] + \
               quick_sort([x for x in lst[1:] if x >= pivot])
```

Running the test on `lst` yields `[4, 6, 8, 9, 9, 23, 34, 65, 65, 568]`.

There is an even leaner bit of syntactic sugar that does it in a single line:

```python
quick_sort = lambda xs: ((len(xs) <= 1 and [xs]) or
                         [quick_sort([x for x in xs[1:] if x < xs[0]]) +
                          [xs[0]] +
                          quick_sort([x for x in xs[1:] if x >= xs[0]])])[0]
```

If the initial sequence is already ordered (or nearly ordered) by key, quick sort degenerates into bubble sort. To improve on this, the pivot record is usually chosen by the "median-of-three" rule: of the two endpoints and the midpoint of the interval being sorted, the record with the middle key is swapped into the pivot position. Quick sort is an unstable sorting method.
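The median-of-three rule can be sketched as follows. This is a minimal illustration, not the original author's code: the function names are ours, and a simple list-comprehension partition stands in for the usual in-place swap scheme.

```python
def median_of_three(lst):
    """Return the median of the first, middle and last elements:
    a better pivot than lst[0] on (nearly) sorted input."""
    mid = len(lst) // 2
    return sorted([lst[0], lst[mid], lst[-1]])[1]

def quick_sort_m3(lst):
    if len(lst) <= 1:
        return lst
    pivot = median_of_three(lst)
    little = [x for x in lst if x < pivot]   # strictly below the pivot
    equal  = [x for x in lst if x == pivot]  # the pivot and its duplicates
    large  = [x for x in lst if x > pivot]   # strictly above the pivot
    return quick_sort_m3(little) + equal + quick_sort_m3(large)
```

On an already-sorted input the chosen pivot is close to the true median, so the recursion stays balanced instead of degenerating.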

In the improved algorithm, recursion is applied only to subsequences longer than some threshold k, leaving the whole sequence "basically ordered", and the nearly-sorted result is then finished off with insertion sort. Practice shows this lowers the running time, with the best performance at around k = 8.
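The improved algorithm just described can be sketched like this. It is an illustrative outline rather than the original author's code; `K = 8` follows the value reported above.

```python
K = 8  # recursion cutoff; the article reports k = 8 works best in practice

def insertion_sort(lst):
    # standard insertion sort: cheap on an almost-sorted list
    for i in range(1, len(lst)):
        key = lst[i]
        j = i - 1
        while j >= 0 and lst[j] > key:
            lst[j + 1] = lst[j]
            j -= 1
        lst[j + 1] = key
    return lst

def quick_sort_cutoff(lst):
    # runs of length <= K are left unsorted, but in the right region
    if len(lst) <= K:
        return lst
    pivot = lst[0]
    return (quick_sort_cutoff([x for x in lst[1:] if x < pivot]) +
            [pivot] +
            quick_sort_cutoff([x for x in lst[1:] if x >= pivot]))

def hybrid_sort(lst):
    # one final insertion-sort pass finishes the basically-ordered sequence
    return insertion_sort(quick_sort_cutoff(lst))
```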

**6. Heap Sort**

Heap sort is a tree-based selection sort and an effective improvement on straight selection sort.

Advantages: high efficiency.

Disadvantages: unstable.

Heap definition: a sequence of n elements (h1, h2, ..., hn) is called a heap if and only if it satisfies (hi >= h2i and hi >= h2i+1) or (hi <= h2i and hi <= h2i+1) for i = 1, 2, ..., n/2. Only heaps satisfying the former condition are discussed here. From the definition, the top element of such a heap (the first element) must be the largest; this is a max-heap, or "big-top" heap. A complete binary tree represents the structure of a heap intuitively: the heap top is the root, and the remaining elements form its left and right subtrees.
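The definition above uses 1-based indices, where the children of i are 2i and 2i+1; on 0-based Python lists the children of index i sit at 2i+1 and 2i+2. A small checker for the big-top heap property, written as an illustration (the function name is ours):

```python
def is_max_heap(h):
    """Check the big-top (max-heap) property on a 0-based list:
    every parent must be >= both of its children."""
    n = len(h)
    for i in range(n // 2):              # only internal nodes have children
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and h[i] < h[left]:
            return False
        if right < n and h[i] < h[right]:
            return False
    return True
```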

Algorithm idea:

Initially treat the sequence of numbers to be sorted as a sequentially stored binary tree, and adjust their storage order so that it becomes a heap; the root node of the heap then holds the largest number. Swap the root with the last node of the heap, then re-adjust the first n-1 numbers into a heap. Repeat until only two nodes remain in the heap; swap them, and the result is an ordered sequence of n nodes. In terms of the algorithm, heap sort needs two processes: building the heap, and swapping the top of the heap with its last element. Heap sort is therefore written as two functions: one that sifts elements down to build the heap, and one that calls the sift-down function repeatedly to carry out the sort.

Python Code implementation:

```python
# -*- coding: utf-8 -*-
"""
Created on September 2, 2017
Running environment: win7.x86_64 eclipse python3
@author: Lockey
"""

lst = [65, 568, 9, 23, 4, 34, 65, 8, 6, 9]

def adjust_heap(lists, i, size):
    # sift the value at index i down until the subtree rooted at i is a max-heap
    lchild = 2 * i + 1
    rchild = 2 * i + 2
    largest = i
    if i < size // 2:
        if lchild < size and lists[lchild] > lists[largest]:
            largest = lchild
        if rchild < size and lists[rchild] > lists[largest]:
            largest = rchild
        if largest != i:
            lists[largest], lists[i] = lists[i], lists[largest]
            adjust_heap(lists, largest, size)

def build_heap(lists, size):
    # heapify: sift down every internal node, bottom-up
    for i in range(size // 2 - 1, -1, -1):
        adjust_heap(lists, i, size)

def heap_sort(lists):
    size = len(lists)
    build_heap(lists, size)
    for i in range(size - 1, 0, -1):
        # move the current maximum to the end, then restore the heap
        lists[0], lists[i] = lists[i], lists[0]
        adjust_heap(lists, 0, i)
    print(lists)
```

Running the test on `lst` prints `[4, 6, 8, 9, 9, 23, 34, 65, 65, 568]`.
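For comparison, Python's standard library `heapq` module maintains a min-heap, so the same sorted result can be obtained by heapifying once and popping repeatedly. This sketch uses only documented `heapq` calls; the function name is ours.

```python
import heapq

def heap_sort_builtin(lst):
    """Heap sort via the stdlib min-heap, as a cross-check of the
    hand-written big-top heap version."""
    h = list(lst)          # copy so the input is left untouched
    heapq.heapify(h)       # O(n) bottom-up heap construction
    return [heapq.heappop(h) for _ in range(len(h))]
```

Because `heapq` is a min-heap, the pops already come out in ascending order, with no final reversal needed.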

**7. Merge sort**

Algorithm idea:

Merge sort is an efficient sorting algorithm based on the merge operation and a typical application of divide and conquer. Already-ordered subsequences are merged to obtain a fully ordered sequence: first make each subsequence ordered, then make the sequence of subsequences ordered. Merging two ordered lists into one ordered list is called a two-way merge.

The merge process is: compare a[i] with a[j]; if a[i] ≤ a[j], copy the first ordered list's element a[i] into r[k] and increment i and k by 1; otherwise copy the second ordered list's element a[j] into r[k] and increment j and k by 1. Repeat until one of the ordered lists is exhausted, then copy the remaining elements of the other list into r from index k through index t. Merge sort is usually implemented recursively: split the interval [s, t] at its midpoint, sort the left subinterval, sort the right subinterval, and finally merge the two sorted halves into the single ordered interval [s, t].

Python code implementation:

```python
# -*- coding: utf-8 -*-
"""
Created on September 2, 2017
Running environment: win7.x86_64 eclipse python3
@author: Lockey
"""

lst = [65, 568, 9, 23, 4, 34, 65, 8, 6, 9]

def merge(left, right):
    # merge two sorted lists into one sorted list
    i, j = 0, 0
    result = []
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result += left[i:]   # append whatever remains of either list
    result += right[j:]
    print(result)        # show each intermediate merge
    return result

def merge_sort(lists):
    # merge sort
    if len(lists) <= 1:
        return lists
    num = len(lists) // 2
    left = merge_sort(lists[:num])
    right = merge_sort(lists[num:])
    return merge(left, right)
```

The program prints each intermediate merge; the final result is `[4, 6, 8, 9, 9, 23, 34, 65, 65, 568]`.

**8. Bucket Sort / Radix Sort**

Advantages: fast; in the best case it runs in linear time, O(n).

Disadvantages:

1. Higher space complexity: extra memory is required. Sorting takes the space of two arrays, one for the array being sorted and one for the so-called buckets; for example, if the values to be sorted range from 0 to m-1, then m buckets are needed, and the bucket array requires at least m slots.

2. The elements to be sorted must fall within a known, limited range.

Algorithm idea:

The idea is to divide the array into a finite number of buckets and sort each bucket individually (possibly with a different sorting algorithm, or by applying bucket sort recursively). Bucket sort is a generalization of pigeonhole sort. When the values in the input array are uniformly distributed, bucket sort runs in linear time, Θ(n). Because bucket sort is not a comparison sort, it is not subject to the O(n log n) lower bound.

Simply put: group the data into buckets, then sort the contents of each bucket.

For example, to sort n integers a[1..n], each in the range [1..1000]:

First, set the bucket width to 10, giving 100 buckets. Specifically, bucket b[1] stores the integers in [1..10], bucket b[2] stores the integers in (10..20], ..., and bucket b[i] stores the integers in ((i-1)*10, i*10], for i = 1, 2, ..., 100. There are 100 buckets in total.

Then scan a[1..n] from start to end, dropping each a[i] into its corresponding bucket b[j]. Next sort the numbers inside each of the 100 buckets; bubble sort, selection sort, or even quick sort will do, and in general any sorting method works.

Finally, output the numbers bucket by bucket, each bucket's numbers from small to large, yielding a fully ordered sequence of all the numbers.
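The three steps above can be sketched directly. This is an illustrative sketch with names of our choosing; Python's built-in `sorted` stands in for the "any sort method" used inside each bucket.

```python
def bucket_sort(a, num_buckets=100, max_value=1000):
    """Sort positive integers in (0, max_value] using equal-width buckets."""
    width = max_value // num_buckets            # each bucket covers `width` values
    buckets = [[] for _ in range(num_buckets)]
    # step 1-2: scan the input and drop each value into its bucket;
    # a value x in ((i-1)*width, i*width] goes into bucket i-1
    for x in a:
        idx = min((x - 1) // width, num_buckets - 1)
        buckets[idx].append(x)
    # step 3: sort each bucket and output them in order
    result = []
    for b in buckets:
        result += sorted(b)                     # any sort works inside a bucket
    return result
```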

Suppose there are n numbers and m buckets. If the numbers are uniformly distributed, each bucket holds n/m numbers on average. If the numbers in each bucket are sorted with quick sort, the complexity of the whole algorithm is

O(n + m * (n/m) * log(n/m)) = O(n + n log n - n log m)

As seen from the above, when m approaches n, the complexity of bucket sort approaches O(n).

Of course, this complexity calculation rests on the assumption that the n input numbers are uniformly distributed. That is a strong assumption, and in practice the results are rarely that good. If all the numbers fall into the same bucket, the sort degenerates into an ordinary comparison sort.

Python Code implementation:

```python
# -*- coding: utf-8 -*-
"""
Created on September 2, 2017
Running environment: win7.x86_64 eclipse python3
@author: Lockey
"""
import math

# the list values are all below 100, so two passes over 10 buckets suffice
lst = [65, 56, 9, 23, 84, 34, 8, 6, 9, 54, 11]

def radix_sort(lists, radix=10):
    # k = number of digits of the largest value in base `radix`
    k = int(math.ceil(math.log(max(lists), radix)))
    bucket = [[] for _ in range(radix)]
    for i in range(1, k + 1):
        for j in lists:
            # i-th digit of j, counting from the least significant digit
            digit = (j // radix ** (i - 1)) % radix
            bucket[digit].append(j)
        del lists[:]
        # collect the buckets back into the list, clearing each bucket
        for z in bucket:
            lists += z
            del z[:]
    print(lists)
    return lists
```

Running the test on `lst` prints `[6, 8, 9, 9, 11, 23, 34, 54, 56, 65, 84]`.