Python implementation of basic sorting algorithm

Source: Internet
Author: User

This article implements the nine (well, eight) classic sorting algorithms in Python: bubble sort, selection sort, insertion sort, Shell sort, merge sort, quick sort, heap sort, and counting sort. I hope this article also helps you when you review your knowledge.

To avoid misleading readers, all the conceptual content in this article is taken from the corresponding Wikipedia entries.

Bubble Sort Principle

Bubble sort is a simple sorting algorithm. It repeatedly traverses the sequence to be sorted, compares adjacent elements one pair at a time, and swaps them if they are in the wrong order. The traversal is repeated until no more swaps are needed, which means the sequence is sorted. The algorithm is named for the way smaller elements slowly "float" to the top of the sequence through these swaps.

Steps

The bubble sort algorithm works as follows:

    1. Compare adjacent elements. If the first one is larger than the second one, swap them.

    2. Do the same for each pair of adjacent elements, from the first pair to the last. When this pass is done, the last element is the largest.

    3. Repeat the above steps for all elements except the last one.

    4. Repeat with one fewer element each pass, until there are no pairs of elements left to compare.

Code
def bubble_sort(list):
    length = len(list)
    # First-level traversal: one pass per element; each pass bubbles the
    # largest remaining element to the end
    for i in range(length):
        # Second-level traversal: compare and swap adjacent pairs
        for j in range(length - i - 1):
            if list[j] > list[j + 1]:
                list[j], list[j + 1] = list[j + 1], list[j]
    return list

This sort can actually be slightly optimized by adding a flag that stops the sort early once the sequence is already in order.

def bubble_sort_flag(list):
    length = len(list)
    for i in range(length):
        # Flag: if no swap happens during a pass, the list is sorted
        flag = False
        for j in range(length - i - 1):
            if list[j] > list[j + 1]:
                list[j], list[j + 1] = list[j + 1], list[j]
                flag = True
        if not flag:
            break
    return list
Selection Sort Principle

Selection sort is a simple and intuitive sorting algorithm. It works roughly by repeatedly taking the smallest element out of the remaining unsorted elements and placing it at the end of the sorted part.

Steps
    1. Find the smallest (or largest) element in the unsorted sequence and swap it into the starting position of the sorted sequence.

    2. Then keep finding the smallest (or largest) element among the remaining unsorted elements and place it at the end of the sorted sequence.

    3. Repeat the second step until all the elements are sorted.

Code
def selection_sort(list):
    n = len(list)
    for i in range(n - 1):
        # Assume position i holds the smallest of the unsorted part
        min_index = i
        for j in range(i + 1, n):
            if list[j] < list[min_index]:
                min_index = j
        # Swap the smallest unsorted element into position i
        list[i], list[min_index] = list[min_index], list[i]
    return list
Insertion Sort Principle

Insertion sort is a simple and intuitive sorting algorithm. It works by building an ordered sequence: for each unsorted element, it scans the already-sorted sequence from back to front, finds the appropriate position, and inserts the element there.

Steps
    1. Start from the first element; it can be considered already sorted

    2. Take the next element and scan the sequence of sorted elements from back to front

    3. If a sorted element is greater than the new element, move it one position back

    4. Repeat step 3 until you find a sorted element that is less than or equal to the new element

    5. Insert the new element after that position

    6. Repeat steps 2~5

Code
def insert_sort(list):
    n = len(list)
    for i in range(1, n):
        # Compare each element with the one before it
        # If the current element is smaller
        if list[i] < list[i - 1]:
            # Take this number out and save its subscript
            temp = list[i]
            index = i
            # Compare with each element from back to front
            for j in range(i - 1, -1, -1):
                # Shift elements larger than the removed element
                if list[j] > temp:
                    list[j + 1] = list[j]
                    index = j
                else:
                    break
            # Insert the element
            list[index] = temp
    return list
Shell Sort Principle

Shell sort, also called the diminishing increment sort, is a more efficient, improved version of insertion sort. Shell sort is not a stable sorting algorithm.

Shell sort proposes its improvement based on the following two properties of insertion sort:

Insertion sort is efficient on data that is already almost sorted; in that case it approaches linear running time.

But insertion sort is generally inefficient, because it can only move data one position at a time.

Steps

Sort the elements that are a certain step (gap) apart with insertion sort, then reduce the step and sort again, until the step is 1 and the whole sequence is sorted.

Code
def shell_sort(list):
    n = len(list)
    # Initial step (gap)
    gap = n // 2
    while gap > 0:
        # Insertion sort over elements that are `gap` apart
        for i in range(gap, n):
            temp = list[i]
            j = i
            while j >= gap and list[j - gap] > temp:
                list[j] = list[j - gap]
                j -= gap
            list[j] = temp
        # Shrink the step
        gap //= 2
    return list

Halving the step each time is the sequence recommended by Donald Shell; other step sequences, such as Sedgewick's (1, 5, 19, 41, 109, ...), can also be used.

You can also build a step sequence from the Fibonacci sequence: remove 0 and 1, and raise the remaining numbers to twice the power of the golden ratio.
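To make the role of the step sequence concrete, here is a minimal sketch (the name shell_sort_with_gaps and the example data are mine, not from the original article) of a Shell sort that accepts any descending step sequence ending in 1:

```python
def shell_sort_with_gaps(seq, gaps):
    # gaps must be in descending order and end with 1,
    # so the final pass is an ordinary insertion sort
    for gap in gaps:
        for i in range(gap, len(seq)):
            temp = seq[i]
            j = i
            # Gapped insertion sort: shift larger elements right by `gap`
            while j >= gap and seq[j - gap] > temp:
                seq[j] = seq[j - gap]
                j -= gap
            seq[j] = temp
    return seq

data = [23, 5, 17, 1, 9, 42, 3]
print(shell_sort_with_gaps(data, [5, 1]))  # [1, 3, 5, 9, 17, 23, 42]
```

Swapping in the Sedgewick gaps is then just a matter of passing a different list, e.g. `[41, 19, 5, 1]` for slightly longer inputs.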

Merge sort principle

The merge operation (merge algorithm) merges two sorted sequences into one sorted sequence. The merge sort algorithm relies on this operation.

Steps (iterative method)

    1. Allocate space equal to the sum of the lengths of the two sorted sequences, to hold the merged sequence

    2. Set two pointers, initially at the starting positions of the two sorted sequences

    3. Compare the elements the two pointers point to, put the smaller one into the merge space, and advance that pointer to the next position

    4. Repeat step 3 until one pointer reaches the end of its sequence

    5. Copy all remaining elements of the other sequence directly to the end of the merged sequence

Recursive method

Suppose a sequence has n elements:

    1. Merge each two adjacent elements of the sequence, forming ⌊n/2⌋ sorted sequences with two elements each

    2. Merge the above sequences again, forming ⌊n/4⌋ sequences with four elements each

    3. Repeat step 2 until all elements are sorted

Code
# Recursive method
def merge_sort(list):
    # A sequence of length no more than 1 is considered sorted
    if len(list) <= 1:
        return list
    # Split the list in two
    middle = len(list) // 2
    left = merge_sort(list[:middle])
    right = merge_sort(list[middle:])
    # Finally merge the two sorted halves
    return merge(left, right)

# Merge
def merge(left, right):
    l, r = 0, 0
    result = []
    while l < len(left) and r < len(right):
        if left[l] < right[r]:
            result.append(left[l])
            l += 1
        else:
            result.append(right[r])
            r += 1
    # Copy over whatever remains in either half
    result += left[l:]
    result += right[r:]
    return result

To be honest, I do not know how the iterative method of merge sort is best implemented in Python; I would be glad to be taught.
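For reference, here is one possible sketch of the iterative (bottom-up) method, offered as an assumption rather than the article's own code (the name merge_sort_iterative is mine): merge adjacent runs of width 1, then 2, then 4, and so on, until a single run covers the whole list.

```python
def merge_sort_iterative(seq):
    # Bottom-up merge sort: no recursion, just widening passes
    result = list(seq)
    n = len(result)
    width = 1
    while width < n:
        # Merge each pair of adjacent runs [lo, mid) and [mid, hi)
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            left, right = result[lo:mid], result[mid:hi]
            merged = []
            l = r = 0
            # Standard two-way merge of the two runs
            while l < len(left) and r < len(right):
                if left[l] <= right[r]:
                    merged.append(left[l])
                    l += 1
                else:
                    merged.append(right[r])
                    r += 1
            merged += left[l:]
            merged += right[r:]
            result[lo:hi] = merged
        width *= 2
    return result

print(merge_sort_iterative([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Using `<=` in the comparison keeps the sort stable, matching the recursive version's behavior on equal keys.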

Quick Sort Principle

Quick sort uses a divide-and-conquer strategy to divide a sequence (list) into two sub-sequences (sub-lists).

Steps
    1. Pick an element from the sequence, called the "pivot",

    2. Reorder the sequence so that all elements smaller than the pivot are placed before it and all elements larger than the pivot are placed after it (elements equal to the pivot can go to either side). After this partition, the pivot is in its final position. This is called the partition operation.

    3. Recursively sort the sub-sequence of elements smaller than the pivot and the sub-sequence of elements larger than the pivot.

Code

Normal edition

def quick_sort(list):
    less = []
    pivotlist = []
    more = []
    # Recursion exit: a list of length no more than 1 is sorted
    if len(list) <= 1:
        return list
    else:
        # Use the first value as the pivot
        pivot = list[0]
        for i in list:
            # Values smaller than the pivot go into the less list
            if i < pivot:
                less.append(i)
            # Values larger than the pivot go into the more list
            elif i > pivot:
                more.append(i)
            # Values equal to the pivot stay with the pivot
            else:
                pivotlist.append(i)
        # Sort the less and more lists recursively
        less = quick_sort(less)
        more = quick_sort(more)
        return less + pivotlist + more

Ahem, the following code, said to be from the Python Cookbook, Second Edition, achieves quick sort in three lines of Python.

def qsort(arr):
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr[0]
        return qsort([x for x in arr[1:] if x < pivot]) + [pivot] + \
               qsort([x for x in arr[1:] if x >= pivot])

And, of course, the one-line syntactic-sugar version:

qsort = lambda xs: xs if len(xs) <= 1 else qsort([x for x in xs[1:] if x < xs[0]]) + [xs[0]] + qsort([x for x in xs[1:] if x >= xs[0]])

Can you feel the charm of Python?

Heap Sort Principle

Heap sort (heapsort) is a sorting algorithm designed around the heap data structure. A heap is a structure that approximates a complete binary tree while satisfying the heap property: the key of a child node is always less than (or greater than) that of its parent node.

Steps
    1. Create the maximum heap: reorder all the data in the heap so that it becomes a max-heap

    2. Maximum heap adjustment: maintains the max-heap property; it is the core subroutine for creating the maximum heap

    3. Heap sort: repeatedly remove the first datum, at the root of the heap, and perform the maximum heap adjustment again

Code
def heap_sort(list):
    # Create the maximum heap
    for start in range((len(list) - 2) // 2, -1, -1):
        sift_down(list, start, len(list) - 1)
    # Heap sort: swap the root with the last element, then re-heapify
    for end in range(len(list) - 1, 0, -1):
        list[0], list[end] = list[end], list[0]
        sift_down(list, 0, end - 1)
    return list

# Maximum heap adjustment
def sift_down(lst, start, end):
    root = start
    while True:
        child = 2 * root + 1
        if child > end:
            break
        # Pick the larger of the two children
        if child + 1 <= end and lst[child] < lst[child + 1]:
            child += 1
        if lst[root] < lst[child]:
            lst[root], lst[child] = lst[child], lst[root]
            root = child
        else:
            break
Counting Sort Principle

When the input elements are n integers in the range 0 to k, counting sort runs in Θ(n + k) time. Counting sort is not a comparison sort; when k is small, it is faster than any comparison-based sorting algorithm.

Because the length of the counting array C depends on the range of the data in the array to be sorted (equal to the maximum minus the minimum plus 1), counting sort requires a lot of time and memory for arrays with a large data range. For example, counting sort is an excellent algorithm for sorting numbers between 0 and 100, but it is not appropriate for sorting names alphabetically. However, counting sort can be used as a subroutine in radix sort, which can sort arrays with a large data range more efficiently.

Steps
    1. Find the largest and smallest elements in the array to be sorted

    2. Count the number of occurrences of each element with value i, storing it in entry i of array C

    3. Accumulate the counts (starting from the first entry of C, add each entry to the one before it)

    4. Fill the target array in reverse: place each element i at position C(i) of the new array, and subtract 1 from C(i) for each element placed

Code
def count_sort(list):
    min = 2147483647
    max = 0
    # Find the maximum and minimum values
    for x in list:
        if x < min:
            min = x
        if x > max:
            max = x
    # Create the counting array c
    count = [0] * (max - min + 1)
    for index in list:
        count[index - min] += 1
    index = 0
    # Write each value back in order, as many times as it was counted
    for a in range(max - min + 1):
        for c in range(count[a]):
            list[index] = a + min
            index += 1
    return list
The Ninth Sort

None?

Of course not.

Naturally, it is the one that comes with the system:

list.sort()
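A quick sketch of the built-in tools (the example data is mine): list.sort() sorts in place using Timsort, while sorted() returns a new list; both accept key and reverse parameters.

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]
data.sort()                        # sorts in place, returns None
print(data)                        # [1, 1, 2, 3, 4, 5, 6, 9]

words = ["pear", "fig", "banana"]
print(sorted(words, key=len))      # new list, sorted by length
print(sorted(data, reverse=True))  # descending order
```

Unlike the hand-written sorts above, the built-in sort is stable and runs in C, so in practice it is the one to use.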

All of the above source code is at https://github.com/EINDEX/Python-algorithm/tree/master/Sort
