Eight major sorting algorithms explained with diagrams

Source: Internet
Author: User

Sorting algorithms can be divided into internal sorting and external sorting. Internal sorting sorts the data records entirely in memory, while external sorting is used when the data set is so large that not all records can be held in memory at once, so the sort must access external storage during the process.

Common internal sorting algorithms include: insertion sort, Shell sort, selection sort, bubble sort, merge sort, quick sort, heap sort, radix sort, and so on.

This article introduces each of these eight sorting algorithms in turn.

Algorithm one: insertion sort

Insertion sort diagram

Insertion sort is one of the simplest and most intuitive sorting algorithms. It works by building up an ordered sequence: for each element of the unsorted data, it scans the already-sorted sequence from back to front, finds the appropriate position, and inserts the element there.

Algorithm steps:

1) Treat the first element of the sequence to be sorted as an ordered sequence, and the second through last elements as the unsorted sequence.

2) Scan the unsorted sequence from beginning to end, and insert each scanned element into its proper position in the ordered sequence. (If the element to be inserted is equal to an element in the ordered sequence, insert it after that equal element.)
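A minimal Python sketch of these two steps (the function name and sample input are illustrative):

```python
# Insertion sort: keep a[:i] sorted and insert a[i] into it, scanning from back to front.
def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements of the sorted part one position to the right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key  # equal elements stay behind, so the sort is stable
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```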

Algorithm two: Shell sort

Shell sort diagram

Shell sort, also known as the diminishing increment sort, is a more efficient, improved version of insertion sort. However, Shell sort is an unstable sorting algorithm.

Shell sort improves on insertion sort based on the following two properties of insertion sort:

Insertion sort is highly efficient on data that is already almost sorted, approaching linear-time efficiency.

However, insertion sort is generally inefficient, because it can only move data one position at a time.

The basic idea of Shell sort is as follows: first divide the whole sequence of records to be sorted into several subsequences and perform direct insertion sort on each of them separately; then, once the records in the entire sequence are "basically ordered", perform a direct insertion sort over all the records.

Algorithm steps:

1) Choose an increment sequence t1, t2, ..., tk, where t1 > t2 > ... > tk and tk = 1;

2) Sort the sequence k times, once for each increment in the sequence;

3) In each pass, use the corresponding increment ti to split the sequence to be sorted into several subsequences of length m, and perform direct insertion sort on each sub-table separately. Only when the increment factor is 1 is the entire sequence treated as a single table, whose length is the length of the whole sequence.
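A minimal Shell sort sketch in Python; it assumes the simple increment sequence n/2, n/4, ..., 1, which is only one of many possible choices:

```python
# Shell sort: gapped insertion sorts with a shrinking gap, ending with gap 1.
def shell_sort(a):
    gap = len(a) // 2
    while gap > 0:
        for i in range(gap, len(a)):
            key = a[i]
            j = i
            # Insertion sort within the subsequence a[i], a[i-gap], a[i-2*gap], ...
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]
                j -= gap
            a[j] = key
        gap //= 2  # the final pass with gap 1 is a plain insertion sort
    return a

print(shell_sort([8, 9, 1, 7, 2, 3, 5, 4, 6, 0]))  # [0, 1, 2, ..., 9]
```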

Algorithm three: selection sort

Selection sort diagram

Selection sort is also a simple and intuitive sorting algorithm.

Algorithm steps:

1) First find the smallest (or largest) element in the unsorted sequence and store it at the beginning of the sorted sequence.

2) Then continue to find the smallest (or largest) element among the remaining unsorted elements and place it at the end of the sorted sequence.

3) Repeat the second step until all elements are sorted.
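A minimal selection sort sketch in Python (ascending order, in place; names and input are illustrative):

```python
# Selection sort: repeatedly select the minimum of the unsorted part and move it to the front.
def selection_sort(a):
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):      # find the smallest element in a[i:]
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # append it to the sorted prefix
    return a

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```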

Algorithm four: bubble sort

Bubble sort diagram

Bubble sort is also a simple and intuitive sorting algorithm. It repeatedly walks through the sequence to be sorted, comparing two adjacent elements at a time and swapping them if they are in the wrong order. The passes over the sequence are repeated until no more swaps are needed, at which point the sequence is sorted. The algorithm gets its name from the way smaller elements gradually "float" (bubble) to the top of the sequence through these exchanges.

Algorithm steps:

1) Compare adjacent elements. If the first is larger than the second, swap them.

2) Do the same for every pair of adjacent elements, from the first pair to the last pair. After this pass, the last element will be the largest number.

3) Repeat the above steps for all elements except the last one.

4) Keep repeating the above steps on fewer and fewer elements each time, until there are no more pairs of numbers to compare.
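A minimal bubble sort sketch in Python, including the early exit once a pass makes no swaps:

```python
# Bubble sort: repeatedly swap adjacent out-of-order pairs until a pass needs no swaps.
def bubble_sort(a):
    n = len(a)
    for end in range(n - 1, 0, -1):    # after each pass, a[end:] is in its final position
        swapped = False
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:                # no swaps means the sequence is already sorted
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```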

Algorithm five: merge sort

Merge sort diagram

Merge sort is an efficient sorting algorithm built on the merge operation. The algorithm is a very typical application of the divide-and-conquer strategy.

Algorithm steps:

1) Allocate space equal in size to the sum of the two already-sorted sequences; this space will hold the merged sequence.

2) Set two pointers, initially at the starting positions of the two sorted sequences.

3) Compare the elements the two pointers point to, place the smaller element into the merge space, and advance that pointer to the next position.

4) Repeat step 3 until one of the pointers reaches the end of its sequence.

5) Copy all remaining elements of the other sequence directly to the end of the merged sequence.
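A minimal top-down merge sort sketch in Python; the merge helper follows the five steps above (names are illustrative):

```python
# Merge two already-sorted lists into one sorted list.
def merge(left, right):
    merged = []                                # step 1: space for the merged sequence
    i = j = 0                                  # step 2: one pointer per sorted sequence
    while i < len(left) and j < len(right):    # step 4: stop when one pointer reaches its end
        if left[i] <= right[j]:                # step 3: take the smaller element (<= keeps stability)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])                    # step 5: copy whatever remains of either sequence
    merged.extend(right[j:])
    return merged

# Merge sort: split, sort each half recursively, then merge.
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```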


Algorithm six: quick sort

Quick sort diagram

Quick sort is a sorting algorithm developed by Tony Hoare. On average, sorting n items requires O(n log n) comparisons. In the worst case it requires O(n²) comparisons, but this situation is uncommon. In fact, quick sort is usually significantly faster than other O(n log n) algorithms, because its inner loop can be implemented very efficiently on most architectures.

Quick sort uses the divide-and-conquer strategy to split a list into two sub-lists.

Algorithm steps:

1) Pick an element from the sequence, called the "pivot";

2) Reorder the sequence so that all elements smaller than the pivot are placed before it and all elements larger than the pivot are placed after it (equal elements can go to either side). After this partitioning, the pivot is in its final position within the sequence. This is called the partition operation.

3) Recursively sort the subsequence of elements smaller than the pivot and the subsequence of elements larger than the pivot.

The base case of the recursion is a sequence of size zero or one, which is always sorted. Although the algorithm keeps recursing, it always terminates, because in each iteration it puts at least one element into its final position.
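A minimal quick sort sketch in Python. For clarity it builds new lists instead of partitioning in place, and it picks the first element as the pivot, which is an arbitrary choice:

```python
# Quick sort: partition around a pivot, then recursively sort the two partitions.
def quick_sort(a):
    if len(a) <= 1:                    # base case: a sequence of size 0 or 1 is sorted
        return a
    pivot = a[0]                       # step 1: pick a pivot
    smaller = [x for x in a[1:] if x < pivot]    # step 2: partition
    larger = [x for x in a[1:] if x >= pivot]    # equal elements go to one side
    # step 3: recurse on both partitions; the pivot sits in its final position between them
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([3, 7, 8, 5, 2, 1, 9, 5, 4]))  # [1, 2, 3, 4, 5, 5, 7, 8, 9]
```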


Algorithm seven: heap sort

Heap sort diagram

Heapsort is a sorting algorithm designed around the heap data structure. A heap is a structure that approximates a complete binary tree and at the same time satisfies the heap property: the key or index of a child node is always less than (or greater than) that of its parent node.

The average time complexity of heap sort is O(n log n).

Algorithm steps:

1) Build a heap H[0..n-1];

2) Swap the top of the heap (the maximum) with the last element of the heap;

3) Reduce the size of the heap by 1 and call shift_down(0), whose purpose is to move the new top-of-heap element down to its proper position in the array;

4) Repeat from step 2 until the size of the heap is 1.
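A minimal heap sort sketch in Python that follows the steps above, using a max-heap and a shift_down helper (names are illustrative):

```python
# Sift the element at 'start' down until the subtree rooted there is a valid max-heap.
def shift_down(a, start, end):
    root = start
    while 2 * root + 1 <= end:
        child = 2 * root + 1
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                 # choose the larger of the two children
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            return

def heap_sort(a):
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):   # step 1: build the max-heap
        shift_down(a, start, n - 1)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]            # step 2: swap heap top with heap tail
        shift_down(a, 0, end - 1)              # step 3: shrink the heap and sift down
    return a                                   # step 4: repeat until the heap has size 1

print(heap_sort([4, 10, 3, 5, 1]))  # [1, 3, 4, 5, 10]
```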


Algorithm eight: radix sort

Radix sort is a non-comparative integer sorting algorithm. Its principle is to split the integers into digits and then compare the numbers one digit at a time. Because integers can also represent strings (such as names or dates) and floating-point numbers in particular formats, radix sort is not limited to integers.
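A minimal least-significant-digit (LSD) radix sort sketch in Python for non-negative integers, distributing values into 10 buckets per decimal digit (the bucket idea is introduced below):

```python
# LSD radix sort: one distribution pass per decimal digit, from least to most significant.
def radix_sort(a):
    if not a:
        return a
    exp = 1
    while max(a) // exp > 0:
        buckets = [[] for _ in range(10)]
        for x in a:
            buckets[(x // exp) % 10].append(x)         # distribute by the current digit
        a = [x for bucket in buckets for x in bucket]  # collect, preserving order (stable)
        exp *= 10
    return a

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))  # [2, 24, 45, 66, 75, 90, 170, 802]
```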

As background for radix sort, we briefly introduce bucket sort:

The idea is to divide the array into a limited number of buckets and then sort each bucket individually (possibly using another sorting algorithm, or by recursively applying bucket sort). Bucket sort is a generalization of pigeonhole sort. Bucket sort runs in linear time (Θ(n)) when the values in the array to be sorted are evenly distributed. However, bucket sort is not a comparison sort, so it is not subject to the O(n log n) lower bound.

In simple terms: group the data, place each group in a bucket, and then sort the contents of each bucket.

For example, suppose we want to sort n integers A[1..n] whose values lie in the range [1..1000].

First, the buckets can each cover a range of size 10. Specifically, B[1] stores the integers in [1..10], B[2] stores the integers in (10..20], ..., and B[i] stores the integers in ((i-1)*10, i*10], for i = 1, 2, ..., 100. There are 100 buckets in total.

Then scan A[1..n] once, placing each A[i] into the corresponding bucket B[j]. Next, sort the numbers within each of the 100 buckets; bubble sort, selection sort, or even quick sort can be used, and in general any sorting method will do.

Finally, output the numbers in each bucket in turn, from smallest to largest within each bucket, which yields the full sequence of numbers in sorted order.
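A minimal Python sketch of this bucket sort example, assuming values in [1..1000] and 100 buckets of width 10:

```python
# Bucket sort: distribute values into buckets by range, sort each bucket, then concatenate.
def bucket_sort(a, num_buckets=100, max_value=1000):
    width = max_value // num_buckets
    buckets = [[] for _ in range(num_buckets)]
    for x in a:
        j = min((x - 1) // width, num_buckets - 1)  # B[j] covers (j*width, (j+1)*width]
        buckets[j].append(x)
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))  # any sorting method works inside a bucket
    return result

print(bucket_sort([537, 12, 880, 12, 999, 1, 450]))  # [1, 12, 12, 450, 537, 880, 999]
```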

Suppose there are n numbers and m buckets. If the numbers are evenly distributed, there are on average n/m numbers in each bucket. If quick sort is used to sort the numbers within each bucket, the complexity of the whole algorithm is

O(n + m * (n/m) * log(n/m)) = O(n + n log n - n log m)

It can be seen from this equation that when m is close to n, the complexity of bucket sort approaches O(n).

Of course, the above complexity is derived under the assumption that the n input numbers are evenly distributed. This is a strong assumption, and in practice the results are not as good. If all the numbers fall into the same bucket, the algorithm degenerates into an ordinary sort.

Most of the sorting algorithms discussed earlier have time complexity O(n²), and some have time complexity O(n log n). Bucket sort, however, can achieve O(n) time complexity. Its shortcomings are:

1) First, its space complexity is high and it requires extra overhead. The sort needs space for two arrays: one to hold the array being sorted and one for the so-called buckets. For example, if the values to be sorted range from 0 to m-1, then m buckets are needed, and the bucket array must have at least m slots.

2) Second, the elements to be sorted must fall within a known, limited range.

Summary

The stability, time complexity, and space complexity of the various sorting algorithms are summarized as follows:

Regarding time complexity:

(1) Quadratic-order (O(n²)) sorts

The various simple sorts: direct insertion sort, direct selection sort, and bubble sort;

(2) Linearithmic-order (O(n log n)) sorts

Quick sort, heap sort, and merge sort;

(3) O(n^(1+ε)) sorts, where ε is a constant between 0 and 1

Shell sort;

(4) Linear-order (O(n)) sorts

Radix sort, as well as bucket sort and bin sort.

Regarding stability:

Stable sorting algorithms: bubble sort, insertion sort, merge sort, and radix sort.

Unstable sorting algorithms: selection sort, quick sort, Shell sort, and heap sort.
