9.9.1 Quick Sort Introduction

At last, our headliner takes the stage. If one day at work your boss asks you to write a sorting algorithm, and of all the algorithms you know quick sort is not among them, I suggest you quietly go and type the quick sort code into your computer right away, if only so you won't be teased.

In fact, implementations of it can be found in the source code of the C++ STL, the Java SDK, the .NET Framework SDK, and other development kits.

The quick sort algorithm was designed by Tony Hoare, a Turing Award winner and one of the greatest computer scientists of the last century, who made remarkable contributions to formal methods and to the development of the ALGOL 60 programming language. The quick sort algorithm is just one small invention among his many contributions.

What's more, the quick sort algorithm we are about to learn has been listed as one of the top 10 algorithms of the 20th century. There is no reason for us programmers not to learn it.

Shell sort is an upgraded version of direct insertion sort, and both belong to the insertion sort class; heap sort is an upgraded version of simple selection sort, and both belong to the selection sort class. Quick sort, in turn, is an upgraded version of what we considered the slowest method, bubble sort, and both belong to the exchange sort class. That is, quick sort also achieves order through repeated comparisons and swaps, but its implementation increases the distance over which records are compared and moved: records with larger keys are moved directly from the front toward the back, and records with smaller keys directly from the back toward the front, thereby reducing the total number of comparisons and swaps.

9.9.2 Quick Sort Algorithm

The basic idea of quick sort is: through one pass of sorting, split the records to be sorted into two independent parts, such that the keys of one part are all smaller than the keys of the other; then sort each part separately in the same way, until the whole sequence is in order.

The words alone don't convey its benefit. Suppose we now want to sort the array {50,10,90,30,70,40,80,60,20}. We will learn the subtleties of quick sort by walking through the code.

Let's look at the code.

/* Quick sort the sequential list L */
void QuickSort(SqList *L)
{
    QSort(L, 1, L->length);
}

As with merge sort, the code requires a recursive call, so we wrap it in an outer function. Now let's look at the implementation of QSort.

/* Quick sort the subsequence L->r[low..high] of sequential list L */
void QSort(SqList *L, int low, int high)
{
    int pivot;
    if (low < high)
    {
        pivot = Partition(L, low, high); /* split L->r[low..high] in two and note the pivot position */
        QSort(L, low, pivot - 1);        /* recursively sort the left subsequence */
        QSort(L, pivot + 1, high);       /* recursively sort the right subsequence */
    }
}

From here you should be able to understand the meaning of the 1 and L->length in the earlier statement "QSort(L, 1, L->length);": they are the smallest subscript low and the largest subscript high of the part currently being sorted.

The core of this code is "pivot = Partition(L, low, high);". Before it executes, the array L.r is {50,10,90,30,70,40,80,60,20}. What the Partition function does is first select one of the keys, for example the first key 50, and then find a position for it such that every value to its left is smaller than it and every value to its right is larger. We call such a key the pivot.

After Partition(L, 1, 9) executes, the array becomes {20,10,40,30,50,70,80,60,90}, and the value 5 is returned and assigned to pivot; the number 5 indicates that 50 has been placed at array subscript 5. At this point the computer has split the original array into two subarrays on either side of 50, {20,10,40,30} and {70,80,60,90}; the recursive calls "QSort(L, 1, 5-1);" and "QSort(L, 5+1, 9);" then perform the same Partition operation on {20,10,40,30} and {70,80,60,90} respectively, until everything is in order.

Up to here, it should not be hard to follow. Now let's look at the implementation of Partition, the most critical function of quick sort.

/* Exchange records in the subsequence L->r[low..high] so that the pivot record */
/* is in place, and return its position. At that point the records before it   */
/* are all no greater than it, and the records after it are no smaller than it. */
int Partition(SqList *L, int low, int high)
{
    int pivotkey;
    pivotkey = L->r[low];   /* use the first record of the subsequence as the pivot record */
    while (low < high)      /* scan toward the middle from both ends of the subsequence */
    {
        while (low < high && L->r[high] >= pivotkey)
            high--;
        swap(L, low, high); /* move the record smaller than the pivot to the low end */
        while (low < high && L->r[low] <= pivotkey)
            low++;
        swap(L, low, high); /* move the record larger than the pivot to the high end */
    }
    return low;             /* return the pivot's final position */
}

1. The program begins to execute; at this point low=1 and high=L.length=9. On line 4 we assign L.r[low]=L.r[1]=50 to the variable pivotkey, as shown in Figure 9-9-1.

2. Lines 5 to 13 are a while loop; currently low=1 < high=9, so we enter the loop body.

3. On line 7, L.r[high]=L.r[9]=20, which is not ≥ pivotkey=50, so the inner loop exits at once and line 8 is not executed.

4. On line 9, we exchange the values of L.r[low] and L.r[high], making L.r[1]=20 and L.r[9]=50. The reason for the exchange is that the comparison on line 7 told us L.r[high] is a value smaller than pivotkey=50, so it should be swapped to the left of 50, as shown in Figure 9-9-2.

5. On line 10, L.r[low]=L.r[1]=20 and pivotkey=50; since L.r[low]<pivotkey, line 11 executes: low++, making low=2. The loop continues: L.r[2]=10<50, so low++, making low=3. Then L.r[3]=90>50, so the inner loop exits.

6. On line 12, we exchange the values of L.r[low]=L.r[3] and L.r[high]=L.r[9], making L.r[3]=50 and L.r[9]=90. This is equivalent to swapping a value larger than 50 to the right of 50. Note that low now points to 3, as shown in Figure 9-9-3.

7. We continue at line 5: since low=3 < high=9, we enter the loop body again.

8. On line 7, L.r[high]=L.r[9]=90 and pivotkey=50; since L.r[high]>pivotkey, line 8 executes: high--, making high=8. The loop continues: L.r[8]=60>50, so high--, making high=7; L.r[7]=80>50, so high--, making high=6. Then L.r[6]=40<50, so the inner loop exits.

9. On line 9, we exchange the values of L.r[low]=L.r[3]=50 and L.r[high]=L.r[6]=40, making L.r[3]=40 and L.r[6]=50, as shown in Figure 9-9-4.

10. On line 10, L.r[low]=L.r[3]=40 and pivotkey=50; since L.r[low]<pivotkey, line 11 executes: low++, making low=4. The loop continues: L.r[4]=30<50, so low++, making low=5. Then L.r[5]=70>50, so the inner loop exits.

11. On line 12, we exchange the values of L.r[low]=L.r[5]=70 and L.r[high]=L.r[6]=50, making L.r[5]=50 and L.r[6]=70, as shown in Figure 9-9-5.

12. We loop again. Since low=5 < high=6, we enter the loop body. On line 7, L.r[6]=70≥50, so high--, making high=5. Now low=high, so both inner loops exit without further comparisons, the swaps on lines 9 and 12 exchange L.r[5] with itself and change nothing, and the outer while condition low<high fails, ending the loop.

13. Finally, on line 14, we return the value of low, which is 5, and the function finishes. Next come the recursive calls "QSort(L, 1, 5-1);" and "QSort(L, 5+1, 9);", which perform the same Partition operation on {20,10,40,30} and {70,80,60,90}, until everything is in order. We'll stop the demonstration here.

Through this simulation of the code, you should be able to see that what the Partition function really does is move everything smaller than the chosen pivotkey to its left and everything larger to its right through repeated exchanges, with the pivot itself also changing position during the exchanges, until the requirement is fully satisfied.

9.9.3 Quick Sort Complexity Analysis

Now let's analyze the performance of quick sort. Its time performance depends on the depth of the recursion, which a recursion tree can be used to describe. Figure 9-9-7 shows the recursive process of quick sorting {50,10,90,30,70,40,80,60,20}. Since our first key, 50, happens to be the middle value of the sequence to be sorted, the recursion tree is balanced, and performance is comparatively good in this case.

In the optimal case, Partition splits evenly every time. If we sort n keys, the depth of the recursion tree is ⌊log2n⌋+1 (⌊x⌋ denotes the largest integer not greater than x), i.e., only about log2n levels of recursion are needed. Let the time required be T(n). The first Partition pass must scan the entire array, performing n comparisons. The resulting pivot then splits the array in two, each half requiring T(n/2) time (remember this is the best case, so the split is exactly in half). As the division continues in this way, we obtain the following chain of inequalities:

T(n) ≤ 2T(n/2) + n, T(1) = 0

T(n) ≤ 2(2T(n/4) + n/2) + n = 4T(n/4) + 2n

T(n) ≤ 4(2T(n/8) + n/4) + 2n = 8T(n/8) + 3n

......

T(n) ≤ nT(1) + (log2n) × n = O(nlog2n)

In other words, in the optimal case, the time complexity of quick sort is O(nlogn).

In the worst case, the sequence to be sorted is already in ascending or descending order. Each Partition then yields one subsequence with just one record fewer than the sequence before it, the other subsequence being empty. If we draw the recursion tree, it is a skewed tree. We then need n−1 levels of recursive calls, and the i-th Partition must perform n−i key comparisons to find the i-th record's final position, i.e., the pivot position. The total number of comparisons is therefore (n−1)+(n−2)+...+1 = n(n−1)/2, and the final time complexity is O(n2).

In the average case, suppose the pivot key ends up at position k (1≤k≤n) after a Partition pass, each position being equally likely. Then

T(n) = n + (1/n) × Σ(T(k−1) + T(n−k)), summed over k = 1 to n, which simplifies to T(n) = n + (2/n) × ΣT(k), summed over k = 0 to n−1.

It can be proved by mathematical induction that the order of magnitude is O(nlogn).
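The inductive argument can be sketched as follows (a standard derivation, not the book's own text): multiply the average-case recurrence by n, subtract the corresponding equation for n−1, and telescope.

```latex
\begin{aligned}
nT(n) &= n^2 + 2\sum_{k=0}^{n-1} T(k), \qquad
(n-1)\,T(n-1) = (n-1)^2 + 2\sum_{k=0}^{n-2} T(k) \\
nT(n) - (n-1)\,T(n-1) &= 2n - 1 + 2T(n-1)
\;\Longrightarrow\; nT(n) = (n+1)\,T(n-1) + 2n - 1 \\
\frac{T(n)}{n+1} &= \frac{T(n-1)}{n} + \frac{2n-1}{n(n+1)}
\le \frac{T(n-1)}{n} + \frac{2}{n}
\le \frac{T(1)}{2} + 2\sum_{k=2}^{n}\frac{1}{k}
\approx 2\ln n \\
T(n) &= O(n\log n)
\end{aligned}
```

The harmonic sum is what contributes the logarithmic factor, so on average quick sort takes O(nlogn) time.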

As for space complexity, the main cost is the stack space used by recursion. In the best case, the recursion tree has depth log2n, so the space complexity is O(logn); in the worst case, n−1 recursive calls are needed, giving a space complexity of O(n); in the average case, the space complexity is likewise O(logn).

Unfortunately, because keys are compared and exchanged over jumping, non-adjacent distances, quick sort is an unstable sorting method.

(The next article will explain the various optimizations for quick sorting)

Source: http://www.cnblogs.com/cj723/archive/2011/04/27/2029993.html