This article is from: http://blog.csdn.net/v_july_v/article/details/6116297
I. Basic Features of the Quick Sort Algorithm
Time complexity: O(n lg n) on average
Worst case: O(n^2)
Space complexity: O(lg n) on average (recursion stack), O(n) in the worst case
Not stable.
Quicksort is a sorting algorithm. For an input array of n numbers, the average running time is O(n lg n), and the worst case is O(n^2).
It is usually the best practical choice for sorting, because even the fastest comparison-based sort can do no better than O(n lg n), and quicksort reaches this bound on average with small constant factors.
II. Description of the Quick Sort Algorithm
Introduction to Algorithms, Chapter 7
Quicksort is based on the divide-and-conquer paradigm.
Sorting a subarray A[p..r] takes three steps:
1. Divide:
A[p..r] is partitioned (rearranged) into two (possibly empty) subarrays A[p..q-1] and A[q+1..r], such that
A[p..q-1] <= A[q] <= A[q+1..r]
2. Conquer: the subarrays A[p..q-1] and A[q+1..r] are sorted by recursive calls to quicksort.
3. Combine: the subarrays are sorted in place, so no work is needed to combine them.
III. The Quick Sort Algorithm
Version 1:
QUICKSORT(A, p, r)
1  if p < r
2     then q ← PARTITION(A, p, r)   // the key step
3          QUICKSORT(A, p, q - 1)
4          QUICKSORT(A, q + 1, r)
Partitioning the array
The key to the quicksort algorithm is the PARTITION procedure, which rearranges the subarray A[p..r] in place:
PARTITION(A, p, r)
1  x ← A[r]
2  i ← p - 1
3  for j ← p to r - 1
4      do if A[j] ≤ x
5            then i ← i + 1
6                 exchange A[i] ↔ A[j]
7  exchange A[i+1] ↔ A[r]
8  return i + 1
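For readers who want something compilable, here is a minimal C sketch of the pseudocode above; the 0-based indexing, the int array, and the helper name swap_int are my own choices rather than part of the original post.

/* Minimal C sketch of version 1; 0-based indexing is assumed. */
void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Rearranges a[p..r] around the pivot a[r] and returns the pivot's final index. */
int partition(int a[], int p, int r)
{
    int x = a[r];                    /* pivot: the last element */
    int i = p - 1;
    for (int j = p; j <= r - 1; j++)
        if (a[j] <= x) {
            i++;
            swap_int(&a[i], &a[j]);  /* move the small element into the left region */
        }
    swap_int(&a[i + 1], &a[r]);      /* put the pivot between the two regions */
    return i + 1;
}

void quicksort_v1(int a[], int p, int r)
{
    if (p < r) {
        int q = partition(a, p, r);  /* the key step */
        quicksort_v1(a, p, q - 1);
        quicksort_v1(a, q + 1, r);
    }
}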
Okay. Let's take a specific and complete example.
We will quicksort the following array:
2 8 7 1 3 5 6 4   (pivot: the last element, 4)
Step 1.
2 8 7 1 3 5 6 4   (i starts just before the first element; j starts at the first element, 2)
j points at 2, and 2 <= 4, so i++ and i now also points at 2; 2 is swapped with itself, and the array is unchanged.
j then moves right, past 8 and 7 (both greater than 4), until it points at 1.
Step 2.
j (pointing at 1): 1 <= 4, so i++.
i now points at 8, so 8 is swapped with 1.
The array becomes:
2 1 7 8 3 5 6 4   (i points at 1, j at 8)
Step 3. j moves right and points at 3; 3 <= 4, so i++.
i now points at 7, so 7 is swapped with 3.
The array becomes:
2 1 3 8 7 5 6 4   (i points at 3, j at 7)
Step 4. j continues to move right and finds no more elements smaller than 4, so the final step of the partition is carried out,
that is, line 7 (the final exchange) of the PARTITION(A, p, r) pseudocode above.
The position one past i now holds 8:
2 1 3 8 7 5 6 4   (i points at 3; A[i+1] is 8)
A[i+1] ↔ A[r], i.e. that 8 is swapped with the pivot 4, so the array finally becomes:
2 1 3 4 7 5 6 8
OK, the first partition of the quicksort is complete.
Step 5. The pivot 4 now divides the whole array into two parts, 2 1 3 and 7 5 6 8, and each part is sorted recursively.
Left part: 2 1 3   (pivot: the last element, 3)
2 is swapped with itself (unchanged), then 1 with itself (unchanged), and finally 3 with itself (unchanged);
in the end, the pivot 3 splits 2 1 3 into 2 1 on its left and an empty part on its right.
2 1 is then sorted recursively, and the final result is 1 2 3.
Right part: 7 5 6 8 (pivot 8). 7, 5 and 6 are all smaller than 8, so after this pass the subarray is still 7 5 6 8;
the pivot 8 splits it into 7 5 6 and an empty right part. [7 5 6 -> 5 7 6 -> 5 6 7]
7 5 6 is then sorted recursively, and the final result is 5 6 7 8.
OK, the whole process has now been analyzed.
(The original post includes a hand-drawn diagram of this process.)
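As a quick check of this walkthrough, here is a hypothetical driver for the C sketch given after the pseudocode above; the names partition and quicksort_v1 come from that sketch, not from the original post.

#include <stdio.h>

int partition(int a[], int p, int r);
void quicksort_v1(int a[], int p, int r);

int main(void)
{
    int a[] = {2, 8, 7, 1, 3, 5, 6, 4};
    int n = 8;

    int q = partition(a, 0, n - 1);           /* first partition: expect 2 1 3 4 7 5 6 8 */
    for (int k = 0; k < n; k++) printf("%d ", a[k]);
    printf("  (pivot ended up at index %d)\n", q);

    quicksort_v1(a, 0, n - 1);                /* full sort: expect 1 2 3 4 5 6 7 8 */
    for (int k = 0; k < n; k++) printf("%d ", a[k]);
    printf("\n");
    return 0;
}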
IV. Quick Sort Algorithm, Version 2
This version no longer takes the last element of the array as the pivot (as version 1 above does);
instead, it takes the first element of the array as the pivot.
/***************************************************/
/* Function: quick sort                             */
/* Parameters: tab   - pointer to the table struct  */
/*             left  - lower index of the subarray  */
/*             right - upper index of the subarray  */
/* Return value: none                               */
/* File: quicsort.c    Function: quicksort()        */
/***************************************************/

/* The original post does not show the definition of "table"; the layout below
   is an assumption that makes the code compile. r[0] is reserved as scratch
   space for the pivot, so the data to be sorted occupies r[1..n]. */
#define MAXSIZE 100
typedef struct { int Key; } record;
typedef struct { record r[MAXSIZE]; } table;

void quicksort(table *tab, int left, int right)
{
    int i, j;
    if (left < right)
    {
        i = left; j = right;
        tab->r[0] = tab->r[i];   // take the leftmost element as the pivot and save it first
        do
        {
            while (tab->r[j].Key > tab->r[0].Key && i < j)
                j--;             // scan from the right for the first key smaller than the pivot; its position is j
            if (i < j)           // found it at position j
            {
                tab->r[i].Key = tab->r[j].Key; i++;
            }                    // move that element to the left side and advance i
            while (tab->r[i].Key < tab->r[0].Key && i < j)
                i++;             // scan from the left for the first key greater than the pivot; its position is i
            if (i < j)           // found it at position i
            {
                tab->r[j].Key = tab->r[i].Key; j--;
            }                    // move that element to the right side and retreat j
        } while (i != j);
        tab->r[i] = tab->r[0];   // put the pivot into its final position; the partition is done
        quicksort(tab, left, i - 1);   // recursively sort the part left of the pivot
        quicksort(tab, i + 1, right);  // recursively sort the part right of the pivot
    }
}
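Because r[0] is used as scratch space, the records to be sorted presumably occupy r[1..n]. A hypothetical call, under the struct layout assumed in the comment above, might look like this:

#include <stdio.h>

int main(void)
{
    table t;
    int data[] = {2, 8, 7, 1, 3, 5, 6, 4};
    int n = 8;
    for (int k = 1; k <= n; k++)
        t.r[k].Key = data[k - 1];     /* the data lives in r[1..n]; r[0] is scratch space */
    quicksort(&t, 1, n);
    for (int k = 1; k <= n; k++)
        printf("%d ", t.r[k].Key);    /* prints: 1 2 3 4 5 6 7 8 */
    printf("\n");
    return 0;
}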
----------------
OK, let us now use version 2 of the quicksort algorithm to demonstrate the sorting process on the same array.
This time the pivot is the first element of the array, 2.
2 (pivot) 8 7 1 3 5 6 4
Watch the process closely:
2 8 7 1 3 5 6 4
i ->            <- j
(i looks for elements greater than the pivot, j looks for elements smaller than the pivot)
Step 1: j moves.
j scans from the right and finds the first element smaller than 2, namely 1, and writes it over the pivot's position (the pivot 2 has already been saved).
We get:
1 8 7 _ 3 5 6 4   (the hole "_" is where 1 used to be; i now points at 8, j at the hole)
Step 2: i moves.
i finds the first element greater than 2, namely 8, and writes it into the hole j points at.
1 _ 7 8 3 5 6 4   (the hole is now where 8 used to be)
Step 3: j moves.
j keeps moving left but finds no element smaller than 2 before it meets i at the hole.
Finally, the pivot 2 is written into the hole.
After the first pass the array becomes:
1 2 7 8 3 5 6 4
Second pass, on the right subarray (pivot = 7):
7 8 3 5 6 4
i ->        <- j
(i looks for elements greater than the pivot, j looks for elements smaller than the pivot)
Step 1: j moves.
j finds 4, which is smaller than the pivot 7, and writes it over the pivot's position.
We get:
4 8 3 5 6 _   (the hole is where 4 used to be)
Step 2: i moves.
i finds the first element greater than 7, namely 8, and writes it into the hole j points at.
4 _ 3 5 6 8   (the hole is now where 8 used to be)
j then finds 6, which is smaller than 7, and writes it into the hole at i.
4 6 3 5 _ 8   (the hole is now where 6 used to be)
i then moves right past 3 and 5, meets j at the hole, and the pass ends with the pivot 7 written into the hole.
Third pass (on the subarray 4 6 3 5, to the left of the pivot 7):
4 6 3 5 7 8
......
The remaining passes follow exactly the same analysis and are skipped.
The final result is:
1 2 3 4 5 6 7 8
I believe that after the detailed analysis above, you should now understand both versions.
Finally, the original post includes a hand-drawn picture of this sorting process. (Supplemented on January 5.)
OK, it is enough to thoroughly understand either one of the two versions of the algorithm above.
-------------------------------------------------------------
V. Worst Case and Best Case of Quick Sort
The worst case occurs when the partitioning step produces two regions containing n-1 elements and 0 elements, respectively.
Suppose this maximally unbalanced split arises in every recursive call. Partitioning costs O(n),
and the recursive call on a subarray of 0 elements simply returns, so T(0) = O(1).
The running time can then be expressed by the recurrence:
T(n) = T(n-1) + T(0) + O(n) = T(n-1) + O(n).
It can be shown that T(n) = O(n^2).
So if the split is maximally unbalanced at every level of the recursion, the running time of the algorithm is O(n^2);
in other words, the worst case of quicksort is no better than insertion sort.
Moreover, when the input array is already completely sorted, quicksort still takes O(n^2) time,
whereas insertion sort takes O(n) time in the same situation.
// Note: understand this last point carefully. When the array is already sorted, inserting each element into the
// ordered prefix requires only a single comparison, so insertion sort finishes all n elements in O(n) time.
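To see where the O(n^2) bound for the worst case comes from, write the O(n) partitioning cost as cn for some constant c and expand the recurrence (shown only as a sketch):

T(n) = T(n-1) + cn
     = T(n-2) + c(n-1) + cn
     = ...
     = T(0) + c(1 + 2 + ... + n)
     = O(1) + c * n(n+1)/2
     = O(n^2)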
Now let us look at the best case, that is, the most balanced split PARTITION can possibly produce: each subproblem has size at most n/2,
since one subproblem has size ⌊n/2⌋ and the other has size ⌈n/2⌉ - 1.
In this case quicksort runs much faster. The recurrence is
T(n) <= 2T(n/2) + O(n), and it can be shown that T(n) = O(n lg n).
Intuitively, picture the recursion tree of quicksort when PARTITION always produces a split of constant proportions:
the total running time is O(n lg n). Each node of the tree shows the size of a subproblem, the cost of each level is shown on the right,
and each level's cost involves a constant c (this refers to the recursion-tree figure in Introduction to Algorithms).
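The O(n lg n) bound can be sketched numerically as well: in the balanced case the subproblems on any one level of the recursion tree together contain at most n elements, and there are roughly lg n levels, so

cost per level    <= cn
number of levels  ~  lg n + 1
T(n)              <= cn * (lg n + 1) = O(n lg n)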
July, January 4, 2011.
===============================================================
Which partitioning scheme in Introduction to Algorithms does version 2 above correspond to? It is Hoare's original partitioning, HOARE-PARTITION:
HOARE-PARTITION(A, p, r)
 1  x ← A[p]
 2  i ← p - 1
 3  j ← r + 1
 4  while TRUE
 5      do repeat j ← j - 1
 6               until A[j] ≤ x
 7         repeat i ← i + 1
 8               until A[i] ≥ x
 9         if i < j
10            then exchange A[i] ↔ A[j]
11            else return j
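Here is a minimal C rendering of HOARE-PARTITION, again with 0-based indexing and the swap_int helper from the earlier sketch (both are my assumptions). Note that this partition does not necessarily place the pivot at its final position; it only guarantees that every element of a[p..q] is <= every element of a[q+1..r], so the left recursive call must include index q.

int hoare_partition(int a[], int p, int r)
{
    int x = a[p];                        /* pivot: the first element */
    int i = p - 1;
    int j = r + 1;
    while (1) {
        do { j--; } while (a[j] > x);    /* move j left until a[j] <= x */
        do { i++; } while (a[i] < x);    /* move i right until a[i] >= x */
        if (i < j)
            swap_int(&a[i], &a[j]);      /* exchange the out-of-place pair */
        else
            return j;                    /* the pointers have met or crossed */
    }
}

void quicksort_hoare(int a[], int p, int r)
{
    if (p < r) {
        int q = hoare_partition(a, p, r);
        quicksort_hoare(a, p, q);        /* index q stays in the left half */
        quicksort_hoare(a, q + 1, r);
    }
}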
I often wonder: why is it that some people clearly understand an algorithm,
yet after a while find it completely unfamiliar again and are at a loss?
I think the root cause is that the principle behind the sorting algorithm was never thoroughly understood in the first place...
The way to improve can only be found by going back to the original author who invented the algorithm; from the original author, far more useful insight can be mined.
July, updated February 15, 2011.
===============================================================
Finally, here is a concise example of a quicksort implementation:
The quicksort function:
void quicksort(int l, int u)
{
    int i, m;
    if (l >= u) return;             // zero or one element: nothing to sort
    swap(l, randint(l, u));         // move a randomly chosen pivot to position l
    m = l;
    for (i = l + 1; i <= u; i++)
        if (x[i] < x[l])
            swap(++m, i);           // grow the region of elements smaller than the pivot
    swap(l, m);                     // put the pivot into its final position m
    quicksort(l, m - 1);
    quicksort(m + 1, u);
}
If the function is called as quicksort(0, n-1), this code sorts a global array x[n].
The two parameters are the indices of the subarray to be sorted: l is the lower index and u is the upper index.
The call swap(i, j) exchanges the two elements x[i] and x[j].
The first swap chooses a partitioning element uniformly at random between l and u.
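To make this snippet self-contained, here is one possible way to supply the global array x and the swap and randint helpers that the text assumes; the constant N, the rand-based randint, and the main driver are mine, while the quicksort body is the one shown above.

#include <stdio.h>
#include <stdlib.h>

#define N 8
int x[N];                                  /* the global array the text refers to */

void swap(int i, int j) { int t = x[i]; x[i] = x[j]; x[j] = t; }

/* A roughly uniform random integer in [l, u]; adequate for a demonstration. */
int randint(int l, int u) { return l + rand() % (u - l + 1); }

void quicksort(int l, int u)
{
    int i, m;
    if (l >= u) return;
    swap(l, randint(l, u));                /* random pivot moved to position l */
    m = l;
    for (i = l + 1; i <= u; i++)
        if (x[i] < x[l])
            swap(++m, i);
    swap(l, m);
    quicksort(l, m - 1);
    quicksort(m + 1, u);
}

int main(void)
{
    int data[N] = {2, 8, 7, 1, 3, 5, 6, 4};
    for (int k = 0; k < N; k++) x[k] = data[k];
    quicksort(0, N - 1);
    for (int k = 0; k < N; k++) printf("%d ", x[k]);  /* prints: 1 2 3 4 5 6 7 8 */
    printf("\n");
    return 0;
}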