C/C++ Summary of Interview Questions 3: Various Sorting Algorithms

Source: Internet
Author: User

 

Original article: blog.chinaunix.net/u/1222/showart_318070.html

Sorting algorithms are basic and commonly used algorithms. Because the amount of data handled in real work is often huge, the speed of the sorting algorithm itself matters a great deal. The performance of an algorithm mainly refers to its complexity, which is usually expressed with the O notation; I will describe that in more detail later. Here I only give a brief introduction to sorting algorithms, which also serves as an outline for this article.

I will analyze the algorithms from simple to difficult, according to their complexity.
The first part covers the simple sorting algorithms. As you will see, their complexity is O(n*n) (since this is not written in Word, I cannot type superscripts and subscripts).
The second part is an advanced sorting algorithm with complexity O(n*log2(n)). Only one such algorithm is introduced here; several others are not discussed because they involve the concepts of trees and heaps.
The third part is more of a curiosity. The two algorithms there are not the best (they may even be among the slowest), but the ideas behind them are unusual and worth studying from a programming perspective; they also let us look at the problem from another angle.
The fourth part is an after-dinner dessert I am offering you: a template-based, general-purpose quick sort. Because it is a template function, it can sort data of any type (sorry, some forum experts have mentioned it before).

I. Simple Sorting Algorithm
The programs are relatively simple, so no comments are added. All of them are complete, runnable code that I have run in my VC environment. Since they involve nothing from MFC or Windows, there should be no problems on the Borland C++ platform either. The running process is illustrated after each listing, which should help with understanding.

1. Bubble sort:
This is the most primitive, and also the best-known and slowest, algorithm. It gets its name because its operation resembles bubbling: the smallest values gradually "float" to the front of the array.
#include <iostream>
using namespace std;

void BubbleSort(int* pData, int count)
{
    int iTemp;
    for (int i = 1; i < count; i++)
    {
        for (int j = count - 1; j >= i; j--)
        {
            if (pData[j] < pData[j - 1])
            {
                iTemp = pData[j - 1];
                pData[j - 1] = pData[j];
                pData[j] = iTemp;
            }
        }
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    BubbleSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}

Reverse order (worst case), data: 10, 9, 8, 7
Round 1: 10,9,8,7 -> 10,9,7,8 -> 10,7,9,8 -> 7,10,9,8 (3 exchanges)
Round 2: 7,10,9,8 -> 7,10,8,9 -> 7,8,10,9 (2 exchanges)
Round 3: 7,8,10,9 -> 7,8,9,10 (1 exchange)
Loop count: 6
Exchange count: 6

Other order, data: 8, 10, 7, 9
Round 1: 8,10,7,9 -> 8,7,10,9 -> 7,8,10,9 (2 exchanges)
Round 2: 7,8,10,9 -> 7,8,9,10 (1 exchange)
Round 3: 7,8,9,10 (0 exchanges)
Loop count: 6
Exchange count: 3

We have given the program above; now let us analyze it. The main factors that affect performance here are the loops and the exchanges: the more of them there are, the worse the performance. From the program we can see that the number of loop iterations is fixed at 1 + 2 + ... + (n-1),
which by the summation formula is 1/2*(n-1)*n.
Note the definition of the O notation:

If there exist a constant K and a starting point N0 such that f(n) <= K*g(n) whenever n >= N0, then f(n) = O(g(n)). (Don't say you never learned this in math class; it really matters for programming!)

Now look at 1/2*(n-1)*n: taking K = 1/2, N0 = 1 and g(n) = n*n, we have 1/2*(n-1)*n <= 1/2*n*n = K*g(n), so f(n) = O(g(n)) = O(n*n). Therefore, the loop complexity of our program is O(n*n).
Now look at the exchanges. From the tables after the program you can see that both cases share the same loop count but differ in the number of exchanges. In fact, the exchange count depends heavily on how ordered the source data already is. When the data is in reverse order, every loop iteration performs an exchange, so the exchange count is also O(n*n). When the data is already in order, no exchanges happen at all. Unordered data falls somewhere in between. For this reason, we usually compare these algorithms by their loop counts.
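To make the difference between the fixed loop count and the data-dependent exchange count concrete, here is a small instrumented version of the bubble sort above. The counter variables, the helper name BubbleSortCounted, and the tiny test arrays are my own additions, purely for illustration:

#include <iostream>
using namespace std;

// Bubble sort from above, instrumented with two counters: 'loops' counts
// inner-loop iterations (comparisons) and 'swaps' counts exchanges. For an
// input of size n the loop count is always n*(n-1)/2, while the swap count
// depends on how ordered the input already is.
void BubbleSortCounted(int* pData, int count, int& loops, int& swaps)
{
    loops = 0;
    swaps = 0;
    for (int i = 1; i < count; i++)
    {
        for (int j = count - 1; j >= i; j--)
        {
            loops++;
            if (pData[j] < pData[j - 1])
            {
                int iTemp = pData[j - 1];
                pData[j - 1] = pData[j];
                pData[j] = iTemp;
                swaps++;
            }
        }
    }
}

int main()
{
    int reversed[] = {10, 9, 8, 7};  // the worst case traced above
    int mixed[]    = {8, 10, 7, 9};  // the "other order" case traced above
    int loops, swaps;

    BubbleSortCounted(reversed, 4, loops, swaps);
    cout << "reversed: loops=" << loops << " swaps=" << swaps << "\n";  // 6 loops, 6 swaps

    BubbleSortCounted(mixed, 4, loops, swaps);
    cout << "mixed:    loops=" << loops << " swaps=" << swaps << "\n";  // 6 loops, 3 swaps

    return 0;
}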


2. Exchange sort:

The exchange sort has the clearest and simplest procedure: each element is compared with, and if necessary exchanged with, every element after it in turn.
#include <iostream>
using namespace std;

void ExchangeSort(int* pData, int count)
{
    int iTemp;
    for (int i = 0; i < count - 1; i++)
    {
        for (int j = i + 1; j < count; j++)
        {
            if (pData[j] < pData[i])
            {
                iTemp = pData[i];
                pData[i] = pData[j];
                pData[j] = iTemp;
            }
        }
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    ExchangeSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}
Reverse order (worst case), data: 10, 9, 8, 7
Round 1: 10,9,8,7 -> 9,10,8,7 -> 8,10,9,7 -> 7,10,9,8 (3 exchanges)
Round 2: 7,10,9,8 -> 7,9,10,8 -> 7,8,10,9 (2 exchanges)
Round 3: 7,8,10,9 -> 7,8,9,10 (1 exchange)
Loop count: 6
Exchange count: 6

Other order, data: 8, 10, 7, 9
Round 1: 8,10,7,9 -> 7,10,8,9 (1 exchange)
Round 2: 7,10,8,9 -> 7,8,10,9 (1 exchange)
Round 3: 7,8,10,9 -> 7,8,9,10 (1 exchange)
Loop count: 6
Exchange count: 3

From the run tables, exchange sort looks almost as bad as bubble sort, and indeed it is. Its loop count is also 1/2*(n-1)*n, so the algorithm's complexity is still O(n*n). Since we cannot show every case, we can only say here that it is equally bad (slightly better in some cases, slightly worse in others).

3. Selection sort:
Now we can see some hope: selection sort improves performance in one respect.
This method matches the way people sort by hand: select the smallest value from the data and exchange it with the first element, then select the smallest of the remaining values and exchange it with the second element, and so on.
#include <iostream>
using namespace std;

void SelectSort(int* pData, int count)
{
    int iTemp;
    int iPos;
    for (int i = 0; i < count - 1; i++)
    {
        iTemp = pData[i];
        iPos = i;
        for (int j = i + 1; j < count; j++)
        {
            if (pData[j] < iTemp)
            {
                iTemp = pData[j];
                iPos = j;
            }
        }
        pData[iPos] = pData[i];
        pData[i] = iTemp;
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    SelectSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}
Reverse order (worst case), data: 10, 9, 8, 7
Round 1: 10,9,8,7 -> (iTemp=9) -> (iTemp=8) -> (iTemp=7) -> 7,9,8,10 (1 exchange)
Round 2: 7,9,8,10 -> (iTemp=8) -> 7,8,9,10 (1 exchange)
Round 3: 7,8,9,10 -> (iTemp=9) -> 7,8,9,10 (0 exchanges)
Loop count: 6
Exchange count: 2

Other order, data: 8, 10, 7, 9
Round 1: 8,10,7,9 -> (iTemp=8) -> (iTemp=7) -> 7,10,8,9 (1 exchange)
Round 2: 7,10,8,9 -> (iTemp=10) -> (iTemp=8) -> 7,8,10,9 (1 exchange)
Round 3: 7,8,10,9 -> (iTemp=10) -> (iTemp=9) -> 7,8,9,10 (1 exchange)
Loop count: 6
Exchange count: 3
Unfortunately, the loop count required by the algorithm is still 1/2*(n-1)*n, so the loop complexity is O(n*n).
Now look at the exchanges: each pass of the outer loop produces at most one exchange (there is only one minimum per pass), so f(n) <= n,
and therefore f(n) = O(n) for the exchanges.
So when the data is disordered, selection sort reduces the number of exchanges.


4. Insertion sort:

Insertion sort is a little more involved. Its basic idea is the way card players arrange a hand: pick up the next card, find its proper place among the cards already sorted in front of it, insert it there, and move on to the next card.
#include <iostream>
using namespace std;

void InsertSort(int* pData, int count)
{
    int iTemp;
    int iPos;
    for (int i = 1; i < count; i++)
    {
        iTemp = pData[i];
        iPos = i - 1;
        while ((iPos >= 0) && (iTemp < pData[iPos]))
        {
            pData[iPos + 1] = pData[iPos];
            iPos--;
        }
        pData[iPos + 1] = iTemp;
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    InsertSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}

Reverse order (worst case), data: 10, 9, 8, 7
Round 1: 10,9,8,7 -> 9,10,8,7 (1 exchange) (1 loop)
Round 2: 9,10,8,7 -> 8,9,10,7 (1 exchange) (2 loops)
Round 3: 8,9,10,7 -> 7,8,9,10 (1 exchange) (3 loops)
Loop count: 6
Exchange count: 3

Other order, data: 8, 10, 7, 9
Round 1: 8,10,7,9 -> 8,10,7,9 (0 exchanges) (1 loop)
Round 2: 8,10,7,9 -> 7,8,10,9 (1 exchange) (2 loops)
Round 3: 7,8,10,9 -> 7,8,9,10 (1 exchange) (1 loop)
Loop count: 4
Exchange count: 2

The run-throughs above may create the illusion that this is the best of the simple algorithms, but it is not, because its loop count is not fixed. We can still use the O method: from the results above, the loop count f(n) <= 1/2*n*(n-1) <= 1/2*n*n, so its complexity is still O(n*n). (The exchange counts could be derived the same way; they are shown only to highlight the differences among these simple sorts.) Now look at the exchanges: on the surface the exchange count is O(n) (the derivation is similar to the one for selection sort), but each "exchange" requires one assignment per inner-loop step to shift elements out of the way. A normal swap needs three assignments, whereas here we clearly perform more, so time is wasted.

In the end, I personally think selection sort is the best of the simple sorting algorithms.
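As a rough check on this claim, the selection sort and insertion sort above can be instrumented to count writes into the array. This harness is my own sketch, not part of the original article, and the counting convention (array writes only; reads and temporaries ignored) is an assumption:

#include <iostream>
using namespace std;

// Selection sort from above, returning the number of writes into the array.
// The original code always performs the two assignments at the end of each
// outer pass, even when no real exchange is needed.
int SelectSortWrites(int* pData, int count)
{
    int writes = 0;
    for (int i = 0; i < count - 1; i++)
    {
        int iTemp = pData[i];
        int iPos = i;
        for (int j = i + 1; j < count; j++)
        {
            if (pData[j] < iTemp)
            {
                iTemp = pData[j];
                iPos = j;
            }
        }
        pData[iPos] = pData[i];  // two array writes per outer pass
        pData[i] = iTemp;
        writes += 2;
    }
    return writes;
}

// Insertion sort from above, returning the number of writes into the array:
// one write per shifted element plus one write to place each element.
int InsertSortWrites(int* pData, int count)
{
    int writes = 0;
    for (int i = 1; i < count; i++)
    {
        int iTemp = pData[i];
        int iPos = i - 1;
        while ((iPos >= 0) && (iTemp < pData[iPos]))
        {
            pData[iPos + 1] = pData[iPos];
            writes++;
            iPos--;
        }
        pData[iPos + 1] = iTemp;
        writes++;
    }
    return writes;
}

int main()
{
    int a[] = {10, 9, 8, 7, 6, 5, 4};  // the article's reverse-ordered test data
    int b[] = {10, 9, 8, 7, 6, 5, 4};
    cout << "selection sort writes: " << SelectSortWrites(a, 7) << "\n";
    cout << "insertion sort writes: " << InsertSortWrites(b, 7) << "\n";
    return 0;
}

On this reverse-ordered input, selection sort performs 2*(n-1) array writes, while insertion sort performs one write for every shift plus one per element, which is exactly the extra assignment work described above.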

II. Advanced sorting algorithms:
Only one advanced sorting algorithm is introduced here, and it is also the fastest I know of (among the materials I have read).
Its structure resembles a binary tree. First we pick a pivot value; here the middle element of the array is used. Then we move the values smaller than the pivot to its left and the larger values to its right (concretely, scan from both ends and, whenever a mismatched pair is found, swap the two). Then apply the same process to each of the two halves (the easiest way is recursion).

1. Quick sort:
#include <iostream>
using namespace std;

void Run(int* pData, int left, int right)
{
    int i, j;
    int middle, iTemp;
    i = left;
    j = right;
    middle = pData[(left + right) / 2];  // take the middle element as the pivot value
    do {
        while ((pData[i] < middle) && (i < right))  // scan from the left for an element >= the pivot
            i++;
        while ((pData[j] > middle) && (j > left))   // scan from the right for an element <= the pivot
            j--;
        if (i <= j)  // a pair to exchange has been found
        {
            // exchange them
            iTemp = pData[i];
            pData[i] = pData[j];
            pData[j] = iTemp;
            i++;
            j--;
        }
    } while (i <= j);  // stop once the two scan indices cross (one partition pass is done)

    // if there are values left on the left side (left < j), recurse on the left half
    if (left < j)
        Run(pData, left, j);
    // if there are values left on the right side (right > i), recurse on the right half
    if (right > i)
        Run(pData, i, right);
}

void QuickSort(int* pData, int count)
{
    Run(pData, 0, count - 1);
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    QuickSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}

I have not provided a run-through here because it is straightforward; let us analyze the algorithm directly, starting from the ideal case:
1. The size of the array is a power of 2, so it can always be split exactly in half. Say n is 2 to the power K, that is, K = log2(n).
2. The value chosen each time is exactly the median, so the array is split evenly.
The first level of recursion loops n times, the second level loops 2*(n/2) times, ...
So the total is n + 2*(n/2) + 4*(n/4) + ... + n*(n/n) = n + n + ... + n = K*n = log2(n)*n.
Therefore, the complexity of the algorithm is O(log2(n)*n).
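The same count can be written as a recurrence (an equivalent restatement under the ideal assumptions above, not an additional result):

\[
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + n, \qquad T(1) = O(1),
\]

and unrolling it for n = 2^K gives

\[
T(n) = n + 2\cdot\tfrac{n}{2} + 4\cdot\tfrac{n}{4} + \cdots = \underbrace{n + n + \cdots + n}_{\log_2 n \ \text{levels}} = n\log_2 n .
\]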
Any other case can only be worse than this. The worst case is when the chosen pivot is always the minimum or maximum value; the algorithm then degenerates into something like the exchange sort (and because of the recursion overhead, the situation is even worse). But how likely do you think that is? You really do not have to worry about it: practice has shown that in most cases quick sort is the best.
If you are worried about it, you can use heap sort instead. Heap sort has a guaranteed (stable) O(log2(n)*n) running time, but it is usually slower than quick sort (because the heap must be reorganized).
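If you want some extra insurance without switching to heap sort, a common variation (not discussed in the original article) is to choose the pivot at random instead of always taking the middle element, which makes the degenerate case very unlikely on any particular input. A minimal sketch that reuses the partition scheme of Run() above; the name RunRandomPivot is my own:

#include <cstdlib>  // rand()

// Same partition scheme as Run() above, except that the pivot value is taken
// from a randomly chosen position in [left, right] rather than the middle.
void RunRandomPivot(int* pData, int left, int right)
{
    int i = left;
    int j = right;
    int middle = pData[left + rand() % (right - left + 1)];  // random pivot value
    int iTemp;
    do {
        while ((pData[i] < middle) && (i < right))
            i++;
        while ((pData[j] > middle) && (j > left))
            j--;
        if (i <= j)
        {
            iTemp = pData[i];
            pData[i] = pData[j];
            pData[j] = iTemp;
            i++;
            j--;
        }
    } while (i <= j);
    if (left < j)
        RunRandomPivot(pData, left, j);
    if (right > i)
        RunRandomPivot(pData, i, right);
}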

III. Other sorts
1. Bidirectional bubble sort:
The usual bubble sort works in one direction; here it works in both directions, i.e. a reverse pass is also made.
The code looks complicated, but if you trace it carefully you will see that it is a back-and-forth scheme.
The author of this code believed it can save some exchanges compared with plain bubbling (I do not really see why; maybe I am wrong).
In any case, I think it is an interesting piece of code.
#include <iostream>
using namespace std;

void Bubble2Sort(int* pData, int count)
{
    int iTemp;
    int left = 1;
    int right = count - 1;
    int t = left;  // position of the last exchange (initialized so the loop also ends on already-sorted input)
    do
    {
        // forward pass: bubble the smallest remaining value toward the front
        for (int i = right; i >= left; i--)
        {
            if (pData[i] < pData[i - 1])
            {
                iTemp = pData[i];
                pData[i] = pData[i - 1];
                pData[i - 1] = iTemp;
                t = i;
            }
        }
        left = t + 1;

        // reverse pass: bubble the largest remaining value toward the back
        for (int i = left; i < right + 1; i++)
        {
            if (pData[i] < pData[i - 1])
            {
                iTemp = pData[i];
                pData[i] = pData[i - 1];
                pData[i - 1] = iTemp;
                t = i;
            }
        }
        right = t - 1;
    } while (left <= right);
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    Bubble2Sort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}

2. Shell sort
This sort is rather complicated; you will see once you have read the program.
It needs a decreasing sequence of steps; here 9, 5, 3, 1 are used (the last step must be 1).
The idea is to first insertion-sort all the elements that are 9 positions apart from each other, then those 5 apart, then 3 apart, and finally do an ordinary insertion sort with step 1 (a more general way of choosing the steps is sketched after the listing).
#include <iostream>
using namespace std;

void ShellSort(int* pData, int count)
{
    int step[4];
    step[0] = 9;
    step[1] = 5;
    step[2] = 3;
    step[3] = 1;

    int iTemp;
    int k, s, w;
    for (int i = 0; i < 4; i++)
    {
        k = step[i];
        s = -k;
        for (int j = k; j < count; j++)
        {
            iTemp = pData[j];
            w = j - k;  // index of the element one step back
            if (s == 0)
            {
                s = -k;
                s++;
                pData[s] = iTemp;
            }
            while ((w >= 0) && (w < count) && (iTemp < pData[w]))  // check w before indexing pData[w]
            {
                pData[w + k] = pData[w];
                w = w - k;
            }
            pData[w + k] = iTemp;
        }
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4, 3, 2, 1, -10, -1};
    ShellSort(data, 12);
    for (int i = 0; i < 12; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}


From: http://hi.baidu.com/gilbertjuly/blog/item/7c1cc4c7c28b5d129c163d07.html
