C++ Sorting Algorithms

Source: Internet
Author: User
Date: 2009-03-25 09:05:47
Tags: sorting algorithm, loop, number
Category: Programming development

Sorting is a basic and commonly used family of algorithms. Because real workloads often involve huge amounts of data, sorting places high demands on the speed of the algorithm itself.
In general, when we speak of an algorithm's performance we mean its complexity, usually expressed in big-O notation; a detailed explanation is given further on.
This introduction to sorting algorithms also serves as an outline for the article. The algorithms are presented from simple to difficult, according to their complexity.
Part one covers the simple sorting algorithms; as you will see later, what they have in common is a complexity of O(n*n) (written this way because superscripts cannot be typed here).
Part two covers an advanced sorting algorithm with complexity O(n*log2(n)). Only one algorithm is introduced here; several others involve the concepts of trees and heaps, so they are not discussed.
Part three is something of a brain-teaser. The two algorithms there are not the best (they may even be among the slowest), but the algorithms themselves are rather peculiar and worth studying from a programming perspective; they also let us look at the problem from another angle.
Part four is a dessert I offer to everyone: a generic quick sort based on templates. Being a template function, it can sort data of any type (with apologies to the forum experts who have already used it).

Now, let's get started:

Part One: Simple sorting algorithms
Because the programs are relatively simple, no comments are given. Every program is shown as complete, runnable code, and each has been run through in my VC environment. Since nothing involves MFC or Windows-specific content, there should be no problem on a Borland C++ platform either. A schematic of the running process follows each listing, which will hopefully help with understanding.
1. Bubble sort:
This is the most primitive, and also widely known as the slowest, algorithm. It owes its name to the way its operation resembles bubbles rising:
#include <iostream>
using namespace std;

void BubbleSort(int* pData, int Count)
{
    int iTemp;
    for (int i = 1; i < Count; i++)
    {
        for (int j = Count - 1; j >= i; j--)
        {
            if (pData[j] < pData[j - 1])
            {
                iTemp = pData[j - 1];
                pData[j - 1] = pData[j];
                pData[j] = iTemp;
            }
        }
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    BubbleSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}
Reverse order (worst case):
First pass:  10,9,8,7 -> 10,9,7,8 -> 10,7,9,8 -> 7,10,9,8 (3 exchanges)
Second pass: 7,10,9,8 -> 7,10,8,9 -> 7,8,10,9 (2 exchanges)
Third pass:  7,8,10,9 -> 7,8,9,10 (1 exchange)
Loop count: 6
Exchange count: 6
Unordered case:
First pass:  8,10,7,9 -> 8,10,7,9 -> 8,7,10,9 -> 7,8,10,9 (2 exchanges)
Second pass: 7,8,10,9 -> 7,8,10,9 -> 7,8,10,9 (0 exchanges)
Third pass:  7,8,10,9 -> 7,8,9,10 (1 exchange)
Loop count: 6
Exchange count: 3
Having given the program, let us now analyze it. The main factors affecting the performance of this algorithm are the loops and the exchanges: obviously, the more of each, the worse the performance. From the program above we can see that the number of loop iterations is fixed at 1+2+...+(n-1), which written as a formula is 1/2*(n-1)*n.
Now recall the definition of the O notation:
If there exist a constant K and a starting point n0 such that for all n >= n0 we have f(n) <= K*g(n), then f(n) = O(g(n)). (Don't tell me you didn't learn your mathematics well; for programming, mathematics is very important...)
Now look at 1/2*(n-1)*n: taking K = 1/2, n0 = 1, and g(n) = n*n, we have 1/2*(n-1)*n <= 1/2*n*n = K*g(n). Therefore f(n) = O(g(n)) = O(n*n), and the loop complexity of our program is O(n*n).
Now look at the exchanges. From the tables following the program you can see that the loop counts are the same in both cases, while the exchange counts differ. In fact, the number of exchanges depends heavily on the order of the source data: when the data is in reverse order, every comparison leads to an exchange, so the exchange count equals the loop count and its complexity is O(n*n); when the data is already in order, no exchanges occur at all; unordered data falls somewhere in between. It is for this reason that we usually compare sorting algorithms by their loop counts.

2. Exchange sort:
The exchange sort has the clearest and simplest procedure: each element is compared, one by one, with every element after it, and exchanged whenever the pair is out of order.
#include <iostream>
using namespace std;

void ExchangeSort(int* pData, int Count)
{
    int iTemp;
    for (int i = 0; i < Count - 1; i++)
    {
        for (int j = i + 1; j < Count; j++)
        {
            if (pData[j] < pData[i])
            {
                iTemp = pData[i];
                pData[i] = pData[j];
                pData[j] = iTemp;
            }
        }
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    ExchangeSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}
Reverse order (worst case):
First pass:  10,9,8,7 -> 9,10,8,7 -> 8,10,9,7 -> 7,10,9,8 (3 exchanges)
Second pass: 7,10,9,8 -> 7,9,10,8 -> 7,8,10,9 (2 exchanges)
Third pass:  7,8,10,9 -> 7,8,9,10 (1 exchange)
Loop count: 6
Exchange count: 6
Unordered case:
First pass:  8,10,7,9 -> 8,10,7,9 -> 7,10,8,9 -> 7,10,8,9 (1 exchange)
Second pass: 7,10,8,9 -> 7,8,10,9 -> 7,8,10,9 (1 exchange)
Third pass:  7,8,10,9 -> 7,8,9,10 (1 exchange)
Loop count: 6
Exchange count: 3
From the run tables, exchange sort is almost as bad as bubble sort, and that is indeed the case. Its loop count is also 1/2*(n-1)*n, so the complexity of the algorithm is still O(n*n). Since we cannot enumerate every input, we can only say that the two are equally bad in terms of exchanges (slightly better in some cases, slightly worse in others).
3. Selection sort:
Now we can finally see a glimmer of hope: selection sort improves performance a little (in some respects). This method resembles how we sort things by hand: select the smallest value in the data and exchange it with the first element; then select the smallest of the remaining values and exchange it with the second element; and so on.
#include <iostream>
using namespace std;

void SelectSort(int* pData, int Count)
{
    int iTemp;
    int iPos;
    for (int i = 0; i < Count - 1; i++)
    {
        iTemp = pData[i];
        iPos = i;
        for (int j = i + 1; j < Count; j++)
        {
            if (pData[j] < iTemp)
            {
                iTemp = pData[j];
                iPos = j;
            }
        }
        pData[iPos] = pData[i];
        pData[i] = iTemp;
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    SelectSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}
Reverse order (worst case):
First pass:  10,9,8,7 -> (iTemp=9) 10,9,8,7 -> (iTemp=8) 10,9,8,7 -> (iTemp=7) 7,9,8,10 (1 exchange)
Second pass: 7,9,8,10 -> (iTemp=8) 7,9,8,10 -> (iTemp=8) 7,8,9,10 (1 exchange)
Third pass:  7,8,9,10 -> (iTemp=9) 7,8,9,10 (0 exchanges)
Loop count: 6
Exchange count: 2
Unordered case:
First pass:  8,10,7,9 -> (iTemp=8) 8,10,7,9 -> (iTemp=7) 8,10,7,9 -> (iTemp=7) 7,10,8,9 (1 exchange)
Second pass: 7,10,8,9 -> (iTemp=8) 7,10,8,9 -> (iTemp=8) 7,8,10,9 (1 exchange)
Third pass:  7,8,10,9 -> (iTemp=9) 7,8,9,10 (1 exchange)
Loop count: 6
Exchange count: 3
Unfortunately, the loop count this algorithm needs is still 1/2*(n-1)*n, so its complexity is O(n*n).
Now look at its exchanges. Because each pass of the outer loop produces at most one exchange (placing one minimum value), we have f(n) <= n, and therefore the exchange count is O(n). So when the data is fairly disordered, this method reduces the number of exchanges.

4. Insertion sort:
Insertion sort is a little more involved. Its basic principle is like picking up playing cards: take the next card, find the proper position for it among the cards already in hand, insert it there, then move on to the next card.
#include <iostream>
using namespace std;

void InsertSort(int* pData, int Count)
{
    int iTemp;
    int iPos;
    for (int i = 1; i < Count; i++)
    {
        iTemp = pData[i];
        iPos = i - 1;
        while ((iPos >= 0) && (iTemp < pData[iPos]))
        {
            pData[iPos + 1] = pData[iPos];
            iPos--;
        }
        pData[iPos + 1] = iTemp;
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    InsertSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}
Reverse order (worst case):
First pass:  10,9,8,7 -> 9,10,8,7 (1 exchange) (1 loop)
Second pass: 9,10,8,7 -> 8,9,10,7 (1 exchange) (2 loops)
Third pass:  8,9,10,7 -> 7,8,9,10 (1 exchange) (3 loops)
Loop count: 6
Exchange count: 3
Unordered case:
First pass:  8,10,7,9 -> 8,10,7,9 (0 exchanges) (1 loop)
Second pass: 8,10,7,9 -> 7,8,10,9 (1 exchange) (2 loops)
Third pass:  7,8,10,9 -> 7,8,9,10 (1 exchange) (1 loop)
Loop count: 4
Exchange count: 2
The run tables above actually create a false impression, namely that this algorithm is the best of the simple algorithms. It is not. Although the loop count here is not fixed, we can still apply the O notation: from the results above, the loop count satisfies f(n) <= 1+2+...+(n-1) = 1/2*n*(n-1) <= 1/2*n*n, so the complexity is still O(n*n). (If we were not trying to show the differences between these simple sorts, the loop count could have been bounded this way directly.) Now look at the exchanges: from the outside, the exchange count appears to be O(n) (deduced similarly to selection sort), but each pass also performs a run of '=' assignment operations in the inner loop. An ordinary exchange needs only three '=' operations, and here we clearly do somewhat more, so we waste time there.
Finally, I personally think that among the simple sorting algorithms, selection sort is the best.

Part Two: Advanced sorting algorithms
Of the advanced sorting algorithms we will introduce only this one, which as far as I know (from the material I have read) is also the fastest. Its operation still looks like a binary tree: first we select a middle value (in the program we use the value at the middle position of the array), then put everything smaller than it on the left and everything bigger on the right (the actual implementation scans inward from both ends and exchanges each out-of-place pair it finds). Then we apply the same process to each side separately (the easiest way: recursion).
1. Quick sort:
#include <iostream>
using namespace std;

void Run(int* pData, int left, int right)
{
    int i, j;
    int middle, iTemp;
    i = left;
    j = right;
    middle = pData[(left + right) / 2]; // pick the middle value
    do
    {
        while ((pData[i] < middle) && (i < right)) // scan from the left for a value >= middle
            i++;
        while ((pData[j] > middle) && (j > left))  // scan from the right for a value <= middle
            j--;
        if (i <= j) // found a pair of values
        {
            // exchange them
            iTemp = pData[i];
            pData[i] = pData[j];
            pData[j] = iTemp;
            i++;
            j--;
        }
    } while (i <= j); // stop once the two scan indices cross
    if (left < j)     // when the left part still has values, recurse on the left half
        Run(pData, left, j);
    if (right > i)    // when the right part still has values, recurse on the right half
        Run(pData, i, right);
}

void QuickSort(int* pData, int Count)
{
    Run(pData, 0, Count - 1);
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    QuickSort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}
I will not give a run table here, because this part is very simple; let us analyze the algorithm directly. First, consider the ideal case:
1. The size of the array is a power of 2, so it can always be divided evenly in half. Suppose it is 2 to the Kth power, i.e. K = log2(n).
2. Each time, the value we choose is exactly the median, so the array is split into two equal halves.
The first level of recursion loops n times, the second level loops 2*(n/2) times, and so on:
n + 2*(n/2) + 4*(n/4) + ... + n*(n/n) = n + n + n + ... + n = K*n = log2(n)*n
So the algorithmic complexity is O(log2(n)*n).
Other situations can only be worse than this. The worst case is when each chosen middle value turns out to be a minimum or maximum; then the algorithm degenerates into an exchange sort (and because of the recursion, the situation is even worse). But how likely do you think that is? You really don't need to worry about this problem. Practice has proven that in the great majority of cases, quick sort is always the best.
If you are worried about it, you can use heap sort, whose O(log2(n)*n) cost is guaranteed even in the worst case, but which is usually slower than quick sort (because it has to reorganize the heap).
Part Three: Other sorts
1. Bidirectional bubble sort:
The usual bubble sort is one-directional; this one is bidirectional, meaning it also works backwards. The code looks complicated, but once you see through it, it is just a sort that shakes back and forth. The author of this code thought it could save some exchanges compared with plain bubbling (I don't think so, but then again, maybe I'm wrong). In any case, I think it is an interesting piece of code worth looking at.
#include <iostream>
using namespace std;

void Bubble2Sort(int* pData, int Count)
{
    int iTemp;
    int left = 1;
    int right = Count - 1;
    int t = right; // records the position of the last exchange
    do
    {
        // forward part: bubble the smallest element toward the front
        for (int i = right; i >= left; i--)
        {
            if (pData[i] < pData[i - 1])
            {
                iTemp = pData[i];
                pData[i] = pData[i - 1];
                pData[i - 1] = iTemp;
                t = i;
            }
        }
        left = t + 1;
        // reverse part: bubble the largest element toward the back
        for (int i = left; i < right + 1; i++)
        {
            if (pData[i] < pData[i - 1])
            {
                iTemp = pData[i];
                pData[i] = pData[i - 1];
                pData[i - 1] = iTemp;
                t = i;
            }
        }
        right = t - 1;
    } while (left <= right);
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4};
    Bubble2Sort(data, 7);
    for (int i = 0; i < 7; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}
2. Shell sort:
This sort is quite intricate; you will see when you look at the program. First you need a decreasing sequence of step sizes; here we use 9, 5, 3, 1 (the last step must be 1). The principle is first to sort all elements spaced 9 apart among themselves, then all elements spaced 5 apart, and so on down the step sequence.
#include <iostream>
using namespace std;

void ShellSort(int* pData, int Count)
{
    int step[4];
    step[0] = 9;
    step[1] = 5;
    step[2] = 3;
    step[3] = 1;
    int iTemp;
    int k, s, w;
    for (int i = 0; i < 4; i++)
    {
        k = step[i]; // current step size
        s = -k;
        for (int j = k; j < Count; j++)
        {
            iTemp = pData[j];
            w = j - k; // index of the element one step back
            if (s == 0) // guard written against a step size of 0
            {
                s = -k;
                s++;
                pData[s] = iTemp;
            }
            // gapped insertion: shift larger elements one step forward
            // (the bounds check comes first so pData is never read out of range)
            while (w >= 0 && iTemp < pData[w])
            {
                pData[w + k] = pData[w];
                w = w - k;
            }
            pData[w + k] = iTemp;
        }
    }
}

int main()
{
    int data[] = {10, 9, 8, 7, 6, 5, 4, 3, 2, 1, -10, -1};
    ShellSort(data, 12);
    for (int i = 0; i < 12; i++)
        cout << data[i] << " ";
    cout << "\n";
    return 0;
}
The program looks like a bit of a headache at first, but it is not hard once you strip out the s == 0 preparation; that code was written only to keep the program from misbehaving on a step size of 0. I think this code is worth a careful look.
The algorithm is named after its inventor, D. L. Shell. According to the reference material, "for complicated mathematical reasons, avoid step sequences that are powers of 2, as they reduce the efficiency of the algorithm." The reference also states that the complexity of the algorithm is N to the 1.2 power, but because the derivation is very complicated and "beyond the scope of this book" (I do not know the derivation either), we have only the result.

Part Four: Generic sorting based on templates
MyData.h file
///////////////////////////////////////////////////////
class CMyData
{
public:
    CMyData(int Index, const char* strData);
    CMyData();
    CMyData(const CMyData& SrcData); // copy constructor, needed for a deep copy
    virtual ~CMyData();

    int m_iIndex;
    int GetDataSize() const { return m_iDataSize; }
    const char* GetData() const { return m_strDataMember; }

    // operators are overloaded here:
    CMyData& operator=(const CMyData& SrcData);
    bool operator<(const CMyData& Data) const;
    bool operator>(const CMyData& Data) const;

private:
    char* m_strDataMember;
    int m_iDataSize;
};
////////////////////////////////////////////////////////
MyData.cpp file
////////////////////////////////////////////////////////
#include <cstring>
#include "MyData.h"

CMyData::CMyData() :
    m_iIndex(0),
    m_strDataMember(NULL),
    m_iDataSize(0)
{
}

CMyData::~CMyData()
{
    if (m_strDataMember != NULL)
        delete[] m_strDataMember;
    m_strDataMember = NULL;
}

CMyData::CMyData(int Index, const char* strData) :
    m_iIndex(Index),
    m_strDataMember(NULL),
    m_iDataSize(0)
{
    m_iDataSize = strlen(strData);
    m_strDataMember = new char[m_iDataSize + 1];
    strcpy(m_strDataMember, strData);
}

CMyData::CMyData(const CMyData& SrcData) :
    m_iIndex(SrcData.m_iIndex),
    m_strDataMember(NULL),
    m_iDataSize(SrcData.m_iDataSize)
{
    if (SrcData.m_strDataMember != NULL)
    {
        m_strDataMember = new char[m_iDataSize + 1];
        strcpy(m_strDataMember, SrcData.m_strDataMember);
    }
}

CMyData& CMyData::operator=(const CMyData& SrcData)
{
    if (this == &SrcData) // guard against self-assignment
        return *this;
    delete[] m_strDataMember; // release the old buffer to avoid a leak
    m_iIndex = SrcData.m_iIndex;
    m_iDataSize = SrcData.GetDataSize();
    m_strDataMember = new char[m_iDataSize + 1];
    strcpy(m_strDataMember, SrcData.GetData());
    return *this;
}

bool CMyData::operator<(const CMyData& Data) const
{
    return m_iIndex < Data.m_iIndex;
}

bool CMyData::operator>(const CMyData& Data) const
{
    return m_iIndex > Data.m_iIndex;
}
///////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////
Main program Section
#include <iostream>
#include "MyData.h"
using namespace std;

template <class T>
void Run(T* pData, int left, int right)
{
    int i, j;
    T middle, iTemp;
    i = left;
    j = right;
    // the comparisons below call our overloaded operator functions
    middle = pData[(left + right) / 2]; // pick the middle value
    do
    {
        while ((pData[i] < middle) && (i < right)) // scan from the left for a value >= middle
            i++;
        while ((pData[j] > middle) && (j > left))  // scan from the right for a value <= middle
            j--;
        if (i <= j) // found a pair of values
        {
            // exchange them
            iTemp = pData[i];
            pData[i] = pData[j];
            pData[j] = iTemp;
            i++;
            j--;
        }
    } while (i <= j); // stop once the two scan indices cross
    if (left < j)     // when the left part still has values, recurse on the left half
        Run(pData, left, j);
    if (right > i)    // when the right part still has values, recurse on the right half
        Run(pData, i, right);
}

template <class T>
void QuickSort(T* pData, int Count)
{
    Run(pData, 0, Count - 1);
}

int main()
{
    CMyData data[] = {
        CMyData(8, "xulion"),
        CMyData(7, "Sanzoo"),
        CMyData(6, "Wangjun"),
        CMyData(5, "vckbase"),
        CMyData(4, "jacky2000"),
        CMyData(3, "cwally"),
        CMyData(2, "Vcuser"),
        CMyData(1, "Isdong")
    };
    QuickSort(data, 8);
    for (int i = 0; i < 8; i++)
        cout << data[i].m_iIndex << " " << data[i].GetData() << "\n";
    return 0;
}
