Basic idea:
In each pass, the records to be sorted are distributed into buckets according to one keyword (one digit), then collected back in bucket order; successive passes handle the keywords one at a time.
A concrete example:
278, 109, 063, 930, 589, 184, 505, 269, 008, 083
We split each number into three keywords by digit: ones, tens, and hundreds. For 278: k1 (ones digit) = 8, k2 (tens digit) = 7, k3 (hundreds digit) = 2.
Starting from the lowest digit (the least significant keyword), we distribute all the data into buckets by k1 (each digit is 0-9, so ten buckets suffice), then collect the buckets in order, which yields the following sequence:
930, 063, 083, 184, 505, 278, 008, 109, 589, 269
This sequence is then distributed into buckets by k2 and collected again, giving:
505, 008, 109, 930, 063, 269, 278, 083, 184, 589
Finally, distributing by k3 and collecting produces the sorted output:
008, 063, 083, 109, 184, 269, 278, 505, 589, 930
Efficiency analysis:
Radix sort performs slightly worse than a single bucket-sort pass. Each pass costs O(n) to distribute the records into buckets by one keyword and another O(n) to collect them back into a sequence. If the data splits into d keywords, the total time is O(d*2n) = O(d*n); since d is usually much smaller than n, this is still essentially linear. The space complexity is O(n+m), where m is the number of buckets; typically n >> m, so the extra space needed is about n.
Compared with bucket sort, however, radix sort never needs more than a small, fixed number of buckets per pass, and it requires almost no comparison operations, whereas bucket sort with relatively few buckets must fall back on a comparison-based sort for the multiple records that land in the same bucket. In practice, therefore, radix sort is applicable more widely.
For example:
Suppose we have some two-tuples (a, b) to sort, with a as the primary keyword and b as the secondary keyword. We can first sort by the primary keyword, splitting the data into piles that share the same primary key; then sort each pile separately by the secondary keyword; finally concatenate the piles so that smaller primary keys come first. Radix sort done this way is called MSD (most significant digit) sort.
The second way starts from the least significant keyword and is called LSD (least significant digit) sort: first sort all the data by the secondary keyword, then sort all the data by the primary keyword. Note that the sort used in each pass must be stable; otherwise the work of the earlier passes is undone. LSD is usually simpler and cheaper than MSD because it never needs to sort each pile separately. The methods described below are all LSD.
In general, each pass of radix sort is done with counting sort or bucket sort. With counting sort, an auxiliary order array is needed; with bucket sort, linked lists yield the order directly. Here is a program that radix-sorts two-tuples using linked-list buckets:
#include <cstdio>
#include <cstdlib>
#include <cstring>
using namespace std;

struct Data {
    int key[2];
};

struct Linklist {
    Linklist* next;
    Data value;
    Linklist(Data v, Linklist* n) : value(v), next(n) {}
    ~Linklist() { if (next) delete next; }  // frees the rest of the chain
};

// Distribute a[1..n] into one bucket per key value (keys are 1..K),
// then collect the buckets in order; the pass is stable.
void bucketsort(Data* a, int n, int K, int y) {
    Linklist** bucket = new Linklist*[K + 1];
    memset(bucket, 0, (K + 1) * sizeof(Linklist*));
    int i, j, k;
    for (i = 1; i <= n; i++) {              // prepending reverses each list
        k = a[i].key[y];
        bucket[k] = new Linklist(a[i], bucket[k]);
    }
    for (k = j = 0; k <= K; k++) {
        Linklist* p;
        for (p = bucket[k]; p; p = p->next) j++;
        for (p = bucket[k], i = 1; p; p = p->next, i++)
            a[j - i + 1] = p->value;        // copy back in reverse to restore insertion order
        delete bucket[k];                   // destructor frees the whole chain
    }
    delete[] bucket;
}

void radixsort(Data* a, int n, int K) {
    for (int j = 1; j >= 0; j--)            // LSD: secondary key first, then primary
        bucketsort(a, n, K, j);
}

int main() {
    int n = 100, K = 1000, i;
    Data* a = new Data[n + 1];
    for (i = 1; i <= n; i++) {
        a[i].key[0] = rand() % K + 1;
        a[i].key[1] = rand() % K + 1;
    }
    radixsort(a, n, K);
    for (i = 1; i <= n; i++) printf("(%d,%d) ", a[i].key[0], a[i].key[1]);
    printf("\n");
    delete[] a;
    return 0;
}
Radix sort is the algorithm used by old punched-card sorting machines. A card has 80 columns, and each column can be punched in any one of 12 positions. The sorter could be mechanically "programmed" to examine one column of each card in a deck and distribute the cards into 12 bins according to which position was punched. An operator could then collect the bins in order: cards punched in the first position on top, cards punched in the second position next, and so on.
A decimal number with a limited number of digits can be viewed as a multi-keyword tuple whose keyword importance decreases from the high digit to the low digit, so such numbers can be sorted directly with radix sort.