An In-Place Sorting Algorithm in Linear Time

Source: Internet
Author: User



Sorting Algorithm

The existing linear-time sorting algorithms (time complexity O(n)), such as counting sort, radix sort, and bucket sort, achieve linear time but do not sort in place: they all require additional auxiliary space. Can an algorithm achieve O(n) time complexity and O(1) space complexity at the same time?

Among the common linear-time algorithms (counting sort, bucket sort, and radix sort), counting sort and bucket sort are not in-place, so they require auxiliary space and cannot meet the goal. Radix sort, however, is a composite algorithm: it repeatedly applies a per-digit sort. If that per-digit sort can be made to run in O(n) time and O(1) space, the overall goal is clearly reached. The catch is that the per-digit sort inside radix sort must be stable, and no common algorithm is simultaneously linear-time, constant-space, and stable. For example, when sorting the binary numbers 10 and 11 (decimal 2 and 3) from the low bit up, the high-bit pass sees two equal keys, and an unstable pass may swap them, destroying the order that the low-bit pass established.

We select radix sort as the basic algorithm to reach this goal. Since we sort integers, the "digit" can be a bit of the binary representation, and each bit is only 0 or 1 (a very favorable property). Sorting a 0/1 sequence in linear time needs only a single quicksort-style partition with 0 as the pivot: the partition runs in O(n) time, works in place, and meets the O(1) space requirement. The problem is that radix sort requires each per-digit pass to be stable, and a partition is obviously not stable. However, if we can find a way to eliminate the impact of that instability so that it does not affect the final result, the problem is solved. So how can we ensure the partition does not affect the final result?
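The single-bit partition described above can be sketched as follows (a minimal Java sketch; the article's own code is C#, and the helper name `partitionByBit` is mine):

```java
class BitPartition {
    // Partition a[] in place so that elements whose bit selected by `mask`
    // is 0 come before elements whose bit is 1. One pass, O(1) extra space.
    static void partitionByBit(int[] a, int mask) {
        int firstOne = -1; // index of the first element whose bit is 1; -1 = none seen yet
        for (int j = 0; j < a.length; j++) {
            if ((a[j] & mask) == 0) {
                if (firstOne >= 0) {
                    // A run of 1s starts at firstOne: move this 0 in front of it.
                    int tmp = a[j]; a[j] = a[firstOne]; a[firstOne] = tmp;
                    firstOne++; // the run of 1s shifted one position to the right
                }
            } else if (firstOne < 0) {
                firstOne = j; // first 1 seen so far
            }
        }
    }
}
```

Note that the relative order of the 1s is preserved, but a 0 found late jumps over all the 1s before it, which is exactly the instability discussed next.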

Radix sort has two basic forms: from the low digit to the high digit (LSD), and from the high digit to the low digit (MSD). Analysis shows that the high-to-low order can make the partition's instability harmless. Consider the general case: let B(k), B(k-1), ..., B(1) denote the bits to be sorted (k = 32 for 32-bit integers). When sorting bit B(i), the top m = k - i bits B(k)...B(i+1) have already been sorted. If, while sorting bit B(i), we partition only among elements whose top m bits are identical, then the result of the partition obviously cannot affect the final order: any elements it reorders are equal on every bit already processed.

In this way, one partition over bit B(i) becomes partitions over several intervals (elements sharing the same top m bits form one interval), and because the top m bits are already sorted, elements sharing them must be adjacent. Although one partition becomes p partitions (p < n), the overall time complexity does not increase. The implementation restarts the partition whenever the prefix changes, which involves a small amount of index backtracking, but even in the most extreme case the backtracking totals at most n, so each pass costs at most 2n. For integers, extracting and comparing the top m bits is very fast and can be ignored in the analysis. The overall sorting time is therefore between 32n and 64n, and with certain techniques the backtracking can be eliminated, bringing the time below 32n. Since the per-bit partition uses O(1) space, the entire algorithm meets the stated goals.
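The prefix-grouping argument is easiest to see in the standard recursive MSD formulation ("binary quicksort"), which the article's iterative prefix check simulates. A sketch in Java (the method name `msdSort` is my own):

```java
class MsdSketch {
    // Recursively sort a[lo..hi] on bits `bit` down to 0, most significant first.
    static void msdSort(int[] a, int lo, int hi, int bit) {
        if (lo >= hi || bit < 0) return;
        int i = lo, j = hi;
        // Two-pointer 0/1 partition on the current bit.
        while (i <= j) {
            if ((a[i] & (1 << bit)) == 0) i++;
            else { int t = a[i]; a[i] = a[j]; a[j] = t; j--; }
        }
        msdSort(a, lo, j, bit - 1); // interval whose current bit is 0
        msdSort(a, i, hi, bit - 1); // interval whose current bit is 1
    }
}
```

Call it as `msdSort(a, 0, a.length - 1, 30)` for non-negative ints. The recursion stack makes this version O(k) in space rather than O(1), which is precisely why the article flattens it into one iterative scan per bit.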

The following is an implementation in C#. To reduce the number of swaps, the per-bit pass is not a literal quicksort partition. Exploiting the fact that the keys are only 0 and 1, a pointer theI tracks the first 1 in the sequence; the scan pointer runs ahead, swaps with theI whenever it meets a 0, and theI then moves one position to the right. This greatly reduces the number of swaps:

private void BitSortAndDelRepeatorsA(int[] a)
{
    // Obtain the array length.
    int theN = a.Length;

    // Sort from the high bit to the low bit. Sorting starts at bit 31 here;
    // bit 32 is the sign bit and is not considered (it can be handled separately).
    for (int i = 31; i >= 1; i--)
    {
        // The prefix (bits already sorted) of the current group. Partitioning is done
        // only while the prefix stays the same; when it changes, a new partition starts.
        // This is critical: otherwise the instability of the partition
        // would affect the final result.
        int thePrvCB = a[0] >> i;
        // Start position of the current partition; changes as groups change.
        int theS = 0;
        // Insertion point (index of the first element whose current bit is 1).
        int theI = theS - 1;
        // Binary mask, used to test whether the current bit is 0.
        int theBase = 1 << (i - 1);
        // The pivot bit value is always 0.
        int theAxBit = 0;

        // Piecewise partitioning; the overall cost is the same as one partition.
        for (int j = 0; j < theN; j++)
        {
            // Prefix of the current element (the bits already sorted).
            int theTmpPrvCB = a[j] >> i;
            // If the prefix differs, start a new partition.
            if (theTmpPrvCB != thePrvCB)
            {
                theS = j;
                theI = theS - 1;
                theAxBit = 0;
                thePrvCB = theTmpPrvCB;
                j--; // revisit this element as the first of the new group
                continue;
            }

            // Prefixes match: partition within the group on the current bit.
            int theAj = (a[j] & theBase) > 0 ? 1 : 0;
            // While no 1 has been seen yet, just scan and point theI at the first 1.
            // This reduces swaps and speeds things up.
            if (theI < theS)
            {
                if (theAj == 0)
                {
                    continue;
                }
                theI = j; // the continue guarantees j resumes from theI + 1
                continue;
            }

            // Swap a 0 in front of the run of 1s.
            if (theAj <= theAxBit)
            {
                int theTmp = a[j];
                a[j] = a[theI];
                a[theI] = theTmp;
                theI++;
            }
        }
    }
}
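To check the routine end to end, here is a direct Java port of the C# listing above (class and method names are mine; like the original, it handles non-negative integers only, since the sign bit is left untreated):

```java
class BitSortPort {
    // Iterative MSD binary radix sort, ported from the article's C# routine.
    // Sorts non-negative 32-bit integers in place.
    static void bitSort(int[] a) {
        if (a.length == 0) return;      // guard absent from the original listing
        int n = a.length;
        for (int i = 31; i >= 1; i--) {
            int prevPrefix = a[0] >> i; // bits already sorted by earlier passes
            int start = 0;              // start of the current prefix group
            int firstOne = start - 1;   // index of the first 1-bit element in the group
            int mask = 1 << (i - 1);    // selects the bit being sorted on
            for (int j = 0; j < n; j++) {
                int prefix = a[j] >> i;
                if (prefix != prevPrefix) {
                    // New prefix group: restart the 0/1 partition here.
                    start = j;
                    firstOne = start - 1;
                    prevPrefix = prefix;
                    j--;                // revisit this element inside the new group
                    continue;
                }
                int bit = (a[j] & mask) != 0 ? 1 : 0;
                if (firstOne < start) { // still scanning the leading run of 0s
                    if (bit == 0) continue;
                    firstOne = j;       // first 1 found
                    continue;
                }
                if (bit == 0) {         // move the 0 in front of the run of 1s
                    int tmp = a[j]; a[j] = a[firstOne]; a[firstOne] = tmp;
                    firstOne++;
                }
            }
        }
    }
}
```

Each pass only swaps within one prefix group, so group boundaries established by earlier passes are never disturbed.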

Algorithm Analysis

Although the running time is about 32n, 32 is a constant, so the overall time complexity is still O(n). The algorithm uses only a fixed handful of local variables (fewer than 20), so the space complexity is O(1). As for stability, the per-bit partition is unstable, so the algorithm as a whole is unstable.

The observed running time of this algorithm is consistent with this analysis.

 
