Algorithm series: a summary of sorting algorithms

Source: Internet
Author: User
Sorting categories: by whether sorting happens in memory

Depending on whether all the records to be sorted fit in memory during the sorting process, sorting is divided into internal and external sorting.

For internal sorting, the performance of a sorting algorithm is mainly affected by three aspects: time performance, auxiliary space, and algorithmic complexity.

Classification by implementation complexity

Bubble sort, simple selection sort, and direct insertion sort belong to the simple algorithms.

Shell (Hill) sort, heap sort, merge sort, and quick sort belong to the improved algorithms.

Analysis and implementation of the sorting algorithms

Note: by convention, all sort keys below are sorted in ascending order.

Bubble sort

Bubble sort is an exchange sort. Its basic idea: compare the keys of adjacent records pairwise and swap them whenever they are out of order, until no out-of-order records remain.

Basic bubble sort

Bubble sort: smaller numbers float up, larger numbers sink down.

    // Average time complexity O(n²), space complexity O(1), stable
    public void bubbleSort(int[] nums) {
        int len = nums.length;
        int temp;
        // the outer loop controls the current bubbling range, i ∈ [0, len-2]
        for (int i = 0; i <= len - 2; i++) {
            // the inner loop goes from back to front, j ∈ [i, len-2]
            for (int j = len - 2; j >= i; j--) {
                if (nums[j] > nums[j + 1]) {
                    temp = nums[j];
                    nums[j] = nums[j + 1];
                    nums[j + 1] = temp;
                }
            }
        }
        print("bubble sort", nums);
    }
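Every sort in this article ends with a call to a print helper that the excerpt never defines. A minimal sketch, with the name and signature inferred from the call sites (so an assumption, not the original author's code):

```java
import java.util.Arrays;

// Hypothetical helper: the article's sorts call print(label, nums)
// but never define it; this sketch just labels and prints the array.
class SortUtil {
    static void print(String label, int[] nums) {
        System.out.println(label + ": " + Arrays.toString(nums));
    }
}
```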
Optimize bubble sort

For example, for the keys 1, 2, 3, 5, 4, only 5 and 4 need to be exchanged; every other pass is wasted work. We can therefore add a flag, swap, and perform the next bubbling pass only if a swap actually occurred (swap == true).
The key change is to also test swap in the outer for loop over i. With this improvement, bubble sort gains some performance by avoiding pointless comparison passes once the array is already in order.

Optimized bubble sort: smaller numbers float up, larger numbers sink down.

    // Average time complexity O(n²), space complexity O(1), stable
    public void bubbleSort(int[] nums) {
        int len = nums.length;
        int temp;
        // exchange flag
        boolean swap = true;
        // perform at most len-1 bubbling passes
        for (int i = 0; i <= len - 2 && swap; i++) {
            // the inner loop goes from back to front, j ∈ [i, len-2]
            swap = false;
            for (int j = len - 2; j >= i; j--) {
                if (nums[j] > nums[j + 1]) {
                    temp = nums[j];
                    nums[j] = nums[j + 1];
                    nums[j + 1] = temp;
                    swap = true;
                }
            }
        }
        print("bubble sort", nums);
    }
Complexity analysis

In the best case, when the table is already in order, the improved code performs n-1 comparisons and no data exchanges, giving a time complexity of O(n). In the worst case, when the table is in reverse order, we must compare ∑(i=2..n)(i-1) = 1+2+3+...+(n-1) = n(n-1)/2 times and move records on the same order of magnitude. The total time complexity is therefore O(n²).
The space complexity is O(1).

Simple selection sort

Simple selection sort picks, via n-i comparisons, the record with the smallest key among the n-i+1 remaining records and exchanges it with the i-th record (1 ≤ i ≤ n).

Algorithm implementation

The Java implementation is as follows.

Simple selection sort: repeatedly select the minimum of the remaining elements, then swap it into place.

    // Average time complexity O(n²), space complexity O(1), unstable
    public void selectSort(int[] nums) {
        int minIndex, minValue;
        int temp;
        // the outer loop does not need to reach the last element
        for (int i = 0; i < nums.length - 1; i++) {
            minValue = nums[i];
            minIndex = i;
            // find the minimum of [i..len-1], starting from i+1
            for (int j = i + 1; j < nums.length; j++) {
                if (nums[j] < minValue) {
                    minValue = nums[j];
                    minIndex = j;
                }
            }
            // perform the exchange only if a smaller element was found
            if (minIndex != i) {
                temp = nums[i];
                nums[i] = nums[minIndex];
                nums[minIndex] = temp;
            }
        }
        print("simple selection sort", nums);
    }
Analysis of algorithm complexity

The biggest feature of simple selection sort is that it exchanges and moves data relatively few times, which saves time accordingly. Analyzing its time complexity, the number of comparisons is the same in both the best and worst cases: the i-th pass requires n-i comparisons, for a total of ∑(i=1..n-1)(n-i) = (n-1)+(n-2)+...+1 = n(n-1)/2. As for exchanges, the best case needs 0 and the worst case (initially descending order) needs n-1. Since the total sorting time is the sum of comparisons and exchanges, the overall time complexity is still O(n²).

Although it shares O(n²) with bubble sort, simple selection sort performs slightly better, because on reversed input bubble sort performs far more exchanges than simple selection sort does.
The space complexity is O(1).

Direct insertion sort

The basic operation of direct insertion sort (straight insertion sort) is to insert a record into an already sorted ordered table, producing a new ordered table with one more record.

Program implementation

Direct insertion sort: keep reading one element at a time and growing the ordered prefix.

    // Average time complexity O(n²), space complexity O(1), stable
    public void insertSort(int[] nums) {
        // cur saves the current element
        int cur;
        // after the inner loop, index points to the position just before the insertion point
        int index;
        for (int i = 1; i < nums.length; i++) {
            cur = nums[i];
            index = i - 1;
            // shift elements larger than cur to the right to find the insertion position
            while (index >= 0 && cur < nums[index]) {
                nums[index + 1] = nums[index];
                index--;
            }
            // note the index + 1
            nums[index + 1] = cur;
        }

        print("direct insertion sort", nums);
    }
Analysis of algorithm complexity

In the best case, when the table is already in order, e.g. the keys {2,3,4,5,6}, we compare ∑(i=2..n) 1 = n-1 times, and since no records move, the time complexity is O(n).

In the worst case, when the table is in reverse order, e.g. {6,5,4,3,2}, we must compare ∑(i=2..n) i = 2+3+...+n = (n+2)(n-1)/2 times, and the number of record moves also reaches its maximum, ∑(i=2..n)(i+1) = (n+4)(n-1)/2.

If the records to sort are random, then by symmetry the average numbers of comparisons and moves are both about n²/4. We therefore conclude that direct insertion sort has time complexity O(n²). We also see that, among methods with the same O(n²) complexity, direct insertion sort beats bubble and simple selection sort, since bubble sort performs many more swap operations on reversed input.
The space complexity is O(1).

Shell sort

Shell sort (often rendered "Hill sort" in Chinese sources) is an improvement on direct insertion sort. The core idea is to partition the records, reducing the number of records sorted at once and bringing the whole sequence toward basic order. The jump-partition strategy forms subsequences of records separated by an "increment" d, so that direct insertion within each subsequence leaves the whole sequence basically ordered rather than merely locally ordered.

Choosing the increment d

The choice of increment is critical. Which increment sequence is best remains an open mathematical problem; no one has yet found the optimal one. However, much research shows that with the increment sequence delta[k] = 2^(t-k+1) - 1 (0 ≤ k ≤ t ≤ log2(n+1)), good efficiency is obtained: the time complexity is O(n^(3/2)), better than direct insertion's O(n²). Note that the last value of the increment sequence must equal 1. Also, because records move in jumps, Shell sort is not a stable sorting algorithm.

Algorithm implementation

Here we choose the increments d = n/2, d = n/4, ..., d = 1.

Shell sort: choose an increment interval and do direct insertion sort within each group.

    // increments for the grouped direct insertion sort, e.g. d = len, then d /= 2
    public void shellSort(int[] nums) {
        int len = nums.length;
        int d = len;
        while (true) {
            // halve the increment
            d /= 2;
            // cur saves the current element
            int cur;
            // after the inner loop, pos points to the position just before the insertion point
            int pos;
            // outermost: traverse each of the d groups
            for (int i = 0; i < d; i++) {
                // start position i + d, step d
                for (int j = i + d; j < len; j += d) {
                    cur = nums[j];
                    pos = j - d;
                    // shift elements larger than cur; note the step is d
                    while (pos >= i && cur < nums[pos]) {
                        nums[pos + d] = nums[pos];
                        pos -= d;
                    }
                    // pos points just before the final insertion position
                    nums[pos + d] = cur;
                }
            }
            // placed at the end so the d == 1 insertion pass still executes
            if (d == 1)
                break;
        }
        print("Shell sort", nums);
    }
Complexity analysis

The complexity depends on the increment selection; with a good increment sequence the time complexity improves to O(n^(3/2)), better than O(n²). The space complexity is O(1).

Heap sort

Heap sort is a sorting method that uses a heap (assume a max heap here). Its basic idea: build the sequence to be sorted into a max heap, so the maximum of the whole sequence sits at the root, the top of the heap. Remove it (in practice, swap it with the last element of the heap array, so the last element becomes the maximum), then rebuild the remaining n-1 elements into a heap, yielding the second-largest of the n elements. Repeating this produces an ordered sequence.

There are two key operations: building the (max) heap, and swapping. The build operation constructs all elements in the interval [0, len-1-i] into a max heap so that the top element nums[0] is the maximum; then nums[0] and nums[len-1-i] are exchanged.

Program implementation

Build-heap operation

Build a max heap so the maximum ends up at the root (index 0).

    private void buildMaxHeap(int[] data, int lastIndex) {
        // start from the parent of lastIndex
        for (int i = (lastIndex - 1) / 2; i >= 0; i--) {
            // save the current index
            int k = i;
            // if node k has children, find the maximum of it and its children and swap
            if (2 * k + 1 <= lastIndex) {
                // left child index
                int leftIndex = 2 * k + 1;
                int rightIndex = leftIndex + 1;
                // both children exist; if rightIndex > lastIndex, k has only a left child
                if (rightIndex <= lastIndex) {
                    // left child is the maximum (covers equal children) and greater than k
                    if (data[k] < data[leftIndex] && data[leftIndex] >= data[rightIndex])
                        swap(data, k, leftIndex);
                    // right child is the maximum
                    else if (data[k] < data[rightIndex] && data[rightIndex] > data[leftIndex])
                        swap(data, k, rightIndex);
                } else {
                    // only a left child: swap only if it is greater than k
                    if (data[leftIndex] > data[k])
                        swap(data, k, leftIndex);
                }
            }
        }
    }

    // exchange function
    private void swap(int[] data, int i, int j) {
        int temp = data[i];
        data[i] = data[j];
        data[j] = temp;
    }

Main program

Heap sort: treat the array as a complete binary tree (max heap for the maximum, min heap for the minimum).

    // Average time complexity O(n log n), space complexity O(1), unstable
    public void heapSort(int[] nums) {
        int len = nums.length;
        // repeatedly build the heap
        for (int i = 0; i < len; i++) {
            buildMaxHeap(nums, len - 1 - i);
            // swap the heap top with the last element; the next build yields the next largest
            swap(nums, 0, len - 1 - i);
        }
        print("heap sort", nums);
    }
Analysis of algorithm complexity

Its running time is mainly spent on the initial heap construction and the repeated sifting when rebuilding the heap.

During heap construction, because we build the complete binary tree from the lowest, rightmost non-terminal node upward, comparing each node with its children and exchanging if necessary, each non-terminal node incurs at most two comparisons and one interchange, so the whole build has time complexity O(n).

During the formal sort, each rebuild after taking the heap-top record costs O(log i) time (the distance from a node of the complete binary tree to the root), and the heap top must be taken n-1 times, so rebuilding costs O(n log n) overall.

Overall, then, heap sort's time complexity is O(n log n). Because heap sort is insensitive to the initial ordering of the records, its best, worst, and average time complexity are all O(n log n). This is clearly far better in performance than the O(n²) of bubble, simple selection, and direct insertion sort.

In space, it needs only one temporary unit for exchanging, so the space complexity is O(1). Because comparisons and exchanges of records jump around, heap sort is also an unstable sorting method.
Because the initial heap construction requires many comparisons, heap sort is not well suited to small numbers of records.

Merge sort

Merge sort (merging sort) realizes sorting through the idea of merging. The principle: if the initial sequence contains n records, view it as n ordered subsequences of length 1; merge them pairwise to obtain ⌈n/2⌉ ordered subsequences of length 2 (or 1), where ⌈x⌉ is the smallest integer not less than x; merge pairwise again, and so on, until a single ordered sequence of length n is obtained. This method is called 2-way merge sort.
Merge sort is a typical example of divide and conquer: decompose the original problem into a series of subproblems, solve each subproblem recursively (solving it directly if it is small enough), and finally combine the subproblem results into the solution of the original problem.
There are two core operations: merging two ordered arrays, and splitting the original problem.

Algorithm implementation

Merge operation

Merge operation on [low, mid] and [mid+1, high].

    private void merge(int[] nums, int[] arr, int low, int mid, int high) {
        int i = low;
        int j = mid + 1;
        int k = 0;
        while (i <= mid && j <= high) {
            // <= keeps equal records in order, which makes the sort stable
            if (nums[i] <= nums[j]) {
                arr[k] = nums[i];
                i++;
            } else {
                arr[k] = nums[j];
                j++;
            }
            k++;
        }
        // copy over the leftovers
        while (i <= mid) {
            arr[k] = nums[i];
            i++;
            k++;
        }
        while (j <= high) {
            arr[k] = nums[j];
            j++;
            k++;
        }
        // copy back to the original array
        for (int index = 0; index < k; index++) {
            nums[low + index] = arr[index];
        }
    }

Recursive split operation

The division rule, called recursively.

    private void mSort(int[] nums, int[] arr, int low, int high) {
        if (low < high) {
            int mid = (low + high) / 2;
            mSort(nums, arr, low, mid);        // sort the left half
            mSort(nums, arr, mid + 1, high);   // sort the right half
            merge(nums, arr, low, mid, high);  // then merge the two ordered halves
        }
    }

Main program

Merge sort, by divide and conquer.

    // Time complexity O(n log n), space complexity O(n), stable
    public void mergeSort(int[] nums) {
        int[] arr = new int[nums.length];
        mSort(nums, arr, 0, nums.length - 1);
        print("merge sort", nums);
    }
Analysis of algorithm complexity

Now the time complexity of merge sort. Each level merges pairs of adjacent ordered subsequences of length h in nums and puts the results back into the array, which requires scanning all records in the sequence, costing O(n); and from the depth of the complete binary tree it is clear that the whole merge sort needs log2(n) levels, so the total time complexity is O(n log n). Moreover this is the best, worst, and average time performance of the merge sort algorithm.

Because merging needs storage equal to the original record sequence for the merged results, plus a recursion stack of depth log2(n), the space complexity is O(n + log n). In addition, since the merge compares with nums[i] <= nums[j], equal records keep their relative order, so merge sort is a stable sorting method.

Quick sort

The basic idea of quick sort: through one pass of sorting, split the records into two independent parts such that all keys in one part are smaller than those in the other; then sort the two parts separately, achieving order for the whole sequence.
It is also a typical instance of divide and conquer, decomposing a big problem into subproblems. The key operations: pivot selection (getPivot, choosing the key on which to split), the single-pass partition operation (the partition function that rearranges around the chosen pivot), and the recursive interval-splitting operation.

Algorithm implementation

Select Pivot

Get the pivot; here simply select the first element.

    private int getPivot(int[] nums, int low, int high) {
        return nums[low];
    }
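A common refinement, discussed later under the pivot-selection optimizations, is median-of-three selection. A sketch (class and method names are assumptions, not the article's code) that moves the median of the first, middle, and last elements into nums[low], so the partition below can keep pivoting on nums[low]:

```java
// Median-of-three pivot selection: a sketch, not the article's code.
// Moves the median of nums[low], nums[mid], nums[high] into nums[low].
class MedianPivot {
    static int getPivot(int[] nums, int low, int high) {
        int mid = low + (high - low) / 2;
        if (nums[high] < nums[low]) swap(nums, low, high); // now nums[low] <= nums[high]
        if (nums[mid] > nums[high]) swap(nums, mid, high); // now nums[high] is the max
        if (nums[mid] > nums[low]) swap(nums, mid, low);   // now nums[low] is the median
        return nums[low];
    }

    static void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}
```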

Single-pass partition operation

The partition function arranges nums around a pivot element: [left ≤] pivot [≤ right].

    private int partition(int[] nums, int low, int high) {
        // take the pivot element
        int pivot = getPivot(nums, low, high);
        while (low < high) {
            // from the rear, look for the first element less than the pivot
            while (low < high && nums[high] >= pivot)
                high--;
            swap(nums, low, high);
            // from the front, look for the first element greater than the pivot
            while (low < high && nums[low] <= pivot)
                low++;
            swap(nums, low, high);
        }
        // low == high: the final pivot position
        return low;
    }

Interval partition operation

Recursive call.

    private void qSort(int[] nums, int low, int high) {
        // this termination condition is indispensable
        if (low >= high)
            return;
        int pivot = partition(nums, low, high);
        // recursively sort the left interval
        qSort(nums, low, pivot - 1);
        // recursively sort the right interval
        qSort(nums, pivot + 1, high);
    }

Main program entry

Quick sort.

    // Average time complexity O(n log n), space complexity O(log n), unstable
    public void quickSort(int[] nums) {
        qSort(nums, 0, nums.length - 1);
        print("quick sort", nums);
    }
Analysis of algorithm complexity

Mathematical analysis shows that in the best case the time complexity of quick sort is O(n log n).
In the worst case, the sequence to be sorted is already in forward or reverse order; each partition then yields one subsequence that is only one record shorter than the previous one, the other being empty. Drawing the recursion tree gives a degenerate, slanted tree. This requires n-1 recursive calls, and the i-th partition needs n-i comparisons to place the pivot, so the number of comparisons is ∑(i=1..n-1)(n-i) = (n-1)+(n-2)+...+1 = n(n-1)/2, and the final time complexity is O(n²).
In the average case, mathematical induction proves the order of magnitude is O(n log n).

From the space-complexity viewpoint, the cost is mainly the stack space used by recursion. In the best case the recursion tree has depth log2(n), so the space complexity is O(log n); in the worst case n-1 recursive calls are needed, giving O(n); in the average case it is O(log n).

Because key comparisons and exchanges jump around, quick sort is an unstable sorting method.

Quick sort optimizations

1. Optimize pivot selection

Always choosing the first element as pivot is unreasonable when the sequence to be sorted is basically ordered, leading to many wasted operations.

Random pivot selection
Randomly obtain an index rnd between low and high and exchange the key L.r[rnd] with L.r[low]; this is called random pivot selection. To some extent it removes the performance bottleneck of quick sort on basically ordered sequences.
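The random selection just described can be sketched as follows, using java.util.Random (class and method names are assumptions): swap a random element into nums[low], and the existing first-element getPivot and partition then work unchanged.

```java
import java.util.Random;

// Random pivot selection: a sketch, not the article's code.
class RandomPivot {
    static final Random RND = new Random();

    static int getPivot(int[] nums, int low, int high) {
        // pick a random index in [low, high] and move that element to the front
        int rnd = low + RND.nextInt(high - low + 1);
        int temp = nums[low];
        nums[low] = nums[rnd];
        nums[rnd] = temp;
        return nums[low];
    }
}
```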

Median-of-three selection

A further improvement over random selection that avoids its worst luck: take three keys, sort them, and use the middle one as the pivot. Usually the left end, right end, and middle are used, though the three can also be chosen randomly. The middle of three certainly cannot be the smallest or the largest of the three, and the probability that all three are among the smallest or largest of the sequence is negligible, so the chance of a reasonably central pivot increases significantly.

2. Optimize away unnecessary exchanges

Temporarily save the pivot value and replace the exchange operations with direct assignments; what used to be a swap becomes a one-way move, and the pivot is written back when low and high converge at the pivot's final position. Because this saves many data-exchange operations, performance improves somewhat.

3. Optimize sorting of small arrays

If the array is very small, direct insertion sort actually beats quick sort (direct insertion is the best performer among the simple sorts). The reason is that quick sort uses recursion, whose overhead is negligible compared with its algorithmic advantage when sorting lots of data, but becomes a sledgehammer cracking a nut when the array holds only a few records. So we can add a threshold on the number of records: below the threshold, use direct insertion sort; above it, use quick sort. The threshold can be 7 or 50; some sources consider 7 more appropriate, others 50, and it can be tuned in practice.

    private static final int MAX_LENGTH_INSERT_SORT = 7;

Improved qSort.

    private void qSort1(int[] nums, int low, int high) {
        // this termination condition is indispensable
        if (low >= high)
            return;
        if (high - low > MAX_LENGTH_INSERT_SORT) {
            int pivot = partition1(nums, low, high);
            // recursively sort the left interval
            qSort1(nums, low, pivot - 1);
            // recursively sort the right interval
            qSort1(nums, pivot + 1, high);
        } else {
            insertSort(nums);
        }
    }
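qSort1 calls a partition1 that this excerpt never shows. Based on optimization 2 (save the pivot, assign instead of swap), a plausible sketch is (the name comes from the call site; the body is an assumption, not the article's code):

```java
// Plausible partition1: save the pivot value, overwrite instead of
// swapping, and drop the pivot back in when low meets high.
class Partition1Sketch {
    static int partition1(int[] nums, int low, int high) {
        int pivot = nums[low]; // nums[low] is now a movable "hole"
        while (low < high) {
            // from the rear, find the first element less than the pivot
            while (low < high && nums[high] >= pivot)
                high--;
            nums[low] = nums[high]; // assignment replaces the swap
            // from the front, find the first element greater than the pivot
            while (low < high && nums[low] <= pivot)
                low++;
            nums[high] = nums[low];
        }
        nums[low] = pivot; // low == high: the pivot's final position
        return low;
    }
}
```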
4. Optimize recursive operation

Recursion costs performance, and the qSort function has two recursive calls at its tail. If the sequence to be sorted is extremely unbalanced, the recursion depth approaches n rather than the balanced log2(n), which is no longer just a matter of speed: stack space is very limited, each recursive call costs some of it, and the more parameters a function has, the more each call costs. Reducing recursion therefore improves performance considerably.

Recursion optimization.

    private void qSort2(int[] nums, int low, int high) {
        // this termination condition is indispensable
        if (low >= high)
            return;
        if (high - low > MAX_LENGTH_INSERT_SORT) {
            while (low < high) {
                int pivot = partition1(nums, low, high);
                // recursively sort the left interval
                qSort2(nums, low, pivot - 1);
                // tail recursion: the next loop iteration partitions (nums, low, high),
                // which has the same effect as qSort2(nums, pivot + 1, high)
                low = pivot + 1;
            }
        } else {
            insertSort(nums);
        }
    }

After changing the if to a while: once the first recursive call returns, the variable low is no longer needed, so we can assign pivot+1 to low; the next loop iteration then runs partition(nums, low, high), with the same effect as qSort2(nums, pivot+1, high). The results are identical, but replacing recursion with iteration reduces stack depth and improves overall performance.

Comparison and selection of the sorting algorithms

By implementation simplicity, the 7 algorithms fall into two classes:
Simple algorithms: bubble, simple selection, direct insertion.
Improved algorithms: Shell, heap, merge, quick.

Time complexity comparison: average case

On average, the last 3 improved algorithms clearly beat Shell sort, and all far outstrip the first 3 simple algorithms.

Best case

In the best case, bubble sort and direct insertion sort do better; so if your lists are almost always already in order, you should not reach for the 4 complex algorithms.

Worst case

In the worst-case scenario, heap sort and merge sort are stronger than quick sort and the simple sorts.

Space complexity comparison

In space complexity, merge sort and quick sort have corresponding space requirements, whereas heap sort and the others require only O(1). If memory is tight, do not choose merge sort or quick sort.

Stability comparison

In terms of stability, merge sort is a good choice for applications that care deeply about keeping the order of equal keys.

Selecting by the characteristics of the data: number of records

From the number of records to sort: the smaller n is, the more appropriate a simple method; conversely, the larger n, the more appropriate an improved method. This is exactly why the quick-sort optimization added a threshold below which direct insertion sort takes over.

Records whose keys occupy large storage

Simple selection sort seems the worst performer of the 3 simple sorts, but that is not the whole story. For example, if each record carries a lot of information (say keys dozens of digits long), it occupies much storage, and moving a record becomes expensive, so the number of moves should be minimized. Comparing the numbers of moves of the 3 simple sorting algorithms:

simple selection sort becomes very advantageous here, because it uses many comparisons to determine exactly which record to move, with a minimal number of record exchanges. So for data sets that are not huge but whose records carry large keys, the simple selection algorithm has the advantage. In addition, the amount of key information has little effect on the four improved algorithms.
In short, by overall metrics, optimized quick sort is the best-performing sorting algorithm, but different situations still call for different algorithms.

Other sorts: counting sort

When the input consists of n integers between 0 and k, counting sort runs in Θ(n + k) time. Counting sort is not a comparison sort, and it can run faster than any comparison-based sorting algorithm.
Because the length of the counting array C depends on the range of the data to be sorted (the maximum minus the minimum plus 1), counting sort needs a lot of memory for arrays with a large value range. Counting sort is an excellent algorithm for sorting numbers between 0 and 100, but not appropriate for sorting names alphabetically. However, counting sort can serve as the per-digit subroutine of radix sort to handle arrays with large value ranges.
The steps of the algorithm:
1. Find the largest and smallest elements in the array to be sorted.
2. Count the number of occurrences of each value i, storing it in item i of array C.
3. Accumulate the counts (starting from the first element of C, adding each item to the one before it).
4. Fill the target array in reverse: place each element i at position C(i) of the new array, decrementing C(i) by 1 for each element placed.

Algorithm implementation

    // counting sort
    public void countSort(int[] nums) {
        int max = Integer.MIN_VALUE;
        int min = Integer.MAX_VALUE;
        // find the maximum and minimum
        for (int i = 0; i < nums.length; i++) {
            max = Math.max(max, nums[i]);
            min = Math.min(min, nums[i]);
        }
        int[] help = new int[max - min + 1];
        // count how many times each number appears
        for (int i = 0; i < nums.length; i++) {
            help[nums[i] - min]++;
        }
        int index = 0;
        for (int i = 0; i < help.length; i++) {
            while (help[i] > 0) {
                nums[index++] = i + min;
                help[i]--;
            }
        }
        print("counting sort", nums);
    }
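The implementation above rewrites nums directly and so never needs the prefix sums of steps 3 and 4; those steps describe the stable variant, which matters when sorting records by key. A sketch of that variant (class and method names are assumptions, not the article's code):

```java
import java.util.Arrays;

// Stable counting sort via prefix sums, following steps 1-4 above. A sketch.
class StableCountSort {
    static int[] sort(int[] nums) {
        // step 1: find the maximum and minimum
        int max = Integer.MIN_VALUE, min = Integer.MAX_VALUE;
        for (int v : nums) {
            max = Math.max(max, v);
            min = Math.min(min, v);
        }
        // step 2: count occurrences of each value
        int[] count = new int[max - min + 1];
        for (int v : nums)
            count[v - min]++;
        // step 3: prefix sums turn counts into end positions
        for (int i = 1; i < count.length; i++)
            count[i] += count[i - 1];
        // step 4: fill the output backwards so equal keys keep their order
        int[] out = new int[nums.length];
        for (int i = nums.length - 1; i >= 0; i--)
            out[--count[nums[i] - min]] = nums[i];
        return out;
    }
}
```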
