[Algorithm] Java data structures and algorithms for interviews: sorting

Source: Internet
Author: User
Tags: benchmark, sorts, tidy

Searching and sorting algorithms are introductory algorithm knowledge, and their classical ideas reappear in many other algorithms. Because their implementations are short, they are also widely applied, so interviewers often ask about sorting algorithms and related problems. Once you are familiar with the underlying ideas, though, flexible use is not difficult. The most frequently tested are quick sort and merge sort, and interviewers often ask candidates to write the code for these two on the spot; you should be able to write both fluently. Beyond those there are insertion sort, bubble sort, heap sort, radix sort, bucket sort, and so on.

Interviewers may also ask you to compare the pros and cons of these sorts, explain the ideas behind the various algorithms and their usage scenarios, and analyze each algorithm's time and space complexity. Search and sorting questions usually open the interview; if they are answered poorly, the interviewer may well lose interest in continuing. So to get off to a good start, master the ideas and characteristics of the common sorting algorithms, and be ready to write their code when asked.

Next, we analyze the common sorting algorithms and their usage scenarios. Due to limited space, detailed demonstrations and illustrations of some algorithms are omitted; please consult a more detailed reference.

Bubble Sort

Bubble sort is one of the simplest sorts. Its general idea is to move small numbers toward the front by comparing and exchanging adjacent elements. The process resembles a bubble rising, hence the name. As an example, bubble sort the unordered sequence 5,3,8,6,4. Bubbling from the back: 4 is compared with 6 and exchanged toward the front, so the sequence becomes 5,3,8,4,6. Likewise 4 and 8 are exchanged, giving 5,3,4,8,6; 3 and 4 are not swapped. Then 5 and 3 are exchanged, giving 3,5,4,8,6. This pass is over, and the smallest number, 3, has reached the front. Bubbling the remaining subsequence in the same way yields an ordered sequence. The time complexity of bubble sort is O(n^2).
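The first pass described above can be reproduced with a minimal standalone sketch (the class and method names here are my own, not from the article):

```java
/** Traces one backward bubbling pass on the example 5,3,8,6,4 (illustrative demo). */
public class BubblePassDemo {

    /** Bubble the smallest remaining element to position i by scanning from the back. */
    public static void bubbleOnePass(int[] arr, int i) {
        for (int j = arr.length - 1; j > i; j--) {
            if (arr[j] < arr[j - 1]) {   // the smaller element floats toward the front
                int t = arr[j]; arr[j] = arr[j - 1]; arr[j - 1] = t;
            }
        }
    }

    public static void main(String[] args) {
        int[] arr = {5, 3, 8, 6, 4};
        bubbleOnePass(arr, 0);   // after one pass the minimum is at the front
        System.out.println(java.util.Arrays.toString(arr)); // [3, 5, 4, 8, 6]
    }
}
```

The printed result matches the trace in the text: after one pass the sequence is 3,5,4,8,6.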

Implementation code:

/**
 * @Description: Bubble sort algorithm implementation
 */
public class BubbleSort {

    public static void bubbleSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        for (int i = 0; i < arr.length - 1; i++) {
            for (int j = arr.length - 1; j > i; j--) {
                if (arr[j] < arr[j - 1]) {
                    swap(arr, j - 1, j);
                }
            }
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}

Or, a version that bubbles forward, which may be a little easier to understand:

public class BubbleSort {

    public static void bubbleSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        for (int i = 1; i < arr.length; i++) {
            for (int j = 0; j < arr.length - i; j++) {
                if (arr[j] > arr[j + 1]) {
                    swap(arr, j + 1, j);
                }
            }
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}

Selection Sort

The idea of selection sort is a bit like bubble sort: each pass puts the smallest remaining element at the front. But the process differs. Bubble sort works through adjacent comparisons and exchanges, while selection sort selects from the whole remaining sequence. For example, run a simple selection sort on the unordered sequence 5,3,8,6,4: first select the smallest number, 3, and swap it with 5; after one pass the sequence is 3,5,8,6,4. Select and swap within the remaining subsequence one pass at a time, and you end up with an ordered sequence. Selection sort can actually be viewed as an optimization of bubble sort, since the goal is the same; selection sort just performs the swap only after the minimum has been determined, greatly reducing the number of exchanges. The time complexity of selection sort is O(n^2).
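The claim that selection sort greatly reduces the number of exchanges can be checked with a small swap counter (a sketch; the counting harness and names are mine):

```java
/** Counts swaps performed by bubble sort vs. selection sort (illustrative harness). */
public class SwapCountDemo {

    public static int bubbleSwaps(int[] arr) {
        int swaps = 0;
        for (int i = 0; i < arr.length - 1; i++)
            for (int j = arr.length - 1; j > i; j--)
                if (arr[j] < arr[j - 1]) {   // bubble sort swaps on every inversion of neighbors
                    int t = arr[j]; arr[j] = arr[j - 1]; arr[j - 1] = t;
                    swaps++;
                }
        return swaps;
    }

    public static int selectSwaps(int[] arr) {
        int swaps = 0;
        for (int i = 0; i < arr.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < arr.length; j++)
                if (arr[j] < arr[min]) min = j;
            if (min != i) {                  // at most one swap per pass
                int t = arr[i]; arr[i] = arr[min]; arr[min] = t;
                swaps++;
            }
        }
        return swaps;
    }

    public static void main(String[] args) {
        System.out.println(bubbleSwaps(new int[]{5, 3, 8, 6, 4})); // 5
        System.out.println(selectSwaps(new int[]{5, 3, 8, 6, 4})); // 3
    }
}
```

On the article's example sequence, bubble sort performs 5 swaps while selection sort needs only 3.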

Implementation code:

/**
 * @Description: Simple selection sort algorithm implementation
 */
public class SelectSort {

    public static void selectSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        int minIndex = 0;
        for (int i = 0; i < arr.length - 1; i++) {  // only n-1 passes are needed
            minIndex = i;
            for (int j = i + 1; j < arr.length; j++) {  // start from i+1, since minIndex defaults to i
                if (arr[j] < arr[minIndex]) {
                    minIndex = j;
                }
            }
            if (minIndex != i) {  // if minIndex is not i, a smaller value was found; swap it in
                swap(arr, i, minIndex);
            }
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}

Insertion Sort

Insertion sort is achieved not by swapping positions but by comparing each element and inserting it into its appropriate place. Most of us have played cards, especially hands with many cards, and tidied them up as we drew: take a card and find the right position to insert it. The principle of insertion sort is the same. For example, run a simple insertion sort on the unordered sequence 5,3,8,6,4. First assume the first number's position is correct (think of the first card drawn: nothing needs tidying). Then 3 goes in front of 5, and 5 moves back one, giving 3,5,8,6,4 (just like tidying cards). Then 8 stays put; 6 is inserted in front of 8, and 8 moves back one position; finally 4 is inserted in front of 5, and everything from 5 onward moves back one position. Note that when inserting a number, the numbers in front of it must already be ordered. The time complexity of simple insertion sort is also O(n^2).
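The card-tidying steps above can be traced with a short sketch that prints the array after each insertion (the demo names are illustrative):

```java
/** Traces insertion sort on 5,3,8,6,4, printing the array after each insertion (demo). */
public class InsertTraceDemo {

    public static int[] sortCopy(int[] input) {
        int[] arr = input.clone();
        for (int i = 1; i < arr.length; i++) {
            int target = arr[i];   // the "card" being placed
            int j = i;
            while (j > 0 && arr[j - 1] > target) { // shift larger prefix elements right
                arr[j] = arr[j - 1];
                j--;
            }
            arr[j] = target;       // drop the card into its slot
            System.out.println(java.util.Arrays.toString(arr));
        }
        return arr;
    }

    public static void main(String[] args) {
        sortCopy(new int[]{5, 3, 8, 6, 4});
        // prints [3, 5, 8, 6, 4], [3, 5, 8, 6, 4], [3, 5, 6, 8, 4], [3, 4, 5, 6, 8]
    }
}
```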

Implementation code:

/**
 * @Description: Simple insertion sort algorithm implementation
 */
public class InsertSort {

    public static void insertSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;

        for (int i = 1; i < arr.length; i++) {  // assume the first element's position is correct
            int j = i;
            int target = arr[i];  // the value to be inserted

            // shift elements backwards
            while (j > 0 && arr[j - 1] > target) {
                arr[j] = arr[j - 1];
                j--;
            }

            // insert
            arr[j] = target;
        }
    }
}

Quick Sort

Quick sort sounds high-end by name alone, and in practice it is indeed among the best-performing sorting algorithms. High-end as it is, its idea actually comes from bubble sort: bubble sort floats the smallest element to the top by comparing and exchanging adjacent elements, while quick sort compares and exchanges small and large numbers across the array, so that not only do small numbers bubble to the top, but large numbers also sink to the bottom.

Here's an example: to quick sort the unordered sequence 5,3,8,6,4, the idea is that the right pointer finds an element smaller than the benchmark (pivot), the left pointer finds one larger, and the two are exchanged.

5,3,8,6,4: use 5 as the benchmark for comparison. Ultimately, elements smaller than 5 end up to its left, and elements larger than 5 to its right.

5,3,8,6,4: first set two pointers i and j at the two ends. The j pointer scans first (think about why); it stops at 4, which is smaller than 5. Then i scans and stops at 8, which is larger than 5. Swap the elements at i and j.

5,3,4,6,8: then the j pointer scans again; when j reaches 4 the two pointers meet and stop. Then swap 4 with the benchmark.

4,3,5,6,8: one partition achieves the goal of smaller-than-5 on the left and larger-than-5 on the right. Then recursively sort the left and right subsequences to finally obtain an ordered sequence.

There's a question here: why must the j pointer move first? It is not absolute; it depends on the position of the benchmark, because when the two pointers finally meet, the benchmark is swapped into the meeting position. We usually choose the first number as the benchmark, which sits on the left; the number at the final meeting point is swapped with the benchmark, so it must be smaller than the benchmark. That is why the j pointer moves first, to find a number smaller than the benchmark.

Quick sort is unstable, and its average time complexity is O(n log n).
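The first partition pass on 5,3,8,6,4 can be verified with a standalone sketch of Hoare-style partitioning (the class name is mine):

```java
/** One partition pass with the first element as the benchmark (illustrative demo). */
public class PartitionDemo {

    public static int partition(int[] arr, int left, int right) {
        int pivotKey = arr[left];
        int pivotPointer = left;
        while (left < right) {
            while (left < right && arr[right] >= pivotKey) right--; // j moves first
            while (left < right && arr[left] <= pivotKey) left++;
            int t = arr[left]; arr[left] = arr[right]; arr[right] = t;
        }
        // the meeting point holds a value no larger than the pivot, so this swap is safe
        int t = arr[pivotPointer]; arr[pivotPointer] = arr[left]; arr[left] = t;
        return left;
    }

    public static void main(String[] args) {
        int[] arr = {5, 3, 8, 6, 4};
        int pos = partition(arr, 0, arr.length - 1);
        System.out.println(pos);                             // 2
        System.out.println(java.util.Arrays.toString(arr));  // [4, 3, 5, 6, 8]
    }
}
```

The output matches the worked example: after one partition the array is 4,3,5,6,8 with the benchmark 5 at index 2.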

Implementation code:

/**
 * @Description: Quick sort algorithm implementation
 */
public class QuickSort {

    // one partition pass
    public static int partition(int[] arr, int left, int right) {
        int pivotKey = arr[left];
        int pivotPointer = left;

        while (left < right) {
            while (left < right && arr[right] >= pivotKey)
                right--;
            while (left < right && arr[left] <= pivotKey)
                left++;
            swap(arr, left, right);  // move the big values right and the small values left
        }
        swap(arr, pivotPointer, left);  // finally swap the pivot into the middle
        return left;
    }

    public static void quickSort(int[] arr, int left, int right) {
        if (left >= right)
            return;
        int pivotPos = partition(arr, left, right);
        quickSort(arr, left, pivotPos - 1);
        quickSort(arr, pivotPos + 1, right);
    }

    public static void sort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        quickSort(arr, 0, arr.length - 1);
    }

    public static void swap(int[] arr, int left, int right) {
        int temp = arr[left];
        arr[left] = arr[right];
        arr[right] = temp;
    }
}

In fact, the code above can be optimized: the benchmark is already saved in pivotKey, so there is no need for a temp variable on each exchange; when moving the left and right pointers you only need to overwrite. This reduces space usage and the number of assignment operations. The optimized code is as follows:

/**
 * @Description: Quick sort algorithm implementation (optimized partition)
 */
public class QuickSort {

    /**
     * @param arr
     * @param left
     * @param right
     * @return the final pivot position
     */
    public static int partition(int[] arr, int left, int right) {
        int pivotKey = arr[left];

        while (left < right) {
            while (left < right && arr[right] >= pivotKey)
                right--;
            arr[left] = arr[right];  // move the small value to the left side
            while (left < right && arr[left] <= pivotKey)
                left++;
            arr[right] = arr[left];  // move the big value to the right side
        }
        arr[left] = pivotKey;  // finally write the pivot into the middle
        return left;
    }

    /**
     * Recursively partition the subsequences
     * @param arr
     * @param left
     * @param right
     */
    public static void quickSort(int[] arr, int left, int right) {
        if (left >= right)
            return;
        int pivotPos = partition(arr, left, right);
        quickSort(arr, left, pivotPos - 1);
        quickSort(arr, pivotPos + 1, right);
    }

    public static void sort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        quickSort(arr, 0, arr.length - 1);
    }
}

To sum up the idea of quick sort: bubbling + binary partitioning + recursive divide and conquer. Let it sink in slowly...

Heap Sort

Heap sort is a selection sort implemented via a heap; its sorting idea matches that of simple selection sort. The following uses a max-heap as the example. Note: to sort in ascending order use a max-heap; for descending order use a min-heap. The reason is that the heap-top element is swapped to the tail of the sequence.

First, two problems must be solved to implement heap sort:

How to build a heap from an unordered sequence of keys.
How to adjust the remaining elements into a new heap after outputting the heap-top element.

For the first problem: a linear array can represent a heap, and building a heap from the initial unordered sequence requires bottom-up adjustment, starting from the first non-leaf element.

For the second problem, how to adjust the heap: first swap the heap-top element with the last element. Then, because everything except the new heap-top element already satisfies the heap property in the left and right subtrees, compare the heap-top element with its left and right children and swap it with the larger child (for a max-heap), repeating this downward until reaching a leaf node. This top-down adjustment from the heap top to the leaves is called sifting.

Building a heap from an unordered sequence is a process of repeated sifting. If the sequence is viewed as a complete binary tree, the last non-terminal node is element n/2, so sifting can start from there. Here's an example:

The process of constructing the initial heap and adjusting the heap sort of the 49,38,65,97,76,13,27,49 sequence is as follows:
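Building the initial max-heap for that sequence can be reproduced with a short sketch (the class and method names are illustrative, not from the article):

```java
/** Builds the initial max-heap for 49,38,65,97,76,13,27,49 (illustrative demo). */
public class HeapBuildDemo {

    /** Sift arr[start] down so that start..end satisfies the max-heap property. */
    public static void siftDown(int[] arr, int start, int end) {
        int temp = arr[start];
        for (int i = 2 * start + 1; i <= end; i = 2 * i + 1) {
            if (i < end && arr[i] < arr[i + 1]) i++; // pick the larger child
            if (temp >= arr[i]) break;               // heap property restored
            arr[start] = arr[i];                     // move the child up
            start = i;
        }
        arr[start] = temp;
    }

    public static int[] buildHeap(int[] input) {
        int[] arr = input.clone();
        for (int i = arr.length / 2 - 1; i >= 0; i--) { // from the last non-leaf downwards
            siftDown(arr, i, arr.length - 1);
        }
        return arr;
    }

    public static void main(String[] args) {
        int[] heap = buildHeap(new int[]{49, 38, 65, 97, 76, 13, 27, 49});
        System.out.println(java.util.Arrays.toString(heap));
        // [97, 76, 65, 49, 49, 13, 27, 38]
    }
}
```

The resulting array is the level-order layout of the initial max-heap: 97 at the root, 76 and 65 as its children, and so on.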


Implementation code:

/**
 * @Description: Heap sort algorithm implementation, using a max-heap as the example.
 */
public class HeapSort {

    /**
     * Heap sifting: start..end satisfies the max-heap property everywhere except at start.
     * After adjustment, start..end is a max-heap.
     * @param arr   the array to adjust
     * @param start start index
     * @param end   end index
     */
    public static void heapAdjust(int[] arr, int start, int end) {
        int temp = arr[start];
        for (int i = 2 * start + 1; i <= end; i = 2 * i + 1) {
            // the left and right children of node i are 2*i+1 and 2*i+2
            if (i < end && arr[i] < arr[i + 1]) {  // select the larger child
                i++;
            }
            if (temp >= arr[i]) {
                break;  // already a max-heap; >= avoids moving equal keys unnecessarily
            }
            arr[start] = arr[i];  // move the child node up
            start = i;            // continue sifting down
        }
        arr[start] = temp;  // insert into the correct position
    }

    public static void heapSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        // build the initial max-heap
        for (int i = arr.length / 2; i >= 0; i--) {
            heapAdjust(arr, i, arr.length - 1);
        }
        for (int i = arr.length - 1; i > 0; i--) {
            swap(arr, 0, i);
            heapAdjust(arr, 0, i - 1);
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}

Shell Sort

Shell sort is an efficient implementation of insertion sort, also called diminishing increment sort. In simple insertion sort, if the sequence is already in order the time complexity is O(n), and if the sequence is basically ordered, direct insertion sort is highly efficient. Shell sort takes advantage of this feature. The basic idea: first divide the whole sequence of records into several subsequences and direct-insertion sort each of them; then, once the records of the whole sequence are basically ordered, run one final direct insertion sort over all records.

Here's an example:

As the sorting process above shows, the feature of Shell sort is that the subsequences are not formed by simple piecewise segmentation, but by grouping records separated by a certain increment. In the example, the first pass uses increment 5 and the second uses increment 3. Because in the first two insertion passes each record's key is compared only with keys of records in its own subsequence, records with smaller keys do not move forward one step at a time but in jumps, so that by the final pass the whole sequence is basically ordered and only a small amount of comparison and movement remains. This is why Shell sort is more efficient than direct insertion sort.
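The gapped passes can be sketched as follows (the 10-element sequence below is my own illustration of increments 5, 3, 1; it is not from the article):

```java
/** Shell sort with an explicit gap sequence 5, 3, 1 (illustrative demo). */
public class ShellGapDemo {

    /** One gapped insertion pass: each element is inserted within its own subsequence. */
    public static void gapPass(int[] arr, int d) {
        for (int i = d; i < arr.length; i++) {
            int temp = arr[i];
            int j = i - d;
            while (j >= 0 && arr[j] > temp) { // compare only within the same subsequence
                arr[j + d] = arr[j];
                j -= d;
            }
            arr[j + d] = temp;
        }
    }

    public static void main(String[] args) {
        int[] arr = {49, 38, 65, 97, 76, 13, 27, 49, 55, 4};
        for (int d : new int[]{5, 3, 1}) {
            gapPass(arr, d);
            System.out.println("gap " + d + ": " + java.util.Arrays.toString(arr));
        }
    }
}
```

After the gap-5 and gap-3 passes the array is already basically ordered, so the final gap-1 pass (a plain insertion sort) does very little work.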

The analysis of Shell sort is complex; its time complexity is a function of the increment sequence, which involves some unsolved mathematical problems. But on the basis of many experiments, when n is within a certain range the time complexity can reach O(n^1.3).

Implementation code:

/**
 * @Description: Shell sort algorithm implementation
 */
public class ShellSort {

    /**
     * One insertion pass of Shell sort
     * @param arr the array to be sorted
     * @param d   the increment
     */
    public static void shellInsert(int[] arr, int d) {
        for (int i = d; i < arr.length; i++) {
            int j = i - d;
            int temp = arr[i];  // record the value to be inserted
            while (j >= 0 && arr[j] > temp) {  // walk forward to find the position of a smaller value
                arr[j + d] = arr[j];  // shift backwards
                j -= d;
            }
            if (j != i - d)  // a smaller value exists in front
                arr[j + d] = temp;
        }
    }

    public static void shellSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        int d = arr.length / 2;
        while (d >= 1) {
            shellInsert(arr, d);
            d /= 2;
        }
    }
}


  

Merge Sort

Merge sort takes a different approach: it applies the idea of recursive divide and conquer, which makes it easier to understand. The basic idea is to recursively divide into subproblems and then merge the results. Treat the sequence to be sorted as composed of two ordered subsequences, merge those two subsequences, and treat each subsequence in turn as composed of two ordered subsequences. Viewed bottom-up, it actually merges pairs of single elements, then groups of four, and so on, finally forming one ordered sequence. The space complexity is O(n) and the time complexity is O(n log n).
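The bottom-up view (merge pairs, then groups of four, and so on) can also be written iteratively; this variant is an addition to the article's recursive version, with names of my choosing:

```java
/** Bottom-up (iterative) merge sort: merges runs of width 1, 2, 4, ... (illustrative). */
public class BottomUpMergeSort {

    public static void sort(int[] arr) {
        if (arr == null || arr.length < 2) return;
        int n = arr.length;
        int[] temp = new int[n];
        for (int width = 1; width < n; width *= 2) {       // 1-1 merges, then 2-2, 4-4, ...
            for (int left = 0; left < n; left += 2 * width) {
                int mid = Math.min(left + width, n);
                int right = Math.min(left + 2 * width, n);
                merge(arr, temp, left, mid, right);
            }
        }
    }

    /** Merge the sorted runs [left, mid) and [mid, right) back into arr. */
    private static void merge(int[] arr, int[] temp, int left, int mid, int right) {
        int i = left, j = mid, k = left;
        while (i < mid && j < right)
            temp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
        while (i < mid) temp[k++] = arr[i++];
        while (j < right) temp[k++] = arr[j++];
        System.arraycopy(temp, left, arr, left, right - left);
    }

    public static void main(String[] args) {
        int[] arr = {5, 3, 8, 6, 4};
        sort(arr);
        System.out.println(java.util.Arrays.toString(arr)); // [3, 4, 5, 6, 8]
    }
}
```

The iterative form avoids recursion entirely while doing exactly the same merges as the recursive version, just scheduled bottom-up.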

Here's an example:

Implementation code:

/**
 * @Description: Merge sort algorithm implementation
 */
public class MergeSort {

    public static void mergeSort(int[] arr) {
        mSort(arr, 0, arr.length - 1);
    }

    /**
     * Recursive partition
     * @param arr   the array
     * @param left  left pointer
     * @param right right pointer
     */
    public static void mSort(int[] arr, int left, int right) {
        if (left >= right)
            return;
        int mid = (left + right) / 2;
        mSort(arr, left, mid);        // recursively sort the left half
        mSort(arr, mid + 1, right);   // recursively sort the right half
        merge(arr, left, mid, right); // merge
    }

    /**
     * Merge the two ordered subarrays [left, mid] and [mid+1, right]
     * @param arr   the array to be merged
     * @param left  left pointer
     * @param mid   middle pointer
     * @param right right pointer
     */
    public static void merge(int[] arr, int left, int mid, int right) {
        int[] temp = new int[right - left + 1];  // intermediate array
        int i = left;
        int j = mid + 1;
        int k = 0;
        while (i <= mid && j <= right) {
            if (arr[i] <= arr[j]) {
                temp[k++] = arr[i++];
            } else {
                temp[k++] = arr[j++];
            }
        }
        while (i <= mid) {
            temp[k++] = arr[i++];
        }
        while (j <= right) {
            temp[k++] = arr[j++];
        }
        for (int p = 0; p < temp.length; p++) {
            arr[left + p] = temp[p];
        }
    }
}

Counting Sort

If an interviewer asks you to write a sorting algorithm with O(n) time complexity, don't immediately say it is impossible. Although the lower bound for comparison-based sorting is O(n log n), there are sorts with linear time complexity; the prerequisite is that the numbers to be sorted are integers within a certain range, and counting sort also needs relatively more auxiliary space. The basic idea: use each number to be sorted as a subscript into a count array and tally how many times each number occurs; then output the numbers in order to obtain the sorted sequence.

Implementation code:

import java.util.Arrays;

/**
 * @Description: Counting sort algorithm implementation
 */
public class CountSort {

    public static void countSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;

        int max = max(arr);

        int[] count = new int[max + 1];
        Arrays.fill(count, 0);

        for (int i = 0; i < arr.length; i++) {
            count[arr[i]]++;
        }

        int k = 0;
        for (int i = 0; i <= max; i++) {
            for (int j = 0; j < count[i]; j++) {
                arr[k++] = i;
            }
        }
    }

    public static int max(int[] arr) {
        int max = Integer.MIN_VALUE;
        for (int ele : arr) {
            if (ele > max)
                max = ele;
        }
        return max;
    }
}
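The version above rebuilds the array by emitting each value count[i] times, which works for plain integers but discards stability. A commonly used stable variant based on prefix sums can be sketched as follows (this variant, its class name, and its details are my own addition, not from the article):

```java
import java.util.Arrays;

/** Stable counting sort using prefix sums (sketch; assumes non-negative ints). */
public class StableCountSort {

    public static int[] sort(int[] arr) {
        int max = Arrays.stream(arr).max().orElse(0);
        int[] count = new int[max + 1];
        for (int v : arr) count[v]++;
        for (int i = 1; i <= max; i++) count[i] += count[i - 1]; // prefix sums = end positions
        int[] out = new int[arr.length];
        for (int i = arr.length - 1; i >= 0; i--) {  // walk backwards to keep equal keys stable
            out[--count[arr[i]]] = arr[i];
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{5, 3, 8, 6, 4, 3})));
        // [3, 3, 4, 5, 6, 8]
    }
}
```

Stability matters when the keys carry satellite data (records sorted by one field), which is also why this variant is the building block of radix sort.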

Bucket Sort

Bucket sort is an improvement and generalization of counting sort, but much material on the Internet confuses counting sort and bucket sort. In fact, bucket sort is considerably more complex than counting sort.

The analysis and explanation of bucket sort below borrows (with changes) from this article: http://hxraid.iteye.com/blog/647759

The basic idea of bucket sort:

Suppose there is a keyword sequence K[1..n] of length n. First divide the sequence into m sub-ranges (buckets). Then, based on some mapping function, map each keyword k of the sequence into the i-th bucket (that is, to subscript i of the bucket array B), so that keyword k becomes an element of B[i] (each bucket B[i] is a sequence of size roughly n/m). Next, sort all the elements in each bucket B[i] (quick sort can be used). Finally, enumerate and output the entire contents of B[0]...B[m-1] in order to get an ordered sequence. Writing bIndex = f(k), bIndex is the subscript into bucket array B (the bIndex-th bucket) and k is a keyword of the sequence to be sorted. The key to bucket sort's efficiency is the mapping function, which must be order-preserving: if keyword k1 is not greater than k2, then f(k1) must be no greater than f(k2).

Here's an example:

Suppose the sequence K = {49, 38, 35, 97, 76, 73, 27, 49} is to be sorted, and the data all lie between 1 and 100. We set up 10 buckets and choose the mapping function f(k) = k/10. The first keyword, 49, is placed into the 4th bucket (49/10 = 4). All the keywords are dropped into buckets in turn, and each non-empty bucket is quick sorted, as shown in the figure. Outputting the data of each B[i] in order then yields an ordered sequence.
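The mapping step for that example can be checked directly (a tiny sketch; the class name is mine):

```java
/** Maps the example keys into 10 buckets with f(k) = k / 10 (illustrative demo). */
public class BucketMapDemo {

    public static int f(int k) {
        return k / 10;  // bucket index for keys in [0, 100)
    }

    public static void main(String[] args) {
        int[] keys = {49, 38, 35, 97, 76, 73, 27, 49};
        for (int k : keys) {
            System.out.println(k + " -> bucket " + f(k));
        }
        // 49 -> bucket 4, 38 -> bucket 3, 35 -> bucket 3, 97 -> bucket 9, ...
    }
}
```

Note that integer division makes the mapping order-preserving: a larger key can never land in a lower-numbered bucket.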

Bucket Sorting Analysis:

Bucket sort exploits the mapping relationship of a function, eliminating almost all comparison work. In effect, computing the f(k) values splits the large data set in advance into basically ordered data blocks (buckets), playing the role that partitioning does in quick sort, subsequences do in Shell sort, and subproblems do in merge sort. Only a small amount of comparison-based sorting within each bucket remains.

The time complexity of bucket sorting N keywords divides into two parts:

(1) Computing the bucket mapping function for each keyword in a loop; this time complexity is O(N).

(2) Sorting all the data in each bucket with a comparison-based sorting algorithm; its time complexity is ∑ O(Ni * logNi), where Ni is the amount of data in the i-th bucket.

Clearly, part (2) is the determining factor in the performance of bucket sort. Minimizing the amount of data in each bucket is the only way to improve efficiency (because the best average time complexity achievable by comparison-based sorting is O(N*logN)). Therefore, we should try to do the following two things:

(1) The mapping function f(k) should distribute the N data items evenly into the M buckets, so that each bucket holds about N/M items.

(2) Increase the number of buckets as much as possible. In the extreme case each bucket gets only one item, which completely avoids comparison sorting within buckets. Of course, this is not easy to do: with large data volumes, such an f(k) would make the bucket array enormous and waste a great deal of space. This is a trade-off between time and space cost.

For N items sorted into M buckets, with an average of N/M items per bucket, the average time complexity of bucket sort is:

O(N) + O(M * (N/M) * log(N/M)) = O(N + N * (logN - logM)) = O(N + N*logN - N*logM)

When N = M, that is, in the extreme case of one item per bucket, bucket sort reaches its best efficiency of O(N).

Summary: the average time complexity of bucket sort is linear, O(N + C), where C = N * (logN - logM). The larger the bucket count M is relative to N, the higher the efficiency, with the best time complexity reaching O(N). Of course, the space complexity of bucket sort is O(N + M); if the input data is very large and the bucket count is also very large, the space cost is undoubtedly expensive. In addition, bucket sort is stable, provided a stable sort is used within each bucket.

Implementation code:

import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

/**
 * @Description: Bucket sort algorithm implementation
 */
public class BucketSort {

    public static void bucketSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;

        int bucketNums = 10;  // defaults to 10 here, assuming the data lies in [0, 100)
        List<LinkedList<Integer>> buckets = new ArrayList<LinkedList<Integer>>();  // the bucket array

        for (int i = 0; i < bucketNums; i++) {
            buckets.add(new LinkedList<Integer>());  // create each bucket
        }

        // map each element into its bucket
        for (int i = 0; i < arr.length; i++) {
            buckets.get(f(arr[i])).add(arr[i]);
        }

        // sort the data in each non-empty bucket
        for (int i = 0; i < buckets.size(); i++) {
            if (!buckets.get(i).isEmpty()) {
                Collections.sort(buckets.get(i));  // sort within each bucket
            }
        }

        // restore the ordered array
        int k = 0;
        for (List<Integer> bucket : buckets) {
            for (int ele : bucket) {
                arr[k++] = ele;
            }
        }
    }

    // the mapping function
    public static int f(int x) {
        return x / 10;
    }
}