| Sorting method | Worst-case time | Average time complexity | Stability | Space complexity |
| --- | --- | --- | --- | --- |
| Bubble sort | O(n²) | O(n²) | Stable | O(1) |
| Quick sort | O(n²) | O(n·log₂n) | Not stable | O(log₂n) ~ O(n) |
| Selection sort | O(n²) | O(n²) | Not stable | O(1) |
| Binary tree sort | O(n²) | O(n·log₂n) | Depends | O(n) |
| Insertion sort | O(n²) | O(n²) | Stable | O(1) |
| Heap sort | O(n·log₂n) | O(n·log₂n) | Not stable | O(1) |
| Shell sort | O(n²) | ≈O(n^1.3) | Not stable | O(1) |
1. Time complexity
(1) Time frequency. The time an algorithm takes to execute cannot be computed theoretically; it can only be measured by running the algorithm on a machine. But we neither can nor need to test every algorithm; we only need to know which algorithm takes more time and which takes less. The time an algorithm spends is proportional to the number of times its statements are executed: the algorithm whose statements execute more times takes more time. The number of times the statements in an algorithm are executed is called the statement frequency or time frequency, written T(n).
(2) Time complexity. In the time frequency just defined, n is called the problem size, and as n changes, the time frequency T(n) changes with it. Sometimes we want to know the law by which it changes, and for this we introduce the concept of time complexity. In general, the number of times the basic operation of an algorithm is repeated is a function of the problem size n, written T(n). If there is an auxiliary function f(n) such that, as n approaches infinity, the limit of T(n)/f(n) is a nonzero constant, then f(n) is a function of the same order of magnitude as T(n). We write T(n) = O(f(n)) and call O(f(n)) the asymptotic time complexity of the algorithm, or simply its time complexity.
If the number of statement executions in an algorithm is a constant, its time complexity is O(1). Different time frequencies may also share the same time complexity: for example, T(n) = n² + 3n + 4 and T(n) = 4n² + 2n + 1 have different frequencies but the same complexity, O(n²). In order of increasing magnitude, the common time complexities are: constant order O(1), logarithmic order O(log₂n), linear order O(n), linear-logarithmic order O(n·log₂n), square order O(n²), cubic order O(n³), ..., k-th power order O(n^k), and exponential order O(2^n). As the problem size n grows, the higher the time complexity, the lower the efficiency of the algorithm.
(3) Evaluating time performance by asymptotic time complexity. The time performance of an algorithm is mainly judged by the order of magnitude of its time complexity, that is, by its asymptotic time complexity (a small sketch follows).
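To make the definition concrete, here is a minimal Python sketch (the function name and test sizes are illustrative, not from the original article): it counts the statement frequency T(n) of a double loop and shows that T(n)/n² approaches a nonzero constant, which is exactly the condition T(n) = O(n²).

```python
# Minimal sketch: count the "statement frequency" T(n) of a double loop
# and compare it with n^2; a ratio tending to a nonzero constant is the
# condition T(n) = O(n^2) described above.
def frequency(n):
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1          # the basic operation, executed n*n times
    return count

for n in (10, 100, 1000):
    t = frequency(n)
    print(n, t, t / n ** 2)     # the ratio settles at a nonzero constant
```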
2. Space complexity. By analogy with time complexity, the space complexity S(n) of an algorithm is defined as the storage space the algorithm consumes, and it too is a function of the problem size n, written S(n) = O(f(n)). Asymptotic space complexity is often shortened to simply space complexity.
Space complexity measures how much storage an algorithm occupies while it runs. The memory an algorithm occupies consists of three parts: the space for the algorithm itself, the space for its input and output data, and the space it uses temporarily during execution. The space occupied by the input and output data is determined by the problem being solved and is passed in through the parameter list of the calling function; it does not vary with the algorithm. The space occupied by the algorithm itself is proportional to its length; to compress this space, one must write a shorter algorithm. The temporary working space varies from algorithm to algorithm. Some algorithms need only a small, constant number of temporary work units that do not grow with the size of the problem; such an algorithm is called "in-place" and is economical with memory, like the algorithms described in this section. Other algorithms need a number of temporary work units that depends on the problem size n and grows as n grows; when n is large they occupy many storage units, as with the quick sort and merge sort algorithms described in Chapter 9.
If the space complexity of an algorithm is a constant, i.e., it does not change with the size n of the data being processed, it is written O(1); if it is proportional to the base-2 logarithm of n, it is written O(log₂n); and if it is linearly proportional to n, it is written O(n). If a formal parameter is an array, space need only be allocated for the one address pointer passed from the actual argument, that is, one machine word; if a formal parameter is a reference, space need only be allocated for the address of the corresponding argument variable, through which the system refers to the argument automatically.
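As a small illustration of the O(1)-versus-O(n) distinction above, the following Python sketch (the names are illustrative) contrasts an in-place reversal, which needs only a constant number of temporary work units, with a copying reversal, whose auxiliary space grows linearly with n.

```python
# In-place reversal: O(1) auxiliary space (two indices, one implicit temp).
def reverse_in_place(a):
    i, j = 0, len(a) - 1
    while i < j:
        a[i], a[j] = a[j], a[i]
        i, j = i + 1, j - 1
    return a

# Copying reversal: O(n) auxiliary space (a second list of length n).
def reverse_copy(a):
    return a[::-1]
```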
Common internal sorting methods are: exchange sorts (bubble sort, quick sort), selection sorts (simple selection sort, heap sort), insertion sorts (direct insertion sort, Shell sort), merge sort, and radix sort (single-keyword and multi-keyword).
I. Bubble Sort
1. Basic ideas:
Compare the keys of the data elements to be sorted pairwise (adjacent elements); whenever two elements are found in reverse order, exchange them, and continue until no reversed pairs remain.
2. Sorting process:
Imagine the array r[1..n] to be sorted standing vertically, with each data element a bubble whose weight is its key. By the principle that a light bubble may not sit below a heavy one, scan the array from the bottom up; whenever two bubbles are found violating the principle, let the lighter one float upward. Repeat until any two bubbles have the lighter one above and the heavier one below (a code sketch follows the example).
"Example":
49 13 13 13 13 13 13 13
38 49 27 27 27 27 27 27
65 38 49 38 38 38 38 38
97 65 38 49 49 49 49 49
76 97 65 49 49 49 49 49
13 76 97 65 65 65 65 65
27 27 76 97 76 76 76 76
49 49 49 76 97 97 97 97
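A minimal Python sketch of the process just traced (illustrative code, not from the original article): each pass scans from the bottom of the array, and a "light" element floats one position at a time toward the top.

```python
def bubble_sort(r):
    n = len(r)
    for i in range(n - 1):             # after pass i, r[i] holds its final value
        for j in range(n - 1, i, -1):  # scan upward from the bottom
            if r[j] < r[j - 1]:        # a light bubble below a heavy one
                r[j], r[j - 1] = r[j - 1], r[j]
    return r

print(bubble_sort([49, 38, 65, 97, 76, 13, 27, 49]))
# [13, 27, 38, 49, 49, 65, 76, 97]
```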
II. Quick Sort
1. Basic ideas:
Take any data element of the current unordered region r[1..h] as the "datum" for comparison (call it x), and use it to divide the region into two smaller unordered regions, r[1..i-1] and r[i+1..h]: every data element in the left subregion is less than or equal to the datum, every data element in the right subregion is greater than or equal to it, and the datum x lands in its final sorted position, so that r[1..i-1] ≤ x.key ≤ r[i+1..h] (1 ≤ i ≤ h). Whenever r[1..i-1] or r[i+1..h] is non-empty, divide it in the same way, until every data element of every unordered subregion is sorted (a code sketch follows the trace).
2. Sorting process:
"Example":
Initial keywords [49 38 65 97 76 13 27 49]
After the first exchange [27 38 65 97 76 13 49 49]
After the second exchange [27 38 49 97 76 13 65 49]
j scans left, position unchanged; after the third exchange [27 38 13 97 76 49 65 49]
i scans right, position unchanged; after the fourth exchange [27 38 13 49 76 97 65 49]
j scans left; partition complete [27 38 13 49 76 97 65 49]
(one partition pass)
Initial keywords [49 38 65 97 76 13 27 49]
After the first pass [27 38 13] 49 [76 97 65 49]
After the second pass [13] 27 [38] 49 [49 65] 76 [97]
After the third pass 13 27 38 49 49 [65] 76 97
Final sorted result 13 27 38 49 49 65 76 97
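The partition traced above can be sketched in Python as follows (an illustrative sketch of the classic two-pointer scheme, not the article's own code): the first element is the datum x, j scans left for a key smaller than x, i scans right for a key larger than x, and x finally drops into position i.

```python
def quick_sort(r, low, high):
    if low >= high:
        return
    x = r[low]                         # the "datum" for this partition
    i, j = low, high
    while i < j:
        while i < j and r[j] >= x:     # j scans left
            j -= 1
        r[i] = r[j]
        while i < j and r[i] <= x:     # i scans right
            i += 1
        r[j] = r[i]
    r[i] = x                           # datum lands in its final position
    quick_sort(r, low, i - 1)          # divide the left unordered region
    quick_sort(r, i + 1, high)         # divide the right unordered region

a = [49, 38, 65, 97, 76, 13, 27, 49]
quick_sort(a, 0, len(a) - 1)
print(a)  # [13, 27, 38, 49, 49, 65, 76, 97]
```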
III. Simple Selection Sort
1. Basic ideas:
On each pass, select the smallest (or largest) element from the data elements still to be sorted and append it to the tail of the already-ordered sequence, until all data elements are sorted (a code sketch follows the trace).
2. Sorting process:
"Example":
Initial keywords [49 38 65 97 76 13 27 49]
After the first pass 13 [38 65 97 76 49 27 49]
After the second pass 13 27 [65 97 76 49 38 49]
After the third pass 13 27 38 [97 76 49 65 49]
After the fourth pass 13 27 38 49 [76 97 65 49]
After the fifth pass 13 27 38 49 49 [97 65 76]
After the sixth pass 13 27 38 49 49 65 [97 76]
After the seventh pass 13 27 38 49 49 65 76 [97]
Final sorted result 13 27 38 49 49 65 76 97
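A minimal Python sketch of the passes above (illustrative, not from the original): each pass records the index of the smallest remaining key and swaps it to the front of the unordered region; the long-distance swap is what makes the method unstable.

```python
def selection_sort(r):
    n = len(r)
    for i in range(n - 1):
        k = i
        for j in range(i + 1, n):      # find the smallest remaining key
            if r[j] < r[k]:
                k = j
        if k != i:
            r[i], r[k] = r[k], r[i]    # swap it into its final position
    return r

print(selection_sort([49, 38, 65, 97, 76, 13, 27, 49]))
# [13, 27, 38, 49, 49, 65, 76, 97]
```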
IV. Heap Sort
1. Basic ideas:
Heap sort is a tree-based selection sort. It treats r[1..n] as the sequential storage of a complete binary tree and exploits the intrinsic relation between parent and child nodes in that tree to select the smallest (or largest) element of the current unordered region.
2. Heap definition: a sequence of n elements k1, k2, ..., kn is called a (small root) heap if and only if it satisfies:
ki ≤ k2i and ki ≤ k2i+1 (1 ≤ i ≤ ⌊n/2⌋)
A heap is essentially a complete binary tree satisfying the property that the keyword of any non-leaf node is no greater than the keywords of its children. For example, the sequence 10, 15, 56, 25, 30, 70 is a heap, corresponding to the complete binary tree shown in the figure. The root node of this heap (called the heap top) has the smallest keyword; we call such a heap a small root heap. Conversely, if the keyword of every non-leaf node of a complete binary tree is greater than or equal to the keywords of its children, it is called a big root heap.
3. Sorting process:
Heap sort uses a small root heap (or big root heap) to select the record with the smallest (or largest) keyword from the current unordered region. Here we sort with a big root heap. The basic operation of each pass is to adjust the current unordered region into a big root heap, pick off the heap-top record, which has the largest keyword, and exchange it with the last record of the unordered region. In this way, opposite to direct selection sort, the ordered region forms at the tail of the original record region and gradually expands to cover the whole region (a code sketch follows the example).
"Example": Building a heap on a keyword sequence 42,13,91,23,24,16,05,88
V. Direct Insertion Sort
1. Basic ideas:
Each time, insert one data element to be sorted into the proper position of the previously sorted subsequence so that the subsequence stays ordered, until all the data elements to be sorted have been inserted (a code sketch follows the trace).
2. Sorting process:
"Example":
[Initial keywords] [49] 38 65 97 76 13 27 49
j=2 (38) [38 49] 65 97 76 13 27 49
j=3 (65) [38 49 65] 97 76 13 27 49
j=4 (97) [38 49 65 97] 76 13 27 49
j=5 (76) [38 49 65 76 97] 13 27 49
j=6 (13) [13 38 49 65 76 97] 27 49
j=7 (27) [13 27 38 49 65 76 97] 49
j=8 (49) [13 27 38 49 49 65 76 97]
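A minimal Python sketch of the j = 2..8 passes above (illustrative): the j-th key is saved, larger keys in the sorted prefix shift one slot right, and the saved key drops into its place.

```python
def insertion_sort(r):
    for j in range(1, len(r)):
        key = r[j]                     # the element to insert on this pass
        i = j - 1
        while i >= 0 and r[i] > key:   # shift larger keys one slot right
            r[i + 1] = r[i]
            i -= 1
        r[i + 1] = key                 # equal keys keep their order: stable
    return r

print(insertion_sort([49, 38, 65, 97, 76, 13, 27, 49]))
# [13, 27, 38, 49, 49, 65, 76, 97]
```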
VI. Shell Sort
1. Sorting ideas:
First take an integer d1 less than n as the first increment and divide all the records of the file into d1 groups, placing records whose indices differ by a multiple of d1 in the same group. Perform direct insertion sort within each group; then take a second increment d2 < d1 and repeat the grouping and sorting, and so on, until the increment dt = 1 is reached and all records fall into one group for a final direct insertion sort. The method is essentially a grouped insertion sort (a code sketch follows the trace).
2. Sorting process:
[Initial keywords] 72 28 51 17 96 62 87 33 45 24
d1 = n/2 = 5: 62 28 33 17 24 72 87 51 45 96
d2 = ⌈d1/2⌉ = 3: 17 24 33 62 28 45 87 51 72 96
d3 = 1: 17 24 28 33 45 51 62 72 87 96
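A Python sketch of the grouped insertion just traced (illustrative; it halves the gap with integer division, so it uses d = 5, 2, 1 rather than the trace's 5, 3, 1, but the idea is the same).

```python
def shell_sort(r):
    n = len(r)
    d = n // 2                         # first increment d1
    while d >= 1:
        for j in range(d, n):          # insertion sort within each group
            key, i = r[j], j - d
            while i >= 0 and r[i] > key:
                r[i + d] = r[i]        # shift within the group, stride d
                i -= d
            r[i + d] = key
        d //= 2                        # next, smaller increment
    return r

print(shell_sort([72, 28, 51, 17, 96, 62, 87, 33, 45, 24]))
# [17, 24, 28, 33, 45, 51, 62, 72, 87, 96]
```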
VII. Merge Sort
1. Sorting ideas:
Place two sorted subfiles in adjacent positions of the same vector, r[low..m] and r[m+1..high]; merge them into a local temporary vector r1, and copy r1 back to r[low..high] when the merge is finished (a code sketch follows the example).
2. Sorting process:
"Example":
Initial keywords [46] [38] [56] [30] [88] [80] [38]
After the first merge pass [38 46] [30 56] [80 88] [38]
After the second merge pass [30 38 46 56] [38 80 88]
Final merged result [30 38 38 46 56 80 88]
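A Python sketch of the merge step and the sort built on it (illustrative; the trace above works bottom-up over adjacent runs, while this sketch recurses top-down, but both rest on the same merge of r[low..m] and r[m+1..high] through a temporary vector).

```python
def merge(r, low, m, high):
    r1, i, j = [], low, m + 1          # temporary output vector
    while i <= m and j <= high:
        if r[i] <= r[j]:               # <= keeps equal keys in order: stable
            r1.append(r[i]); i += 1
        else:
            r1.append(r[j]); j += 1
    r1.extend(r[i:m + 1])              # one of these two tails is empty
    r1.extend(r[j:high + 1])
    r[low:high + 1] = r1               # copy the merged run back

def merge_sort(r, low, high):
    if low >= high:
        return
    m = (low + high) // 2
    merge_sort(r, low, m)
    merge_sort(r, m + 1, high)
    merge(r, low, m, high)

a = [46, 38, 56, 30, 88, 80, 38]
merge_sort(a, 0, len(a) - 1)
print(a)  # [30, 38, 38, 46, 56, 80, 88]
```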
VIII. Radix Sort
1. Sorting ideas:
(1) Divide the data items into 10 groups according to the value of their ones digit.
(2) Rearrange the 10 groups: all items ending in 0 come first, then those ending in 1, and so on up to those ending in 9. This step is the first sub-sort.
(3) In the second sub-sort, divide all the data items into 10 groups again, this time by the value of the tens digit. The regrouping must not disturb the previous order: within each group, items keep the relative order the first sub-sort gave them.
(4) Then merge the 10 groups again: first the items whose tens digit is 0, then those whose tens digit is 1, and so on up to 9.
(5) Repeat the process for the remaining digit positions; if some items have fewer digits than others, treat their missing high digits as 0 (see the code sketch after the trace below).
2. Sorting process:
"Example":
Initial keywords 421 240 035 532 305 430 124
After the first pass [240 430] [421] [532] [124] [035 305]
After the second pass [305] [421 124] [430 532 035] [240]
Final sorted result [035] [124] [240] [305] [421 430] [532]
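A Python sketch of the bucket passes traced above (illustrative): one stable distribution-and-collection pass per digit, from the ones digit up; missing high digits behave as 0 automatically.

```python
def radix_sort(r, digits=3):
    base = 1
    for _ in range(digits):            # ones, tens, hundreds, ...
        buckets = [[] for _ in range(10)]
        for key in r:
            buckets[(key // base) % 10].append(key)   # distribute
        r = [key for b in buckets for key in b]       # collect, stably
        base *= 10
    return r

print(radix_sort([421, 240, 35, 532, 305, 430, 124]))
# [35, 124, 240, 305, 421, 430, 532]
```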
This article is reprinted; I have unfortunately forgotten the original source.