2-3-1:
2-3-2:
MERGE2(A, p, q, r)
    n1 = q - p + 1
    n2 = r - q
    let L[1..n1] and R[1..n2] be new arrays
    for i = 1 to n1
        L[i] = A[p + i - 1]
    for j = 1 to n2
        R[j] = A[q + j]
    i = 1
    j = 1
    k = p
    while i <= n1 and j <= n2
        if L[i] <= R[j]
            A[k] = L[i]
            i = i + 1
        else
            A[k] = R[j]
            j = j + 1
        k = k + 1
    while j <= n2            // L is exhausted; copy the rest of R
        A[k] = R[j]
        k = k + 1
        j = j + 1
    while i <= n1            // R is exhausted; copy the rest of L
        A[k] = L[i]
        k = k + 1
        i = i + 1
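The sentinel-free merge above can be sketched in C++ as follows (a sketch of my own, using 0-based indices and `std::vector<int>`; the function name `merge2` is an assumption, not from the original):

```cpp
#include <vector>

// Merges the sorted ranges a[p..q] and a[q+1..r] (inclusive, 0-based) in place,
// without sentinels: when one half runs out, the remainder of the other half
// is copied back directly.
void merge2(std::vector<int>& a, int p, int q, int r) {
    std::vector<int> L(a.begin() + p, a.begin() + q + 1);      // left half
    std::vector<int> R(a.begin() + q + 1, a.begin() + r + 1);  // right half
    int i = 0, j = 0, k = p;
    while (i < (int)L.size() && j < (int)R.size())
        a[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < (int)L.size()) a[k++] = L[i++];  // copy the rest of L
    while (j < (int)R.size()) a[k++] = R[j++];  // copy the rest of R
}
```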
2-3-4:
// Input: A[1..n]
RECURSIVE-INSERTION-SORT(A, n)
    if n > 1
        RECURSIVE-INSERTION-SORT(A, n - 1)
        key = A[n]
        i = n - 1
        while i > 0 and A[i] > key
            A[i + 1] = A[i]
            i = i - 1
        A[i + 1] = key
The above recursive version is basically the same as the iterative version.
T(n) = Θ(1)              if n = 1
T(n) = T(n - 1) + Θ(n)   if n > 1
which solves to T(n) = Θ(n²).
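The pseudocode above translates directly to C++ (a sketch of my own, 0-based; the function name is an assumption):

```cpp
// Recursive insertion sort: sorts a[0..n-1] by first sorting a[0..n-2]
// recursively, then inserting a[n-1] into the sorted prefix.
void recursive_insertion_sort(int* a, int n) {
    if (n > 1) {
        recursive_insertion_sort(a, n - 1);  // sort the first n-1 elements
        int key = a[n - 1];
        int i = n - 2;
        while (i >= 0 && a[i] > key) {       // shift larger elements right
            a[i + 1] = a[i];
            --i;
        }
        a[i + 1] = key;                      // insert the last element
    }
}
```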
2-3-5:
// Two versions of binary search, recursive and iterative.
// The input is A[low..high] and the value v to be searched for.
ITERATIVE-BINARY-SEARCH(A, v, low, high)
    while low <= high
        mid = floor((low + high) / 2)
        if v == A[mid]
            return mid
        else if v > A[mid]
            low = mid + 1
        else
            high = mid - 1
    return NIL

RECURSIVE-BINARY-SEARCH(A, v, low, high)
    if low > high
        return NIL
    mid = floor((low + high) / 2)
    if v == A[mid]
        return mid
    else if v > A[mid]
        return RECURSIVE-BINARY-SEARCH(A, v, mid + 1, high)
    else
        return RECURSIVE-BINARY-SEARCH(A, v, low, mid - 1)
C++ implementation (iterative version):
// Requires that T be comparable.
// Searches a[low..high] (inclusive) for v; returns its index, or -1 if not found.
template <typename T>
int binary_search(T *a, T v, int low, int high) {
    while (low <= high) {
        int mid = low + (high - low) / 2;  // avoids overflow of low + high
        if (v == a[mid])
            return mid;
        else if (v > a[mid])
            low = mid + 1;   // note mid + 1: if low = mid were used with the
        else                 // condition low <= high, the loop could never
            high = mid - 1;  // terminate, e.g. searching 3.5 in {0,1,2,3,4,5}
    }
    return -1;  // not found
}
Based on the recursive version, the recurrence is:
T(n) = T(n/2) + Θ(1)
Drawing the recursion tree gives T(n) = Θ(lg n).
2-3-6:
No. Binary search can locate the insertion position in Θ(lg j) time, but shifting the elements to make room for the key still takes Θ(j) time in the worst case, so the overall worst case remains Θ(n²).
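To illustrate, here is a sketch (my own, not from the original) of insertion sort with the comparison step replaced by binary search; the shifting step still dominates:

```cpp
#include <algorithm>
#include <vector>

// Binary insertion sort: the position of each key in the sorted prefix is
// found by binary search (Θ(lg j), via std::upper_bound), but shifting the
// elements to make room is still Θ(j), so the worst case remains Θ(n^2).
void binary_insertion_sort(std::vector<int>& a) {
    for (size_t j = 1; j < a.size(); ++j) {
        int key = a[j];
        // Θ(lg j): binary-search the sorted prefix a[0..j-1] for the position.
        auto pos = std::upper_bound(a.begin(), a.begin() + j, key);
        // Θ(j): shift a[pos..j-1] one slot to the right.
        std::move_backward(pos, a.begin() + j, a.begin() + j + 1);
        *pos = key;
    }
}
```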
2-3-7:
First sort S in Θ(n lg n). Then for each element i = 1 to n, let y = x − S[i] and binary-search for y in S[1..n] (taking care not to match S[i] itself), which is also Θ(n lg n) in total. The overall running time is therefore Θ(n lg n).
A better method can be found at http://www.cnblogs.com/liao-xiao-chao/articles/2351925.html.
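The sort-then-binary-search approach for exercise 2.3-7 can be sketched as follows (my own sketch; the function name and signature are assumptions):

```cpp
#include <algorithm>
#include <vector>

// Decides whether S contains two distinct elements whose sum is exactly x.
// Sort in Θ(n lg n), then for each s[i] binary-search for x - s[i] among the
// elements after it (so the same element is never used twice): Θ(n lg n) total.
bool has_pair_with_sum(std::vector<int> s, int x) {
    std::sort(s.begin(), s.end());
    for (size_t i = 0; i + 1 < s.size(); ++i) {
        if (std::binary_search(s.begin() + i + 1, s.end(), x - s[i]))
            return true;
    }
    return false;
}
```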
Problems:
2-1:
Pseudocode:
MERGE-SORT(A, p, r)
    if (r - p) > k
        q = floor((p + r) / 2)
        MERGE-SORT(A, p, q)
        MERGE-SORT(A, q + 1, r)
        MERGE(A, p, q, r)
    else
        INSERTION-SORT(A, p, r)
The insertion sort used here differs from the earlier one because it takes subarray bounds as parameters:
const int K = 5;  // here K is chosen as 5

template <typename T>
void insertion_sort(T *a, int low, int high) {
    for (int j = low + 1; j <= high; ++j) {
        T key = a[j];  // insert a[j] into the sorted prefix a[low..j-1]
        int i = j - 1;
        while (i >= low && a[i] > key) {
            a[i + 1] = a[i];
            i = i - 1;
        }
        a[i + 1] = key;
    }
}

void merge_sort(int *a, int p, int r) {
    if (r - p >= K) {
        int q = (p + r) / 2;
        merge_sort(a, p, q);
        merge_sort(a, q + 1, r);
        merge(a, p, q, r);
    } else {
        insertion_sort(a, p, r);
    }
}
The code for merge(a, p, q, r) can be found in the earlier article on merge sort.
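For completeness, here is a sketch of that merge for int arrays (my own, using INT_MAX as the sentinel, so the input must not contain INT_MAX):

```cpp
#include <climits>
#include <vector>

// Merges the sorted ranges a[p..q] and a[q+1..r] (inclusive, 0-based) in
// place, sentinel version: INT_MAX is appended to each half so that neither
// ever "runs out" during the k-loop.
void merge(int* a, int p, int q, int r) {
    std::vector<int> L(a + p, a + q + 1), R(a + q + 1, a + r + 1);
    L.push_back(INT_MAX);  // sentinel
    R.push_back(INT_MAX);  // sentinel
    for (int k = p, i = 0, j = 0; k <= r; ++k)
        a[k] = (L[i] <= R[j]) ? L[i++] : R[j++];
}
```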
a. Proof: insertion sort runs in Θ(k²) on a list of length k; write T(k) = ck². There are n/k sublists in total, so the time for all of them is (n/k) · ck² = cnk, i.e., Θ(nk).
b. Proof: the bottom level of the recursion has n/k sublists. The number of levels h of merging satisfies 2^h · k = n, so h = lg(n/k). Merging at each level costs Θ(n), so the total merging time is Θ(n lg(n/k)).
c. T(n) = time to insertion-sort the n/k sublists + total merging time = Θ(nk + n lg(n/k)).
d. For the modified algorithm to match the Θ(n lg n) running time of standard merge sort, Θ(nk + n lg(n/k)) must be Θ(n lg n), so k can be at most Θ(lg n). In practice, k should be the largest list length for which insertion sort actually beats merge sort, determined by timing both on the target machine.
2-2 (bubble sort):
Pseudocode:
BUBBLE-SORT(A)
    for i = 1 to A.length - 1
        for j = A.length downto i + 1
            if A[j] < A[j - 1]
                exchange A[j] with A[j - 1]
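The pseudocode above translates directly to C++ (a sketch of my own, 0-based indices):

```cpp
#include <utility>

// Bubble sort: each pass of the inner loop bubbles the smallest element of
// a[i..n-1] down to position i.
void bubble_sort(int* a, int n) {
    for (int i = 0; i < n - 1; ++i)
        for (int j = n - 1; j > i; --j)
            if (a[j] < a[j - 1])
                std::swap(a[j], a[j - 1]);
}
```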
a. We additionally need to prove that A'[1..n] is a permutation of the original A[1..n]; sortedness alone does not guarantee the output contains the same elements.
b. Loop invariant for lines 2-4: at the start of each iteration, A[j] is the smallest element in A[j..A.length].
Proof:
Initialization: j = A.length, and A[A.length] is trivially the smallest element of the one-element subarray A[A.length..A.length].
Maintenance: in each iteration, if A[j] < A[j-1] the two are exchanged, so afterwards A[j-1] is the smallest element of A[j-1..A.length]; decrementing j preserves the invariant.
Termination: the loop ends when j = i, at which point A[i] is the smallest element in A[i..A.length].
c. Loop invariant for lines 1-4: at the start of each iteration, A[1..i-1] contains the i-1 smallest elements of A[1..n], in sorted order.
Proof:
Initialization: i = 1 and A[1..0] is empty, so the invariant holds trivially.
Maintenance: for each i, lines 2-4 make A[i] the smallest element of A[i..A.length], so A[1..i] now contains the i smallest elements in sorted order; incrementing i preserves the invariant.
Termination: i = A.length. Then A[1..A.length-1] is sorted and every element in it is at most A[A.length], so the whole array is sorted; hence the algorithm is correct.
d. The worst-case running time is Θ(n²), the same as insertion sort's worst case.
2-3 (Horner's rule):
Solution:
y = 0
for i = n downto 0
    y = a_i + x * y
a. Θ(n)
b. Naive polynomial evaluation:

y = a_0
for i = 1 to n
    x_i = 1
    for j = 1 to i
        x_i = x_i * x
    y = y + a_i * x_i
Its running time is Θ(n²), far worse than Horner's rule's Θ(n).
c. At termination i = -1; substituting i = -1 into the loop invariant y = Σ_{k=0}^{n-(i+1)} a_{k+i+1} · x^k yields y = Σ_{k=0}^{n} a_k · x^k, the value of the polynomial.
C++ code:
// Parameters: coef[0..n-1] is the coefficient array (coef[i] is the
// coefficient of x^i), n is its length, and x is the point of evaluation.
template <typename T>
T horner(T *coef, T x, int n) {
    T y = T(0);
    for (int i = n - 1; i >= 0; --i)
        y = coef[i] + x * y;
    return y;
}
2-4 (inversions):
a.
i:  1  2  3  4  5
A: <2, 3, 8, 6, 1>
i = 1: the j > 1 with A[j] < A[1] is j = 5, giving the pair (1, 5)
i = 2: the j > 2 with A[j] < A[2] is j = 5, giving (2, 5)
i = 3: the j > 3 with A[j] < A[3] are j = 4, 5, giving (3, 4) and (3, 5)
i = 4: the j > 4 with A[j] < A[4] is j = 5, giving (4, 5)
So the five inversions are (1, 5), (2, 5), (3, 4), (3, 5), (4, 5).
b. The array with its elements in decreasing order has the most inversions: n(n-1)/2 pairs.
c. For each A[j], insertion sort scans the already-sorted prefix A[1..j-1] and shifts every element greater than A[j] one position right; each shift removes exactly one inversion. So insertion sort's running time grows with the number of inversions: the more inversions, the slower it runs. Insertion sort is precisely a process of eliminating inversions one at a time.
d. Pseudocode:
// Modify merge sort: the total number of inversions is the inversions within
// the left half, plus the inversions within the right half, plus the
// inversions between the two halves, which are counted during the merge.
COUNT-INVERSIONS(A, p, r)
    inversions = 0
    if p < r
        q = floor((p + r) / 2)
        inversions = inversions + COUNT-INVERSIONS(A, p, q)
        inversions = inversions + COUNT-INVERSIONS(A, q + 1, r)
        inversions = inversions + MERGE-INVERSIONS(A, p, q, r)
    return inversions

MERGE-INVERSIONS(A, p, q, r)
    n1 = q - p + 1
    n2 = r - q
    let L[1..n1 + 1] and R[1..n2 + 1] be new arrays
    for i = 1 to n1
        L[i] = A[p + i - 1]
    for j = 1 to n2
        R[j] = A[q + j]
    L[n1 + 1] = ∞
    R[n2 + 1] = ∞
    i = 1
    j = 1
    inversions = 0
    for k = p to r
        if L[i] > R[j]
            inversions = inversions + n1 - i + 1   // note: in a 0-based C++
                                                   // implementation this is n1 - i
            A[k] = R[j]
            j = j + 1
        else
            A[k] = L[i]
            i = i + 1
    return inversions
This only requires a slight modification to merge sort, so the original post does not give C++ code.
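Nevertheless, the modification is small enough to sketch here (my own translation to 0-based C++; INT_MAX serves as the sentinel, so the input must not contain INT_MAX):

```cpp
#include <climits>
#include <vector>

// Counts cross-inversions while merging a[p..q] and a[q+1..r] (inclusive).
// When L[i] > R[j], every remaining element of L exceeds R[j], so n1 - i
// inversions are added at once (0-based, hence n1 - i rather than n1 - i + 1).
long long merge_inversions(std::vector<int>& a, int p, int q, int r) {
    std::vector<int> L(a.begin() + p, a.begin() + q + 1);
    std::vector<int> R(a.begin() + q + 1, a.begin() + r + 1);
    int n1 = (int)L.size();
    L.push_back(INT_MAX);  // sentinel
    R.push_back(INT_MAX);  // sentinel
    long long inversions = 0;
    for (int k = p, i = 0, j = 0; k <= r; ++k) {
        if (L[i] > R[j]) {
            inversions += n1 - i;
            a[k] = R[j++];
        } else {
            a[k] = L[i++];
        }
    }
    return inversions;
}

// Counts the inversions in a[p..r] while merge-sorting it.
long long count_inversions(std::vector<int>& a, int p, int r) {
    long long inversions = 0;
    if (p < r) {
        int q = (p + r) / 2;
        inversions += count_inversions(a, p, q);
        inversions += count_inversions(a, q + 1, r);
        inversions += merge_inversions(a, p, q, r);
    }
    return inversions;
}
```

On the example array <2, 3, 8, 6, 1> from part (a), this returns 5, matching the five pairs listed there.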