For a beginner, the author of the solutions manual leaves too much detail to the reader, so here I work through some of the exercises in detail myself.
If anything is wrong, you are welcome to correct me.
1. Order the following functions by growth rate: N, √N, N^1.5, N^2, N log N, N log log N, N log^2 N, N log(N^2), 2/N, 2^N, 2^(N/2), 37, N^2 log N, N^3. Indicate which functions grow at the same rate.
A: Arranged by growth rate: 2/N < 37 < √N < N < N log log N < N log N < N log(N^2) < N log^2 N < N^1.5 < N^2 < N^2 log N < N^3 < 2^(N/2) < 2^N.
Of these, N log N and N log(N^2) have the same growth rate, since log(N^2) = 2 log N; both are O(N log N).
Supplementary notes: a) The relative order of N log^2 N versus N^1.5, and of N^3 versus 2^(N/2), can be determined by repeated use of L'Hôpital's rule (as n → ∞, the limit of the ratio of two functions equals the limit of the ratio of their derivatives). Of course, it is simpler to just square both sides, as worked out after note b).
b) Pay attention to a commonly useful rule: for any constant k, log^k N = O(N). This says that the logarithm grows very slowly. Exercise 3 below proves a more rigorous form of this.
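To make note a) concrete, here is the squaring trick written out for N log^2 N versus N^1.5 (my own sketch, relying on rule b)):

% divide both functions by N, then square both sides
\frac{N\log^2 N}{N^{1.5}} = \frac{\log^2 N}{N^{0.5}}
\quad\Longrightarrow\quad
\left(\frac{\log^2 N}{N^{0.5}}\right)^2 = \frac{\log^4 N}{N} \to 0 \quad (N \to \infty)

since log^4 N = O(N) by rule b). Hence N log^2 N grows more slowly than N^1.5.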
2. Which function grows faster: N log N or N^(1+ε/√(log N)), where ε > 0?
Analysis: Our first thought may be L'Hôpital's rule, but the exponent of the second function itself contains the variable, which makes differentiating awkward (not impossible, just troublesome; for instance, one could set y = N^(1+ε/√(log N)) and take logarithms on both sides first). Here we argue by contradiction instead:
To show N log N < N^(1+ε/√(log N)), it suffices to show log N < N^(ε/√(log N)). Suppose instead that N^(ε/√(log N)) < log N. Taking logarithms on both sides gives (ε/√(log N)) · log N < log log N, i.e. ε√(log N) < log log N. Set t = log N; then ε√t < log t, and squaring both sides gives ε^2 t < log^2 t, which contradicts rule 1 b) above (log^2 t = O(t)). So the assumption N^(ε/√(log N)) < log N cannot hold, and therefore N^(1+ε/√(log N)) grows faster.
3. Prove that for any constant k, log^k N = O(N).
Solution: It suffices to show lim_{n→∞} (log^k n / n) = 0. The proof is as follows:
First, if k1 < k2, there is obviously log^(k1) n = O(log^(k2) n), and when k = 0, log^k n = 1. Applying L'Hôpital's rule, lim_{n→∞} (log^i n / n) = lim_{n→∞} (i log^(i−1) n / (n ln 2)) = lim_{n→∞} (log^(i−1) n / n), discarding the constant factor. (Note: the constant may be discarded here because the final limit of the ratio turns out to be 0; otherwise it would not be safe to drop, and we would have to return to this place and account for it.) Repeating this reduction brings the exponent down to 0, where the limit of the ratio equals 0, and the proposition is proved.
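Written out, each application of L'Hôpital's rule (with a base-2 logarithm, so each differentiation brings out a factor 1/(n ln 2)) lowers the exponent by one, and after k applications:

\lim_{n\to\infty}\frac{\log^k n}{n}
= \lim_{n\to\infty}\frac{k\,\log^{k-1} n}{n\ln 2}
= \cdots
= \lim_{n\to\infty}\frac{k!}{n\,(\ln 2)^k} = 0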
4. Find two functions f(n) and g(n) such that f(n) ≠ O(g(n)) and g(n) ≠ O(f(n)).
An obvious example is f(n) = sin n, g(n) = cos n: each comes arbitrarily close to 0 infinitely often while the other stays near ±1, so neither function is bounded by a constant multiple of the other.
5. Suppose we need to generate a random permutation of the first N integers. For example, {4,3,1,5,2} and {3,1,4,2,5} are legal permutations, but {5,4,1,2,1} is not, because the number 1 appears twice and the number 3 does not appear at all. This routine is often used in the simulation of algorithms. We assume the existence of a random number generator RandInt(i, j), which generates an integer between i and j with equal probability. Here are three algorithms:
1) Fill the array from A[0] to A[N−1] as follows: to fill A[i], generate random numbers until one differs from all of A[0], A[1], ..., A[i−1] already generated, then place it in A[i].
2) Same as algorithm 1), but keep an extra array called the Used array. When a random number ran is first placed into the array A, set Used[ran] = 1. Then when filling A[i], a single step suffices to test whether a candidate random number has already been used, instead of the i-step test of algorithm 1).
3) Fill the array so that A[i] = i + 1, and then:
for(i = 1; i < N; i++)
Swap(&A[i], &A[RandInt(0, i)]);
For each algorithm, give as accurate an analysis as you can of the expected running time (in Big-O notation).
Solution: For 1), it is easy to write the following algorithm:
int i, j;
for(i = 0; i < N; i++){
    while(1){
        A[i] = RandInt(1, N);      /* candidate for slot i */
        for(j = 0; j < i; j++)
            if(A[j] == A[i])
                break;             /* duplicate: try again */
        if(j == i)
            break;                 /* distinct from A[0..i-1]: keep it */
    }
}
A newly generated random number differs from the i numbers already placed with probability (N−i)/N, so in expectation N/(N−i) random numbers must be generated to fill slot i. Each trial also costs O(i) to check against the existing entries, so the expected running time of the algorithm is
T(N) = ∑_{i=0}^{N−1} (N/(N−i)) · O(i) = O(N^2 · ∑_{j=1}^{N} 1/j) = O(N^2 log N).
Of course, one can also bound more crudely, enlarging the numerator and shrinking the denominator of each term (i ≤ N, N−i ≥ 1), but that only gives O(N^3), obviously not as accurate as the above. One thing to note here is the partial sum of the first N terms of the harmonic series: the series is divergent, and it is used much more frequently in computer science than in most other subjects. The harmonic sum is:
H_N = ∑_{k=1}^{N} 1/k ≈ ln N, and the error H_N − ln N tends to γ ≈ 0.5772156649, which is called Euler's constant.
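As a sanity check on the N·H_N count of random-number generations, here is a minimal simulation sketch (CountTrials and the rand()-based RandInt are my own stand-ins, not the book's code; the modulo bias of rand() is ignored):

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for RandInt(i, j): uniform integer in [i, j]. */
static int RandInt(int i, int j){
    return i + rand() % (j - i + 1);
}

/* Counts the RandInt calls algorithm 1 makes for one permutation of size N. */
static long CountTrials(int N, int *A){
    long calls = 0;
    for(int i = 0; i < N; i++){
        for(;;){
            A[i] = RandInt(1, N);
            calls++;
            int j;
            for(j = 0; j < i; j++)
                if(A[j] == A[i])
                    break;
            if(j == i)            /* no duplicate: slot i is filled */
                break;
        }
    }
    return calls;
}

int main(void){
    enum { N = 1000 };
    int A[N];
    srand(12345);                 /* fixed seed for reproducibility */
    printf("RandInt calls: %ld\n", CountTrials(N, A));
    /* Expectation is N * H_N, about 7485 for N = 1000; the total running
       time is O(N^2 log N) because each call incurs an O(i) scan. */
    return 0;
}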
For 2), it is easy to write the following algorithm (note that the Used array must be indexed by the value just drawn):
int i;
for(i = 0; i < N; i++){
    while(1){
        A[i] = RandInt(1, N);
        if(Used[A[i]] == 0){       /* one O(1) test instead of an O(i) scan */
            Used[A[i]] = 1;
            break;
        }
    }
}
By the same analysis as in 1), but with each trial now costing O(1) instead of O(i), the expected running time is clearly O(N log N).
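Spelled out, substituting j = N − i:

\sum_{i=0}^{N-1}\frac{N}{N-i} \;=\; N\sum_{j=1}^{N}\frac{1}{j} \;=\; N\,H_N \;=\; O(N\log N)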
For 3), every position is filled once and swapped once, so the running time is O(N); not much more needs to be said.
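For completeness, a self-contained sketch of algorithm 3 (Permute and the rand()-based RandInt are my own illustrative names, standing in for the book's generator):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Stand-in for RandInt(i, j): uniform integer in [i, j] (modulo bias ignored). */
static int RandInt(int i, int j){
    return i + rand() % (j - i + 1);
}

static void Swap(int *x, int *y){
    int t = *x; *x = *y; *y = t;
}

/* Algorithm 3: fill A with 1..N, then one O(N) pass of random swaps. */
void Permute(int A[], int N){
    for(int i = 0; i < N; i++)
        A[i] = i + 1;
    for(int i = 1; i < N; i++)
        Swap(&A[i], &A[RandInt(0, i)]);
}

int main(void){
    int A[5];
    srand((unsigned)time(NULL));
    Permute(A, 5);
    for(int i = 0; i < 5; i++)
        printf("%d ", A[i]);      /* e.g. "4 3 1 5 2" */
    putchar('\n');
    return 0;
}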
6. Write out the algorithm known as Horner's rule, which is used to compute the value of F(X) = ∑_{i=0}^{N} A_i X^i:
Poly = 0;
for(i = N; i >= 0; i--)
    Poly = X * Poly + A[i];
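For concreteness, a minimal runnable version (the function name Horner and the sample coefficients are my own illustration):

#include <stdio.h>

/* Evaluate A[0] + A[1]*X + ... + A[N]*X^N with only N multiplications. */
double Horner(const double A[], int N, double X){
    double poly = 0.0;
    for(int i = N; i >= 0; i--)
        poly = X * poly + A[i];
    return poly;
}

int main(void){
    double A[] = { 2.0, 1.0, 0.0, 8.0, 4.0 };  /* 4X^4 + 8X^3 + X + 2 */
    printf("%g\n", Horner(A, 4, 3.0));         /* prints 545 for X = 3 */
    return 0;
}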
7. Give an efficient algorithm to determine whether there is an integer i with A_i = i in an array of integers A_1 < A_2 < A_3 < ... < A_N. What is the running time of your algorithm?
Analysis: This is similar to binary search; straight to the code:
int FixedPoint(const int A[], int N){    /* returns a 0-based index, or -1 */
    int low = 0, high = N - 1, middle;

    while(low <= high){
        middle = ((high - low) >> 1) + low;

        if(A[middle] < middle + 1)       /* A_i < i: search the right half */
            low = middle + 1;
        else if(A[middle] > middle + 1)  /* A_i > i: search the left half */
            high = middle - 1;
        else
            return middle;               /* A[middle] == middle + 1, i.e. A_i = i */
    }

    return -1;
}
It is easy to see that the running time of this algorithm is O(log N).
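A quick usage sketch, assuming the FixedPoint function above is in the same file (the sample array is my own):

#include <stdio.h>

int FixedPoint(const int A[], int N);    /* as defined above */

int main(void){
    int A[] = { -5, 1, 3, 7, 9 };        /* A_3 = 3 in 1-based terms */
    int i = FixedPoint(A, 5);
    if(i >= 0)
        printf("A_%d = %d\n", i + 1, A[i]);   /* prints "A_3 = 3" */
    else
        printf("no fixed point\n");
    return 0;
}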
8. If the statement low = middle + 1 in problem 7 is changed to low = middle, will the program still run correctly?
A: No. Whenever high = low + 1 (or high = low), we get middle = low; if the search then moves right, setting low = middle leaves the range unchanged, and the program is trapped in an infinite loop.
9. a) Write a program to determine whether a positive integer N is prime. What is the worst-case running time of your program (as a function of N)? (You should be able to write a program that runs in O(√N).)
b) Let B equal the number of bits in the binary representation of N. What is the value of B?
c) What is the worst-case running time of your program (as a function of B)?
d) Compare the running times of determining whether a 20-bit (binary) number is prime and whether a 40-bit (binary) number is prime.
e) Which is the more reasonable measure of running time, N or B, and why?
Solution: For a), since √N · √N = N, any nontrivial factorization of N must contain a factor no greater than √N. An efficient approach is: first test whether N is divisible by 2; if not, test whether N is divisible by 3, 5, 7, ..., up to √N. The code is as follows (returns 1 for a prime, 0 otherwise):
#include <math.h>

int IsPrime(int N){
    int i;
    if(N == 1)
        return 0;
    if(N == 2)
        return 1;                /* 2 is prime; the evenness test below would reject it */
    if(N % 2 == 0)
        return 0;
    for(i = 3; i <= (int)(sqrt((double)N) + 0.5); i += 2)
        if(N % i == 0)
            return 0;
    return 1;
}
For b), clearly B = ⌊log N⌋ + 1, i.e. B = O(log N).
For c), since B = O(log N), we have 2^B = O(N) and hence 2^(B/2) = O(√N), so the worst-case running time in terms of B is O(2^(B/2)).
For d), the running time for the 40-bit number is the square of that for the 20-bit number, which follows easily from the solution to c).
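In numbers:

T(20\text{ bits}) = O(2^{20/2}) = O(2^{10}), \qquad
T(40\text{ bits}) = O(2^{40/2}) = O(2^{20}) = O\big((2^{10})^2\big)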
For e), Weiss says: B is the better measure because it more accurately represents the size of the input.
All Rights Reserved.
Author: Haifeng :)
Copyright © xp_jiang.
Please credit the source when reposting: http://www.cnblogs.com/xpjiang/p/4143743.htm
Reference: Data Structures and Algorithm Analysis in C (Second Edition) Solutions Manual, Mark Allen Weiss, Florida International University.
Data Structures and Algorithm Analysis in C (Second Edition), Chapter 2 Algorithm Analysis, selected exercise solutions.