When three people walk together, there must be one who can be my teacher. -- The Analects of Confucius
An optimization algorithm for polynomial calculation
Suppose Pn(x) = a(n)x^n + a(n-1)x^(n-1) + ... + a(1)x + a(0) is a polynomial. The direct method of evaluating it is to compute each term of Pn(x) separately and then sum the results.
However, this method is not efficient, because it requires n + (n-1) + (n-2) + ... + 1 = n(n+1)/2 multiplications, and multiplication is a much more expensive operation than addition. Is there a more efficient algorithm that reduces the number of multiplications? This leads to the following Horner's Rule.
Rewriting the polynomial in the nested form Pn(x) = (...((a(n)x + a(n-1))x + a(n-2))x + ... + a(1))x + a(0), the computational strategy behind this rewriting is called Horner's Rule. It needs only n additions and n multiplications to obtain the result, which greatly reduces the running time.
Assuming we already know the value P of the innermost j terms, that is, P = a(n)x^(j-1) + a(n-1)x^(j-2) + ... + a(n-j+1),
then multiplying by x and adding the next coefficient gives x*P + a(n-j), which extends the computation by one more term.
By induction, the following polynomial evaluation algorithm is obtained:
Algorithm Horner
Input: the polynomial coefficients a(n), a(n-1), ..., a(1), a(0), and the base x
Output: the value of the polynomial Pn(x)
1. P = a(n)
2. for j = 1 to n
3. P = x*P + a(n-j)
4. end for
5. return P
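As a concrete illustration (not part of the original text), here is a minimal Python sketch of Horner's Rule; the function name horner and the convention of listing the coefficients from a(n) down to a(0) are my own choices.

def horner(coeffs, x):
    # Evaluate a polynomial by Horner's Rule.
    # coeffs lists the coefficients from the highest degree a(n)
    # down to the constant term a(0), matching the pseudocode above.
    # Uses exactly n multiplications and n additions.
    p = coeffs[0]              # P = a(n)
    for a in coeffs[1:]:       # for j = 1 to n
        p = x * p + a          # P = x*P + a(n-j)
    return p

# Example: P(x) = 2x^3 - 5x + 1 evaluated at x = 3 gives 2*27 - 15 + 1 = 40
print(horner([2, 0, -5, 1], 3))   # prints 40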
An optimization algorithm for exponential computation
To compute y = x^n (where n is a nonnegative integer), the direct calculation requires n multiplications.
A more efficient algorithm is described as follows:
Let m = [n/2] (here "[ ]" means rounding down to an integer), and assume we already know how to compute x^m.
Then there are two cases: ① if n is even, then x^n = (x^m)^2;
② if n is odd, then x^n = x*(x^m)^2.
Thus, a recursive algorithm for exponential computation is obtained:
Algorithm Exprec
Input: a real number x and a nonnegative integer n
Output: x^n
power(x, n)
1. if n = 0 then y = 1
2. else
3. y = power(x, [n/2])
4. y = y^2
5. if n is odd then y = x*y
6. end if
7. return y
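Below is a minimal Python sketch of this recursive fast-exponentiation idea (again not from the original text; the function name power and the use of integer division // for the rounding-down step are my own choices).

def power(x, n):
    # Compute x^n with only O(log n) multiplications by repeated squaring.
    if n == 0:                 # 1. if n = 0 then y = 1
        return 1
    y = power(x, n // 2)       # 3. y = power(x, [n/2])
    y = y * y                  # 4. y = y^2
    if n % 2 == 1:             # 5. if n is odd then y = x*y
        y = x * y
    return y

# Example: power(2, 10) performs about log2(10) squarings instead of 10 multiplications
print(power(2, 10))            # prints 1024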
An optimization algorithm for numerical calculation