1. Defects of naive modular exponentiation for large numbers: the fast power-modulus algorithm is introduced because of the limitations of the naive way of computing a power modulo a number. With the naive method, evaluating something like 5^1003 % 31 consumes far too many computational resources, and the most troublesome part of the whole calculation is the 5^1003 itself.
Disadvantage 1: we compute the full power first and only take the modulus afterwards, which wastes a great deal of computing resources (mainly time, but also space).
Disadvantage 2: the intermediate result is a terrifyingly large number, and an ordinary machine integer has no way to hold such a long value. So we have to think of a more efficient way to solve this problem.
2. Introduction of fast power: we start by optimizing the modular exponentiation process itself.
1. Naive modular exponentiation:
int ans = 1;
for (int i = 1; i <= b; i++)
{
    ans *= a;
}
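To make the two disadvantages concrete, here is a small illustration of my own (not from the original text) showing how large the intermediate value 5^1003 becomes before the final modulo; Python's arbitrary-precision integers can hold it, but a fixed-width machine integer cannot:

# How big does 5^1003 actually get with the naive "power first, modulo last" approach?
value = 5 ** 1003
print(len(str(value)))   # 702 -- the intermediate value has about 700 decimal digits
print(value % 31)        # 5   -- the small answer we actually wanted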
According to what we said above, this algorithm is intolerable: both of the shortcomings show up right here in the calculation. If we want to optimize, my idea is to add a modulo operation to every step of the loop; but keep in mind first that the modulo operation itself is not cheap, and when the number of calculations is very large, this time cost is something we cannot endure.
2. Introduction of fast power: before explaining the fast power-modulus algorithm, we first need a few pieces of background knowledge.
1. For the modulo operation, the following identity holds:
(a * b) % c = ((a % c) * (b % c)) % c
This identity is the foundation and the core essence of fast power. After studying it in my discrete mathematics course I feel I finally understand the essence of fast power, and the impression runs deep: the trick is played on the exponent. The core idea is to break the exponentiation of a large number into the corresponding multiplications and, using the identity above, always keep the size of the operands below c. In this way we overcome the shortcomings of the naive algorithm and compress the amount of data in the calculation by a large margin; when the exponent is very large, the optimization is even more significant. Let us run an experiment in Python to see how efficient the optimization is.
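Before the timing experiment, here is a tiny sanity check of my own (not the author's) for the identity the whole method rests on, using arbitrary random values:

# Check (a*b) % c == ((a % c) * (b % c)) % c on a few random values.
import random

for _ in range(5):
    a = random.randint(1, 10**9)
    b = random.randint(1, 10**9)
    c = random.randint(2, 10**6)
    assert (a * b) % c == ((a % c) * (b % c)) % c
print("identity holds on all sampled values")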
from time import perf_counter     # note: time.clock() from the original was removed in Python 3.8

def original_algorithm(a, b, c):      # naive computation of a^b % c
    ans = 1
    a = a % c                         # pretreatment to prevent a from being greater than c
    for i in range(b):
        ans = (ans * a) % c
    return ans

def quick_algorithm(a, b, c):         # fast power with modulus
    a = a % c
    ans = 1
    # here we do not need to consider b < 0, because fractions have no modulo operation
    while b != 0:
        if b & 1:
            ans = (ans * a) % c
        b >>= 1
        a = (a * a) % c
    return ans

a = int(input("Base: "))
b = int(input("Index: "))
c = int(input("Mould: "))
start = perf_counter()
print("naive algorithm result %d" % original_algorithm(a, b, c))
print("naive algorithm time: %f" % (perf_counter() - start))
start = perf_counter()
print("fast power algorithm result %d" % quick_algorithm(a, b, c))
print("fast power algorithm time: %f" % (perf_counter() - start))
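As an additional cross-check (my addition, not part of the original experiment), Python's built-in pow accepts a modulus as a third argument and performs fast modular exponentiation internally, so it can be used to verify both functions, assuming they are already defined as above:

# Verify both implementations against Python's built-in three-argument pow().
assert original_algorithm(5, 1003, 31) == pow(5, 1003, 31)
assert quick_algorithm(5, 1003, 31) == pow(5, 1003, 31)
print(pow(5, 1003, 31))   # 5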
Now that we have seen how powerful the fast power-modulus algorithm is, let us look at its core principle:
Consider the modular exponentiation, for any integers,
a^b % c
The exponent b can be split into its binary form
b = b0 + b1*2 + b2*2^2 + ... + bn*2^n
Here b0 corresponds to the lowest (first) bit of the binary representation of b.
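To make the binary split concrete, here is a small illustrative check of my own (using Python's bin) with b = 1003:

# Decompose b into powers of two by reading its binary digits from lowest to highest.
b = 1003
bits = bin(b)[2:][::-1]                                # bits[i] is b_i, the coefficient of 2^i
terms = [2 ** i for i, bit in enumerate(bits) if bit == '1']
print(terms)                # [1, 2, 8, 32, 64, 128, 256, 512]
print(sum(terms) == b)      # True: 1003 = 1 + 2 + 8 + 32 + 64 + 128 + 256 + 512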
Then the operation a^b can be broken apart into
a^(b0) * a^(b1*2) * a^(b2*2^2) * ... * a^(bn*2^n)
Every bit of b is either 0 or 1. For any bit bx that is 0, the corresponding factor is a^0 = 1 and does not need to be considered; what we really care about are the non-zero bits of b.
So, dropping the terms whose bit of b is 0, the formula we are left with is
a^(bx*2^x) * ... * a^(bn*2^n)
where bx denotes the lowest non-zero bit.
Here we apply the identity mentioned at the beginning, and the operation a^b % c can be transformed into
(a^(bx*2^x) % c) * ... * (a^(bn*2^n) % c)
with the whole product reduced mod c once more at the end.
In that case, we are already very close to the essence of fast power. In the product
(a^(bx*2^x) % c) * ... * (a^(bn*2^n) % c)
let
A1 = a^(bx*2^x) % c
...
An = a^(bn*2^n) % c
Each of these factors comes from the previous power of a by squaring: a^(2^(k+1)) is just (a^(2^k))^2, and by the same identity everything stays reduced mod c, so the factors can be generated one after another by this recursion of repeated squaring.
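A small sketch of my own (the function name is purely illustrative) that follows this recursion literally: it produces the powers a^(2^k) % c by repeated squaring and multiplies together only those whose bit of b is 1:

# List the factors a^(2^k) % c generated by repeated squaring, keeping only those
# whose corresponding bit of b is 1, then multiply them together modulo c.
def factors_and_product(a, b, c):
    factors = []
    power = a % c                        # a^(2^0) % c
    k = 0
    while b >> k:
        if (b >> k) & 1:
            factors.append(power)        # bit k is 1, so a^(2^k) % c participates
        power = (power * power) % c      # a^(2^k) -> a^(2^(k+1)), always reduced mod c
        k += 1
    ans = 1
    for f in factors:
        ans = (ans * f) % c
    return factors, ans

factors, ans = factors_and_product(5, 11, 31)
print(factors)              # [5, 25, 25] -- that is 5^1 % 31, 5^2 % 31, 5^8 % 31
print(ans, pow(5, 11, 31))  # 25 25 -- both agree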
Now that the basic background is in place, let us consider the implementation:
int Fast_pow(int a, int b, int c)
{
    int ans = 1;                 // record the result
    a = a % c;                   // preprocessing so that a is within the data range of c
    while (b != 0)
    {
        if (b & 1)
        {
            ans = (ans * a) % c; // if the current bit of b is not 0, this factor joins the result
        }
        b >>= 1;                 // binary shift operation, traversing the bits of b one by one
        a = (a * a) % c;         // keep squaring, always reduced mod c
    }
    return ans;
}
Here are a few more notes:
1. A few binary operators & and >>.
The & operator is typically used to extract binary digits: for example, x & 1 yields the last binary digit of x. It can also be used to test parity: (x & 1) == 0 means x is even, (x & 1) == 1 means x is odd.
The >> operation is even simpler: it shifts the binary representation right, removing the last bit, and this is how the bits of b are traversed one by one.
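A quick illustration of both operators (shown here in Python; they behave the same way in C):

# x & 1 extracts the lowest binary digit; x >> 1 drops it.
x = 0b1011                # 11 in decimal
print(x & 1)              # 1 -> the lowest bit is 1, so x is odd
print(bin(x >> 1))        # 0b101 -> the last bit has been shifted away
print((10 & 1) == 0)      # True -> 10 is even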
2. The role of the step a = (a * a) % c is to keep doubling the exponent. By the congruence identity, a * a is a^2; multiplying that by itself gives a^2 * a^2 = a^4; in the same way a^4 * a^4 = a^8, and so on. See what is happening?
a --> a^2 --> a^4 --> a^8 --> a^16 --> a^32 ... The exponents are exactly the powers 2^i. Look back at the example above: a^11 = a^(2^0) * a^(2^1) * a^(2^3), and those three factors are exactly what this chain supplies. That is precisely how fast power works.
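To close the loop on the a^11 example, here is a tiny check of my own with a = 5 and modulus 31:

# 11 = 2^0 + 2^1 + 2^3, so a^11 = a^(2^0) * a^(2^1) * a^(2^3).
a, c = 5, 31
lhs = pow(a, 11, c)
rhs = (pow(a, 1, c) * pow(a, 2, c) * pow(a, 8, c)) % c
print(lhs, rhs)   # 25 25 -- the two agree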