Relatives — Time Limit: 1 second, Memory Limit: 64 MB
Problem Description
Given n, a positive integer, how many positive integers less than or equal to n are relatively prime to n? Two integers a and b are relatively prime if there are no integers x > 1, y > 0, z > 0 such that a = xy and b = xz.
Input
There are several test cases. For each test case, standard input contains a line with n <= 1,000,000,000. A line containing 0 follows the last case.
Output
For each test case there should be a single line of output answering the question posed above.
Sample Input
7
12
0
Sample Output
6
4
My first idea was to build a prime table with a linear sieve and then compute Euler's totient with the product formula. Unexpectedly I got an invalid memory reference: I had assumed there would not be many primes and made the array too small. After enlarging the array I got TLE instead. I then looked back at my blog and switched to direct prime factorization, splitting off prime factors and multiplying into the answer as I went, but a carelessly handled detail still cost me a TLE. Only after tightening up that detail and rewriting the code did I finally get AC.
#include <stdio.h>

int ans;

/* Euler's totient by trial division: for each prime factor p of n,
   the first occurrence contributes p - 1, every later one contributes p */
void Eu(int n)
{
    for (int i = 2; i * i <= n; ++i) {
        if (!(n % i)) {
            ans *= i - 1;
            n /= i;
            while (!(n % i)) {
                ans *= i;
                n /= i;
            }
        }
    }
    if (n > 1)              /* at most one prime factor > sqrt(n) remains */
        ans *= n - 1;
}

int main()
{
    int n;
    while (scanf("%d", &n), n) {
        ans = 1;
        Eu(n);
        printf("%d\n", ans);
    }
    return 0;
}
The detail in question is the loop bound: i <= n versus i * i <= n. The sqrt bound saves an enormous amount of time on primes: for n = 17, the loop above runs only a few times, while the version below keeps iterating all the way up to 17. The same saving applies every time a factor is split off, and when the input has large prime factors the difference in time really shows. Both versions use the prime factorization theorem, yet this one small detail leads to a big difference in running time.
void Eu(int n)
{
    for (int i = 2; n - 1; ++i) {   /* keeps looping until n shrinks to 1 */
        if (!(n % i)) {
            ans *= i - 1;
            n /= i;
            while (!(n % i)) {
                ans *= i;
                n /= i;
            }
        }
    }
}
What I learned along the way is the linear (Euler) sieve for building a prime table:
// Linear sieve of primes below 40000
for (int i = 2; i < 40000; ++i) {
    if (!not_prime[i])
        prime[p_len++] = i;
    for (int j = 0; j < p_len && i * prime[j] < 40000; ++j) {
        not_prime[i * prime[j]] = 1;
        if (!(i % prime[j]))
            break;
    }
}
This was the first time I really studied this way of sieving primes below N (here 40000). I had actually seen it during last summer's training, in the blog of an ACM great from Beijing Jiaotong University, but I didn't understand how it worked at the time. Later I wrote it once by copying other people's code, still without figuring out why it was correct. On the train back to school yesterday I finally thought it through: using the unique factorization theorem, one can prove that this sieve is linear, unlike the sieve of Eratosthenes, because every composite number is struck out exactly once.
By unique factorization, every composite number can be written as prime[j] * i, where prime[j] is its smallest prime factor and i may itself be prime or composite. That is why the inner loop breaks as soon as i % prime[j] == 0: if we continued, the product prime[j + 1] * i would contain prime[j] as a smaller factor, so it can also be written as prime[j] * (some number greater than i) and will be struck out later in that form. Each composite is therefore marked only once, by its smallest prime factor, which is what makes the sieve linear.