1) Test the safe access interval.
2) Develop an access-control strategy.
If we access a site at a frequency of "once per 3+ seconds", it is safe (and of course, the lower the frequency, the safer). In our experience, an access interval of more than 3 seconds does not trigger the blocking policy of most sites (our recommended interval: 5 seconds).
Implementation in the program: for the same IP, before downloading a page, check whether more than 5 seconds have elapsed since the last access; if not, sleep until 5 seconds have passed, then make the next request.
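As a rough illustration, a minimal per-site throttle in Python might look like the sketch below (the 5-second constant and the throttled_get helper are illustrative names of ours, not from the original):

import time
import requests

MIN_INTERVAL = 5.0          # recommended interval between requests to the same site
last_request_time = {}      # site key -> timestamp of the previous request

def throttled_get(url, site_key):
    # Sleep if the previous request to this site was less than MIN_INTERVAL seconds ago.
    last = last_request_time.get(site_key)
    if last is not None:
        remaining = MIN_INTERVAL - (time.time() - last)
        if remaining > 0:
            time.sleep(remaining)
    response = requests.get(url, timeout=30)
    last_request_time[site_key] = time.time()
    return response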
3) Speed up with multithreading + HTTP proxies.
If we use a 5-second interval, we can complete only 12 accesses (HTTP requests) per minute, i.e. fewer than 20,000 downloads in a single day (3600 × 24 / 5 = 17,280). This speed is acceptable for small-scale websites, but far too slow for sites with millions or tens of millions of pages. Take a 15-million-page site, for example: at this rate it would take about 2 years and 4 months to finish, which is clearly unacceptable.
Yet we collected a large review website of that size in only about 15 days. How did we do it? With multithreading + HTTP proxies. Anyone who has crawled through an HTTP proxy knows that when a request is made through a high-anonymity HTTP proxy, the target site can only see the proxy's IP, not your source IP, nor can it even tell that you are using a proxy; to the target site, the request looks like just another ordinary visitor. Assuming we have 100 stable high-anonymity HTTP proxies, and still keep the 5-second interval per IP, we can theoretically reach a daily download volume of about 1.7 million pages!
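The arithmetic behind these figures is straightforward; a quick check (the variable names are ours, for illustration only):

SECONDS_PER_DAY = 3600 * 24

interval = 5                                       # seconds between requests per IP
per_ip_per_day = SECONDS_PER_DAY // interval       # 17,280 requests per IP per day

pages = 15_000_000
days_single_ip = pages / per_ip_per_day            # ~868 days, about 2 years 4 months

proxies = 100
per_day_with_proxies = proxies * per_ip_per_day    # 1,728,000, about 1.7 million per day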
Implementation in the program: open 100 threads, bind each thread to one fixed HTTP proxy, give each thread its own collection tasks, and let each thread control its own access rate to the site. Because data extraction is a purely computational operation that multithreading does not accelerate, on a 12-core CPU the actual daily collection volume reaches about 1 million pages (with roughly 20 fields extracted per page).
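A minimal sketch of that worker model in Python, assuming the requests library and our own worker/run helper names (the original does not show code):

import queue
import threading
import time
import requests

MIN_INTERVAL = 5.0  # keep the 5-second spacing per proxy IP

def extract_fields(html):
    # Placeholder for the extraction step (about 20 fields per page in the article).
    return {}

def worker(proxy_url, tasks):
    # One thread = one fixed proxy; the thread throttles its own requests.
    proxies = {"http": proxy_url, "https": proxy_url}
    last_time = 0.0
    while True:
        try:
            url = tasks.get_nowait()
        except queue.Empty:
            return
        wait = MIN_INTERVAL - (time.time() - last_time)
        if wait > 0:
            time.sleep(wait)
        try:
            resp = requests.get(url, proxies=proxies, timeout=30)
            extract_fields(resp.text)
        except requests.RequestException:
            pass  # real code would log and retry the failed URL
        last_time = time.time()
        tasks.task_done()

def run(proxy_list, urls):
    tasks = queue.Queue()
    for u in urls:
        tasks.put(u)
    threads = [threading.Thread(target=worker, args=(p, tasks)) for p in proxy_list]
    for t in threads:
        t.start()
    for t in threads:
        t.join()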
How to effectively prevent your IP from being blocked by the website during collection