While capturing soft exam questions from an online exam site, I recently ran into a few problems. The following article describes in detail how to use Python to crawl proxy IP addresses for automatic proxying while scraping the soft exam questions. Let's take a look.
Preface
Recently I have a software professional qualification examination coming up, hereinafter referred to as the soft exam. In order to review and prepare better, I plan to capture the soft exam questions on www.rkpass.cn.
First, let me tell the story of how I crawled the soft exam questions (and the pitfalls along the way). I can now automatically capture all the questions of a module, for example:
2. The second approach is to break through the anti-crawler mechanism and keep crawling at high frequency by using proxy IP addresses and similar means. This, however, requires multiple stable proxy IP addresses.
Without further ado, let's go straight to the code:
# The IP addresses are taken from the domestic high-anonymity proxy IP site: www.xicidaili.com/nn/
# Generally, crawling the IP addresses from the first pages is enough.
from bs4 import BeautifulSoup
import requests
import random

# Fetch the proxy IP list from one page of the site
def get_ip_list(url, headers):
    web_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(web_data.text, 'html.parser')
    ips = soup.find_all('tr')
    ip_list = []
    for i in range(1, len(ips)):
        ip_info = ips[i]
        tds = ip_info.find_all('td')
        ip_list.append(tds[1].text + ':' + tds[2].text)
    return ip_list

# Randomly pick one IP from the captured list and build a proxies dict
def get_random_ip(ip_list):
    proxy_list = []
    for ip in ip_list:
        proxy_list.append('http://' + ip)
    proxy_ip = random.choice(proxy_list)
    proxies = {'http': proxy_ip}
    return proxies

# Domestic high-anonymity proxy IP site
url = 'http://www.xicidaili.com/nn/'
# Request headers
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'}
# Counter; the pages are crawled in a loop based on this counter
num = 0
# Create an array to store the captured IP lists
ip_array = []
while num < 1537:
    num += 1
    ip_list = get_ip_list(url + str(num), headers=headers)
    ip_array.append(ip_list)
for ip in ip_array:
    print(ip)
# Pick an IP at random and build the proxies dict
# proxies = get_random_ip(ip_list)
# print(proxies)
Running result:
In this way, when sending crawl requests, routing each request through an automatically chosen proxy IP effectively evades the simple anti-crawler tactic of blocking a fixed IP address.
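As a minimal usage sketch (assuming the script above has already filled ip_list, and using a hypothetical target URL in place of the actual question pages, which this article does not show), the proxies dict returned by get_random_ip() can be passed directly to requests.get():

import requests

# Hypothetical target page; substitute the real question page you want to scrape
target_url = 'http://www.rkpass.cn/'

# Pick a fresh random proxy for this request; if one proxy gets blocked,
# the next request will simply go out through a different one
proxies = get_random_ip(ip_list)
response = requests.get(target_url, headers=headers, proxies=proxies, timeout=10)
print(response.status_code)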
Out of consideration for the stability of the website, the crawling speed is kept under control; after all, it is not easy being a webmaster. For this article, only 17 IP addresses were captured.
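As an illustration of keeping the speed under control (a sketch only; the 1-3 second delay range is my own assumption, not code from the original article), the page-fetching loop above can simply pause between requests:

import time
import random

# Same crawl loop as above, but with a short random pause between pages
# so the proxy site is not hit at high frequency.
num = 0
ip_array = []
while num < 2:  # one or two pages already yield more than enough IPs
    num += 1
    ip_array.append(get_ip_list(url + str(num), headers=headers))
    time.sleep(random.uniform(1, 3))  # assumed polite delay of 1-3 seconds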
Summary
The above is the detailed explanation of this automatic IP proxy example in Python crawling. For more information, see other related articles in the first PHP community!