How to Set Proxy IP Addresses for Python Crawlers (Crawler Skills)
When learning Python crawlers, we often run into anti-crawling measures on the website being crawled. High-intensity, high-frequency crawling puts heavy pressure on the website server, so if the same IP address requests the same pages repeatedly, it is very likely to be blocked. This article introduces a crawler technique for dealing with that: setting a proxy IP address.
(1) Configure the environment
- Install the requests library
- Install the bs4 library
- Install the lxml library
(2) Code display
# IP addresses come from a domestic proxy IP site: http://www.xicidaili.com/nn/
# Crawling only the IP addresses on the home page is enough for ordinary use
from bs4 import BeautifulSoup
import requests
import random

def get_ip_list(url, headers):
    web_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(web_data.text, 'lxml')
    ips = soup.find_all('tr')
    ip_list = []
    for i in range(1, len(ips)):
        ip_info = ips[i]
        tds = ip_info.find_all('td')
        ip_list.append(tds[1].text + ':' + tds[2].text)
    return ip_list

def get_random_ip(ip_list):
    proxy_list = []
    for ip in ip_list:
        proxy_list.append('http://' + ip)
    proxy_ip = random.choice(proxy_list)
    proxies = {'http': proxy_ip}
    return proxies

if __name__ == '__main__':
    url = 'http://www.xicidaili.com/nn/'
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'}
    ip_list = get_ip_list(url, headers=headers)
    proxies = get_random_ip(ip_list)
    print(proxies)

The function get_ip_list(url, headers) takes a url and headers and returns an IP list. Each list element looks like 42.84.226.65:8888, and the list covers all the IP addresses and ports shown on the home page of the proxy IP website.
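Note that free proxy addresses collected this way are often slow or already offline. The following is a minimal optional sketch (not part of the original code) of a hypothetical check_ip_list() helper that keeps only the addresses that can actually fetch a test page; the test URL and timeout are assumptions that you can adjust.

import requests

def check_ip_list(ip_list, test_url='http://httpbin.org/ip', timeout=5):
    """Return only the proxies from ip_list that respond within the timeout.
    ip_list elements are assumed to look like '42.84.226.65:8888'."""
    working = []
    for ip in ip_list:
        proxies = {'http': 'http://' + ip}
        try:
            # A quick request through the proxy; any network error means we skip it
            resp = requests.get(test_url, proxies=proxies, timeout=timeout)
            if resp.status_code == 200:
                working.append(ip)
        except requests.RequestException:
            continue
    return working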
The get_random_ip(ip_list) function takes the list returned by the first function and returns a random proxies dictionary. This proxies dictionary can be passed into the get method of requests, so that a different IP address can be used for each request, which effectively reduces the risk of the real IP address being blocked. The format of proxies is a dictionary: {'http': 'http://42.84.226.65:8888'}
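One detail worth knowing: requests only routes a URL through a proxy whose scheme matches a key in the proxies dictionary, so a dictionary with only an 'http' key will not proxy https:// pages. Below is a small illustrative variant, get_random_proxies(), that registers the same proxy address for both schemes; whether a given free proxy can actually tunnel HTTPS traffic is an assumption you would need to verify.

import random

def get_random_proxies(ip_list):
    # Pick one address and register it for both http and https URLs
    proxy_ip = 'http://' + random.choice(ip_list)
    return {'http': proxy_ip, 'https': proxy_ip}

# Example result: {'http': 'http://42.84.226.65:8888', 'https': 'http://42.84.226.65:8888'}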
(3) Use of proxy IP addresses
Run the above code to get a random proxies dictionary, then pass it directly into the get method of requests:
web_data = requests.get(url, headers=headers, proxies=proxies)
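Because free proxies fail frequently, in practice you may want to wrap the call in a retry loop that draws a new random proxy on each failure. Here is a minimal sketch under that assumption; fetch_with_proxy() and the retry count are illustrative and not part of the original article.

import requests

def fetch_with_proxy(url, headers, ip_list, retries=3):
    """Try the request through up to `retries` different random proxies."""
    for _ in range(retries):
        proxies = get_random_ip(ip_list)  # defined in the code above
        try:
            return requests.get(url, headers=headers, proxies=proxies, timeout=10)
        except requests.RequestException:
            continue  # this proxy failed; pick another one
    # Fall back to a direct request if every proxy failed
    return requests.get(url, headers=headers, timeout=10)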
Summary
The above describes how to set proxy IP addresses for Python crawlers (crawler skills). I hope it helps you. If you have any questions, please leave a message and the editor will reply to you in time!