Python Crawler Practice (III): Sogou Weixin Articles (Setting Up an IP Proxy Pool and User-Agent Pool in Scrapy)
When learning the Scrapy crawler framework, you will inevitably need to set up an IP proxy pool and a User-Agent pool to avoid a website's anti-crawling measures.
Over the past two days I watched a video on crawling Sogou Weixin articles that covered IP proxy pools and User-Agent pools. Here I summarize it in my own words, for future reference.
Notes
I. Ideas for handling anti-crawler mechanisms:
II. Scattered knowledge points:
Practical operations
The code below has been debugged successfully.
Target URL: http://weixin.sogou.com/weixin?type=2&query=python&ie=utf8
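The query string in the target URL can be rebuilt with the standard library (a minimal sketch; the parameter names come straight from the URL above):

```python
from urllib.parse import urlencode

# type=2 restricts Sogou Weixin results to articles; query is the search keyword
params = {'type': 2, 'query': 'python', 'ie': 'utf8'}
url = 'http://weixin.sogou.com/weixin?' + urlencode(params)
print(url)  # http://weixin.sogou.com/weixin?type=2&query=python&ie=utf8
```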
Implementation: crawl the title, title link, and description of Python articles, as shown in Figure 1.
Data: I did not save the data; this exercise is mainly for learning. For the IP proxy pool and User-Agent pool, an open-source proxy-pool project is recommended in practice.
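Entries harvested from an open-source proxy pool are not always well formed, so it helps to sanity-check them before adding them to the pool. A minimal sketch (the helper name is hypothetical, not part of the original post):

```python
def is_valid_proxy(entry):
    """Loosely check that entry looks like 'a.b.c.d:port' before pooling it."""
    host, sep, port = entry.rpartition(':')
    if not sep or not port.isdigit() or not 0 < int(port) < 65536:
        return False
    octets = host.split('.')
    return len(octets) == 4 and all(o.isdigit() and int(o) <= 255 for o in octets)
```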
Figure 1
The code for setting up the IP proxy pool and User-Agent pool is posted here; the complete code can be found on my GitHub: https://github.com/pujinxiao/weixin
1. Main middlewares.py code
```python
# -*- coding: utf-8 -*-
import random
from scrapy.downloadermiddlewares.httpproxy import HttpProxyMiddleware    # proxy IP, fixed import
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware    # proxy UA, fixed import


class IPPOOLS(HttpProxyMiddleware):
    def __init__(self, ip=''):
        '''Initialization'''
        self.ip = ip

    def process_request(self, request, spider):
        '''Pick a proxy IP at random and attach it to the request'''
        ip = random.choice(self.ip_pools)
        print('Current proxy IP: ' + ip['ip'])
        try:
            request.meta['proxy'] = 'http://' + ip['ip']
        except Exception as e:
            print(e)

    ip_pools = [
        {'ip': 'xxx.xxx.xxx.xxx:80'},  # fill in live proxy addresses
        # {'ip': ''},
    ]


class UAPOOLS(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        '''Pick a User-Agent at random and attach it to the request'''
        ua = random.choice(self.user_agent_pools)
        print('Current User-Agent: ' + ua)
        try:
            request.headers.setdefault('User-Agent', ua)
        except Exception as e:
            print(e)

    user_agent_pools = [
        'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3',
        'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36',
    ]
```
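Stripped of the Scrapy plumbing, the per-request selection in both middlewares reduces to a `random.choice` over the pool (the addresses below are placeholders, for illustration only):

```python
import random

# Placeholder pool entries, mirroring the shape used in IPPOOLS above
ip_pools = [{'ip': '10.0.0.1:80'}, {'ip': '10.0.0.2:8080'}]

choice = random.choice(ip_pools)
proxy_url = 'http://' + choice['ip']  # the value assigned to request.meta['proxy']
```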
2. Main settings.py code
```python
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 123,
    'weixin.middlewares.IPPOOLS': 124,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 125,
    'weixin.middlewares.UAPOOLS': 126,
}
```
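The numbers are priorities: Scrapy calls each middleware's `process_request` in ascending order of priority, so the custom pool classes run right after the built-ins they subclass. A quick sketch of the resulting call order:

```python
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 123,
    'weixin.middlewares.IPPOOLS': 124,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 125,
    'weixin.middlewares.UAPOOLS': 126,
}

# Sort by priority to see the process_request call order
call_order = [path.rsplit('.', 1)[-1]
              for path, prio in sorted(DOWNLOADER_MIDDLEWARES.items(), key=lambda kv: kv[1])]
print(call_order)  # ['HttpProxyMiddleware', 'IPPOOLS', 'UserAgentMiddleware', 'UAPOOLS']
```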
Author: Jin Xiao
Source: http://www.cnblogs.com/jinxiao-pu/p/6665180.html
The copyright of this article is shared by the author and the blog site. Reposting is welcome, but without the author's consent you must retain this statement and provide a link to the original article in a prominent place on the page.