Problem: While using Scrapy to crawl individual stock pages from Baidu Stocks, I ran into a 403 Access Denied error, which was presumably triggered by the site's anti-crawling mechanism.
Solution: Some experimenting showed that the anti-crawling mechanism of Baidu Stocks (http://gupiao.baidu.com) works by inspecting the User-Agent header, so it can be bypassed by crawling with a random User-Agent.
First, here is a list of common User-Agent strings found on the web; put it in the spider class of the crawler file under the spiders directory:
user_agent_list = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]
Then, randomly pick a User-Agent and add it to the request headers. This code belongs in the method that issues the page requests (remember to import random at the top of the file):
ua = random.choice(self.user_agent_list)  # randomly pick a User-Agent
headers = {
    'Accept-Encoding': 'gzip, deflate, sdch, br',
    'Accept-Language': 'zh-CN,zh;q=0.8',
    'Connection': 'keep-alive',
    'Referer': 'https://gupiao.baidu.com/',
    'User-Agent': ua
}  # build the request headers
Finally, yield the request from the generator method:
yield scrapy.Request(url, callback=self.parse, headers=headers)
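Putting the three steps together, here is a minimal sketch of a complete spider. The start URL, stock code, and parse body are illustrative assumptions rather than the original code, and the User-Agent pool is shortened to two entries for brevity:

import random
import scrapy

class StockSpider(scrapy.Spider):
    name = 'stock'
    # Hypothetical single-stock page; substitute the real stock code.
    start_url = 'https://gupiao.baidu.com/stock/sz000001.html'

    # The User-Agent pool from above (shortened here).
    user_agent_list = [
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1',
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5',
    ]

    def start_requests(self):
        ua = random.choice(self.user_agent_list)  # randomly pick a User-Agent
        headers = {
            'Accept-Encoding': 'gzip, deflate, sdch, br',
            'Accept-Language': 'zh-CN,zh;q=0.8',
            'Connection': 'keep-alive',
            'Referer': 'https://gupiao.baidu.com/',
            'User-Agent': ua,
        }  # build the request headers
        yield scrapy.Request(self.start_url, callback=self.parse, headers=headers)

    def parse(self, response):
        # Placeholder: extract whatever fields the real spider needs.
        self.logger.info('Fetched %s with status %s', response.url, response.status)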
This way, the spider sends every request with a randomly chosen User-Agent and evades the anti-crawling mechanism.
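A more idiomatic alternative in Scrapy is to move the User-Agent rotation into a downloader middleware, so every outgoing request is rewritten automatically instead of building headers inside the spider. A minimal sketch, assuming the same user_agent_list and a hypothetical project named myproject:

# middlewares.py
import random

class RandomUserAgentMiddleware:
    # Assumes user_agent_list is the pool defined above (shortened here).
    user_agent_list = [
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1',
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5',
    ]

    def process_request(self, request, spider):
        # Overwrite the User-Agent header on every outgoing request.
        request.headers['User-Agent'] = random.choice(self.user_agent_list)

Then enable it in settings.py so Scrapy applies it to all requests:

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.RandomUserAgentMiddleware': 400,  # hypothetical module path
}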