Reprinting: please credit the original address: http://www.cnblogs.com/ygj0930/p/7019963.html
One: Introduction to the requests module
requests is a third-party HTTP library that makes network requests in Python simple, and it serves as a complete replacement for the urllib2 module.
Two: Hands-on practice
Building a targeted crawler with requests takes two steps: first fetch the target web page's source with requests, then extract the desired information from it with regular expressions.
1: Get the source code
There are two ways to get the source code:
For a page without an anti-crawler mechanism, requests.get(url).text returns the page source directly.
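A minimal sketch of this first approach (the URL here is illustrative; any page that does not block default clients works the same way):

```python
import requests

# Plain GET with no custom headers; this works only when the site
# does not reject the default requests User-Agent.
url = "https://httpbin.org/html"     # illustrative URL
response = requests.get(url)
response.encoding = "utf-8"          # set encoding explicitly

print(response.text[:200])           # first 200 characters of the source
```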
But for pages with anti-crawler protection, that simple call cannot retrieve the full source; you only get the robots content of the page, telling you that crawling is blocked.
In that case, modify the HTTP headers: pass a disguised header to requests.get, and you can access the target page normally and obtain its source.
First, open the target page in the browser and right-click → Inspect Element (Firefox) or Inspect (Chrome).
Then, in the panel that opens, select the Network tab.
Finally, click one of the network requests listed under the tab to open its details, scroll down to the Request Headers section, find User-Agent, and copy its value.
This User-Agent is the disguise we need: the crawler sends it with the request to simulate a browser visiting the page, thereby bypassing the page's anti-crawler checks.
# coding: utf-8
import requests

# The disguised header copied from the browser.
head = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/56.0.2924.87 Safari/537.36'}

# Make the request with the disguised header.
html = requests.get("https://www.bilibili.com/", headers=head)

# Specify the encoding explicitly to avoid garbled Chinese characters.
html.encoding = 'utf-8'

# Print the web page source.
print(html.text)
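The second step from the introduction, extracting information with a regular expression, is not shown above. A minimal sketch of that step (the sample HTML and the title pattern are illustrative; in practice the string would be html.text from the request above):

```python
import re

# A snippet of page source; in practice this comes from html.text.
source = "<html><head><title>bilibili</title></head><body>...</body></html>"

# Extract the page title with a non-greedy group; re.S lets "." match newlines.
match = re.search(r"<title>(.*?)</title>", source, re.S)
if match:
    print(match.group(1))  # → bilibili
```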