While learning to write crawlers, I occasionally ran into two problems: some sites require you to log in before their content can be crawled, and some sites check whether a request was actually issued by a browser.
First, obtaining the headers
Take the cnblogs homepage as an example: http://www.cnblogs.com/
Open the page and press F12 to bring up the developer tools, as shown:
Click the Network tab, as shown:
Then click the location shown below:
Find the entry marked by the red underline and click it; the headers information you need appears in the panel on the right.
Generally it is enough to add just the User-Agent entry; headers is also a dictionary:

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36'}
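As a minimal sketch, the User-Agent copied from the browser can be attached to a request with the standard library's urllib. The request object below is only built, not sent; calling urlopen on it would fetch the page:

```python
import urllib.request

# User-Agent string copied from the browser's Network tab
headers = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 6.3; WOW64) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/45.0.2454.101 Safari/537.36")
}

# Build a request that carries the browser headers;
# urllib.request.urlopen(req) would actually fetch the page.
req = urllib.request.Request("http://www.cnblogs.com/", headers=headers)
print(req.get_header("User-agent"))  # prints the Chrome User-Agent string
```

With the User-Agent set, the site sees the request as coming from Chrome rather than from Python's default client.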
Second, obtaining the data
Take the cnblogs login page as an example: http://passport.cnblogs.com/user/signin?ReturnUrl=http%3A%2F%2Fwww.cnblogs.com%2F
Press F12 as shown below:
Click Network, then enter your username and password and click Login; you will see the following:
The login data for cnblogs looks like this:
data = {"input1": "*******", "input2": "*******", "remember": "false"}
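Before this dictionary can be sent as a POST body it must be URL-encoded and converted to bytes. A small sketch with the standard library (the field values here are the same placeholders as above, not real credentials):

```python
import urllib.parse

# Placeholder values; the real field names and values come from
# the browser's Network tab on the login page.
data = {"input1": "*******", "input2": "*******", "remember": "false"}

# POST bodies must be bytes; urlencode turns the dict into
# "input1=...&input2=...&remember=false"
post_body = urllib.parse.urlencode(data).encode("utf-8")
print(post_body)
```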
Take the VeryCD download site as another example: http://secure.verycd.com/signin?error_code=emptyInput&continue=http://www.verycd.com/
Here the data information is shown in the Form Data tab:
data = {"username": "****", "password": "****", "continue": "http://www.verycd.com/", "fk": "", "login_submit": "login"}
The data fields are not the same for every login site; you need to inspect each site's login page to find them.
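Putting the two pieces together, a login request carries both the browser headers and the encoded form data. The sketch below builds such a request against the cnblogs login URL from above with placeholder field values; it is only constructed, not sent:

```python
import urllib.parse
import urllib.request

# Login URL from the browser's address bar; field names and values
# are placeholders -- substitute what you see in the Network tab.
url = "http://passport.cnblogs.com/user/signin"
headers = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 6.3; WOW64) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/45.0.2454.101 Safari/537.36")
}
data = {"input1": "*******", "input2": "*******", "remember": "false"}

# Supplying data= makes urllib issue a POST instead of a GET;
# urllib.request.urlopen(req) would perform the actual login attempt.
req = urllib.request.Request(
    url,
    data=urllib.parse.urlencode(data).encode("utf-8"),
    headers=headers,
)
print(req.get_method())  # prints: POST
```

Note that sites which issue session cookies on login also need a cookie handler (e.g. http.cookiejar) to stay logged in across requests; that is beyond this post's scope.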
That's all for today. Tomorrow's example: how to crawl jokes from Qiushibaike.
If you reproduce this post, please credit the original author and source: maple2cat | Python Crawler Learning: Part Four, Obtaining headers and data