I. Disguising the browser
Some sites that require login will not respond if the request does not come from a browser, so we need to disguise the crawler's requests as those of a regular browser.
Implementation: a custom request header.
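To see why the disguise is needed, note that `urllib` identifies itself with its own default User-Agent, which sites can easily detect and block. A minimal sketch (the exact version suffix in the default string depends on your Python install):

```python
import urllib.request

# By default, urllib announces itself as "Python-urllib/3.x",
# which many sites block or treat differently from a real browser.
opener = urllib.request.build_opener()
default_ua = dict(opener.addheaders).get('User-agent')
print(default_ua)  # e.g. "Python-urllib/3.10"

# Disguising the crawler simply means replacing this value with a
# real browser's User-Agent string before sending the request.
```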
II. Using Fiddler to view request and response headers
Open Fiddler, then use a browser to visit "https://www.douban.com/". In Fiddler's left-hand session list, find the entry for "https://www.douban.com" and click it to view the contents of its request and response headers:
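If you don't have Fiddler at hand, a quick code-only way to see exactly which headers a client sends is to run a tiny local echo server and point the crawler (or a browser) at it. This is an illustrative alternative to the Fiddler step, not part of the original tutorial:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

seen = []  # headers of every request received, lower-cased keys

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        print(self.headers)  # the raw request headers, Fiddler-style
        seen.append({k.lower(): v for k, v in self.headers.items()})
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence default access logging
        pass

# port 0 lets the OS pick a free port
server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_port
body = urllib.request.urlopen(url).read()
server.shutdown()
```

Running this shows the telltale `User-agent: Python-urllib/3.x` header that an undisguised crawler sends.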
III. Visiting Douban
We build a custom request header with the same content as the browser's request headers:
```python
"""
Disguise the browser: for some sites that require login, you will not
get a response if the request does not come from a browser. So we need
to disguise the crawler's requests as those of a regular browser.
Implementation: a custom request header.
"""
# Example 2: crawl Douban again, this time disguised as a browser
import urllib.request

# function to save the crawled data to a file
def saveFile(data):
    path = "E:\\projects\\spider\\02_douban.out"
    f = open(path, 'wb')
    f.write(data)
    f.close()

# target URL
url = "https://www.douban.com/"
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/51.0.2704.63 Safari/537.36'}
req = urllib.request.Request(url=url, headers=headers)
res = urllib.request.urlopen(req)
data = res.read()
# the crawled content can also be saved to a file
saveFile(data)
data = data.decode('utf-8')
# print the crawled content
print(data)
# print various information about the crawled page
print(type(res))
print(res.geturl())
print(res.info())
print(res.getcode())
```
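The same request can also be written with context managers and basic error handling, which is the more idiomatic modern style. This is a sketch of an equivalent variant, not the original author's code; the `timeout` value is an assumption:

```python
import urllib.error
import urllib.request

url = "https://www.douban.com/"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) "
                         "AppleWebKit/537.36 (KHTML, like Gecko) "
                         "Chrome/51.0.2704.63 Safari/537.36"}
req = urllib.request.Request(url=url, headers=headers)
try:
    # the with-block closes the connection automatically
    with urllib.request.urlopen(req, timeout=10) as res:
        data = res.read()
        print(res.getcode())  # 200 on success
except urllib.error.HTTPError as e:
    # the server answered but refused the request (e.g. 403)
    print("server refused:", e.code)
except urllib.error.URLError as e:
    # DNS failure, no network, timeout, etc.
    print("network problem:", e.reason)
```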
IV. Output (excerpt)
Result file contents
GitHub Code
Python3 Crawler Example (II): Disguising the Browser