urllib is Python's module for fetching URLs (Uniform Resource Locators); we can use it to fetch remote data and save it.
1. Basic methods
urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)
- url: the URL to open
- data: the data submitted with a POST request
- timeout: the timeout for accessing the site
Fetch the page directly with urlopen() from the urllib.request module. The returned page data is of type bytes and must be converted to str with decode().
from urllib import request

response = request.urlopen(r'http://python.org/')  # <http.client.HTTPResponse object at 0x00000000048BC908>, an HTTPResponse
page = response.read()
page = page.decode('utf-8')
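The timeout parameter listed above is not exercised in this snippet; below is a minimal sketch of how a timed-out request could be handled (the 5-second value and the printed messages are illustrative, not from the original article):

from urllib import request, error
import socket

try:
    page = request.urlopen(r'http://python.org/', timeout=5)  # give up after 5 seconds
    print(page.read().decode('utf-8')[:100])                  # show the first 100 characters
except socket.timeout:
    print('request timed out while reading')
except error.URLError as e:
    # a connection that times out surfaces as URLError wrapping socket.timeout
    if isinstance(e.reason, socket.timeout):
        print('request timed out while connecting')
    else:
        raise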
The object returned by urlopen() provides the following methods:
- read(), readline(), readlines(), fileno(), close(): operate on the HTTPResponse data
- info(): returns an HTTPMessage object carrying the headers returned by the remote server
- getcode(): returns the HTTP status code; for an HTTP request, 200 means the request completed successfully and 404 means the URL was not found
- geturl(): returns the URL that was requested
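A short sketch of these methods in use, assuming the same python.org request as above (the commented values are examples, not captured output):

from urllib import request

response = request.urlopen(r'http://python.org/')
print(response.getcode())   # HTTP status code, e.g. 200
print(response.geturl())    # the URL that was actually fetched, after any redirects
print(response.info())      # HTTPMessage holding the response headers
print(response.readline())  # one line of the body, as bytes
response.close()            # release the connection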
2. Using Request
urllib.request.Request(url, data=None, headers={}, method=None)
Wrap the request with Request(), then fetch the page through urlopen().
from urllib import request

url = r'http://www.lagou.com/zhaopin/Python/?labelWords=label'
headers = {
    'User-Agent': r'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                  r'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
    'Referer': r'http://www.lagou.com/zhaopin/Python/?labelWords=label',
    'Connection': 'keep-alive'
}
req = request.Request(url, headers=headers)
page = request.urlopen(req).read()
page = page.decode('utf-8')
Data used to wrap the headers:
- User-Agent: carries information such as the browser name and version, the operating system name and version, and the default language
- Referer: can be used to prevent hotlinking; some sites only serve images when the request comes from http://***.com and verify this by checking the Referer
- Connection: indicates the state of the connection and records the session state
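Instead of passing the whole headers dict to Request(), the same headers can also be attached one at a time with Request.add_header(); a minimal sketch reusing the url and headers variables from the snippet above:

req = request.Request(url)
req.add_header('User-Agent', headers['User-Agent'])
req.add_header('Referer', headers['Referer'])
req.add_header('Connection', 'keep-alive')
page = request.urlopen(req).read().decode('utf-8')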
3. POST data
urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)
The data parameter of urlopen() defaults to None; when data is not None, urlopen() submits the request as a POST.
from urllib import request, parse

url = r'http://www.lagou.com/jobs/positionAjax.json?'
headers = {
    'User-Agent': r'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                  r'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
    'Referer': r'http://www.lagou.com/zhaopin/Python/?labelWords=label',
    'Connection': 'keep-alive'
}
data = {
    'first': 'true',
    'pn': 1,
    'kd': 'Python'
}
data = parse.urlencode(data).encode('utf-8')
req = request.Request(url, headers=headers, data=data)
page = request.urlopen(req).read()
page = page.decode('utf-8')
urllib.parse.urlencode(query, doseq=False, safe='', encoding=None, errors=None)
The main purpose of urlencode() is to attach the data to be submitted to the URL.
data = {
    'first': 'true',
    'pn': 1,
    'kd': 'Python'
}
data = parse.urlencode(data).encode('utf-8')
After urlencode(), the data becomes first=true&pn=1&kd=Python, and the final submitted URL is
http://www.lagou.com/jobs/positionAjax.json?first=true&pn=1&kd=Python
The data for a POST must be bytes or an iterable of bytes, not str, which is why encode() is needed.
page = request.urlopen(req, data=data).read()
Of course, the data can also be passed directly as a parameter of urlopen(), as in the line above.
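For comparison, a minimal sketch (reusing the headers dict above) of submitting the same fields as a GET request: the urlencode() result stays a str and is appended to the URL instead of being passed through data, so no encode() to bytes is needed:

query = parse.urlencode({'first': 'true', 'pn': 1, 'kd': 'Python'})
get_url = r'http://www.lagou.com/jobs/positionAjax.json?' + query
req = request.Request(get_url, headers=headers)
page = request.urlopen(req).read().decode('utf-8')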
4. Exception Handling
from urllib import request, parse, error

def get_page(url):
    headers = {
        'User-Agent': r'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                      r'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
        'Referer': r'http://www.lagou.com/zhaopin/Python/?labelWords=label',
        'Connection': 'keep-alive'
    }
    data = {
        'first': 'true',
        'pn': 1,
        'kd': 'Python'
    }
    data = parse.urlencode(data).encode('utf-8')
    req = request.Request(url, headers=headers)
    try:
        page = request.urlopen(req, data=data).read()
        page = page.decode('utf-8')
    except error.HTTPError as e:
        print(e.code)
        print(e.read().decode('utf-8'))
    return page
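The snippet above only catches HTTPError; a sketch that also handles network-level failures is shown below. URLError covers DNS and connection problems, while HTTPError (its subclass) covers 4xx/5xx responses, so the more specific exception is caught first. The fetch() name and the 10-second timeout are illustrative, not from the original article:

from urllib import request, error

def fetch(url):
    try:
        return request.urlopen(url, timeout=10).read().decode('utf-8')
    except error.HTTPError as e:
        print('HTTP error:', e.code, e.reason)           # e.g. 404 Not Found
    except error.URLError as e:
        print('failed to reach the server:', e.reason)   # e.g. DNS failure or refused connection
    return None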
5. Using a proxy
urllib.request.ProxyHandler(proxies=None)
When the site to be crawled has access restrictions, a proxy is needed to fetch the data.
data = {
    'first': 'true',
    'pn': 1,
    'kd': 'Python'
}
proxy = request.ProxyHandler({'http': '5.22.195.215:80'})  # set the proxy
opener = request.build_opener(proxy)                       # build an opener that uses the proxy
request.install_opener(opener)                             # install the opener globally
data = parse.urlencode(data).encode('utf-8')
page = opener.open(url, data).read()
page = page.decode('utf-8')
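If the proxy should apply only to particular requests, install_opener() can be skipped and the opener used directly; a minimal sketch, reusing the (purely illustrative) proxy address above and also mapping https as an assumption:

proxy = request.ProxyHandler({'http': '5.22.195.215:80',
                              'https': '5.22.195.215:80'})
opener = request.build_opener(proxy)            # only requests made through this opener use the proxy
page = opener.open(url).read().decode('utf-8')
# plain request.urlopen() calls are unaffected because install_opener() was not called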
6. Practice
# -*- coding: utf-8 -*-
import urllib.request
import re
import os

targetDir = r"D:\python\test\spider\img\douban"  # file save path

def destFile(path):
    if not os.path.isdir(targetDir):
        os.mkdir(targetDir)
    pos = path.rindex('/')
    t = os.path.join(targetDir, path[pos+1:])
    return t

if __name__ == "__main__":  # program entry point
    weburl = "http://www.douban.com/"
    webheaders = {
        'Connection': 'keep-alive',
        'Accept': 'text/html, application/xhtml+xml, */*',
        'Accept-Language': 'en-US,en;q=0.8,zh-Hans-CN;q=0.5,zh-Hans;q=0.3',
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko',
        # 'Accept-Encoding': 'gzip, deflate',
        'Host': 'www.douban.com',
        'DNT': '1'
    }
    req = urllib.request.Request(url=weburl, headers=webheaders)  # construct the request
    webpage = urllib.request.urlopen(req)                         # send the request
    contentBytes = webpage.read()
    contentBytes = contentBytes.decode('UTF-8')
    for link, t in set(re.findall(r'(https:[^\s]*?(jpg|png|gif))', str(contentBytes))):  # regular expression finds all the pictures
        print(link)
        try:
            urllib.request.urlretrieve(link, destFile(link))  # download the image
        except:
            print('failed')  # exception thrown
When the script runs, it prints each matched image link as it is downloaded. Open the corresponding folder on your computer to see the downloaded pictures; only a part of them is shown here.