Python crawler: getting to know urllib/urllib2 and requests


First of all, my crawler environment is based on Python 2.x. Why this version? Because Python 2.x is still widely supported and is what most crawler code assumes; the basics carry over to Python 3.x with little trouble. OK, let's cut to the chase!

urllib and urllib2

urllib and urllib2 are both built into Python. For making HTTP requests, urllib2 does most of the work, with urllib playing a supporting role (for example, urllib.urlencode for encoding form data, which urllib2 does not provide).

Building a request and response model

import urllib2

strUrl = "http://www.baidu.com"
response = urllib2.urlopen(strUrl)
print response.read()
Output (abridged) — the navigation-bar markup at the top of the Baidu homepage:

<div class="s_tab" id="s_tab"><b>Web</b><a href="http://news.baidu.com/...">News</a><a href="http://tieba.baidu.com/...">Tieba</a><a href="http://zhidao.baidu.com/...">Zhidao</a> ... followed by links to Music, Pictures, Video, Map, Wenku, and More ... </div>

This will get the entire page content.
Description
urlopen(url, data, timeout)

    • The first parameter, url, must be passed; the second, data, is the data to transmit when accessing the URL (supplying it makes the request a POST); the third, timeout, sets the timeout period in seconds. The last two parameters are optional (see the sketch below).
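A minimal sketch of passing both optional arguments; http://httpbin.org/post is used here purely as a stand-in echo endpoint, not part of the original example:

import urllib
import urllib2

# supplying data turns the request into a POST; the third argument is the timeout in seconds
data = urllib.urlencode({'key': 'value'})
response = urllib2.urlopen('http://httpbin.org/post', data, 5)
print response.read()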

Transferring data with GET and POST
GET and POST are the two most commonly used ways of transferring data; for most crawling work, mastering these two is enough.

Transferring data with GET

import urllib
import urllib2

values = {}
values['username'] = '136xxxx0839'
values['password'] = '123xxx'
data = urllib.urlencode(values)  # note: the dict must be encoded into query-string format here
url = 'https://accounts.douban.com/login?alias=&redir=https%3A%2F%2Fwww.douban.com%2F&source=index_nav&error=1001'
getUrl = url + '?' + data
request = urllib2.Request(getUrl)
response = urllib2.urlopen(request)
# print response.read()
print getUrl

Output:
https://accounts.douban.com/login?alias=&redir=https%3A%2F%2Fwww.douban.com%2F&source=index_nav&error=1001?username=136xxxx0839&password=123xxx

(Note the second '?' in the output: this base URL already carries a query string, so in practice the parameters should be joined with '&' instead.)

Transferring data with POST

import urllib
import urllib2

values = {}
values['username'] = '136xxxx0839'
values['password'] = '123xxx'
data = urllib.urlencode(values)
url = 'https://accounts.douban.com/login?alias=&redir=https%3A%2F%2Fwww.douban.com%2F&source=index_nav&error=1001'
request = urllib2.Request(url, data)
response = urllib2.urlopen(request)
print response.read()

The difference between the two request methods: with POST, the data is passed as the second argument of urllib2.Request(url, data) and travels in the request body, whereas with GET it is appended to the URL and visible in the query string.

Handling request headers
Much of the time the server will verify that a request comes from a browser, so we need to disguise our request as a browser request. In general it is best to masquerade as a browser when making a request, to avoid errors such as denied access; this also counters basic anti-crawler measures.

user_agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.4295.400 QQBrowser/9.7.12661.400'
header = {'User-Agent': user_agent}
url = 'http://www.qq.com/'
request = urllib2.Request(url, headers=header)
response = urllib2.urlopen(request)
# Note: the content read back must be decoded; check the page's charset first (qq.com uses gbk).
print response.read().decode('gbk')

Open www.qq.com in a browser and press F12 to view the User-Agent:

User-Agent: some servers or proxies use this value to determine whether the request was made by a browser.
Content-Type: when calling a REST interface, the server checks this value to decide how the content in the HTTP body should be parsed. Common values:
application/xml: used in XML RPC and RESTful/SOAP calls
application/json: used in JSON RPC calls
application/x-www-form-urlencoded: used when the browser submits a web form
A wrong Content-Type when using a RESTful or SOAP service provided by the server will cause the server to refuse the request; a sketch of setting it correctly follows.
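A minimal sketch of sending JSON with an explicit Content-Type through urllib2 (http://httpbin.org/post is an assumed stand-in endpoint):

import json
import urllib2

payload = json.dumps({'name': 'test'})  # serialize the body to match the declared type
request = urllib2.Request('http://httpbin.org/post', payload,
                          headers={'Content-Type': 'application/json'})
response = urllib2.urlopen(request)
print response.read()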

Requests

requests is the most commonly used HTTP request library for Python, and it is also extremely simple to use. Before using it you first need to install it, for example with pip install requests (PyCharm can also install it with one click).

1. Response and encoding
import requests

url = 'http://www.baidu.com'
r = requests.get(url)
print type(r)
print r.status_code
print r.encoding
# print r.content
print r.cookies

Output:
<class 'requests.models.Response'>
200
ISO-8859-1
<RequestsCookieJar[<Cookie BDORZ=27315 for .baidu.com/>]>
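The page itself is UTF-8, but requests falls back to ISO-8859-1 when the Content-Type header carries no charset, so r.text would come out garbled; a small sketch of overriding the guess before reading the text:

import requests

r = requests.get('http://www.baidu.com')
r.encoding = 'utf-8'  # override the guessed ISO-8859-1 before touching r.text
print r.text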
2. GET request method
values = {'user': 'aaa', 'id': '123'}
url = 'http://www.baidu.com'
r = requests.get(url, params=values)  # the params are encoded into the query string
print r.url

Output:
http://www.baidu.com/?user=aaa&id=123
3. POST request method
values = {'user': 'aaa', 'id': '123'}
url = 'http://www.baidu.com'
r = requests.post(url, data=values)  # the data travels in the request body
print r.url
# print r.text

Output:
http://www.baidu.com/
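To confirm that the POST data really went into the body rather than the URL, you can inspect the prepared request that was actually sent (r.request is the PreparedRequest object from the requests API):

import requests

r = requests.post('http://www.baidu.com', data={'user': 'aaa', 'id': '123'})
print r.url           # no query string appended
print r.request.body  # the urlencoded form data, e.g. user=aaa&id=123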
4. Request header processing
user_agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.4295.400 QQBrowser/9.7.12661.400'
header = {'User-Agent': user_agent}
url = 'http://www.baidu.com/'
r = requests.get(url, headers=header)
print r.content
5. Response code and response header processing
url = 'http://www.baidu.com'
r = requests.get(url)
if r.status_code == requests.codes.ok:
    print r.status_code
    print r.headers
    print r.headers.get('content-type')  # recommended: use get() to read a header field
else:
    r.raise_for_status()

Output:
200
{'Content-Encoding': 'gzip', 'Transfer-Encoding': 'chunked', 'Set-Cookie': 'BDORZ=27315; max-age=86400; domain=.baidu.com; path=/', 'Server': 'bfe/1.0.8.18', 'Last-Modified': 'Mon, 23 Jan 2017 13:27:57 GMT', 'Connection': 'Keep-Alive', 'Pragma': 'no-cache', 'Cache-Control': 'private, no-cache, no-store, proxy-revalidate, no-transform', 'Date': 'Wed, 17 Jan 2018 07:21:21 GMT', 'Content-Type': 'text/html'}
text/html
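The raise_for_status() branch only fires on an error status; a quick sketch of what it raises, using httpbin.org's status endpoint as an assumed stand-in for a failing URL:

import requests

try:
    bad = requests.get('http://httpbin.org/status/404')
    bad.raise_for_status()  # turns a 4xx/5xx status into an exception
except requests.exceptions.HTTPError as e:
    print e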
6. Cookie processing
url = 'https://www.zhihu.com/'
r = requests.get(url)
print r.cookies
print r.cookies.keys()

Output:
<RequestsCookieJar[<Cookie aliyungf_tc=AQAAACYMglZy2QsAEnaG2yYR0vrtlxfz for www.zhihu.com/>]>
['aliyungf_tc']
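Cookies can also be sent explicitly, and a Session will carry server-set cookies across requests automatically (httpbin.org is an assumed echo endpoint here):

import requests

# send a cookie by hand on a single request
r = requests.get('http://httpbin.org/cookies', cookies={'token': 'abc123'})

# a Session stores cookies set by the server and replays them on later requests
s = requests.Session()
s.get('http://www.baidu.com')
print s.cookies.keys()  # e.g. ['BDORZ']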
7. Redirects and request history

Handling redirection only requires the allow_redirects parameter: set it to True to allow redirection, or to False to forbid it.

url = 'http://www.baidu.com'
r = requests.get(url, allow_redirects=True)
print r.url
print r.status_code
print r.history

Output:
http://www.baidu.com/
200
[]
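The history is empty above because this URL does not redirect; a sketch against one that does (github.com answers plain HTTP with a 301 to HTTPS):

import requests

r = requests.get('http://github.com', allow_redirects=True)
print r.url      # https://github.com/
print r.history  # [<Response [301]>]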
8. Timeout settings

The timeout is set with the timeout parameter, in seconds:

url = 'http://www.baidu.com'
r = requests.get(url, timeout=2)
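When the timeout is exceeded, requests raises an exception you can catch (the deliberately tiny value below exists only to force the error):

import requests

try:
    r = requests.get('http://www.baidu.com', timeout=0.001)
except requests.exceptions.Timeout:
    print 'request timed out'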
9. Proxy settings
# A dict holds only one value per key, so each URL scheme maps to exactly one proxy.
# The addresses below are placeholders; substitute a real proxy server.
proxies = {
    'http': 'http://www.baidu.com',
    'https': 'http://www.qq.com',
}
url = 'http://www.baidu.com'
r = requests.get(url, proxies=proxies)
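If the proxy requires HTTP basic authentication, requests accepts the user:password@host form (the host and credentials here are hypothetical):

import requests

proxies = {'http': 'http://user:pass@10.10.1.10:3128/'}
r = requests.get('http://www.baidu.com', proxies=proxies)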

