Examples of using Python's requests library to write crawlers

Basic GET Request:

```python
# -*- coding: utf-8 -*-
import requests

url = 'http://www.baidu.com'
r = requests.get(url)
print(r.text)
```
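One pitfall worth noting: requests needs a full URL including the scheme, which is why `http://` is spelled out above. A bare host such as `'www.baidu.com'` is rejected while the request is being prepared, before any network traffic is sent. A small sketch:

```python
import requests

# requests validates the URL while preparing the request, so a
# scheme-less URL fails before any connection is opened.
try:
    requests.get('www.baidu.com')
    error_name = None
except requests.exceptions.MissingSchema as exc:
    error_name = type(exc).__name__
    print(error_name)
```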

GET request with parameters:

```python
# -*- coding: utf-8 -*-
import requests

url = 'http://www.baidu.com'
payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.get(url, params=payload)
print(r.text)
```
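What `params` actually does is URL-encode the dict into the query string. Since the example above hits a live site, here is an offline way to see this: prepare the request without sending it and inspect the final URL.

```python
import requests

payload = {'key1': 'value1', 'key2': 'value2'}
# Prepare the request without sending it; the payload is encoded
# into the query string of the final URL.
prepped = requests.Request('GET', 'http://www.baidu.com', params=payload).prepare()
print(prepped.url)
```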

A POST request that simulates a login, plus some methods of the response object:

```python
# -*- coding: utf-8 -*-
import requests

url1 = 'http://www.example.com/login'  # login address
url2 = 'http://www.example.com/main'   # page that requires login to access
data = {'user': 'user', 'password': 'pass'}
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml',
    'Accept-Encoding': 'gzip',
    'Accept-Language': 'zh-CN,zh;q=0.8',
    'Referer': 'http://www.example.com/',
    'User-Agent': ('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                   '(KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36'),
}
res1 = requests.post(url1, data=data, headers=headers)
res2 = requests.get(url2, cookies=res1.cookies, headers=headers)

print(res2.content)                        # binary response content
print(res2.raw)                            # raw response; requires stream=True on the request
print(res2.raw.read())
print(type(res2.text))                     # content decoded to text (unicode)
print(res2.url)
print(res2.history)                        # tracks redirects
print(res2.cookies)
print(res2.cookies['example_cookie_name'])
print(res2.headers)
print(res2.headers['Content-Type'])
print(res2.headers.get('Content-Type'))
print(res2.json())                         # content decoded as JSON
print(res2.encoding)                       # response encoding
print(res2.status_code)                    # HTTP status code
res2.raise_for_status()                    # raises an exception for error status codes
```
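Of the methods listed above, `raise_for_status()` is the one that changes control flow: it does nothing for 2xx responses and raises `requests.HTTPError` otherwise. A minimal offline sketch, using a hand-built `Response` object purely for demonstration (in real code it comes back from `requests.get()`/`post()`):

```python
import requests
from requests.models import Response

# Build a Response by hand just to show the behaviour.
resp = Response()
resp.status_code = 404

try:
    resp.raise_for_status()
    outcome = 'ok'
except requests.HTTPError:
    outcome = 'HTTPError'
print(outcome)

# A 2xx status raises nothing.
resp.status_code = 200
resp.raise_for_status()
```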

Using a Session object with prepared requests:

```python
# -*- coding: utf-8 -*-
import requests

s = requests.Session()
url1 = 'http://www.example.com/login'  # login address
url2 = 'http://www.example.com/main'   # page that requires login to access
data = {'user': 'user', 'password': 'pass'}
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml',
    'Accept-Encoding': 'gzip',
    'Accept-Language': 'zh-CN,zh;q=0.8',
    'Referer': 'http://www.example.com/',
    'User-Agent': ('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                   '(KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36'),
}

prepped1 = requests.Request('POST', url1, data=data, headers=headers).prepare()
s.send(prepped1)

# Equivalently:
# req = requests.Request('POST', url1, data=data, headers=headers)
# prepared = s.prepare_request(req)
# # do something with prepared.body
# # do something with prepared.headers
# s.send(prepared)

prepped2 = requests.Request('POST', url2, headers=headers).prepare()
res2 = s.send(prepped2)
print(res2.content)
```
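The point of the prepared-request style is the window between `.prepare()` and `s.send()`: the body is already form-encoded and the headers are a mutable dict, so both can be inspected or rewritten before anything goes over the wire. A sketch (nothing is actually sent; the `X-Trace-Id` header is a made-up example):

```python
import requests

req = requests.Request('POST', 'http://www.example.com/login',
                       data={'user': 'user'},
                       headers={'User-Agent': 'demo/1.0'})
prepped = req.prepare()

print(prepped.body)                        # already form-encoded: user=user
prepped.headers['X-Trace-Id'] = 'abc123'   # hypothetical extra header
print(prepped.headers['X-Trace-Id'])
# ...then s.send(prepped) would transmit the modified request.
```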

Another way to do this:

```python
# -*- coding: utf-8 -*-
import requests

s = requests.Session()
url1 = 'http://www.example.com/login'  # login address
url2 = 'http://www.example.com/main'   # page that requires login to access
data = {'user': 'user', 'password': 'pass'}
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml',
    'Accept-Encoding': 'gzip',
    'Accept-Language': 'zh-CN,zh;q=0.8',
    'Referer': 'http://www.example.com/',
    'User-Agent': ('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                   '(KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36'),
}
res1 = s.post(url1, data=data, headers=headers)
res2 = s.get(url2)
print(res2.content)
```

Session API: some other request methods:

```python
>>> r = requests.put('http://httpbin.org/put')
>>> r = requests.delete('http://httpbin.org/delete')
>>> r = requests.head('http://httpbin.org/get')
>>> r = requests.options('http://httpbin.org/get')
```
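What makes the Session approach work is cookie persistence: cookies stored on the session (for example by the login response) are attached automatically to every later request, so there is no need to pass `cookies=res1.cookies` by hand. An offline sketch; the cookie name `sessionid` is made up for illustration:

```python
import requests

s = requests.Session()
# Pretend a login response set this cookie (the name is hypothetical).
s.cookies.set('sessionid', 'abc123')

# When the session prepares the next request, its cookie jar is
# merged into the Cookie header automatically.
prepped = s.prepare_request(requests.Request('GET', 'http://www.example.com/main'))
print(prepped.headers.get('Cookie'))
```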

Problems encountered:

Running the script under the Windows cmd console produced a minor error:

UnicodeEncodeError: 'gbk' codec can't encode character u'\xbb' in position 23460: illegal multibyte sequence

Analysis:

1. Is this an encoding error or a decoding error?

UnicodeEncodeError

The exception type makes it obvious: the error occurred while encoding.

2. Which codec was used?

'gbk' codec can't encode character

So the error occurred while encoding with GBK, which is typically the default console encoding on Chinese-language Windows.

Workaround:

First check the encoding of the response content, for example:

```python
# -*- coding: utf-8 -*-
import requests

url = 'http://www.baidu.com'
r = requests.get(url)
print(r.encoding)
# utf-8
```

Having determined that the HTML is UTF-8 text, you can encode it to UTF-8 bytes yourself and write those bytes directly, bypassing the console's GBK codec:

```python
import sys

sys.stdout.buffer.write(r.text.encode('utf-8'))
```
