Python requests installation and simple application

Requests is a Python HTTP client library, similar to urllib and urllib2. Why use requests instead of urllib2? The official documentation puts it this way:

Python's standard library urllib2 provides most of the HTTP functionality you need, but its API is thoroughly counterintuitive, and even a simple task takes a lot of code.

I also read through the requests documentation, and it really is very simple, well suited to lazy people like me. Below is a short guide.

Good news! I just noticed that requests has a Chinese translation. If your English is shaky, take a look; the content there is better than this blog anyway. The link: http://cn.python-requests.org/en/latest/ (it covers v1.1.0; also, apologies for the earlier wrong link).

1. Installation

Installation is very simple. I am on Windows: download the installation package here (the Zipball link on the page), then run python setup.py install.

Of course, if you have easy_install or pip you can use them directly: easy_install requests or pip install requests.
Linux users will find other installation methods on that page as well.

Test: enter import requests in IDLE; if no error is raised, the installation succeeded!
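To double-check which version you got, the module exposes a version string (a quick sketch; judging by the User-Agent shown in section 3.10, mine is 1.2.3):

>>> import requests
>>> requests.__version__
'1.2.3'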

2. A Quick Taste

>>> import requests
>>> r = requests.get('http://www.zhidaow.com')  # send the request
>>> r.status_code                               # status code
200
>>> r.headers['content-type']                   # response header
'text/html; charset=utf8'
>>> r.encoding                                  # encoding
'utf-8'
>>> r.text                                      # the body (PS: due to encoding problems, r.content is recommended here)
u'\n...'

Isn't that simple? Much more simple and intuitive than urllib2 and urllib, right? Then read on for the quick guide.

3. Quick Guide

3.1 Sending requests

Sending a request is simple. First, import the requests module:

>>> import requests

Next, let's get a webpage, such as the homepage of my personal blog:

>>> r = requests.get('http://www.zhidaow.com')

Next, we can use the various properties and methods of r.

In addition, there are many other types of HTTP request, such as POST, PUT, DELETE, HEAD and OPTIONS, and they can all be sent in the same way:

>>> r = requests.post("http://httpbin.org/post")
>>> r = requests.put("http://httpbin.org/put")
>>> r = requests.delete("http://httpbin.org/delete")
>>> r = requests.head("http://httpbin.org/get")
>>> r = requests.options("http://httpbin.org/get")

Because I have no use for these at the moment, I haven't gone into them.
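If you do need one of them later, the pattern is the same; here is a minimal sketch of a POST carrying form data (httpbin.org simply echoes back what it receives):

>>> r = requests.post("http://httpbin.org/post", data={'key': 'value'})
>>> r.status_code
200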

3.2 Passing parameters in URLs

Sometimes we need to pass parameters in the URL. For example, when scraping Baidu search results we need the wd parameter (the search term) and the rn parameter (the number of results). You could compose the URL by hand, but requests provides a very slick method:

>>> payload = {'wd': '张亚楠', 'rn': '100'}
>>> r = requests.get("http://www.baidu.com/s", params=payload)
>>> r.url
u'http://www.baidu.com/s?rn=100&wd=%E5%BC%A0%E4%BA%9A%E6%A5%A0'

The encoded wd= string above is the URL-encoded form of "张亚楠" (Zhang Yanan). (It also seems that the parameters are sorted by their first letter.)
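For comparison, this is the manual composition that params saves you from, a sketch using urllib.urlencode from the Python 2 standard library (here the dict's iteration order, not requests, decides the parameter order):

# -*- coding: utf-8 -*-
import urllib

payload = {'wd': '张亚楠', 'rn': '100'}
url = 'http://www.baidu.com/s?' + urllib.urlencode(payload)
# e.g. http://www.baidu.com/s?rn=100&wd=%E5%BC%A0%E4%BA%9A%E6%A5%A0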

3.3 Getting response content

You can use r.text to get the contents of a web page.

>>> r = requests.get('https://www.zhidaow.com')
>>> r.text
u'\n...'

The documentation says that requests automatically decodes the content, and most Unicode charsets are decoded seamlessly. But whenever I use it under Cygwin I get a UnicodeEncodeError, which is depressing; in Python's IDLE it works perfectly.
In addition, the raw page content can be obtained through r.content.

>>> r = requests.get('https://www.zhidaow.com')
>>> r.content
b'\n...'

The documentation says r.content holds the response as bytes, which is why it starts with b in IDLE. Under Cygwin, however, I had no such problem and downloaded pages just fine, so it replaces urllib2's urllib2.urlopen(url).read() function. (This is basically the feature I use most.)
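In other words, the old pattern and its requests equivalent side by side (a small sketch):

import urllib2
import requests

# the old urllib2 way
html = urllib2.urlopen('http://www.zhidaow.com').read()

# the requests way: one call, raw bytes back
html = requests.get('http://www.zhidaow.com').content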

3.4 Get page encoding

You can use r.encoding to get the page encoding.

>>> r = requests.get('http://www.zhidaow.com')
>>> r.encoding
'utf-8'

When you send a request, requests guesses the page encoding from the HTTP headers, and r.text uses that encoding when decoding. Of course, you can also change the encoding requests uses.

>>> r = requests.get('http://www.zhidaow.com')
>>> r.encoding
'utf-8'
>>> r.encoding = 'iso-8859-1'

As in the example above, once you change r.encoding, the new encoding is used whenever you read the page content through r.text afterwards.
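Conceptually, r.text is simply r.content decoded with r.encoding, so the following check should hold whenever the page decodes cleanly (a sketch; requests falls back to other detection when no encoding is declared):

>>> r = requests.get('http://www.zhidaow.com')
>>> r.text == r.content.decode(r.encoding)
True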

3.5 JSON

With urllib and urllib2, working with JSON means importing extra modules such as json or simplejson, but requests has this built in as r.json(). Take an IP-lookup API as an example:

>>> r = requests.get('http://ip.taobao.com/service/getIpInfo.php?ip=122.88.60.28')
>>> r.json()['data']['country']
'China'
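A slightly more defensive version wraps the same lookup in a function; get_country below is my own hypothetical helper, not part of requests:

import requests

def get_country(ip):
    # query the Taobao IP API used above; return None on a bad response
    r = requests.get('http://ip.taobao.com/service/getIpInfo.php?ip=%s' % ip)
    if r.status_code != 200:
        return None
    return r.json()['data']['country']

print get_country('122.88.60.28')  # 'China'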

3.6 Page Status Code

We can use r.status_code to check the status code of a webpage.

>>> r = requests.get('http://www.mengtiankong.com')
>>> r.status_code
200
>>> r = requests.get('http://www.mengtiankong.com/123123/')
>>> r.status_code
404
>>> r = requests.get('http://www.baidu.com/link?url=QeTRFOS7TuUQRppa0wlTJJr6FfIYI1DJprJukx4Qy0XnsDO_s9baoO8u1wvjxgqN')
>>> r.url
u'http://www.zhidaow.com/'
>>> r.status_code
200

The first two examples are normal: a page that opens fine returns 200, and one that does not returns 404. But the third one is a bit strange: it is a 302 redirect address from Baidu's search results, yet the status code shows 200. So I used a trick to make it show its true colours:

>>> r.history
(<Response [302]>,)

Here you can see that it went through a 302 redirect. Some might think you could inspect r.history like this to get the redirect status code, but there is actually a simpler way:

>>> r = requests.get('http://www.baidu.com/link?url=QeTRFOS7TuUQRppa0wlTJJr6FfIYI1DJprJukx4Qy0XnsDO_s9baoO8u1wvjxgqN', allow_redirects=False)
>>> r.status_code
302

Just add the allow_redirects=False parameter to forbid redirects, and the redirect status code comes back directly. Easy to use, isn't it? In the last section I use this to build a small application that fetches page status codes; that is the principle behind it.
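And if you do let requests follow the redirect, r.history keeps the whole chain, so you can print every hop (a small sketch):

r = requests.get('http://www.baidu.com/link?url=QeTRFOS7TuUQRppa0wlTJJr6FfIYI1DJprJukx4Qy0XnsDO_s9baoO8u1wvjxgqN')
for resp in r.history:              # each intermediate response
    print resp.status_code, resp.url
print r.status_code, r.url          # the final destination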

3.7 Response Header Content

The response header content can be obtained through r.headers.

>>> r = requests.get('http://www.zhidaow.com')
>>> r.headers
{'content-encoding': 'gzip', 'transfer-encoding': 'chunked', 'content-type': 'text/html; charset=utf-8', ...}

You can see that everything is returned as a dictionary, so we can also access individual entries:

>>> r.headers['content-type']
'text/html; charset=utf-8'
>>> r.headers.get('content-type')
'text/html; charset=utf-8'

3.8 Setting the time-out period

We can set a timeout with the timeout parameter; if no response has been received within that time, an error is raised.

>>> requests.get('http://github.com', timeout=0.001)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
requests.exceptions.Timeout: HTTPConnectionPool(host='github.com', port=80): Request timed out. (timeout=0.001)
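In a real script you would catch the exception rather than let it crash; a minimal sketch:

import requests

try:
    r = requests.get('http://github.com', timeout=0.001)
except requests.exceptions.Timeout:
    print 'request timed out, retry or give up here'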

3.9 Proxy Access

Proxies are often used when scraping, to avoid getting your IP blocked. Requests has a corresponding proxies parameter.

import requests

proxies = {
    "http": "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:1080",
}
requests.get("http://www.zhidaow.com", proxies=proxies)

If the proxy requires a username and password, use this form:

proxies = {
    "http": "http://user:pass@10.10.1.10:3128/",
}

3.10 Request Header Content

Request header content can be obtained using r.request.headers.

>>> r.request.headers
{'Accept-Encoding': 'identity, deflate, compress, gzip', 'Accept': '*/*', 'User-Agent': 'python-requests/1.2.3 CPython/2.7.3 Windows/XP'}

3.11 Customizing the request header

A disguised request header is often needed when scraping; we can hide ours this way:

r = requests.get('http://www.zhidaow.com')
print r.request.headers['user-agent']
# python-requests/1.2.3 CPython/2.7.3 Windows/XP

headers = {'user-agent': 'alexkh'}
r = requests.get('http://www.zhidaow.com', headers=headers)
print r.request.headers['user-agent']
# alexkh

3.12 Persistent Connection Keep-alive

The keep-alive in requests is based on urllib3, and persistent connections within the same session are fully automatic: all requests in the same session automatically reuse the appropriate connection.

That is, you do not need any settings; requests implements keep-alive automatically.
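If you want to hold such a session explicitly, requests exposes a Session object; both requests below should reuse the same underlying connection (a minimal sketch; the /about path is made up for illustration):

import requests

s = requests.Session()
r1 = s.get('http://www.zhidaow.com')        # the first request opens the connection
r2 = s.get('http://www.zhidaow.com/about')  # hypothetical path; reuses the connection via keep-alive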

4. Simple Application

4.1 Get the page return code

import requests

def get_status(url):
    r = requests.get(url, allow_redirects=False)
    return r.status_code

print get_status('http://www.zhidaow.com')
# 200
print get_status('http://www.zhidaow.com/hi404/')
# 404
print get_status('http://mengtiankong.com')
# 301
print get_status('http://www.baidu.com/link?url=QeTRFOS7TuUQRppa0wlTJJr6FfIYI1DJprJukx4Qy0XnsDO_s9baoO8u1wvjxgqN')
# 302
print get_status('http://www.huiya56.com/com8.intre.asp?46981.html')
# 500

That is the introduction to installing and using Python requests. I hope it helps!
