Python crawler: concise network requests with Python requests

Source: Internet
Author: User

http://blog.csdn.net/pipisorry/article/details/48086195

Requests Introduction

Requests is an HTTP client library for Python, covering the same ground as urllib and urllib2. Python's standard library urllib2 already provides most of the HTTP functionality you need, but its API is clumsy: even a simple task takes a lot of code.

Requests is built on urllib3, so it inherits all of its features. Requests supports HTTP keep-alive and connection pooling, cookie-based session persistence, file uploads, automatic decoding of response content, internationalized URLs, and automatic encoding of POST data. Modern, international, humane.

Requests features
Requests fully meets the needs of today's web:
- Internationalized domain names and URLs
- Keep-alive and connection pooling
- Sessions with persistent cookies
- Browser-style SSL verification
- Basic/Digest authentication
- Elegant key/value cookies
- Automatic decompression
- Unicode response bodies
- Multipart file uploads
- Connection timeouts
- .netrc support
- Python 2.6-3.4
- Thread safety
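Several of the features above (keep-alive, connection pooling, persistent cookies) come together in requests.Session. The sketch below is a minimal offline illustration; the User-Agent string, query params, and cookie values are invented for the example, and no request is actually sent.

```python
import requests

# A Session reuses the underlying TCP connection (keep-alive /
# connection pooling) and carries cookies across requests.
s = requests.Session()
s.headers.update({"User-Agent": "my-crawler/0.1"})  # default header for every request
s.params = {"lang": "en"}                           # default query params, merged per request
s.cookies.set("token", "abc123")                    # cookies persist across requests

print(s.headers["User-Agent"])  # my-crawler/0.1
print(s.cookies.get("token"))   # abc123
```

In real crawler code, every `s.get(...)` / `s.post(...)` call would then share these defaults and the pooled connections.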




Comparing requests with Python's urllib2

Py2:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import urllib2

gh_url = 'https://api.github.com'

req = urllib2.Request(gh_url)

password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(None, gh_url, 'user', 'pass')

auth_manager = urllib2.HTTPBasicAuthHandler(password_manager)
opener = urllib2.build_opener(auth_manager)

urllib2.install_opener(opener)

handler = urllib2.urlopen(req)

print handler.getcode()
print handler.headers.getheader('content-type')

# ------
# 200
# 'application/json'
Requests:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests

r = requests.get('https://api.github.com', auth=('user', 'pass'))

print r.status_code
print r.headers['content-type']

# ------
# 200
# 'application/json'

[Urllib2 vs Requests]



Requests by example

Installation

pip install requests


Basic use

>>> import requests
>>> r = requests.get('http://www.****.com')  # send the request
>>> r.status_code                            # status code
200
>>> r.headers['content-type']                # response headers
'text/html; charset=utf-8'
>>> r.encoding                               # encoding
'utf-8'
>>> r.text                                   # body (use r.content instead if you hit encoding problems)
u'<!DOCTYPE html>\n...
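Beyond checking status_code by hand, a crawler can let requests raise on failures via raise_for_status(). The sketch below builds a Response object by hand (setting status_code directly) purely so it runs offline; in real code resp would come from requests.get(url).

```python
import requests

# raise_for_status() turns any 4xx/5xx response into requests.HTTPError,
# which is usually safer in a crawler than comparing status_code == 200.
resp = requests.Response()   # hand-made response, offline stand-in for requests.get(url)
resp.status_code = 404

try:
    resp.raise_for_status()
    body = resp.text
except requests.HTTPError as err:
    body = None
    print("request failed: %s" % err)

print(body is None)  # True
```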


A variety of different HTTP requests

>>> r = requests.post("http://httpbin.org/post")
>>> r = requests.put("http://httpbin.org/put")
>>> r = requests.delete("http://httpbin.org/delete")
>>> r = requests.head("http://httpbin.org/get")
>>> r = requests.options("http://httpbin.org/get")
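Each of these verb helpers builds a Request object under the hood. One way to see what would be sent, without touching the network, is prepare(); the URL and payload below mirror the httpbin examples above.

```python
import requests

# prepare() constructs the outgoing request (method, URL, encoded body)
# without opening a socket, so this sketch runs offline.
req = requests.Request("POST", "http://httpbin.org/post", data={"key": "value"})
prepared = req.prepare()

print(prepared.method)  # POST
print(prepared.url)     # http://httpbin.org/post
print(prepared.body)    # key=value
```

A Session can then send it with `s.send(prepared)` when you do want the request to go out.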


Request with parameters

>>> payload = {'wd': u'张亚楠', 'rn': '100'}
>>> r = requests.get("http://www.baidu.com/s", params=payload)
>>> print r.url
u'http://www.baidu.com/s?rn=100&wd=%E5%BC%A0%E4%BA%9A%E6%A5%A0'

Note: you do not need to urlencode the params yourself.
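The automatic encoding can be inspected offline with PreparedRequest from requests.models, which builds the final URL without sending anything:

```python
from requests.models import PreparedRequest

# prepare_url applies the same percent-encoding requests.get would use
# for its params argument; no network connection is opened.
p = PreparedRequest()
p.prepare_url("http://www.baidu.com/s", {"wd": u"张亚楠", "rn": "100"})
print(p.url)
# http://www.baidu.com/s?wd=%E5%BC%A0%E4%BA%9A%E6%A5%A0&rn=100
```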


Get JSON results

>>> r = requests.get('...')
>>> r.json()['data']['country']
'China'
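The URL in the article's JSON example is elided, so the sketch below fakes the response instead: the `{"data": {"country": ...}}` shape is assumed from the example output, and `._content` is a private field set here only to keep the sketch offline.

```python
import json
import requests

# r.json() parses the response body as JSON. The Response is hand-made
# so the example runs without a network connection.
resp = requests.Response()
resp.status_code = 200
resp._content = json.dumps({"data": {"country": "China"}}).encode("utf-8")

print(resp.json()["data"]["country"])  # China
```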


Python 3: httplib2

As a brief aside on Python 3, here is another way to make a simple web request:

import httplib2

h = httplib2.Http(".cache")
h.add_credentials('user', 'pass')
r, content = h.request("https://api.github.com", "GET")

print(r['status'])
print(r['content-type'])
Note: this is also only a few lines of code, on par with requests. [Urllib2 vs Requests]


Ref: Requests: HTTP for Humans


Copyright notice: this article is an original work by the author of http://blog.csdn.net/pipisorry and may not be reproduced without permission.
