Python Crawler Development Series (3): Using the requests Request Library

Source: Internet
Author: User

Requests is a practical, simple, and powerful Python HTTP client library, often used when writing crawlers and testing server responses; it fully covers the needs of today's web work. We will start with the most basic GET and POST requests and work step by step toward the advanced features. Learning is a gradual process, and only hands-on practice will make these key points stick.

1. Sending GET and POST Requests

By convention, first import the requests module:

import requests

r = requests.get('https://www.baidu.com')
r = requests.post('http://httpbin.org/post', data={'key': 'value'})

That is all it takes: a single line of code completes a GET or POST request, clean and elegant. Of course, this is just the tip of the iceberg of what requests can do.
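Either call returns a Response object. Before going further, here is a minimal sketch of inspecting that object, using only standard attributes of the requests API:

import requests

r = requests.get('https://www.baidu.com')
print(r.status_code)  # numeric HTTP status, e.g. 200
r.raise_for_status()  # raises requests.exceptions.HTTPError on 4xx/5xx
print(r.encoding)     # the text encoding requests will use for r.text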

2. Passing URL Parameters

In larger projects our URLs are rarely that simple; they often carry many query parameters, usually written into the URL as key-value pairs (taking a GET request as an example), such as https://www.baidu.com/get?key=val&name=myname

requests lets us pass such parameters as a dictionary via the params keyword, for example:

params = {'key1': 'val1', 'key2': 'val2'}
r = requests.get('https://www.baidu.com/get', params=params)
print(r.url)  # prints https://www.baidu.com/get?key1=val1&key2=val2

Besides plain values, we can also pass a list:

params = {'key1': 'val1', 'key2': 'val2', 'list': ['value1', 'value2']}
r = requests.get('https://www.baidu.com/get', params=params)
print(r.url)  # prints https://www.baidu.com/get?key1=val1&key2=val2&list=value1&list=value2

The above demonstrates parameters for a GET request. POST works the same way, with the form data passed via the data keyword: r = requests.post('https://www.baidu.com/get', data=params).
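Note the difference between the two keywords: params builds the query string, while data becomes the form-encoded POST body. A small sketch against http://httpbin.org/post (already used above), which echoes back what it received, makes this visible, assuming the service is reachable:

import requests

r = requests.post('http://httpbin.org/post',
                  params={'key1': 'val1'},  # goes into the query string
                  data={'key2': 'val2'})    # goes into the form body
resp = r.json()
print(resp['args'])  # {'key1': 'val1'}
print(resp['form'])  # {'key2': 'val2'}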

3. Getting the Response Content

r = requests.get('https://www.baidu.com/')

The response body is available as r.text (the content decoded as text), r.content (the raw binary content), or r.json() (the content decoded as JSON). When what you fetch is an image, an MP3, or another file you want to save locally, the operation is simple:

r = requests.get('https://www.baidu.com/fa.ico')
with open('fa.ico', 'wb') as fb:
    fb.write(r.content)
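For JSON APIs, r.json() decodes the body directly into Python objects. A minimal sketch against http://httpbin.org/get, which returns a JSON description of the request it received, assuming the service is reachable:

import requests

r = requests.get('http://httpbin.org/get')
data = r.json()     # decode the JSON body into a Python dict
print(data['url'])  # http://httpbin.org/get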

4. Customizing Request Headers

url = 'https://api.github.com/some/endpoint'
headers = {'user-agent': 'my-app/0.0.1'}
r = requests.get(url, headers=headers)
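Overriding the default User-Agent is one of the most common header customizations in crawler code. To check what was actually sent, http://httpbin.org/headers echoes the request headers back; a sketch, assuming the service is reachable:

import requests

headers = {'user-agent': 'my-app/0.0.1'}
r = requests.get('http://httpbin.org/headers', headers=headers)
print(r.json()['headers']['User-Agent'])  # my-app/0.0.1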

5. Cookies

Reading a cookie from the response:

url = 'http://example.com/some/cookie/setting/url'
r = requests.get(url)
r.cookies['example_cookie_name']

Sending cookies to the server:

url = 'http://httpbin.org/cookies'
cookies = dict(cookies_are='working')
r = requests.get(url, cookies=cookies)
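http://httpbin.org/cookies replies with a JSON object listing the cookies it received, so the round trip can be verified directly; a sketch, assuming the service is reachable:

import requests

r = requests.get('http://httpbin.org/cookies',
                 cookies={'cookies_are': 'working'})
print(r.json())  # {'cookies': {'cookies_are': 'working'}}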

6. Timeouts

r = requests.get('http://github.com', timeout=5)

The timeout applies to the connection process and is not related to downloading the response body. In other words, timeout is not a time limit on the entire response download; rather, an exception is raised if the server has not answered within timeout seconds (more precisely, if no bytes have been received on the underlying socket for timeout seconds).
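In crawler code you usually want to catch the timeout rather than let it crash the program. A minimal sketch; requests.exceptions.Timeout is the exception class requests raises:

import requests

try:
    # timeout may also be a (connect, read) tuple, e.g. timeout=(3.05, 27)
    r = requests.get('http://github.com', timeout=5)
except requests.exceptions.Timeout:
    print('no response within 5 seconds, giving up')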


7. Proxies

import requests

proxies = {
    'http': 'http://10.10.1.10:3128',
    'https': 'http://10.10.1.10:1080',
}

requests.get('http://example.org', proxies=proxies)

If your proxy needs HTTP Basic Auth, you can use the http://user:password@host/ syntax:

proxies = {
    'http': 'http://user:pass@10.10.1.10:3128/',
}

The next article in this series will explain the use of parsing libraries in detail; stay tuned.

