Python Web Crawler: Analysis of the Main Methods of the requests Library

Source: Internet
Author: User
Tags: http, authentication, ssl certificate

Analysis of the main methods of the requests library
requests.request() constructs a request; it is the base method that underpins the six method-specific wrappers below.

requests.request(method, url, **kwargs)
method: the request method, corresponding to the seven HTTP verbs such as GET, PUT, and POST
url: URL of the page to fetch
**kwargs: 13 optional parameters that control access
requests.request('GET', url, **kwargs)
requests.request('HEAD', url, **kwargs)
requests.request('POST', url, **kwargs)
requests.request('PUT', url, **kwargs)
requests.request('PATCH', url, **kwargs)
requests.request('DELETE', url, **kwargs)
requests.request('OPTIONS', url, **kwargs)
OPTIONS asks the server which options are available for client-server communication; it is unrelated to fetching the resource itself and is seldom used.
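
Each of the seven calls above simply passes the verb as the method argument. As a quick local sketch (using the demo URL from this article; nothing is actually sent), requests.Request plus prepare() lets us inspect the request that would be issued, and shows that the method name is normalized to upper case:

```python
import requests

# Build (but do not send) two requests; prepare() produces the final form
# that would go over the wire, with the method name upper-cased:
a = requests.Request('GET', 'http://python123.io/ws').prepare()
b = requests.Request('get', 'http://python123.io/ws').prepare()
print(a.method, b.method)  # GET GET
```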

requests.request(method, url, **kwargs)
**kwargs: parameters that control access, all optional
params: dictionary or byte sequence, appended to the URL as query parameters
>>> kv = {'key1': 'value1', 'key2': 'value2'}
>>> r = requests.request('GET', 'http://python123.io/ws', params=kv)
>>> print(r.url)
http://python123.io/ws?key1=value1&key2=value2

data: dictionary, byte sequence, or file object, used as the body of the request
>>> kv = {'key1': 'value1', 'key2': 'value2'}
>>> r = requests.request('POST', 'http://python123.io/ws', data=kv)
>>> body = 'body content'
>>> r = requests.request('POST', 'http://python123.io/ws', data=body)

json: data in JSON format, used as the body of the request
>>> kv = {'key1': 'value1'}
>>> r = requests.request('POST', 'http://python123.io/ws', json=kv)
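
The difference between data= and json= is easy to see without touching the network: preparing the same dictionary both ways (a local sketch, nothing is sent) yields different bodies and Content-Type headers:

```python
import requests

kv = {'key1': 'value1'}

# Prepare (but do not send) the same dict as form data and as JSON:
form = requests.Request('POST', 'http://python123.io/ws', data=kv).prepare()
js = requests.Request('POST', 'http://python123.io/ws', json=kv).prepare()

print(form.headers['Content-Type'])  # application/x-www-form-urlencoded
print(form.body)                     # key1=value1
print(js.headers['Content-Type'])    # application/json
print(js.body)                       # b'{"key1": "value1"}'
```

In short: data= sends a URL-encoded form (or raw bytes/string as-is), while json= serializes the object with the json module and sets the header for you.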

headers: dictionary, custom HTTP headers
>>> hd = {'user-agent': 'Chrome/10'}
>>> r = requests.request('POST', 'http://python123.io/ws', headers=hd)
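
The custom header really does replace the library's default; a prepared request (built locally, not sent) shows it, and header lookups are case-insensitive:

```python
import requests

hd = {'user-agent': 'Chrome/10'}
req = requests.Request('POST', 'http://python123.io/ws', headers=hd).prepare()
# Header names are case-insensitive in requests' header dictionary:
print(req.headers['User-Agent'])  # Chrome/10
```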

cookies: dictionary or CookieJar, the cookies to send with the request
auth: tuple, enables HTTP authentication
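
Both parameters end up as request headers, which a locally prepared request makes visible (a sketch with made-up credentials and cookie values; nothing is sent): auth=(user, password) becomes a Basic Authorization header, and the cookies dict becomes a Cookie header.

```python
import requests

req = requests.Request('GET', 'http://python123.io/ws',
                       auth=('user', 'pass'),          # illustrative credentials
                       cookies={'session': 'abc123'}   # illustrative cookie
                       ).prepare()
print(req.headers['Authorization'])  # Basic dXNlcjpwYXNz
print(req.headers['Cookie'])         # session=abc123
```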
files: dictionary, for transferring files
>>> fs = {'file': open('data.xls', 'rb')}
>>> r = requests.request('POST', 'http://python123.io/ws', files=fs)
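
files= accepts any file-like object, so an in-memory buffer can stand in for open('data.xls', 'rb') in this local sketch (no file on disk, nothing sent); preparing the request shows that the upload is encoded as multipart/form-data:

```python
import io
import requests

# An in-memory buffer plays the role of the opened file:
fs = {'file': ('data.xls', io.BytesIO(b'spreadsheet bytes'))}
req = requests.Request('POST', 'http://python123.io/ws', files=fs).prepare()
print(req.headers['Content-Type'])  # multipart/form-data; boundary=...
```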

timeout: timeout for the request, in seconds
>>> r = requests.request('GET', 'http://www.baidu.com', timeout=10)

proxies: dictionary, sets access proxies; login authentication can be embedded in the proxy URL
>>> pxs = {'http': 'http://user:[email protected]:1234', 'https': 'https://10.10.10.1:4321'}
>>> r = requests.request('GET', 'http://www.baidu.com', proxies=pxs)

allow_redirects: True/False, default True; whether to follow redirects
stream: True/False, default False; if True, the response body is streamed on demand instead of downloaded immediately
verify: True/False, default True; whether to verify the SSL certificate
cert: path to a local SSL client certificate
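
These send-time switches can also be set once on a requests.Session, which then applies them to every request the session makes. A minimal local sketch (real Session attributes, nothing is sent; the proxy address is illustrative):

```python
import requests

s = requests.Session()
s.verify = True        # verify SSL certificates (or set a path to a CA bundle)
s.stream = False       # False: download the body immediately; True: stream on demand
s.cert = None          # optional client certificate: a path, or a (cert, key) tuple
s.max_redirects = 30   # redirect cap applied when allow_redirects is True
s.proxies = {'http': 'http://10.10.10.1:1234'}  # illustrative proxy, as above
```

Setting these on the session avoids repeating the same keyword arguments on every call.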

requests.request(method, url, **kwargs)
**kwargs: the 13 optional access-control parameters:
params
data
json
headers
cookies
auth
files
timeout
proxies
allow_redirects
stream
verify
cert

requests.get(url, params=None, **kwargs)
url: URL of the page to fetch
params: extra parameters appended to the URL, in dictionary or byte-stream format, optional
**kwargs: 12 access-control parameters (all of request()'s parameters except params)

requests.head(url, **kwargs)
url: URL of the page to fetch
**kwargs: 13 access-control parameters (exactly the same as request())

requests.post(url, data=None, json=None, **kwargs)
url: URL of the page to fetch
data: dictionary, byte sequence, or file; the body of the request
json: data in JSON format; the body of the request
**kwargs: 11 access-control parameters

requests.put(url, data=None, **kwargs)
url: URL of the page to fetch
data: dictionary, byte sequence, or file; the body of the request
**kwargs: 12 access-control parameters

requests.patch(url, data=None, **kwargs)
url: URL of the page to fetch
data: dictionary, byte sequence, or file; the body of the request
**kwargs: 12 access-control parameters

requests.delete(url, **kwargs)
url: URL of the page to fetch
**kwargs: 13 access-control parameters
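
Whichever of the seven methods is used, a minimal defensive call pattern combines a timeout with a status check. A sketch (the fetch helper below is illustrative, not part of the requests API):

```python
import requests

def fetch(url, timeout=10):
    """Fetch a page, returning its text or an error message (illustrative helper)."""
    try:
        r = requests.request('GET', url, timeout=timeout)
        r.raise_for_status()              # raise an exception on 4xx/5xx statuses
        r.encoding = r.apparent_encoding  # guess the text encoding from the body
        return r.text
    except requests.RequestException as e:
        return 'request failed: {}'.format(e)
```

raise_for_status() turns HTTP error codes into exceptions, so the single except clause covers timeouts, connection errors, and bad status codes alike.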
