Fetching data with Python POST requests


Python can send HTTP requests and receive HTTP responses via GET and POST using the urllib, urllib2, and httplib modules.

http://blog.163.com/[email protected]/blog/static/132229655201231085444250/

The test CGI, named test.py, is placed under Apache's cgi-bin directory:

#!/usr/bin/python
import cgi

def main():
    print "Content-type: text/html\n"
    form = cgi.FieldStorage()
    if form.has_key("ServiceCode") and form["ServiceCode"].value != "":
        # echo the submitted parameter back
        print "<h1>Hello", form["ServiceCode"].value, "</h1>"
    else:
        print "<h1>Error! Please enter ServiceCode.</h1>"

main()

Python sends POST and GET requests

GET Request:

When you use the GET method, the request data is placed directly in the URL.
Method One:

import urllib
import urllib2

url = "http://192.168.81.16/cgi-bin/python_test/test.py?ServiceCode=aaaa"

req = urllib2.Request(url)
print req

res_data = urllib2.urlopen(req)
res = res_data.read()
print res

Method Two:

import httplib

url = "http://192.168.81.16/cgi-bin/python_test/test.py?ServiceCode=aaaa"

conn = httplib.HTTPConnection("192.168.81.16")
conn.request(method="GET", url=url)

response = conn.getresponse()
res = response.read()
print res
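
In both GET examples the query string is hard-coded into the URL. If the parameters come from a dictionary, they can be encoded with urllib.urlencode for a GET request as well; a minimal sketch, assuming the same test CGI and the ServiceCode parameter used above:

import urllib
import urllib2

params = urllib.urlencode({'ServiceCode': 'aaaa'})    # produces "ServiceCode=aaaa"
url = "http://192.168.81.16/cgi-bin/python_test/test.py?" + params

res = urllib2.urlopen(url).read()    # urlopen also accepts a plain URL string
print res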

POST request:

When using the POST method, the data is placed in the request body; any data appended to the URL is ignored.
Method One:

import urllib
import urllib2

test_data = {'ServiceCode': 'aaaa', 'b': 'bbbbb'}
test_data_urlencode = urllib.urlencode(test_data)

requrl = "http://192.168.81.16/cgi-bin/python_test/test.py"

req = urllib2.Request(url=requrl, data=test_data_urlencode)
print req

res_data = urllib2.urlopen(req)
res = res_data.read()
print res


Method Two:

import urllib
import httplib

test_data = {'ServiceCode': 'aaaa', 'b': 'bbbbb'}
test_data_urlencode = urllib.urlencode(test_data)

requrl = "http://192.168.81.16/cgi-bin/python_test/test.py"
headerdata = {"Host": "192.168.81.16"}

conn = httplib.HTTPConnection("192.168.81.16")

conn.request(method="POST", url=requrl, body=test_data_urlencode, headers=headerdata)

response = conn.getresponse()
res = response.read()
print res
The author was not yet familiar with using JSON in Python, so the urllib.urlencode(test_data) approach is used here for the time being.
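
For reference, a minimal sketch of how the same POST could carry a JSON body instead, using the standard json module; note that the test.py CGI above would also have to be changed to parse JSON, so this is only an illustration, not part of the original example:

import json
import httplib

json_body = json.dumps({'ServiceCode': 'aaaa', 'b': 'bbbbb'})    # serialize the dict to a JSON string
headerdata = {"Host": "192.168.81.16", "Content-Type": "application/json"}

conn = httplib.HTTPConnection("192.168.81.16")
conn.request(method="POST", url="http://192.168.81.16/cgi-bin/python_test/test.py",
             body=json_body, headers=headerdata)
print conn.getresponse().read()
conn.close()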

The difference between the urllib, urllib2, and httplib modules
httplib implements the client side of the HTTP and HTTPS protocols, while the urllib and urllib2 modules provide a higher-level wrapper around httplib.
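
As a rough illustration of these levels, the same GET against the test CGI can be written with each module; a sketch assuming the test URL used above (urllib.urlopen is the older high-level call):

import urllib
import urllib2
import httplib

url = "http://192.168.81.16/cgi-bin/python_test/test.py?ServiceCode=aaaa"

# high level: urllib / urllib2 take a URL and return a file-like response
print urllib.urlopen(url).read()
print urllib2.urlopen(url).read()

# low level: httplib works with an explicit connection, request and response
conn = httplib.HTTPConnection("192.168.81.16")
conn.request("GET", url)
print conn.getresponse().read()
conn.close()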

The functions used in the examples are described below:
1. The HTTPConnection constructor

httplib.HTTPConnection(host[, port[, strict[, timeout]]])
This constructor represents one interaction with the server, that is, one request/response exchange.
host: the server host (server IP or domain name)
port: defaults to 80
strict: defaults to False; when True, a BadStatusLine exception is raised if the status line returned by the server cannot be parsed
For example:
conn = httplib.HTTPConnection("192.168.81.16", 80) establishes a connection with the server.
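
The optional arguments can be passed as well; a small sketch, assuming the server may be slow to answer (the timeout argument requires Python 2.6 or later):

import httplib

# explicit port plus a 10-second socket timeout
conn = httplib.HTTPConnection("192.168.81.16", 80, timeout=10)
conn.close()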


2. The HTTPConnection.request(method, url[, body[, headers]]) method
This sends a request to the server.
method: the request method, typically POST or GET.

For example:

method="POST" or method="GET"

url: the requested resource (a page or a CGI script; here it is a CGI script).

For example:

url="http://192.168.81.16/cgi-bin/python_test/test.py" requests a CGI script

or

url="http://192.168.81.16/python_test/test.html" requests a page

body: the data to submit to the server, either JSON or the urlencoded format used above; JSON requires the json module.
headers: the HTTP request headers, e.g. headerdata = {"Host": "192.168.81.16"}
For example:
test_data = {'ServiceCode': 'aaaa', 'b': 'bbbbb'}
test_data_urlencode = urllib.urlencode(test_data)
requrl = "http://192.168.81.16/cgi-bin/python_test/test.py"
headerdata = {"Host": "192.168.81.16"}
conn = httplib.HTTPConnection("192.168.81.16", 80)
conn.request(method="POST", url=requrl, body=test_data_urlencode, headers=headerdata)
After use, the connection should be closed with conn.close().
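
Putting the pieces of this section together, one complete request/response cycle with the connection reliably closed might look like the following sketch (same test data as above; the Content-Type header is added here as is usual for a urlencoded body, whereas the original example sends only Host):

import urllib
import httplib

test_data_urlencode = urllib.urlencode({'ServiceCode': 'aaaa', 'b': 'bbbbb'})
requrl = "http://192.168.81.16/cgi-bin/python_test/test.py"
headerdata = {"Host": "192.168.81.16",
              "Content-Type": "application/x-www-form-urlencoded"}

conn = httplib.HTTPConnection("192.168.81.16", 80)
try:
    conn.request(method="POST", url=requrl, body=test_data_urlencode, headers=headerdata)
    response = conn.getresponse()
    print response.read()
finally:
    conn.close()    # always release the connection, even on errors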


3. The HTTPConnection.getresponse() method
This gets the HTTP response; the returned object is an instance of HTTPResponse.
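
Besides the body, which is described in the next section, the HTTPResponse instance also carries the status line; a small sketch:

import httplib

conn = httplib.HTTPConnection("192.168.81.16")
conn.request("GET", "http://192.168.81.16/cgi-bin/python_test/test.py?ServiceCode=aaaa")
response = conn.getresponse()

print response.status    # numeric status code, e.g. 200
print response.reason    # reason phrase, e.g. "OK"
conn.close()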


4. HTTPResponse
The methods of an HTTPResponse instance are as follows:
read([amt]): gets the body of the response message; amt specifies the number of bytes to read from the response stream; if omitted, the entire body is read;
getheader(name[, default]): gets a response header; name is the header field name; if the header is not present, default specifies the return value;
getheaders(): gets the response headers as a list.
For example:

date = response.getheader('Date')
print date

resheader = ''
resheader = response.getheaders()
print resheader

The response header information as a list:

[('Content-length', '295'), ('accept-ranges', 'bytes'), ('Server', 'Apache'), ('last-modified', 'Sat, Mar 2012 10:07:02 GMT'), ('Connection', 'close'), ('ETag', 'e8744-127-4bc871e4fdd80'), ('Date', 'Mon, Sep 10:01:47 GMT'), ('Content-type', 'text/html')]

date = response.getheader('Date')
print date

This retrieves the value of the Date response header.
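
The default argument of getheader mentioned above avoids special-casing missing headers; a small sketch reusing the response object from the example above (the header names here are only illustrations):

# returns the header value, or the given default when the header is absent
ctype = response.getheader('Content-Type', 'text/plain')
print ctype

server = response.getheader('Server', 'unknown')
print server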

************************************************

So-called web crawling means reading the network resource specified by a URL from the network stream and saving it locally.
It is similar to using a program to simulate the behavior of a browser: send the URL as the content of an HTTP request to the server, then read the server's response.

In Python, we use the urllib2 module to crawl web pages.
urllib2 is a Python module for fetching URLs (Uniform Resource Locators).

It provides a very simple interface in the form of the urlopen function.

The simplest urllib2 application takes only four lines of code.

Let's create a new file, urllib2_test01.py, to get a feel for urllib2:

import urllib2
response = urllib2.urlopen('http://www.baidu.com/')
html = response.read()
print html


Press F5 to see the result of the run.

Open the Baidu homepage in a browser, right-click, and choose "View page source" (Firefox and Chrome can both do this); you will find exactly the same content.

In other words, the four lines of code above print out everything the browser receives when we visit Baidu.

This is the simplest example of using urllib2.

Besides "http:", the URL can also use "ftp:", "file:" and so on.

HTTP is based on a request and response mechanism:

the client makes a request and the server provides a response.

urllib2 maps the HTTP request you make to a Request object.

In its simplest form of use, you create a Request object with the address you want to fetch;

by calling urlopen and passing in the Request object, a response object for that request is returned.

This response object is like a file object, so you can call .read() on the response.

Let's create a new file, urllib2_test02.py, to try it out:

import urllib2
req = urllib2.Request('http://www.baidu.com')
response = urllib2.urlopen(req)
the_page = response.read()
print the_page

You can see that the output is the same as that of test01.

urllib2 uses the same interface to handle all URL schemes. For example, you can create an FTP request as follows:

req = urllib2.Request('ftp://example.com/')

HTTP requests allow you to do two additional things.

1. Sending form data

Anyone who has done web development will be familiar with this.

Sometimes you want to send some data to a URL (usually a URL that points to a CGI (Common Gateway Interface) script or some other web application).

In HTTP, this is usually done with the well-known POST request.

This is what your browser usually does when you submit an HTML form.

Not all POSTs come from forms; you can use POST to submit arbitrary data to your own program.

For an ordinary HTML form, the data needs to be encoded in the standard way and then passed to the Request object as the data parameter.

The encoding is done with functions from urllib, not urllib2.

Let's create a new file, urllib2_test03.py, to try it out:

import urllib
import urllib2

url = 'http://www.someserver.com/cgi-bin/register.cgi'
values = {'language': 'Python'}

data = urllib.urlencode(values)      # do the encoding
req = urllib2.Request(url, data)     # send the request and the form data together
response = urllib2.urlopen(req)      # receive the response
the_page = response.read()           # read the response content

If the data argument is not passed, urllib2 uses a GET request.

One difference between GET and POST requests is that POST requests usually have "side effects":

they change the state of the system in some way (for example, placing an online order that results in a pile of goods being delivered to your doorstep).

Data can also be transmitted in a GET request by encoding it into the URL itself.


import urllib

data = {}
data['language'] = 'Python'

url_values = urllib.urlencode(data)
print url_values

url = 'http://www.someserver.com/cgi-bin/register.cgi'
full_url = url + '?' + url_values

This is how data is transmitted with GET.
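
Once full_url has been built, it can be fetched like any other URL; a small sketch continuing the snippet above:

import urllib2

response = urllib2.urlopen(full_url)    # GET request; the parameters travel in the URL itself
the_page = response.read()
print the_page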

2. Setting headers on an HTTP request

Some sites do not like being accessed by programs (non-human access), or send different versions of content to different browsers.

By default, urllib2 identifies itself as "Python-urllib/x.y" (where x and y are the Python major and minor version numbers, e.g. Python-urllib/2.7),

and this identity may confuse the site or simply not work.

Browsers identify themselves through the User-Agent header; when you create a Request object, you can give it a dictionary containing the header data.

The following example sends the same content as above, but simulates itself as Internet Explorer.

(Thanks to readers for pointing out that this demo is no longer available, but the principle is still the same.)



import urllib
import urllib2

url = 'http://www.someserver.com/cgi-bin/register.cgi'

user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'    # identify as Internet Explorer
values = {'language': 'Python'}
headers = {'User-Agent': user_agent}

data = urllib.urlencode(values)
req = urllib2.Request(url, data, headers)
response = urllib2.urlopen(req)
the_page = response.read()
print the_page
The above shows how to use urllib2 in Python to crawl the content of a web page at a specified URL. It is very simple, and I hope it is helpful to everyone.

Source: http://www.cnblogs.com/poerli/p/6429673.html
