Writing a Python Crawler from Scratch: A Guide to urllib2


The previous article gave a brief introduction to urllib2; what follows is a collection of details on how to use it.

1. Proxy settings

By default, urllib2 uses the environment variable http_proxy to set its HTTP proxy.
If you want to control the proxy explicitly in your program, unaffected by environment variables, you can use a ProxyHandler.
Create a new script, test14, to implement a simple proxy demo:


import urllib2

enable_proxy = True
proxy_handler = urllib2.ProxyHandler({"http": 'http://some-proxy.com:8080'})
null_proxy_handler = urllib2.ProxyHandler({})

if enable_proxy:
    opener = urllib2.build_opener(proxy_handler)
else:
    opener = urllib2.build_opener(null_proxy_handler)

urllib2.install_opener(opener)

One detail to note here is that urllib2.install_opener() sets urllib2's global opener.
This is convenient later on, but it does not allow fine-grained control, for example when you want to use two different proxy settings in the same program.
A better practice is not to change the global setting with install_opener at all, but simply to call the opener's open() method instead of the global urlopen() method, as sketched below.
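
For example, a minimal sketch (the proxy addresses are placeholders) in which two openers with different proxies coexist, neither one touching the global state:

import urllib2

# Two independent openers with different proxy settings; the hostnames
# below are placeholders, not real proxies.
proxy_a = urllib2.ProxyHandler({"http": 'http://proxy-a.example.com:8080'})
proxy_b = urllib2.ProxyHandler({"http": 'http://proxy-b.example.com:8080'})

opener_a = urllib2.build_opener(proxy_a)
opener_b = urllib2.build_opener(proxy_b)

# Call each opener's open() directly instead of the global urllib2.urlopen().
response_a = opener_a.open('http://www.baidu.com')
response_b = opener_b.open('http://www.baidu.com')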

2. Timeout settings

In older versions of Python (before Python 2.6), urllib2's API did not expose a timeout setting; the only way to set a timeout was to change the socket module's global timeout value.


import urllib2
import socket

socket.setdefaulttimeout(10)  # time out after 10 seconds
urllib2.socket.setdefaulttimeout(10)  # another way

Since Python 2.6, the timeout can be set directly through the timeout parameter of urllib2.urlopen().


import urllib2

response = urllib2.urlopen('http://www.google.com', timeout=10)
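
When the timeout expires, urlopen raises an exception, typically urllib2.URLError with socket.timeout as the reason; a minimal sketch of handling it:

import socket
import urllib2

try:
    response = urllib2.urlopen('http://www.google.com', timeout=10)
except urllib2.URLError, e:
    print 'request failed or timed out:', e.reason
except socket.timeout:
    # a timeout while reading the body can also surface as socket.timeout
    print 'read timed out'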

3. Add a specific Header to the HTTP Request

To add a header, you need to use the Request object:


import urllib2

request = urllib2.Request('http://www.baidu.com/')
request.add_header('User-Agent', 'fake-client')
response = urllib2.urlopen(request)
print response.read()

Some headers deserve special attention, because the server checks them:
User-Agent: some servers or proxies use this value to determine whether the request comes from a real browser.
Content-Type: when using a REST interface, the server checks this value to decide how to parse the content of the HTTP body. Common values are:
    application/xml: used in XML-RPC calls, such as RESTful or SOAP calls
    application/json: used in JSON-RPC calls
    application/x-www-form-urlencoded: used when a browser submits a web form
When calling a RESTful or SOAP service supplied by a server, a wrong Content-Type setting can cause the server to refuse the request; see the sketch below.
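
For instance, a JSON call might set Content-Type like this (a minimal sketch; the URL and payload are placeholders):

import urllib2

# Placeholder endpoint and JSON body, for illustration only.
data = '{"method": "ping", "params": [], "id": 1}'
request = urllib2.Request('http://example.com/api', data=data)
request.add_header('Content-Type', 'application/json')
response = urllib2.urlopen(request)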

4. Redirects

By default, urllib2 automatically follows redirects for HTTP 3xx status codes, without any manual configuration. To detect whether a redirect has occurred, just check whether the Response's URL and the Request's URL are the same.


import urllib2

my_url = 'http://www.google.cn'
response = urllib2.urlopen(my_url)
redirected = response.geturl() != my_url  # True if a redirect occurred
print redirected

my_url = 'http://rrurl.cn/b1UZuP'
response = urllib2.urlopen(my_url)
redirected = response.geturl() != my_url
print redirected

If you do not want redirects to be followed automatically, you can either use the lower-level httplib library or customize an HTTPRedirectHandler class.


import urllib2

class RedirectHandler(urllib2.HTTPRedirectHandler):
    def http_error_301(self, req, fp, code, msg, headers):
        print "301"
        pass
    def http_error_302(self, req, fp, code, msg, headers):
        print "302"
        pass

opener = urllib2.build_opener(RedirectHandler)
opener.open('http://rrurl.cn/b1UZuP')

5. Cookies

urllib2 handles cookies automatically as well. If you need the value of a particular cookie item, you can do this:


import urllib2
import cookielib

cookie = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))
response = opener.open('http://www.baidu.com')
for item in cookie:
    print 'Name = ' + item.name
    print 'Value = ' + item.value

After the run, this prints the cookie values received from Baidu.

6. Using the HTTP PUT and DELETE methods

urllib2 only supports the HTTP GET and POST methods; to use HTTP PUT and DELETE you would normally have to resort to the lower-level httplib library. Nevertheless, we can make urllib2 issue a PUT or DELETE request in the following way:


import urllib2

request = urllib2.Request(uri, data=data)  # uri and data are assumed to be defined elsewhere
request.get_method = lambda: 'PUT'  # or 'DELETE'
response = urllib2.urlopen(request)

7. Getting the HTTP status code

For 200 OK, the HTTP status code can be obtained with the getcode() method of the response object returned by urlopen. For other status codes, urlopen raises an exception, and you then need to check the code attribute of the exception object:


import urllib2

try:
    response = urllib2.urlopen('http://bbs.csdn.net/why')
except urllib2.HTTPError, e:
    print e.code
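
For a successful request, getcode() is read directly from the response object:

import urllib2

response = urllib2.urlopen('http://www.baidu.com')
print response.getcode()  # prints 200 when the request succeeds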

8. Debug log

When using urllib2, you can turn on the debug log as follows. The contents of the packets sent and received are then printed to the screen, which is convenient for debugging and sometimes saves you the work of capturing packets:


import urllib2

httpHandler = urllib2.HTTPHandler(debuglevel=1)
httpsHandler = urllib2.HTTPSHandler(debuglevel=1)
opener = urllib2.build_opener(httpHandler, httpsHandler)
urllib2.install_opener(opener)
response = urllib2.urlopen('http://www.google.com')

This way you can see the contents of the packets as they are transmitted.

9. Processing forms

How do you fill in a form when you need to log in?
First, use a tool to capture the content of the form to be filled in.
For example, I usually use Firefox with the HttpFox plugin to see what packets I actually sent.
Taking VeryCD as an example: first find the POST request you sent, and its POST form items.
You can see that for VeryCD you need to fill in username, password, continueURI, fk and login_submit, where fk is generated dynamically (in fact not very randomly; it looks as if it is produced by simply encoding the epoch time). It has to be obtained from the web page, which means you must first fetch the page and extract the fk item from the returned data, for example with a regular expression. continueURI, as its name suggests, can be anything, while login_submit is fixed, as can be seen from the page source. And username and password are obvious:


# -*- coding: utf-8 -*-
import urllib
import urllib2

postdata = urllib.urlencode({
    'username': 'Wang Xiaoguang',
    'password': 'why888',
    'continueURI': 'http://www.verycd.com/',
    'fk': '',  # must first be scraped from the page, see the sketch below
    'login_submit': 'Login'
})
req = urllib2.Request(
    url='http://secure.verycd.com/signin',
    data=postdata
)
result = urllib2.urlopen(req)
print result.read()
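
As noted above, fk first has to be extracted from the login page. A minimal sketch using a regular expression; the input-field pattern is an assumption about the page's HTML, not the actual VeryCD markup, so it may need adjusting:

import re
import urllib2

# Fetch the login page and pull out the fk value.
page = urllib2.urlopen('http://www.verycd.com/').read()
match = re.search(r'name="fk"\s+value="([^"]*)"', page)  # assumed field layout
fk = match.group(1) if match else ''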

10. Disguising as a browser

Some websites resent the visits of crawlers, and so reject all requests from them.
At this point we need to disguise ourselves as a browser, which can be done by modifying a header in the HTTP packet:


#...

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'
}
req = urllib2.Request(
    url='http://secure.verycd.com/signin/*/http://www.verycd.com/',
    data=postdata,
    headers=headers
)

#...

11. Dealing with "anti-hotlinking"

Some sites have so-called anti-hotlinking protection, which is actually very simple:
the server checks the Referer header of the request you send, to see whether it points to the server's own site.
So we just need to change the Referer in the headers to that site. Taking cnBeta as an example:
#...
headers = {
    'Referer': 'http://www.cnbeta.com/articles'
}
#...
headers is a dict, so you can put in any header you want in order to do some disguising.
For example, some websites like to read the X-Forwarded-For header to learn their visitors' real IP; you can change X-Forwarded-For directly.
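
For example (a sketch; the IP address is a placeholder):

#...
headers = {
    'Referer': 'http://www.cnbeta.com/articles',
    'X-Forwarded-For': '8.8.8.8'  # placeholder IP
}
#...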
