[Python] web crawler (4): Introduction of Opener and Handler and instance applications

Before starting the following content, let's first explain two methods in urllib2: info() and geturl().

The response object (or HTTPError instance) returned by urlopen has two useful methods: info() and geturl().

1. geturl():

This returns the real URL that was actually fetched, which is useful because urlopen (or the opener object) may have followed a redirect, so the URL obtained may differ from the URL requested.

Take a shortened hyperlink (a short URL from rrurl.cn) as an example.

We will create a urllib2_test10.py file to compare the original URL with the redirected URL:

from urllib2 import Request, urlopen, URLError, HTTPError

old_url = 'http://rrurl.cn/b1UZuP'
req = Request(old_url)
response = urlopen(req)

print 'Old url :' + old_url
print 'Real url :' + response.geturl()

After running the script, you can see the URL that the real link actually points to.

2. info():

This returns the header information of the page fetched, as an httplib.HTTPMessage instance.

We will create a urllib2_test11.py file to test the info() method:

from urllib2 import Request, urlopen, URLError, HTTPError

old_url = 'http://www.baidu.com'
req = Request(old_url)
response = urlopen(req)

print 'Info():'
print response.info()

Running the script prints the headers returned by the server.
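Beyond printing the whole block, individual header fields can also be read from the object returned by info(). Here is a minimal sketch (the header name used is just a common one and may not be sent by every server):

from urllib2 import Request, urlopen

response = urlopen(Request('http://www.baidu.com'))
headers = response.info()  # an httplib.HTTPMessage instance

# Read a single header field; returns None if the server did not send it
print headers.getheader('Content-Type')

# Iterate over all header name/value pairs
for name, value in headers.items():
    print name + ': ' + value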

We will create a urllib2_test12.py file to test the use of Opener and Handler with HTTP basic authentication:

# -*- coding: utf-8 -*-
import urllib2

# Create a password manager
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()

# Add the user name and password
top_level_url = "http://example.com/foo/"

# If we know the realm, we can use it instead of None.
# add_password(realm, uri, user, passwd)
password_mgr.add_password(None, top_level_url, 'why', '000000')

# Create a handler
handler = urllib2.HTTPBasicAuthHandler(password_mgr)

# Create an "opener" (OpenerDirector instance)
opener = urllib2.build_opener(handler)

a_url = 'http://www.baidu.com/'

# Use the opener to fetch a URL
opener.open(a_url)

# Install the opener.
# Now all calls to urllib2.urlopen will use our opener.
urllib2.install_opener(opener)

Note: In the preceding example, we only provided our HTTPBasicAuthHandler to build_opener.

By default, openers already have handlers for normal situations: ProxyHandler, UnknownHandler, HTTPHandler, HTTPDefaultErrorHandler, HTTPRedirectHandler, FTPHandler, FileHandler, and HTTPErrorProcessor.
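build_opener adds these default handlers automatically, so we only need to pass in the extra ones we want. As a minimal sketch (the proxy address here is a hypothetical example), combining a ProxyHandler with the defaults looks like this:

import urllib2

# A ProxyHandler for a hypothetical local proxy; build_opener still adds the
# default handlers (HTTPHandler, HTTPRedirectHandler, and so on) by itself.
proxy_handler = urllib2.ProxyHandler({'http': 'http://127.0.0.1:8087'})
opener = urllib2.build_opener(proxy_handler)

# Either use the opener directly ...
# print opener.open('http://www.baidu.com/').geturl()

# ... or install it so that plain urllib2.urlopen() also goes through it.
urllib2.install_opener(opener)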

The top_level_url in the code can be a complete URL (including the "http:" scheme, the host name, and optionally the port number).


For example, "http://example.com/".

It can also be an "authority" (that is, the host name, optionally followed by a port number).

For example, "example.com" or "example.com: 8080 ".

The latter contains the port number.
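add_password() accepts either form for its uri argument. Here is a minimal sketch reusing the 'why' / '000000' credentials from the example above:

import urllib2

password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()

# Full URL form: scheme, host name and path
password_mgr.add_password(None, "http://example.com/foo/", "why", "000000")

# "Authority" form: host name with an optional port number
password_mgr.add_password(None, "example.com:8080", "why", "000000")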

This concludes [Python] web crawler (4): Introduction of Opener and Handler and instance applications. For more information, see the PHP Chinese website (www.php1.cn).
