Python urllib, urllib2, and httplib web page capture code examples


This article shows how to fetch web pages with Python's urllib, urllib2, and httplib modules. The demo code is given directly below and contains detailed comments.

urllib2 is very powerful.

I have used it to go through a proxy, log in and collect cookies, follow redirects, and capture images.

Documentation: http://docs.python.org/library/urllib2.html

Straight to the demo code.

It covers direct fetching, using Request (POST/GET), proxies, cookies, and redirect handling.


#!/usr/bin/python
# -*- coding: utf-8 -*-
# urllib2_test.py
# Author: wklken
# 2012-03-17 wklken@yeah.net

import urllib, urllib2, cookielib, socket

url = "http://www.testurl..."  # change to your own URL

# The simplest way
def use_urllib2():
    try:
        f = urllib2.urlopen(url, timeout=5).read()
        print len(f)
    except urllib2.URLError, e:
        print e.reason

# Using Request
def get_request():
    # A timeout can be set
    socket.setdefaulttimeout(5)
    # Parameters can be added [with no data the request is a GET; with data it becomes a POST]
    params = {"wd": "a", "b": "2"}
    # Request headers can be added to identify the client
    i_headers = {"User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9.1) Gecko/20090624 Firefox/3.5",
                 "Accept": "text/plain"}
    # To POST, send params to the server; if the server does not accept it, an exception is thrown
    # req = urllib2.Request(url, data=urllib.urlencode(params), headers=i_headers)
    req = urllib2.Request(url, headers=i_headers)

    # Headers can also be added after the Request is created; if a key is duplicated, the later value takes effect
    # req.add_header('Accept', 'application/json')
    # The HTTP method can be overridden explicitly
    # req.get_method = lambda: 'PUT'
    try:
        page = urllib2.urlopen(req)
        print len(page.read())
        # GET equivalent
        # url_params = urllib.urlencode({"a": "1", "b": "2"})
        # final_url = url + "?" + url_params
        # print final_url
        # data = urllib2.urlopen(final_url).read()
        # print "Method: GET", len(data)
    except urllib2.HTTPError, e:
        print "Error Code:", e.code
    except urllib2.URLError, e:
        print "Error Reason:", e.reason

def use_proxy():
    enable_proxy = False
    proxy_handler = urllib2.ProxyHandler({"http": "http://proxyurlXXXX.com:8080"})
    null_proxy_handler = urllib2.ProxyHandler({})
    if enable_proxy:
        opener = urllib2.build_opener(proxy_handler, urllib2.HTTPHandler)
    else:
        opener = urllib2.build_opener(null_proxy_handler, urllib2.HTTPHandler)
    # Install the opener as urllib2's global opener
    urllib2.install_opener(opener)
    content = urllib2.urlopen(url).read()
    print "proxy len:", len(content)

# A cookie processor that returns the response body for 403/400/500
# instead of raising HTTPError
class NoExceptionCookieProcesser(urllib2.HTTPCookieProcessor):
    def http_error_403(self, req, fp, code, msg, hdrs):
        return fp
    def http_error_400(self, req, fp, code, msg, hdrs):
        return fp
    def http_error_500(self, req, fp, code, msg, hdrs):
        return fp

def hand_cookie():
    cookie = cookielib.CookieJar()
    # cookie_handler = urllib2.HTTPCookieProcessor(cookie)
    # The variant below adds the error-tolerant handling defined above
    cookie_handler = NoExceptionCookieProcesser(cookie)
    opener = urllib2.build_opener(cookie_handler, urllib2.HTTPHandler)
    url_login = "https://www.yourwebsite/?login"
    params = {"username": "user", "password": "111111"}
    opener.open(url_login, urllib.urlencode(params))
    for item in cookie:
        print item.name, item.value
    # urllib2.install_opener(opener)
    # content = urllib2.urlopen(url).read()
    # print len(content)

# Get the final URL after being redirected N times
def get_request_direct():
    import httplib
    httplib.HTTPConnection.debuglevel = 1
    request = urllib2.Request("http://www.google.com")
    request.add_header("Accept", "text/html,*/*")
    request.add_header("Connection", "Keep-Alive")
    opener = urllib2.build_opener()
    f = opener.open(request)
    print f.url           # final URL after redirects
    print f.headers.dict
    print len(f.read())

if __name__ == "__main__":
    use_urllib2()
    get_request()
    get_request_direct()
    use_proxy()
    hand_cookie()
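
A closing note: the title also names urllib and httplib, but the demo above exercises urllib only for urlencode and httplib only for its debug output. As a supplement (my own sketch, not part of the original demo; Python 2 assumed, URLs and host names are placeholders), here is the same kind of direct fetch with each module:

#!/usr/bin/python
# -*- coding: utf-8 -*-
# Supplementary sketch: direct fetches with urllib and httplib.
# Python 2 assumed; URLs and host names are placeholders -- change yourself.

import urllib
import httplib

def fetch_with_urllib():
    # urllib: the simplest interface, no Request object or handler chain
    f = urllib.urlopen("http://www.testurl...")  # placeholder, change yourself
    print len(f.read())
    f.close()
    # urllib.urlretrieve("http://www.testurl.../a.jpg", "a.jpg")  # save straight to a file

def fetch_with_httplib():
    # httplib: the lower-level connection API that urllib and urllib2 build on
    conn = httplib.HTTPConnection("www.testurl.example", 80)  # placeholder host
    conn.request("GET", "/", headers={"Accept": "text/html"})
    resp = conn.getresponse()
    print resp.status, resp.reason
    print len(resp.read())
    conn.close()

if __name__ == "__main__":
    fetch_with_urllib()
    fetch_with_httplib()

Unlike urllib2, plain urllib.urlopen does not raise HTTPError for 4xx/5xx responses; it hands back the error page, so inspect the response yourself when the status matters. httplib is also the layer that get_request_direct() above pokes at when it sets HTTPConnection.debuglevel = 1.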
