[Python] Application example:
#coding: utf-8
import urllib2

request = urllib2.Request('http://blog.csdn.net/nevasun')
# Add header information to the request, disguising the crawler as a browser
request.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6')
opener = urllib2.build_opener()
f = opener.open(request)
print f.read().decode('utf-8')
I've been meaning to record my blog's daily traffic for a while. Having just applied for a GAE application and started learning Python, this seemed like good practice. The plan: use Python to keep the access records locally first, then deploy to GAE once I'm familiar with it, and use GAE's cron to fetch the traffic count on a daily schedule. OK, let's start ~
First, a simple web crawler:
[Python]
import sys, urllib2

req = urllib2.Request("http://blog.csdn.net/nevasun")
fd = urllib2.urlopen(req)
while True:
    data = fd.read(1024)
    if not len(data):
        break
    sys.stdout.write(data)
Running this in a terminal raises urllib2.HTTPError: HTTP Error 403: Forbidden. What's going on?
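Incidentally, the exception can be caught rather than left to crash the script. A minimal sketch (the `fetch` helper is my own, not from the original code; the import fallback lets it run on both Python 2's urllib2 and Python 3's urllib.request):

```python
# Catch HTTPError instead of letting it kill the script.
try:
    import urllib2 as request_lib            # Python 2
    from urllib2 import HTTPError
except ImportError:
    import urllib.request as request_lib     # Python 3
    from urllib.error import HTTPError

def fetch(url, opener=None):
    # opener is injectable so the error path can be exercised without a network
    opener = opener or request_lib.urlopen
    try:
        return opener(url).read()
    except HTTPError as e:
        return 'HTTP Error %d' % e.code
```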
This is because the site blocks crawlers; we can add header information to the request, disguising it as browser access. Add and modify:
[Python]
headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib2.Request("http://blog.csdn.net/nevasun", headers=headers)
Try again: HTTP Error 403 is gone, but the Chinese text is all garbled. What's going on?
This is because the site is UTF-8 encoded, and the content needs to be converted to the system's local encoding:
[Python]
import sys, urllib2

headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib2.Request("http://blog.csdn.net/nevasun", headers=headers)
content = urllib2.urlopen(req).read()        # UTF-8 bytes
local = sys.getfilesystemencoding()          # local encoding name
print content.decode("utf-8").encode(local)  # convert encoding
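The decode/encode step itself can be sanity-checked offline, without hitting the network. A small sketch using a made-up UTF-8 byte string (the bytes spell "你好"; this is sample data, not content fetched from the blog page):

```python
# -*- coding: utf-8 -*-
import sys

raw = b'\xe4\xbd\xa0\xe5\xa5\xbd'    # stand-in for urlopen(...).read(): UTF-8 bytes
text = raw.decode('utf-8')           # bytes -> unicode text
local = sys.getfilesystemencoding()  # the system's local encoding name
shown = text.encode(local, 'replace')  # unicode -> local bytes; 'replace' avoids
                                       # a crash if a character has no mapping
```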
OK, done: we can now crawl Chinese pages. The next step is to build a simple application on GAE.
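As an aside for anyone on Python 3 (not part of the original post): urllib2 was folded into urllib.request there, so the same browser-disguise trick looks like this sketch; the final fetch line is left commented out since it needs network access:

```python
import urllib.request

headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; '
                         'en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib.request.Request('http://blog.csdn.net/nevasun', headers=headers)
# content = urllib.request.urlopen(req).read().decode('utf-8')  # actual fetch
```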