Fetch the page with user-agent information included in the request; otherwise the server may raise an "HTTP Error 403: Forbidden" exception.
Some websites guard against this kind of anonymous access by verifying the User-Agent in the request headers (it describes the client's hardware platform, operating system, application software, and user preferences). If the User-Agent is missing or not accepted, the request is rejected.
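To see how the header is attached without hitting the network, here is a minimal sketch using Python 3's urllib.request (urllib2 was merged into it); the agent string "Wswp" is just an illustrative value:

```python
import urllib.request

# Build a request carrying an explicit User-Agent header.
# Sites that reject anonymous clients inspect exactly this header.
request = urllib.request.Request(
    "http://www.baidu.com",
    headers={"User-Agent": "Wswp"},
)

# urllib normalizes header names to Capitalized-with-dashes form.
print(request.get_header("User-agent"))  # -> Wswp
print(request.has_header("User-agent"))  # -> True
```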
# coding=utf-8
# use python2.7
import urllib2

def gethtml(url, user_agent="Wswp", num_retries=2):
    # Download the page; if the download fails, retry up to two times
    print 'Start download page:', url
    headers = {'User-agent': user_agent}
    # Alternatively, mimic a real browser:
    # headers = {
    #     'User-agent': 'Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 Firefox/24.0',
    #     'cookie': cookie,
    # }
    request = urllib2.Request(url, headers=headers)
    try:
        html = urllib2.urlopen(request).read()  # GET request
    except urllib2.URLError as e:
        print "Download failed:", e.reason
        html = None
        if num_retries > 0:
            if hasattr(e, 'code') and 500 <= e.code < 600:
                # Retry only on 5xx server errors
                return gethtml(url, user_agent, num_retries - 1)
    return html

if __name__ == '__main__':
    html = gethtml("http://www.baidu.com")
    print html
    print "End"
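The retry branch above deliberately retries only on 5xx server errors: those are often transient, while client errors such as 403 Forbidden will not change on a repeat request with the same headers. That decision can be isolated in a small helper (should_retry is a hypothetical name, not part of the original code):

```python
def should_retry(status_code, num_retries):
    # Server-side errors (500-599) are often transient, so a bounded
    # retry makes sense; 4xx client errors are permanent for this request.
    return num_retries > 0 and 500 <= status_code < 600

print(should_retry(503, 2))  # transient server error -> True
print(should_retry(403, 2))  # Forbidden is permanent -> False
print(should_retry(500, 0))  # retries exhausted      -> False
```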
...
Python Crawler Learning -- Getting a Web Page