Python is very powerful for web crawling: with urllib (or urllib2 on Python 2) you can easily fetch web content. But be aware that many websites have anti-scraping measures in place, so the content you want is not always easy to get.
Today I'll share how, in both Python 2 and Python 3, to simulate a browser in order to get past such blocking and crawl a page.
The most basic crawl:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Author pythontab
import urllib.request

url = "http://www.pythontab.com"
html = urllib.request.urlopen(url).read()
print(html)
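Note that `urlopen().read()` returns bytes, not a string, so you usually want to decode it with the page's charset before working with it. A minimal sketch of that step (it uses a `data:` URL instead of a real website so it runs without network access; the content is made up for illustration):

```python
import urllib.request

# urlopen() returns a file-like response; read() gives raw bytes.
# A data: URL stands in for a real page so this runs offline.
resp = urllib.request.urlopen("data:text/plain;charset=utf-8,hello%20crawler")
raw = resp.read()           # bytes, e.g. b"hello crawler"
text = raw.decode("utf-8")  # decode with the page's charset to get str
print(text)
```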
But some sites can't be crawled this way because they detect and block non-browser clients, so we have to change the approach.
Python 2 (latest stable version: 2.7):
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Author pythontab.com
import urllib2

url = "http://pythontab.com"
req_header = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
    'Accept': 'text/html;q=0.9,*/*;q=0.8',
    'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
    'Accept-Encoding': 'gzip',
    'Connection': 'close',
    'Referer': None  # Note: if the page still can't be fetched, set this to the target site's host
}
req_timeout = 5
req = urllib2.Request(url, None, req_header)
resp = urllib2.urlopen(req, None, req_timeout)
html = resp.read()
print(html)
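One caveat with the headers above: since they advertise `Accept-Encoding: gzip`, the server may legitimately return a gzip-compressed body, in which case `read()` gives compressed bytes that must be decompressed first. A sketch of that step (in Python 3 syntax, using locally compressed data so it runs offline; the page content is made up):

```python
import gzip
import io

# Simulate a gzip-compressed HTTP body, as a server might send when the
# request advertised Accept-Encoding: gzip (no network needed here).
body = gzip.compress("<html>compressed page</html>".encode("utf-8"))

# When the response carries Content-Encoding: gzip, decompress the raw
# bytes before decoding them with the page's charset.
html = gzip.GzipFile(fileobj=io.BytesIO(body)).read().decode("utf-8")
print(html)
```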
Python 3 (latest stable version: 3.3):
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Author pythontab
import urllib.request

url = "http://www.pythontab.com"
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
    'Accept': 'text/html;q=0.9,*/*;q=0.8',
    'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
    'Accept-Encoding': 'gzip',
    'Connection': 'close'
    # Note: if the page still can't be fetched, add a 'Referer' header set to the target site's host
}
opener = urllib.request.build_opener()
opener.addheaders = list(headers.items())  # addheaders expects a list of (name, value) tuples, not a dict
data = opener.open(url).read()
print(data)
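An equivalent Python 3 approach is to pass the headers dict directly to `urllib.request.Request` instead of configuring an opener, which mirrors the urllib2 code above more closely. A minimal sketch (a `data:` URL stands in for the real site so it runs offline; the header value is the same browser string used above):

```python
import urllib.request

# Build a Request carrying browser-like headers; Request accepts the
# headers dict directly, unlike opener.addheaders.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.11 '
                         '(KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11'}

# A data: URL replaces the real target so this sketch needs no network.
req = urllib.request.Request("data:text/html,<p>ok</p>", headers=headers)
data = urllib.request.urlopen(req, timeout=5).read()
print(data.decode("utf-8"))
```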