I recently wrote a small crawler in Python that automatically collects some content from a website and saves it into local documents; here is a short summary.
The main libraries used are urllib/urllib2 and BeautifulSoup.
urllib and urllib2 are very handy libraries for fetching web pages; their core job is to send customized URL requests and return the page content. The simplest function just checks whether a page exists:
import urllib2

def isurlexists(url, headers={}):
    req = urllib2.Request(url, headers=headers)
    try:
        urllib2.urlopen(req)
    except:
        return 0
    return 1
The headers can be customized or simply left empty. The main purpose of customizing them is to mimic the headers of a regular browser, so as to get around sites that block crawlers.
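For example, a minimal sketch of such a header dict (the User-Agent string here is just an illustrative value copied from a browser, not something required by any particular site):

import urllib2

# Hypothetical browser-like headers; the exact User-Agent string is only
# an example and can be copied from whatever browser you use.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:40.0) Gecko/20100101 Firefox/40.0'}

req = urllib2.Request('http://example.com', headers=headers)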
If you want to fetch the page content and also handle the exceptions that may be raised, you can do:
def fetchlink(url, headers={}):
    req = urllib2.Request(url, headers=headers)
    try:
        response = urllib2.urlopen(req)
    # HTTPError is a subclass of URLError, so it has to be caught first.
    except urllib2.HTTPError, e:
        print 'Got HTTP Error while retrieving:', url, 'with response code:', e.getcode(), 'and exception:', e.reason
    except urllib2.URLError, e:
        print 'Got URL Error while retrieving:', url, 'and the exception is:', e.reason
    else:
        htmlcontent = response.read()
        return htmlcontent
On success, the code above returns the raw HTML directly.
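For example, a minimal usage sketch (the URL is just a placeholder):

# Fetch a page and print its size; fetchlink returns None if an error occurred.
html = fetchlink('http://example.com')
if html:
    print 'fetched', len(html), 'bytes'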
BeautifulSoup (documentation: http://www.crummy.com/software/beautifulsoup/bs4/doc/) is a concise library for analyzing HTML. Once you have the HTML, you could extract the information you want with Python's built-in regular expressions, but that is a little cumbersome. BeautifulSoup parses the HTML directly into a standard tree structure whose elements can be accessed and manipulated directly; searching for elements is also supported.
import bs4

content = bs4.BeautifulSoup(content, from_encoding='gb18030')
posts = content.find_all(class_='post-content')
for post in posts:
    posttext = post.find(class_='entry-title').get_text()
In this example, the content is first converted into a bs4 object, then all blocks with class=post-content are found, and within each the text of the class=entry-title element is extracted. Note that an encoding can be specified when parsing, as in the first line; GB18030 is used here for Simplified Chinese.
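As a further illustration (the tag and attribute here are my own example, not from the code above), the same tree can also be searched by tag name, for instance to collect all links inside each post:

# Find every <a> tag inside each post-content block and read its href attribute.
for post in posts:
    for link in post.find_all('a'):
        print link.get('href'), link.get_text()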
The above covers fetching the HTML text content. If the page contains pictures, the code above will only give you links pointing to the pictures' original locations. If you need to download them, you can use the urlretrieve method; there is not much more to it.
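A minimal sketch of doing that with urlretrieve, assuming the posts parsed above contain <img> tags (the file-naming scheme is just an assumption):

import os
import urllib

# Download every image referenced in the posts into the current directory,
# naming each file after the last part of its URL.
for post in posts:
    for img in post.find_all('img'):
        imgurl = img.get('src')
        if imgurl:
            urllib.urlretrieve(imgurl, os.path.basename(imgurl))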