While learning Python recently, I read through a simple web crawler: http://www.cnblogs.com/fnng/p/3576154.html
I then implemented a simple web crawler myself, to fetch the latest movie information.
A crawler mainly fetches a page and then parses it, extracting the information needed for further analysis and mining.
The first thing you need to learn is Python's regular expressions: http://www.cnblogs.com/fnng/archive/2013/05/20/3089816.html
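Since the crawler leans entirely on regular expressions, it helps to see the pieces used here in isolation first. The following is an illustrative example (the text and pattern are invented, not from the post) showing `re.compile`, lazy matching with `.+?`, capture groups, and the `re.DOTALL` flag, which lets `.` also match newlines so one pattern can span several lines of HTML:

```python
import re

# A lazy quantifier (.+?) stops at the first possible match,
# and re.DOTALL lets "." also match newlines, so a single
# pattern can reach across line breaks in the HTML.
text = 'name: <b>Alice</b>\nage: <b>30</b>'
pattern = re.compile(r'name: <b>(.+?)</b>.+?age: <b>(.+?)</b>', re.DOTALL)
print(pattern.findall(text))  # [('Alice', '30')]
```

Without `re.DOTALL`, the `.+?` between the two fields could not cross the newline and `findall` would return an empty list.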
The URL to crawl: http://movie.douban.com/
View the page's source code and analyze where the data to parse lives:
Resource information to extract:
1. Movie Pictures
2. Movie Title
3. Movie Ratings
4. Movie Ticket Information
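These four fields line up, in order, with the four capture groups in the regular expression the crawler uses. Here is that pattern applied to a small hand-made HTML snippet that mimics the page structure (the snippet and its values are invented for illustration; the real douban markup may differ):

```python
import re

# The crawler's pattern: four capture groups for poster image,
# title, rating, and ticket-button text, in that order.
reg = (r'<li class="poster">.+?src="(.+?\.jpg)".+?</li>.+?class="title".+?'
       r'class="">(.+?)</a>.+?class="rating".+?class="subject-rate">(.+?)</span>'
       r'.+?<a onclick=".+?">(.+?)</a>')

# Hand-made snippet mimicking the structure the pattern expects.
sample = '''
<li class="poster"><img src="http://img.example.com/p1.jpg"></li>
<li class="title"><a class="">Example Movie</a></li>
<li class="rating"><span class="subject-rate">8.5</span></li>
<li class="ticket"><a onclick="buy()">Buy Ticket</a></li>
'''

result = re.compile(reg, re.DOTALL).findall(sample)
print(result)
# [('http://img.example.com/p1.jpg', 'Example Movie', '8.5', 'Buy Ticket')]
```

Each matched movie comes back as a 4-tuple, which is exactly the shape the `display` and `savetodb` functions below index into.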
The fetched results are:
The Python implementation code is:
#!/usr/bin/env python
# coding=utf-8
import urllib2
import re
import pymongo

def gethtml(url):
    # fetch the raw HTML of the page
    page = urllib2.urlopen(url)
    html = page.read()
    page.close()
    return html

def getcontent(html):
    # one pattern with four capture groups: poster image, title,
    # rating, and ticket-button text
    reg = r'<li class="poster">.+?src="(.+?\.jpg)".+?</li>.+?class="title".+?class="">(.+?)</a>.+?class="rating".+?class="subject-rate">(.+?)</span>.+?<a onclick=".+?">(.+?)</a>'
    contentre = re.compile(reg, re.DOTALL)
    contentlist = contentre.findall(html)
    return contentlist

def getconnection():
    # get the database connection
    conn = pymongo.Connection('localhost', 27017)
    return conn

def savetodb(contentlist):
    # store the results in a MongoDB database
    conn = getconnection()
    db = conn.db
    t_movie = db.t_movie
    for content in contentlist:
        value = dict(poster=content[0], title=content[1],
                     rating=content[2], ticket_btn=content[3])
        t_movie.save(value)

def display(contentlist):
    for content in contentlist:
        print 'poster', '\t', content[0]
        print 'title', '\t', content[1]
        print 'rating', '\t', content[2]
        print 'ticket_btn', '\t', content[3]
        print '..............................................................................'

if __name__ == "__main__":
    url = "http://movie.douban.com/"
    html = gethtml(url)
    contentlist = getcontent(html)
    print len(contentlist)
    display(contentlist)
    savetodb(contentlist)
    print "finished"
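The script above is Python 2 (`urllib2`, `print` statements, and the old `pymongo.Connection`, which has since been removed from pymongo in favor of `pymongo.MongoClient`). As a hedged sketch, a Python 3 version of the fetch-and-parse part might look like the following; the MongoDB step is omitted to keep the example standard-library only:

```python
import re
import urllib.request

# Same four-group pattern as the Python 2 script above.
PATTERN = re.compile(
    r'<li class="poster">.+?src="(.+?\.jpg)".+?</li>.+?class="title".+?'
    r'class="">(.+?)</a>.+?class="rating".+?class="subject-rate">(.+?)</span>'
    r'.+?<a onclick=".+?">(.+?)</a>', re.DOTALL)

def gethtml(url):
    # urllib2.urlopen becomes urllib.request.urlopen;
    # in Python 3 the response body is bytes and must be decoded.
    with urllib.request.urlopen(url) as page:
        return page.read().decode('utf-8')

def getcontent(html):
    # returns a list of (poster, title, rating, ticket_btn) tuples
    return PATTERN.findall(html)

if __name__ == '__main__':
    for poster, title, rating, ticket_btn in getcontent(gethtml('http://movie.douban.com/')):
        print(poster, title, rating, ticket_btn, sep='\t')
```

Note the page markup (and whether the site still serves plain HTML to this kind of request) may well have changed since the original post, so the pattern is shown for the technique rather than as something guaranteed to match today's page.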
With that, a simple web crawler is complete. Isn't it simple?
A simple web crawler implemented in Python