Capturing the Douban Movies Top 250 with Python 2.7
Use Python 2.7 to scrape the Douban movie Top 250.
1. Goals
- Scrape the Top 250 movie names
- Print them out in order
2. Parsing the webpage
- To write a web crawler you first need to inspect the HTML of the target pages with some tool, such as a browser. I use Firefox with the Firebug plug-in installed, which makes it easy to view the HTML and much else.
- Open the Douban Top 250 movie ranking page: each page lists 25 movies, and there are 10 pages in total. The URL of each page follows this pattern:
http://movie.douban.com/top250?start=0
http://movie.douban.com/top250?start=25
http://movie.douban.com/top250?start=50
http://movie.douban.com/top250?start=75
and so on, up to start=225. A single loop over these ten offsets (0, 25, ..., 225) therefore covers every page.
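The ten page URLs can be generated with a small loop before any request is made (a minimal sketch; works unchanged in Python 2 and 3):

```python
# Build the ten page URLs for the Top 250 list (25 movies per page).
base = 'http://movie.douban.com/top250?start='
urls = [base + str(page * 25) for page in range(10)]

print(urls[0])   # http://movie.douban.com/top250?start=0
print(urls[-1])  # http://movie.douban.com/top250?start=225
```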
- On the page, right-click the Chinese name of any movie and choose "Inspect Element" to view the HTML source. The Chinese name sits inside a <span class="title"> element, and so does the English name, so the regular expression <span class="title">(.*)</span> matches both. Since only the Chinese name is wanted, the English names must be filtered out. This can be done with the string find(str, pos_start, pos_end) method, using two characters that appear only in the English-name spans: a space and a '/'. See the code below for details.
3. Code implementation
The code here is fairly simple, so there is no need to define any functions.
```python
#!/usr/bin/python
# -*- coding: utf-8 -*-
import re
import sys

import requests
from bs4 import BeautifulSoup

reload(sys)
sys.setdefaultencoding('utf-8')

print 'Capturing data from the Douban movie Top 250......'

for page in range(10):
    url = 'https://movie.douban.com/top250?start=' + str(page * 25)
    print '--------- Crawling page ' + str(page + 1) + ' ---------'
    html = requests.get(url)
    html.raise_for_status()
    try:
        soup = BeautifulSoup(html.text, 'html.parser')
        soup = str(soup)  # turn the parsed page back into a string for regex matching
        title = re.compile(r'<span class="title">(.*)</span>')
        names = re.findall(title, soup)
        for name in names:
            # English names contain ' ' and '/'; keep only the Chinese names
            if name.find(' ') == -1 and name.find('/') == -1:
                print name
    except Exception as e:
        print e

print 'Crawling complete!'
```
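The regex-and-filter step can also be checked offline against a hand-written HTML fragment, without touching the network. The snippet below is made up for illustration and only mimics Douban's markup; note it uses a non-greedy (.*?) because both spans sit on one line here, whereas on the real page each span is on its own line and the plain (.*) suffices:

```python
# -*- coding: utf-8 -*-
import re

# Made-up fragment mimicking Douban's title markup: the first span holds
# the Chinese title, the second the English title prefixed by a slash.
html = ('<span class="title">肖申克的救赎</span>'
        '<span class="title">&nbsp;/&nbsp;The Shawshank Redemption</span>')

names = re.findall(r'<span class="title">(.*?)</span>', html)
# Keep only names containing neither ' ' nor '/': drops the English variant.
chinese = [n for n in names if n.find(' ') == -1 and n.find('/') == -1]
print(chinese)
```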
Given my limited skill, some deficiencies are inevitable. Thank you for reading!