[Python] Batch-getting the list of users who have seen a given movie from Douban


Objective

For an experiment I plan to run later, I need movie data for a large number of Douban users, so the idea is to collect relatively active Douban Movie users from a film's "Douban members who have seen this movie" page.

Link analysis

Here is the link to the 看过"模仿游戏"的豆瓣成员 ("Douban members who have seen The Imitation Game") page: http://movie.douban.com/subject/10463953/collections .

Each page shows 20 users who have watched the film. When you click through to the next page, the URL becomes: http://movie.douban.com/subject/10463953/collections?start=20 .

So requesting the next page of content is really just adding 20 to the number after "start".

Therefore, we can set base_url = 'http://movie.douban.com/subject/10463953/collections?start=' and loop over i = range(0, 200, 20), building each page URL as url = base_url + str(i).

I set the maximum value to 180 because testing showed that Douban only lets you view the most recent 200 users who have seen a movie.
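As a quick illustration, here is roughly how those page URLs are generated (a minimal Python 2 sketch matching the full code below; only the subject id from the link above is used):

base_url = 'http://movie.douban.com/subject/10463953/collections?start='

# one URL per page of 20 users; 0, 20, ..., 180 covers the ~200 visible users
for i in range(0, 200, 20):
    url = base_url + str(i)
    print url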

Reading Web pages

When fetching the pages I went through an HTTP proxy, and to keep the request rate low enough that Douban does not ban my IP, the script calls time.sleep(5) after reading each page to wait 5 seconds. You can go do something else while the program runs.
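A minimal sketch of that setup, assuming urllib2 as in the full code below (the proxy address 127.0.0.1:8087 is the one used there; the helper name fetch_page is just for illustration):

import time
import urllib2

proxy_info = '127.0.0.1:8087'   # assumed local HTTP proxy; replace with your own
opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy_info}))
urllib2.install_opener(opener)

def fetch_page(url):
    html = urllib2.urlopen(url).read()
    time.sleep(5)   # wait 5 seconds between pages so Douban does not ban the IP
    return html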

Web page parsing

This time the BeautifulSoup library is used to parse the HTML.
Each user's information looks like this in the HTML:

  <table width="100%" class="">
    <tr>
      <td width="80" valign="top">
        <a href="http://movie.douban.com/people/46770381/">
          <img class="" src="http://img4.douban.com/icon/u46770381-16.jpg" alt="July" />
        </a>
      </td>
      <td valign="top">
        <div class="pl2">
          <a href="http://movie.douban.com/people/46770381/" class="">July
            <span style="font-size:12px;">Yinchuan</span>
          </a>
        </div>
        <p class="pl">2015-08-23&nbsp;<span class="allstar40" title="Recommended"></span></p>
      </td>
    </tr>
  </table>

First, initialize the parser with the HTML just read: soup = BeautifulSoup(html). The only information needed is the user ID and the user's movie homepage, so the really useful part is this block:

  <td width="80" valign="top">
    <a href="http://movie.douban.com/people/46770381/">
      <img class="" src="http://img4.douban.com/icon/u46770381-16.jpg" alt="July" />
    </a>
  </td>

So td_tags = soup.findAll('td', width='80', valign='top') finds all the <td width="80" valign="top"> blocks.

Then, with td = td_tags[0] and a = td.a, you get:

  <a href="http://movie.douban.com/people/46770381/">
    <img class="" src="http://img4.douban.com/icon/u46770381-16.jpg" alt="July" />
  </a>

link = a.get('href') retrieves the href attribute, which is also the link to the user's movie homepage. The user ID can then be obtained from that link with a simple string lookup.
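Putting the parsing steps together, here is a sketch of the extraction (the helper name parse_user_ids is mine; the logic mirrors parseHtmlUserId in the full code below):

from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3, as used in the full code

def parse_user_ids(html):
    soup = BeautifulSoup(html)
    ids, links = [], []
    # only the first 20 <td width="80" valign="top"> blocks are "watched" users
    for td in soup.findAll('td', width='80', valign='top')[:20]:
        link = td.a.get('href')          # e.g. http://movie.douban.com/people/46770381/
        start = link.find('people/')
        ids.append(link[start + 7:-1])   # the ID sits between 'people/' and the trailing '/'
        links.append(link)
    return ids, links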

Full code
# coding=utf-8
## Get user IDs from Douban pages
## Page URL pattern: http://movie.douban.com/subject/26289144/collections?start=0
##                   http://movie.douban.com/subject/26289144/collections?start=20

from BeautifulSoup import BeautifulSoup
import codecs
import time
import urllib2

baseUrl = 'http://movie.douban.com/subject/25895276/collections?start='

proxyInfo = '127.0.0.1:8087'
proxySupport = urllib2.ProxyHandler({'http': proxyInfo})
opener = urllib2.build_opener(proxySupport)
urllib2.install_opener(opener)


# Save user information (ID, home page link) to a file
def saveUserInfo(idList, linkList):
    if len(idList) != len(linkList):
        print 'Error: len(idList) != len(linkList)!'
        return
    writeFile = codecs.open('UserIdList3.txt', 'a', 'utf-8')
    size = len(idList)
    for i in range(size):
        writeFile.write(idList[i] + '\t' + linkList[i] + '\n')
    writeFile.close()

# Parse user IDs and links from the given HTML text
def parseHtmlUserId(html):
    idList = []    # list of IDs to return
    linkList = []  # list of links to return

    soup = BeautifulSoup(html)
    ## <td width="80" valign="top">
    ##     <a href="http://movie.douban.com/people/liaaaar/">
    ##         <img ... />
    ##     </a>
    ## </td>
    td_tags = soup.findAll('td', width='80', valign='top')
    i = 0
    for td in td_tags:
        # The first 20 users have seen the film; after them come
        # users who only want to see it, so discard the rest
        if i == 20:
            break
        a = td.a
        link = a.get('href')
        i_start = link.find('people/')
        id = link[i_start+7:-1]
        idList.append(id)
        linkList.append(link)
        i += 1
    return (idList, linkList)

# Return the page content for the given start number
def getHtml(num):
    url = baseUrl + str(num)
    page = urllib2.urlopen(url)
    html = page.read()
    return html

def launch():
    # Specify the starting number: a multiple of 20
    ques = raw_input('Start from number? (multiples of 20) ')
    startNum = int(ques)
    if startNum % 20 != 0:
        print 'Input number error!'
        return
    for i in range(startNum, 200, 20):
        print 'Loading page %d/200 ...' % (i+1)
        html = getHtml(i)
        (curIdList, curLinkList) = parseHtmlUserId(html)
        saveUserInfo(curIdList, curLinkList)
        print 'sleeping.'
        time.sleep(5)
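The listing above only defines launch(); to actually run it, call the function yourself, for example from a main guard (this invocation is not part of the original listing):

if __name__ == '__main__':
    launch()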
