Python Crawler Example (ii): Scraping Data from the Douyu Live-Streaming Platform with Selenium

Source: Internet
Author: User

Program description: Scrape the room names and viewer counts from the Douyu live-streaming platform, then tally the total number of rooms and the total number of viewers at a given moment.

Process Analysis:

First, open the Douyu directory page: http://www.douyu.com/directory/all

On the directory page, scroll to the bottom and click "Next". The URL does not change, so sending a request with urllib2 would only return the first page of data. Instead, we can use Selenium with PhantomJS to simulate clicking the next-page button in a browser, which lets us obtain the full response data for every page.

First, inspect the next-page element:

<a href="#" class="shark-pager-next">Next</a>

Using Selenium and PhantomJS to simulate the click:

from selenium import webdriver

# Create a browser object using the PhantomJS browser
driver = webdriver.PhantomJS()
# Load the page with the get() method
driver.get("https://www.douyu.com/directory/all")
# class="shark-pager-next" is the next-page button; click() simulates a tap
driver.find_element_by_class_name("shark-pager-next").click()
# Print the page source
print driver.page_source

This lets us set up a loop and keep clicking until every page has loaded. The loop should terminate on the last page; navigating there and inspecting the next-page element gives:

<a href="#" class="shark-pager-next shark-pager-disable shark-pager-disable-next">Next</a>

Comparing the two, we see that when the class list contains "shark-pager-disable-next", the next-page button can no longer be clicked; this is the condition for stopping the loop.

# If "shark-pager-disable-next" does not appear in the page source,
# find() returns -1, which can be used as the loop condition
driver.page_source.find("shark-pager-disable-next")
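The termination check is plain string logic, so it can be sketched without a browser. Below is a minimal illustration using two made-up page-source snippets and a hypothetical helper name `is_last_page`:

```python
# Two hypothetical page-source fragments (not fetched from Douyu)
middle_page = '<a href="#" class="shark-pager-next">Next</a>'
last_page = ('<a href="#" class="shark-pager-next shark-pager-disable '
             'shark-pager-disable-next">Next</a>')

def is_last_page(page_source):
    # str.find() returns -1 when the marker class is absent,
    # so a non-negative result means the button is disabled
    return page_source.find("shark-pager-disable-next") != -1

print(is_last_page(middle_page))  # False: keep clicking "next page"
print(is_last_page(last_page))    # True: stop the loop
```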

Second, analyze the content to be extracted.

Inspecting a room-name element gives the following:

<span class="dy-name ellipsis fl">ai Winter ming</span>

Use BeautifulSoup to get these elements:

# Room names
names = soup.find_all("span", {"class": "dy-name ellipsis fl"})

Inspecting a viewer-count element gives the following:

<span class="dy-num fr">75.6万</span>

Use BeautifulSoup to get these elements:

# Viewer counts
numbers = soup.find_all("span", {"class": "dy-num fr"})
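The viewer count is displayed either as a plain number or with a 万 (ten-thousand) suffix, e.g. "75.6万" means 756,000. The conversion can be isolated into a small helper (the name `parse_count` is made up for illustration):

```python
# -*- coding: utf-8 -*-

def parse_count(text):
    """Convert a Douyu viewer-count string to a number; 万 = 10,000."""
    text = text.strip()
    if text.endswith(u"万"):
        return float(text[:-1]) * 10000
    return float(text)

print(parse_count(u"5万"))   # 50000.0
print(parse_count(u"328"))   # 328.0
```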

That is the overall process; here is the complete code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from selenium import webdriver
from bs4 import BeautifulSoup as bs
import sys

reload(sys)
sys.setdefaultencoding("utf-8")


class Douyu():
    def __init__(self):
        self.driver = webdriver.PhantomJS()
        self.num = 0    # total number of rooms
        self.count = 0  # total number of viewers

    def douyuSpider(self):
        self.driver.get("https://www.douyu.com/directory/all")
        while True:
            soup = bs(self.driver.page_source, "lxml")
            # Room names, returned as a list
            names = soup.find_all("span", {"class": "dy-name ellipsis fl"})
            # Viewer counts, returned as a list
            numbers = soup.find_all("span", {"class": "dy-num fr"})
            for name, number in zip(names, numbers):
                print u"Viewers: -" + number.get_text().strip() + \
                    u"-\tRoom name: " + name.get_text().strip()
                self.num += 1
                count = number.get_text().strip()
                if count[-1] == u"万":  # 万 = 10,000
                    countNum = float(count[:-1]) * 10000
                else:
                    countNum = float(count)
                self.count += countNum
            print "Current number of live rooms: %s" % self.num
            print "Current number of viewers: %s" % self.count
            # If the "next page" button is disabled in the page source, exit the loop
            if self.driver.page_source.find("shark-pager-disable-next") != -1:
                break
            # Otherwise, keep clicking to the next page
            self.driver.find_element_by_class_name("shark-pager-next").click()


if __name__ == "__main__":
    d = Douyu()
    d.douyuSpider()
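The tallying step inside the loop is pure logic, so it can be checked without a browser. Here is a sketch of just that step, run over made-up sample data (the room names and counts are invented):

```python
# -*- coding: utf-8 -*-

# Hypothetical (name, viewer-count) pairs, as zip(names, numbers) would yield
rooms = [(u"room-a", u"75.5万"), (u"room-b", u"328"), (u"room-c", u"2万")]

total_rooms = 0
total_viewers = 0.0
for name, count in rooms:
    total_rooms += 1
    if count.endswith(u"万"):  # 万 = 10,000
        total_viewers += float(count[:-1]) * 10000
    else:
        total_viewers += float(count)

print(total_rooms)    # 3
print(total_viewers)  # 775328.0
```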

Run results (partial display):
