Python web crawler (vii): a Baidu Wenku (Baidu Library) article crawler

Source: Internet
Author: User
Tags: python, web crawler

If you crawl a Baidu Wenku article the way described in the previous posts, you can only get the few pages that are already displayed; the content of the pages that have not been rendered yet is unavailable. To see the entire article, you have to manually click "Continue reading" at the bottom so that all the pages appear.

Inspecting the elements shows that the HTML before expansion, when the hidden pages' text has not yet been rendered, differs from the HTML after expansion. A plain crawler only receives the unexpanded HTML, so it can only get part of the content.
This article therefore uses a tool that automates web-page operations to obtain the expanded HTML: the Selenium browser-automation tool.
Install Selenium:
pip3 install selenium
Install chromedriver.exe:
There are quite a few pitfalls in this step.
Driver Download Address:
http://chromedriver.storage.googleapis.com/index.html
Be sure to download the chromedriver that matches your Chrome version. Note that the driver with the highest version number does not necessarily correspond to the latest Chrome browser; check the notes.txt file carefully to see the mapping. For example, my Chrome is v62, and the matching chromedriver is v2.33. Drop chromedriver.exe into the C:\Program Files (x86)\Google\Chrome\Application\ directory, then add that directory to the environment variables: press Win+R, run sysdm.cpl, open Advanced > Environment Variables, and append C:\Program Files (x86)\Google\Chrome\Application\ to Path. Alternatively, pass the driver's path when you create the Chrome instance.
browser = webdriver.Chrome(r'C:\Program Files (x86)\Google\Chrome\Application\chromedriver.exe')
Using Selenium to automate the page:
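If you go the environment-variable route, you can check from Python whether chromedriver is actually discoverable on PATH before launching the browser. This is a small sketch using the standard library's shutil.which; the bare executable name "chromedriver" is an assumption (on Windows the binary is chromedriver.exe, and shutil.which tries PATHEXT extensions automatically there):

```python
import shutil

def find_chromedriver():
    """Return the full path to chromedriver if it is on PATH, else None."""
    # shutil.which searches the directories listed in PATH, which is
    # exactly what Selenium does when no explicit driver path is given.
    return shutil.which("chromedriver")

path = find_chromedriver()
if path is None:
    print("chromedriver not found on PATH; pass its path to webdriver.Chrome instead")
else:
    print("using chromedriver at", path)
```

If this prints the "not found" message, the Path entry from the step above was not picked up (a new terminal session is usually needed after changing environment variables).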

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('user-agent="Mozilla/5.0 (Linux; Android 4.0.4; Galaxy Nexus Build/IMM76B) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.133 Mobile Safari/535.19"')
driver = webdriver.Chrome(chrome_options=options)
driver.get('https://www.baidu.com/')
html = driver.page_source
Complete Code
# contents_bdwk.py
from selenium import webdriver
from bs4 import BeautifulSoup

# *** use Selenium to operate the web page ***
options = webdriver.ChromeOptions()
options.add_argument('user-agent="Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.94 Safari/537.36"')  # set the user agent
driver = webdriver.Chrome(chrome_options=options)
driver.get('https://wenku.baidu.com/view/aa31a84bcf84b9d528ea7a2c.html')  # fill in the article URL here
page = driver.find_element_by_xpath("//div[@id='html-reader-go-more']")
driver.execute_script('arguments[0].scrollIntoView();', page)  # scroll the page to make the element visible
nextpage = driver.find_element_by_xpath("//span[@class='moreBtn goBtn']")
nextpage.click()  # click "Continue reading" to expand the remaining pages

# *** parse the expanded HTML ***
html = driver.page_source
bf1 = BeautifulSoup(html, 'lxml')
# get the article title
title = bf1.find_all('h1', class_='reader_ab_test with-top-banner')
bf2 = BeautifulSoup(str(title), 'lxml')
title = bf2.find('span')
title = title.get_text()
filename = title + '.txt'
# get the article content
texts_list = []
result = bf1.find_all('div', class_='ie-fix')
for each_result in result:
    bf3 = BeautifulSoup(str(each_result), 'lxml')
    texts = bf3.find_all('p')
    for each_text in texts:
        texts_list.append(each_text.string)
contents = ''.join(texts_list).replace('\xa0', '')

# *** save as a .txt file ***
with open(filename, 'a', encoding='utf-8') as f:
    f.writelines(contents)
    f.write('\n')
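The parsing half of the script does not actually need a live browser: given any saved HTML, BeautifulSoup extracts the title and paragraphs the same way. Below is a minimal sketch run against a hand-written HTML snippet that mimics the Wenku structure; the class names reader_ab_test with-top-banner and ie-fix come from the script above, while the sample text is invented. It uses the built-in html.parser so lxml is not required:

```python
from bs4 import BeautifulSoup

# A tiny stand-in for driver.page_source, mimicking the structure
# the script above looks for (class names from the original code).
html = """
<html><body>
  <h1 class="reader_ab_test with-top-banner"><span>Sample Title</span></h1>
  <div class="ie-fix">
    <p>First paragraph.</p>
    <p>Second\xa0paragraph.</p>
  </div>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Title: find the <h1> by its exact class string, then the <span> inside it.
title = soup.find("h1", class_="reader_ab_test with-top-banner").find("span").get_text()

# Content: collect every <p> under each ie-fix div, then join the pieces
# and strip non-breaking spaces, as the original script does.
texts = []
for div in soup.find_all("div", class_="ie-fix"):
    for p in div.find_all("p"):
        texts.append(p.get_text())
contents = "".join(texts).replace("\xa0", "")

print(title)     # Sample Title
print(contents)  # First paragraph.Secondparagraph.
```

Testing against a static snippet like this makes it easy to debug the selectors separately from the Selenium clicking logic, which is where most failures happen when the page layout changes.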
