Python web page content crawling

Source: Internet
Author: User

Before I used Baidu's cloud collection feature, I thought it was almost magical: no matter what kind of web page it was given, it could always grab the body text accurately. Not long ago I saw web content crawling done with Python, and it turns out to be quite easy to implement.

Here is the code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Modified 2016-07-04

import sys
reload(sys)
sys.setdefaultencoding("utf-8")

import requests
import bs4
import time
import random

# ========================= Global Variables =========================

filename = "8480.txt"

headlink = "http://www.81zw.com/book/8480/"
next_href = "http://www.81zw.com/book/8480/655310.html"

# ========================= Test =========================

## set test flag
#test_flag = True

## get contents
#response = requests.get(next_href)
#if response.status_code == requests.codes.ok:
#    soup = bs4.BeautifulSoup(response.content, "html.parser")
#else:
#    test_flag = False

#if test_flag:
#    ## test for next link
#    link_div = soup.find_all('div', class_='bottem1')
#    next_link = link_div[0].find_all('a')[2]
#    print "----------Next Link:----------"
#    print next_link.get('href')

## find contents
#contents = soup.find_all('div', id='content')
#print "----------Contents:----------"
#print contents[0].text.replace(u'\xa0', ' ')

## find title
#h1_title = soup.find_all('h1')
#print "----------Title:-------------"
#print h1_title[0].text

# ========================= Get Contents =========================
maxloop = 2600
error_flag = 0
maxretrytimes = 20

# create empty output file
f = open(filename, 'w')
f.close()

while error_flag == 0 and maxloop > 0:
    maxloop = maxloop - 1

    # get web content by URL link address, retrying on failure
    retrytimes = 0
    while True:
        response = requests.get(next_href)

        if response.status_code == requests.codes.ok:
            soup = bs4.BeautifulSoup(response.content, "html.parser")
            break
        else:
            r = random.random()
            time.sleep(r)
            retrytimes = retrytimes + 1
            print u"try %d times" % retrytimes

    # get next link
    link_div = soup.find_all('div', class_='bottem1')
    next_link = link_div[0].find_all('a')[2]

    contents = soup.find_all('div', id='content')

    h1_title = soup.find_all('h1')

    chapter_contents = "\n" + h1_title[0].text + "\n" + contents[0].text.replace(u'\xa0', ' ')

    f = open(filename, 'a')
    f.write(chapter_contents)
    f.close()

    # get next link address
    next_href = next_link.get('href')
    npos = next_href.find("http://")
    if npos == -1:
        next_href = headlink + next_href    # relative address, prepend the book's base link
    elif npos == 0:
        pass                                # already an absolute address
    else:
        error_flag = 1                      # unexpected address, stop the loop
    print next_href

I used a novel as the test case: from each page, the script crawls the article title, the body text, and the link to the next page.

The commented-out section in the middle serves as a test, to check that the page content can be grabbed correctly; the section after it actually captures the page content and saves it to a txt file.

This is admittedly not very intelligent: every site you want to crawl has to be analyzed by hand once, to make sure the title, the body text, and the next-page link are picked up correctly. But overall it is quite simple to use, and it is very handy for crawling long serialized works.
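For a new site, that analysis mostly comes down to finding the right selectors. As a rough sketch (the id and class names below are placeholders, not taken from any real site), only three lookups are site-specific:

# hypothetical selectors for some other novel site -- check the actual
# HTML in the browser's developer tools before using them
h1_title = soup.find_all('h1')                        # chapter title
contents = soup.find_all('div', id='chapter_body')    # placeholder id of the body div
link_div = soup.find_all('div', class_='page_nav')    # placeholder class of the nav div
next_link = link_div[0].find_all('a')[2]              # position of the "next" anchor may differ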

The current code only saves the text part of each article. I do not yet know how to handle the image part; I will try that later.
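If images do need to be handled later, one possible approach (only a sketch, assuming the content div contains <img> tags; the save_images helper and the .jpg extension are made up here, not part of the script above) is to download each image with requests as well:

import os
import requests

def save_images(soup, out_dir, base_url):
    # sketch: save every <img> inside the content div into out_dir
    if not os.path.exists(out_dir):
        os.makedirs(out_dir)
    content = soup.find_all('div', id='content')[0]
    for i, img in enumerate(content.find_all('img')):
        src = img.get('src')
        if not src:
            continue
        if src.find("http://") == -1:        # relative address, same trick as next_href
            src = base_url + src
        data = requests.get(src).content
        imgfile = open(os.path.join(out_dir, "img_%d.jpg" % i), 'wb')   # extension is a guess
        imgfile.write(data)
        imgfile.close()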

The two libraries used are requests, for fetching the HTML document, and bs4, for parsing it; both are very convenient to use.
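For anyone trying this from scratch: both are third-party packages (the bs4 module is installed as beautifulsoup4), and the basic pattern the script relies on is just a fetch followed by a parse. A minimal sketch:

# install once with: pip install requests beautifulsoup4
import requests
import bs4

response = requests.get("http://www.81zw.com/book/8480/")    # fetch the HTML document
soup = bs4.BeautifulSoup(response.content, "html.parser")    # parse it into a searchable tree
print(soup.title.text)                                       # e.g. print the page's <title>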
