Python crawler in practice: crawling jokes from Qiushibaike (the "Embarrassing Encyclopedia")


1. The purpose of this article is to practice writing a web crawler.

Goals:

1. Crawl the popular jokes on Qiushibaike.
2. Filter out the jokes that contain a picture.
3. Get each joke's publication time, publisher, content, and number of likes.

2. First we set the URL to http://www.qiushibaike.com/hot/page/10 (you can choose the page freely), and construct a request to see whether it succeeds.

The code to construct the request:

# -*- coding: utf-8 -*-
import urllib
import urllib2
import re

page = 10
url = 'http://www.qiushibaike.com/hot/page/' + str(page)
# Pretend to be a browser, otherwise the site may reject the request
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
headers = {'User-Agent': user_agent}
try:
    request = urllib2.Request(url, headers=headers)
    response = urllib2.urlopen(request)
    content = response.read()
    print content
except urllib2.URLError, e:
    if hasattr(e, "code"):
        print e.code
    if hasattr(e, "reason"):
        print e.reason

The request is built successfully, but the output comes back garbled. Don't worry; we simply change the read line to:

content = response.read().decode('utf-8')
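
Incidentally, instead of hard-coding utf-8 you can ask the response which charset the server declares. Below is a minimal sketch, assuming Python 2 as in the rest of this article; the fallback to utf-8 when no charset is declared is my own choice:

# -*- coding: utf-8 -*-
import urllib2

url = 'http://www.qiushibaike.com/hot/page/10'
headers = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
request = urllib2.Request(url, headers=headers)
response = urllib2.urlopen(request)

# Read the charset declared in the Content-Type header;
# fall back to utf-8 when none is declared (an assumption)
charset = response.headers.getparam('charset') or 'utf-8'
content = response.read().decode(charset)
print content[:200]  # first 200 characters, now properly decoded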

3. Before extracting the jokes, we must first analyze the structure of the page.

4. Having looked at the page structure, we can write a regular expression to match it, turning the code into the following:

pattern = re.compile('<div.*?class="author.*?>.*?<a.*?</a>.*?<a.*?>(.*?)</a>.*?<div.*?class'
                     '="content".*?>(.*?)</div>(.*?)<div class="stats.*?class="number">(.*?)</i>', re.S)
items = re.findall(pattern, content)
for item in items:
    # item[2] is the span between the content and the stats;
    # an image, if present, appears there
    haveImg = re.search("img", item[2])
    if not haveImg:
        print item[0], item[1], item[2], item[3]

In this pattern:

(1) .*? is a fixed collocation: . and * together match any number of arbitrary characters, and the trailing ? makes the match non-greedy, that is, as short as possible. We will use this collocation a lot later.

(2) (.*?) denotes a capture group. There are four groups in this regular expression; when we later iterate over items, item[0] holds what the first (.*?) captured, item[1] the second, and so on (see the sketch after this list).

(3) The re.S flag makes the dot match any character whatsoever, including newline characters.

(4) The search for "img" in item[2] is what lets us drop the jokes that contain a picture tag.
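
To make the groupings concrete, here is a small self-contained sketch. The HTML fragment is hand-written and only approximates the real Qiushibaike markup; it exists purely to give the pattern something to match:

# -*- coding: utf-8 -*-
import re

# Hand-written sample approximating one joke on the page (illustrative only)
sample = '''<div class="author clearfix">
<a href="/users/1"><img src="avatar.jpg"/></a>
<a href="/users/1">SomeUser</a>
</div>
<div class="content">A short joke goes here.</div>
<div class="stats"><i class="number">1024</i></div>'''

pattern = re.compile('<div.*?class="author.*?>.*?<a.*?</a>.*?<a.*?>(.*?)</a>.*?<div.*?class'
                     '="content".*?>(.*?)</div>(.*?)<div class="stats.*?class="number">(.*?)</i>', re.S)

for item in re.findall(pattern, sample):
    print item[0]        # first group: the publisher -> SomeUser
    print item[1]        # second group: the joke text -> A short joke goes here.
    print repr(item[2])  # third group: the span where an <img> tag would appear
    print item[3]        # fourth group: the number of likes -> 1024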

Putting the above together, we arrive at the final code:

# -*- coding: utf-8 -*-
import urllib
import urllib2
import re

page = 10
url = 'http://www.qiushibaike.com/hot/page/' + str(page)
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
headers = {'User-Agent': user_agent}
try:
    request = urllib2.Request(url, headers=headers)
    response = urllib2.urlopen(request)
    content = response.read().decode('utf-8')
    pattern = re.compile('<div.*?class="author.*?>.*?<a.*?</a>.*?<a.*?>(.*?)</a>.*?<div.*?class'
                         '="content".*?>(.*?)</div>(.*?)<div class="stats.*?class="number">(.*?)</i>', re.S)
    items = re.findall(pattern, content)
    for item in items:
        haveImg = re.search("img", item[2])
        if not haveImg:  # skip jokes that carry an image
            print item[0], item[1], item[2], item[3]
except urllib2.URLError, e:
    if hasattr(e, "code"):
        print e.code
    if hasattr(e, "reason"):
        print e.reason
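
For readers on Python 3, the following is a rough port of the final code. It is my own sketch rather than part of the original article, and the site's markup may have changed since this was written, so treat it as illustrative:

# -*- coding: utf-8 -*-
import re
import urllib.error
import urllib.request

page = 10
url = 'http://www.qiushibaike.com/hot/page/' + str(page)
headers = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
pattern = re.compile('<div.*?class="author.*?>.*?<a.*?</a>.*?<a.*?>(.*?)</a>.*?<div.*?class'
                     '="content".*?>(.*?)</div>(.*?)<div class="stats.*?class="number">(.*?)</i>', re.S)

try:
    request = urllib.request.Request(url, headers=headers)
    response = urllib.request.urlopen(request)
    content = response.read().decode('utf-8')
    for item in re.findall(pattern, content):
        if not re.search('img', item[2]):  # skip jokes that carry an image
            print(item[0], item[1], item[2], item[3])
except urllib.error.URLError as e:
    if hasattr(e, 'code'):
        print(e.code)
    if hasattr(e, 'reason'):
        print(e.reason)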

Reference: http://cuiqingcai.com/990.html
