Python crawler: crawling jokes from Qiushibaike (the "Embarrassing Encyclopedia")

Source: Internet
Author: User

No matter what, it is worth learning to write Python crawlers.

Before formally learning crawlers, briefly study HTML and CSS and understand the basic structure of a web page; it makes getting started much faster.

1. Get the Qiushibaike URL

http://www.qiushibaike.com/hot/page/2/ - the trailing 2 refers to page 2.
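Since only that trailing number changes, the URL for any page can be built by plain string concatenation; a small sketch:

    # build the hot-page URL for the first few pages
    for page in range(1, 4):
        print 'http://www.qiushibaike.com/hot/page/' + str(page) + '/'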

2. Crawl the HTML page first

    import urllib
    import urllib2

    page = 2
    url = 'http://www.qiushibaike.com/hot/page/' + str(page)  # URL for page 2
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)  # receive the response

Of course, errors may occur - mainly HTTPError and URLError.

The causes of URLError may be:

    • The network is not connected (the computer cannot get online)
    • The specific server cannot be reached
    • The server does not exist

Catching the exception:

    import urllib2

    request = urllib2.Request('http://www.xxxxx.com')
    try:
        urllib2.urlopen(request)
    except urllib2.URLError, e:
        print e.reason

HTTPError is a subclass of URLError. When a request is made with the urlopen method, the server returns a response object that carries a numeric "status code". For example, if the response is a "redirect", the document has to be fetched from a different address, and urllib2 handles that automatically. Common status codes:

200: The request succeeded. Handling: read and process the content of the response.

202: The request has been accepted but processing is not finished. Handling: block and wait.

204: The server fulfilled the request but has no new information to return. If the client is a user agent, it does not need to update its document view. Handling: discard.

404: Not found. Handling: discard.

500: Internal server error - the server hit an unexpected condition and could not complete the request. This usually happens when the server-side source code has a bug.
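For a request that does succeed, the status code can also be read straight off the response object; a minimal sketch using a placeholder URL:

    import urllib2

    response = urllib2.urlopen('http://www.example.com')
    print response.getcode()  # prints 200 when the request succeeded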

Catching the exception:

    import urllib2

    req = urllib2.Request('http://blog.csdn.net/cqcre')
    try:
        urllib2.urlopen(req)
    except urllib2.HTTPError, e:
        print e.code
        print e.reason

Note: HTTPError is a subclass of URLError, so an except clause for URLError would also catch any HTTPError. Therefore HTTPError should be handled first. The code above can be rewritten as:

    import urllib2

    req = urllib2.Request('http://blog.csdn.net/cqcre')
    try:
        urllib2.urlopen(req)
    except urllib2.HTTPError, e:
        print e.code
    except urllib2.URLError, e:
        print e.reason
    else:
        print "OK"

If you cannot get a response, you may need to add headers to the request so that it imitates a browser:

    import urllib
    import urllib2

    page = 1
    url = 'http://www.qiushibaike.com/hot/page/' + str(page)
    user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
    headers = {'User-Agent': user_agent}
    try:
        request = urllib2.Request(url, headers=headers)  # add the headers
        response = urllib2.urlopen(request)
        print response.read()
    except urllib2.URLError, e:
        if hasattr(e, "code"):
            print e.code
        if hasattr(e, "reason"):
            print e.reason

3. Parse the page and extract the jokes

As the (omitted) screenshot shows, each joke on the page is wrapped in a <div class="article block untagged mb15" id="...">...</div> block. From each block we want three pieces of information: the user name, the joke text, and the number of likes (circled in red, blue, and black in the screenshot). The parsing is done mainly with regular expressions.

    1. Parse the user name. The regular expression fragment is: <div class="author clearfix">.*?
    2. Parse the joke content. The regular expression fragment is: <div.*?span>(.*?)</span> - likewise, the text sits between <span> and </span>, and all characters (including newlines) between the <div ...> and the <span> are skipped with .*?.
    3. Parse the number of likes. The regular expression fragment is: <div class="stats">.*?"number">(.*?)</i>, with (.*?) used in place of the literal count 1520.

Interpretation of the regular expression (see Cui Qingcai's blog):

1) .*? is a fixed pairing: . and * together can match any number of characters of any kind, and the trailing ? makes the match non-greedy, i.e. as short as possible. We will use this a lot later.

2) (.*?) denotes a capture group. The regular expression contains several such groups; when we later iterate over the items, item[0] holds the content of the first (.*?), item[1] the content of the second, and so on.

3) The re.S flag makes the dot match any character, including newlines.
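A tiny self-contained demonstration of non-greedy matching, groups, and re.S (the sample strings here are made up purely for illustration):

    import re

    html = '<span> first </span><span> second </span>'
    # greedy: .* grabs as much as possible, so it spans both <span> blocks
    print re.findall('<span>(.*)</span>', html)    # [' first </span><span> second ']
    # non-greedy: .*? stops at the first closing tag
    print re.findall('<span>(.*?)</span>', html)   # [' first ', ' second ']
    # re.S lets the dot match newline characters as well
    print re.findall('<span>(.*?)</span>', '<span>a\nb</span>', re.S)  # ['a\nb']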

    content = response.read().decode('utf-8')
    # combine the fragments above into one pattern; the user name is assumed to live in an <h2> inside the author div
    pattern = re.compile('<div class="author clearfix">.*?<h2>(.*?)</h2>.*?'
                         '<div.*?span>(.*?)</span>.*?'
                         '<div class="stats">.*?"number">(.*?)</i>', re.S)
    items = re.findall(pattern, content)  # re.findall, from Python's re module, returns every piece of content matching the pattern, i.e. the jokes

There is a problem, though: the expression above captures jokes both with and without pictures, and pictures generally cannot be shown in a console, so jokes containing pictures have to be removed and only the picture-free ones kept. The regular expression needs a small change.

The (omitted) screenshots compare the HTML of a joke without a picture and the HTML of a joke with a picture:

The <div class="thumb"> element (underlined in red in the screenshot) contains the picture, and it does not exist in the HTML of jokes without pictures, so the "img" inside this element can be used to filter out the jokes with pictures. Also note that this element sits between the joke content and the like count.

So add a (.*?) group between the content part and the like-count part of the regular expression. Then, whenever the text captured by that group contains "img", the joke is filtered out.
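The filter itself is just a substring search over that captured group; a minimal sketch with made-up strings:

    import re

    with_img = '<div class="thumb"><img src="pic.jpg" /></div>'
    without_img = '\n'
    print bool(re.search("img", with_img))     # True  -> the joke has a picture, skip it
    print bool(re.search("img", without_img))  # False -> keep the joke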

    content = response.read().decode('utf-8')
    pattern = re.compile('<div class="author clearfix">.*?<h2>(.*?)</h2>.*?'
                         '<div.*?span>(.*?)</span>(.*?)'  # note the added (.*?): it captures whatever sits between the content and the stats
                         '<div class="stats">.*?"number">(.*?)</i>', re.S)
    items = re.findall(pattern, content)  # items is the list of tuples extracted from the HTML according to the pattern
    for item in items:
        haveImg = re.search("img", item[2])  # item[0..3] are the user name, joke content, picture section and like count; item[2] is used for filtering
        if not haveImg:
            print item[0], item[1], item[3]

OK, with that in place we can crawl every picture-free joke on a page. The complete code:

    import urllib
    import urllib2
    import re

    page = 2
    url = 'http://www.qiushibaike.com/hot/page/' + str(page)
    user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
    headers = {'User-Agent': user_agent}
    request = urllib2.Request(url, headers=headers)
    response = urllib2.urlopen(request)
    content = response.read().decode('utf-8')
    pattern = re.compile('<div class="author clearfix">.*?<h2>(.*?)</h2>.*?'
                         '<div.*?span>(.*?)</span>(.*?)'
                         '<div class="stats">.*?"number">(.*?)</i>', re.S)
    items = re.findall(pattern, content)
    for item in items:
        haveImg = re.search("img", item[2])
        if not haveImg:
            print item[0], item[1], item[3]

4. The code above is the core, but it is a bit rough; let's patch it up a little:

    # coding: utf-8
    import urllib
    import urllib2
    import re


    class Spider_QSBK:
        def __init__(self):
            self.page_index = 2
            self.enable = False
            self.stories = []
            self.user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
            self.headers = {'User-Agent': self.user_agent}

        def getPage(self, page_index):
            url = 'http://www.qiushibaike.com/hot/page/' + str(page_index)
            try:
                request = urllib2.Request(url, headers=self.headers)
                response = urllib2.urlopen(request)
                content = response.read().decode('utf-8')
                return content
            except urllib2.URLError, e:
                print e.reason
                return None

        def getStories(self, page_index):
            content = self.getPage(page_index)
            pattern = re.compile('<div class="author clearfix">.*?<h2>(.*?)</h2>.*?'
                                 '<div.*?span>(.*?)</span>(.*?)'
                                 '<div class="stats">.*?"number">(.*?)</i>', re.S)
            items = re.findall(pattern, content)
            for item in items:
                haveImg = re.search("img", item[2])
                if not haveImg:
                    self.stories.append([item[0], item[1], item[3]])
            return self.stories

        def showStories(self, page_index):
            self.getStories(page_index)
            for st in self.stories:
                print u"Page %d\tPoster: %s\tLikes: %s\n%s" % (page_index, st[0], st[2], st[1])
            self.stories = []  # clear the cache so the next page starts from an empty list

        def start(self):
            self.enable = True
            # while self.enable:
            self.showStories(self.page_index)
            self.page_index += 1


    spider = Spider_QSBK()
    spider.start()
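One possible way to use the enable flag and the commented-out while loop is an interactive driver that keeps fetching pages until the user quits; a sketch that assumes the Spider_QSBK class above (the prompt wording is my own):

    # hypothetical interactive driver for the Spider_QSBK class above
    spider = Spider_QSBK()
    spider.enable = True
    while spider.enable:
        spider.showStories(spider.page_index)
        spider.page_index += 1
        # stop when the user types q; any other input fetches the next page
        if raw_input("Press Enter for the next page, or q to quit: ").strip() == 'q':
            spider.enable = False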

The result: the poster, like count, and text of every picture-free joke on the page are printed.
