[Python] Web Crawler (IX): Baidu Tieba Crawler (v0.4) Source and Analysis

Source: http://blog.csdn.net/pleasecallmewhy/article/details/8934726
Author: why

Update: Thanks to readers who flagged this in the comments: Baidu Tieba has since switched to UTF-8 encoding, so decode('gbk') needs to be changed to decode('utf-8').
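Concretely, the page-loading line in the source below changes to (same names as in the code; only the codec differs):

myPage = urllib2.urlopen(self.myUrl).read().decode('utf-8')  # formerly .decode('gbk')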

The Baidu Tieba crawler is built on essentially the same principle as the Qiushibaike (糗百) crawler: locate the key data by viewing the page source, then store it in a local TXT file.

Source Download:

http://download.csdn.net/detail/wxg694175346/6925583

Project content:

A web crawler for Baidu Tieba, written in Python.

How to use:

Create a new file named bugbaidu.py, copy the code into it, and double-click to run.

Program function:

Packages the content posted by the thread's original poster (the "楼主", or OP) into a TXT file stored locally.

Principle Explanation:

First, take a glance at a thread: after clicking "view the OP only" and then going to the second page, the URL changes slightly, becoming:

http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1

As you can see, see_lz=1 means "show only the original poster's posts", and pn=1 is the page number. Remember this for later; this is the URL format we will exploit.
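In other words, every "OP only" page of a thread can be generated just by varying pn. A minimal sketch, reusing the example thread above:

# Build the per-page "OP only" URLs for one thread
base_url = 'http://tieba.baidu.com/p/2296712428'
for pn in range(1, 4):
    print base_url + '?see_lz=1&pn=%d' % pn
# prints http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1, ...pn=2, ...pn=3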

The next step is to view the page source.

First comes the title, which will be needed later when the file is saved.

You can see that Baidu uses GBK encoding and that the title is wrapped in an <h1> tag.

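The HTML snippet that originally appeared here was lost in extraction; per the surrounding text, it showed the page's GBK charset declaration and the thread title inside an <h1> tag. In regex terms (the exact tag attributes are an assumption), the title can be captured like this:

myMatch = re.search(r'<h1.*?>(.*?)</h1>', myPage, re.S)  # group(1) is the thread title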

Similarly, the body of each post is wrapped in a div with a composite class attribute; the next thing to do is match these with regular expressions, as sketched below.
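A minimal sketch of that match, assuming the post bodies sit in divs whose id begins with post_content (an assumption about Tieba's markup of that period):

import re
# Non-greedy match of every post-body div on one page (assumed id prefix)
content_rex = re.compile(r'<div id="post_content.*?>(.*?)</div>', re.S)
posts = content_rex.findall(myPage)  # myPage: the decoded page source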

Run screenshot: (image omitted)

Generated TXT file: (image omitted)



# -*- coding: utf-8 -*-
#---------------------------------------
#   Program: Baidu Tieba crawler
#   Version: 0.5
#   Author: why
#   Date: 2013-05-16
#   Language: Python 2.7
#   Usage: enter the thread URL; the crawler automatically switches to
#          "OP only" view and saves the result to a local file
#   Function: package the OP's posts into a TXT file stored locally.
#---------------------------------------

import string
import urllib2
import re

#----------- Strip the various tags found on the page -----------
class HTML_Tool:
    # Non-greedy match of \t, \n, spaces, hyperlinks or images
    BgnCharToNoneRex = re.compile("(\t|\n| |<a.*?>|<img.*?>)")

    # Non-greedy match of any <> tag
    EndCharToNoneRex = re.compile("<.*?>")

    # Non-greedy match of any <p> tag
    BgnPartRex = re.compile("<p.*?>")
    CharToNewLineRex = re.compile("(<br/>|</p>|<tr>|<div>|</div>)")
    CharToNextTabRex = re.compile("<td>")

    # Map some HTML character entities back to their original symbols
    replaceTab = [("&lt;", "<"), ("&gt;", ">"), ("&amp;", "&"),
                  ("&quot;", "\""), ("&nbsp;", " ")]

    def Replace_Char(self, x):
        x = self.BgnCharToNoneRex.sub("", x)
        x = self.BgnPartRex.sub("\n    ", x)
        x = self.CharToNewLineRex.sub("\n", x)
        x = self.CharToNextTabRex.sub("\t", x)
        x = self.EndCharToNoneRex.sub("", x)
        for t in self.replaceTab:
            x = x.replace(t[0], t[1])
        return x

class Baidu_Spider:
    # Declare the relevant attributes
    def __init__(self, url):
        self.myUrl = url + '?see_lz=1'
        self.datas = []
        self.myTool = HTML_Tool()
        print u'The Baidu Tieba crawler has started, click click'

    # Load the first page, decode it, and drive the whole crawl
    def baidu_tieba(self):
        # Read the raw page and decode it from GBK
        # (per the update above, Tieba now serves UTF-8: use .decode("utf-8"))
        myPage = urllib2.urlopen(self.myUrl).read().decode("gbk")
        # Work out how many pages of OP content there are in total
        endPage = self.page_counter(myPage)
        # Get the title of the thread
        title = self.find_title(myPage)
        print u'Thread title: ' + title
        # Fetch and save the final data
        self.save_data(self.myUrl, title, endPage)

    # Work out how many pages there are in total
    def page_counter(self, myPage):
        # Match '<span class="red">12</span>' to get the total page count
        myMatch = re.search(r'class="red">(\d+?)</span>', myPage, re.S)
        if myMatch:
            endPage = int(myMatch.group(1))
            print u'Crawler report: found %d pages of original OP content' % endPage
        else:
            endPage = 0
            print u'Crawler report: unable to work out how many pages the OP posted.'
        return endPage
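The listing above breaks off inside the next method, find_title, in this copy of the post. A hedged sketch of how the class plausibly continues, based on the behaviour described here (match the <h1> title, fetch each pn page, clean it with HTML_Tool, write a TXT file); the exact regexes, messages, and file handling are assumptions, and the complete original is available from the download link above:

    # Find the title of the thread
    def find_title(self, myPage):
        # Match the <h1> title markup noted earlier (attributes assumed)
        myMatch = re.search(r'<h1.*?>(.*?)</h1>', myPage, re.S)
        title = u'Untitled'
        if myMatch:
            title = myMatch.group(1)
        else:
            print u'Crawler report: unable to load the thread title.'
        # A Windows file name may not contain \ / : * ? " < > |
        for ch in u'\\/:*?"<>|':
            title = title.replace(ch, '')
        return title

    # Fetch every page, then write the cleaned posts to disk
    def save_data(self, url, title, endPage):
        self.get_data(url, endPage)
        f = open(title + '.txt', 'w+')
        f.writelines([d.encode('utf-8') for d in self.datas])
        f.close()
        print u'Crawler report: file written to the local disk.'

    # Walk the pages and collect the OP's posts
    def get_data(self, url, endPage):
        url = url + '&pn='
        for i in range(1, endPage + 1):
            print u'Crawler report: fetching page %d...' % i
            myPage = urllib2.urlopen(url + str(i)).read().decode('gbk')
            self.deal_data(myPage)

    # Pull the post bodies out of one page and strip their tags
    def deal_data(self, myPage):
        # Same assumed post_content pattern as sketched earlier
        myItems = re.findall('id="post_content.*?>(.*?)</div>', myPage, re.S)
        for item in myItems:
            data = self.myTool.Replace_Char(item.replace("\n", ""))
            self.datas.append(data + '\n')

# -------- Entry point --------
bdurl = 'http://tieba.baidu.com/p/' + str(raw_input(u'Enter the digits at the end of the thread URL:\n'))
mySpider = Baidu_Spider(bdurl)
mySpider.baidu_tieba()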
