[Python] Web crawler (8): Qiushibaike crawler (v0.3) source code and explanation (simplified update)

Source: http://blog.csdn.net/pleasecallmewhy/article/details/8932310

Q&A:

1. Why did the crawler report for a while that Qiushibaike was unavailable?

A: A while ago Qiushibaike added a header check, which broke the crawler; the request headers now have to be simulated in code. The code has been updated and works properly again.
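For reference, here is a minimal sketch of that header simulation in Python 2 (the same technique the full source below uses; the User-Agent string is only an example value):

# -*- coding: utf-8 -*-
# Minimal header-simulation sketch (Python 2)
import urllib2

url = 'http://m.qiushibaike.com/hot/page/1'
# Pretend to be an ordinary browser so the site's header check passes
headers = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}

req = urllib2.Request(url, headers=headers)
html = urllib2.urlopen(req).read()
print html[:200]  # first 200 bytes of the page source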


2. Why create a separate thread?

A: The basic flow is this: the crawler starts a new background thread that keeps two pages of Qiushibaike crawled in advance; whenever fewer than two pages remain in stock, it crawls another one. When the user presses Enter, the program only pulls the newest content from this local stock instead of fetching from the Internet, so browsing feels smoother. You could also do the loading on the main thread, but then every crawl would block the user with a long wait.
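In miniature, the pattern looks something like this (a sketch only, not the author's exact code; fetch_page is a fake stand-in for the real downloader, and the jokes are made up):

# -*- coding: utf-8 -*-
# Sketch of the two-page prefetch pattern (Python 2); exit with Ctrl-C.
import thread
import time

pages = []  # stock of pages fetched in advance

def fetch_page(n):
    # Stand-in: pretend to download page n and return its jokes
    return ['joke %d of page %d' % (i, n) for i in range(1, 4)]

def load_pages():
    n = 1
    while True:
        if len(pages) < 2:        # keep two pages in stock
            pages.append(fetch_page(n))
            n += 1
        else:
            time.sleep(1)         # stock is full; idle a moment

# The loader runs in a background thread...
thread.start_new_thread(load_pages, ())

# ...while the main thread reads only from the local stock, so each
# Enter press shows content immediately instead of waiting on the network.
while True:
    if pages:
        for joke in pages.pop(0):
            print joke
            raw_input()           # press Enter for the next one
    else:
        time.sleep(0.1)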





Project content:

A web crawler for Qiushibaike, written in Python.

How to use:

Create a new file named bug.py, copy the code below into it, and double-click to run it.

Program function:

Browse Qiushibaike from the command line.

Principle Explanation:

First, take a look at the Qiushibaike home page: http://www.qiushibaike.com/hot/page/1

As you can see, the number after page/ in the URL is the page number; keep that in mind for later.
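So the URL for any page can be built by simple string concatenation, for example:

page = 2
url = 'http://www.qiushibaike.com/hot/page/' + str(page)
print url  # http://www.qiushibaike.com/hot/page/2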

Then right-click and view the page source:

Looking at the source, you can see that every joke sits in a div tag whose class is content and whose title attribute is the post time; all we need is a regular expression to pull these out.
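A minimal sketch of that extraction (the HTML snippet here is a made-up stand-in for the real page source):

# -*- coding: utf-8 -*-
import re

# Simplified stand-in for the page source: one joke per
# <div class="content" title="post time">joke text</div>
html = u'''
<div class="content" title="2014-06-03 12:00:00">
First joke text
</div>
<div class="content" title="2014-06-03 12:05:00">
Second joke text
</div>
'''

# re.S lets '.' match newlines, so a joke may span several lines
items = re.findall(u'<div.*?class="content".*?title="(.*?)">(.*?)</div>', html, re.S)

for time_posted, content in items:
    print time_posted, content.replace('\n', '')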

Once the principle is understood, the rest is regular-expression detail; you can refer to this post:

http://blog.csdn.net/wxg694175346/article/details/8929576


Running effect:

(console screenshot omitted)
Source code:

# -*- coding: utf-8 -*-

import urllib2
import re
import thread
import time

# ----------- Qiushibaike spider -----------
class Spider_Model:

    def __init__(self):
        self.page = 1
        self.pages = []
        self.enable = False

    # Pull all the jokes out of one page and return them as a list
    def GetPage(self, page):
        myUrl = "http://m.qiushibaike.com/hot/page/" + page
        user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
        headers = {'User-Agent': user_agent}
        req = urllib2.Request(myUrl, headers=headers)
        myResponse = urllib2.urlopen(req)
        myPage = myResponse.read()
        # encode converts a unicode object into a byte string of some encoding;
        # decode converts a byte string of some encoding into a unicode object
        unicodePage = myPage.decode("utf-8")

        # Find every <div class="content" title="..."> tag;
        # re.S makes '.' match newlines as well
        myItems = re.findall('<div.*?class="content".*?title="(.*?)">(.*?)</div>',
                             unicodePage, re.S)
        items = []
        for item in myItems:
            # item[0] is the div's title, i.e. the post time
            # item[1] is the div's content, i.e. the joke itself
            items.append([item[0].replace("\n", ""), item[1].replace("\n", "")])
        return items

    # Keep loading new jokes in the background
    def LoadPage(self):
        # Run until the user enters quit
        while self.enable:
            # If fewer than 2 pages are in stock, fetch another one
            if len(self.pages) < 2:
                try:
                    # Fetch a new page of jokes
                    myPage = self.GetPage(str(self.page))
                    self.page += 1
                    self.pages.append(myPage)
                except:
                    print 'Could not connect to Qiushibaike.'
            else:
                time.sleep(1)

    def ShowPage(self, nowPage, page):
        for items in nowPage:
            print u'Page %d' % page, items[0], items[1]
            myInput = raw_input()
            if myInput == "quit":
                self.enable = False
                break

    def Start(self):
        self.enable = True
        page = self.page

        print u'Loading, please wait...'

        # Start a new thread that loads and stores pages in the background
        thread.start_new_thread(self.LoadPage, ())

        # ----------- Display the jokes -----------
        while self.enable:
            # If the stock contains a page, show it
            if self.pages:
                nowPage = self.pages[0]
                del self.pages[0]
                self.ShowPage(nowPage, page)
                page += 1

# ----------- Program entry -----------
print u"""
---------------------------------------
   Program: Qiushibaike crawler
   Version: 0.3
   Author: why
   Date: 2014-06-03
   Language: Python 2.7
   Operation: type quit to stop reading
   Function: press Enter to browse today's hot posts
---------------------------------------
"""

print u"Press Enter to browse today's Qiushibaike content:"
raw_input(' ')
myModel = Spider_Model()
myModel.Start()
