Writing a Python Crawler from Scratch: Crawling Qiushibaike (Code Share)

Source: Internet
Author: User
Project content:

A web crawler for Qiushibaike (the "embarrassing stories encyclopedia" jokes site), written in Python.

How to use:

Create a new file named bug.py, copy the code below into it, and double-click the file to run it.

Program function:

Browse Qiushibaike posts from the command prompt.

Explanation of principle:

First, let's look at the hot-posts page of Qiushibaike: http://www.qiushibaike.com/hot/page/1
As you can see, the number after page/ in the link is the page number; keep that in mind, as we will use it later.
Then right-click and view the page source:

Notice that each post sits inside a div tag whose class is "content" and whose title attribute holds the posting time; we only need a regular expression to "dig" both out.
Once the principle is understood, all that remains is writing the regular expression; for reference, see this article:
http://www.bitscn.com/article/57150.htm
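
To make the idea concrete, here is a minimal, self-contained sketch of that regular-expression step run against a hard-coded snippet. The sample HTML and variable names are invented for illustration; the real markup is what the full program below fetches:

# -*- coding: utf-8 -*-
import re

# Invented sample mimicking the page structure described above (not real site markup)
sample_html = u'''
<div class="content" title="2014-06-03 12:00:00">
First joke text...
</div>
<div class="content" title="2014-06-03 12:05:00">
Second joke text...
</div>
'''

# One capture group for the title attribute (posting time), one for the body;
# re.S lets "." also match newlines, so the body may span several lines
pattern = re.compile(r'<div.*?class="content".*?title="(.*?)">(.*?)</div>', re.S)

for posted_at, body in pattern.findall(sample_html):
    print(posted_at + '  ' + body.strip())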

Operating effect:

(screenshot of the console output omitted)
The code is as follows:


# -*- coding: utf-8 -*-

import urllib2
import re
import thread
import time

# ----------- Load and process Qiushibaike -----------
class Spider_Model:

    def __init__(self):
        self.page = 1
        self.pages = []
        self.enable = False

    # Dig all the jokes out of one page, add them to a list and return the list
    def GetPage(self, page):
        myUrl = "http://m.qiushibaike.com/hot/page/" + page
        user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
        headers = {'User-Agent': user_agent}
        req = urllib2.Request(myUrl, headers=headers)
        myResponse = urllib2.urlopen(req)
        myPage = myResponse.read()
        # encode turns a unicode object into a byte string of a given encoding;
        # decode turns an encoded byte string back into a unicode object
        unicodePage = myPage.decode("utf-8")

        # Find all div tags with class="content"
        # re.S is the "dot matches anything" mode, i.e. "." also matches newlines
        myItems = re.findall('<div.*?class="content".*?title="(.*?)">(.*?)</div>',
                             unicodePage, re.S)
        items = []
        for item in myItems:
            # item[0] is the title of the div, i.e. the posting time
            # item[1] is the content of the div, i.e. the joke itself
            items.append([item[0].replace("\n", ""), item[1].replace("\n", "")])
        return items

    # Used to load new jokes in the background
    def LoadPage(self):
        # Keep running as long as the user has not entered quit
        while self.enable:
            # If the pages buffer holds fewer than 2 items, fetch a new page
            if len(self.pages) < 2:
                try:
                    # Get the jokes from the next page
                    myPage = self.GetPage(str(self.page))
                    self.page += 1
                    self.pages.append(myPage)
                except:
                    print 'Cannot connect to Qiushibaike!'
            else:
                time.sleep(1)

    def ShowPage(self, nowPage, page):
        for items in nowPage:
            print u'Page %d' % page, items[0], items[1]
            myInput = raw_input()
            if myInput == "quit":
                self.enable = False
                break

    def Start(self):
        self.enable = True
        page = self.page

        print u'Loading, please wait...'

        # Start a new thread that crawls jokes in the background and buffers them
        thread.start_new_thread(self.LoadPage, ())

        # ----------- Display the buffered pages -----------
        while self.enable:
            # If the pages buffer contains at least one element, show it
            if self.pages:
                nowPage = self.pages[0]
                del self.pages[0]
                self.ShowPage(nowPage, page)
                page += 1

# ----------- Program entry point -----------
print u"""
---------------------------------------
   Program: Qiushibaike crawler
   Version: 0.3
   Author: why
   Date: 2014-06-03
   Language: Python 2.7
   Action: enter quit to stop reading
   Function: press ENTER to browse today's hot posts one by one
---------------------------------------
"""
print u"Please press ENTER to browse today's Qiushibaike content:"
raw_input(' ')
myModel = Spider_Model()
myModel.Start()
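
The listing above targets Python 2.7; urllib2, thread, and raw_input no longer exist under those names in Python 3. As a rough sketch of what the fetch-and-parse step might look like on Python 3, assuming the same page structure (fetch_page is an invented helper name, and the site's markup may have changed since 2014):

# -*- coding: utf-8 -*-
# A sketch of GetPage ported to Python 3; not part of the original program
import re
import urllib.request

def fetch_page(page_number):
    url = "http://m.qiushibaike.com/hot/page/" + str(page_number)
    headers = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as response:
        html = response.read().decode("utf-8")
    # Same regular expression as in GetPage above
    return re.findall(r'<div.*?class="content".*?title="(.*?)">(.*?)</div>',
                      html, re.S)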


Q&A:
1. Why was the program unable to display Qiushibaike content for a while?
A: A while ago Qiushibaike added a check on the request headers, which broke the crawl; the fix is to simulate a browser's headers in the code. The code above has been updated accordingly and works normally again.

2. Why create a separate thread?
A: The basic flow is this: the crawler starts a background thread that keeps a buffer of two crawled Qiushibaike pages; whenever fewer than two pages remain in the buffer, it crawls one more. When the user presses ENTER, the next joke comes out of this buffer instead of being fetched over the network, so browsing feels smoother. The loading could also be done on the main thread, but then the user would have to wait whenever a crawl is in progress. A compact sketch of this buffering pattern is shown below.
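
The same keep-two-pages-buffered idea can be expressed with a bounded queue. This is a minimal sketch of the pattern only; the names load_page, producer, and buffered are invented for illustration, and this is not how the original code is written:

# -*- coding: utf-8 -*-
# A bounded queue gives the same "keep at most 2 pages buffered" behaviour:
# the producer thread blocks on put() once two pages are waiting.
import threading
import time
from Queue import Queue   # the module is named "queue" on Python 3

def load_page(page_number):
    # Stand-in for GetPage: pretend fetching takes a while
    time.sleep(1)
    return "jokes from page %d" % page_number

buffered = Queue(maxsize=2)

def producer():
    page = 1
    while True:
        buffered.put(load_page(page))   # blocks while 2 pages are buffered
        page += 1

t = threading.Thread(target=producer)
t.daemon = True   # don't keep the process alive once the reader quits
t.start()

for _ in range(3):
    print(buffered.get())   # the reader always pops from the buffer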
