Project content:
A web crawler for Qiushibaike (the "Encyclopedia of Embarrassing Things"), written in Python.
How to use:
Create a new file named bug.py, copy the code below into it, and run it with "python bug.py" (or double-click it on Windows).
Program function:
Browse Qiushibaike jokes from the command line.
How it works:
First, take a look at Qiushibaike's hot-posts page: http://www.qiushibaike.com/hot/page/1
As you can see, the number after page/ in the URL is the page number; remember this, because the crawler uses it later to request successive pages.
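For illustration, here is how such page URLs can be built (the m.qiushibaike.com mobile host is the one the full program below actually requests):

# Append the page number after page/ to address each page
base_url = "http://m.qiushibaike.com/hot/page/"
for page in range(1, 4):
    print base_url + str(page)   # prints .../page/1, .../page/2, .../page/3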
Then right-click and view the page source.
You will find that each joke sits in a div tag whose class is content and whose title attribute is the post time; all we need is a regular expression to pull both out.
Once the principle is clear, the rest is just writing the regular expression; for reference see this article:
http://www.jb51.net/article/57150.htm
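As a minimal sketch of the extraction, run against a made-up HTML fragment shaped like the divs described above (the fragment and its title value are illustrative, not taken from the live site):

# -*- coding: utf-8 -*-
import re

# Illustrative fragment: class="content", title holds the post time
html = u'<div class="content" title="2014-06-03 12:00:00">\nA short joke...\n</div>'

# re.S makes . match newlines too, so multi-line jokes are captured
items = re.findall(u'<div.*?class="content".*?title="(.*?)">(.*?)</div>', html, re.S)
for post_time, text in items:
    print post_time, text.replace("\n", "")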
The complete code is as follows:
# -*- coding: utf-8 -*-

import urllib2
import re
import thread
import time

# ----------- Load and process Qiushibaike pages -----------
class Spider_Model:

    def __init__(self):
        self.page = 1
        self.pages = []
        self.enable = False

    # Pull all the jokes out of one page, collect them in a list and return it
    def GetPage(self, page):
        myUrl = "http://m.qiushibaike.com/hot/page/" + page
        user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
        headers = {'User-Agent': user_agent}
        req = urllib2.Request(myUrl, headers=headers)
        myResponse = urllib2.urlopen(req)
        myPage = myResponse.read()
        # decode() converts a byte string in a given encoding into unicode;
        # encode() converts unicode back into a byte string
        unicodePage = myPage.decode("utf-8")
        # Find all div tags with class="content";
        # re.S makes . match any character, including newlines
        myItems = re.findall('<div.*?class="content".*?title="(.*?)">(.*?)</div>', unicodePage, re.S)
        items = []
        for item in myItems:
            # item[0] is the div's title attribute, i.e. the post time
            # item[1] is the div's body, i.e. the joke itself
            items.append([item[0].replace("\n", ""), item[1].replace("\n", "")])
        return items

    # Load new jokes in the background
    def LoadPage(self):
        # Keep running until the user enters quit
        while self.enable:
            # If fewer than two pages are buffered, fetch another one
            if len(self.pages) < 2:
                try:
                    # Get the jokes from the next page
                    myPage = self.GetPage(str(self.page))
                    self.page += 1
                    self.pages.append(myPage)
                except:
                    print u"Can't reach Qiushibaike!"
            else:
                time.sleep(1)

    def ShowPage(self, nowPage, page):
        for items in nowPage:
            print u'Page %d' % page, items[0], items[1]
            myInput = raw_input()
            if myInput == "quit":
                self.enable = False
                break

    def Start(self):
        self.enable = True
        page = self.page
        print u'Loading, please wait...'
        # Start a new thread that fetches pages in the background and buffers them
        thread.start_new_thread(self.LoadPage, ())
        # ----------- Display the buffered pages -----------
        while self.enable:
            # If the buffer holds at least one page, show it
            if self.pages:
                nowPage = self.pages[0]
                del self.pages[0]
                self.ShowPage(nowPage, page)
                page += 1

# ----------- Program entry point -----------
print u"""
---------------------------------------
    Program: Qiushibaike crawler
    Version: 0.3
    Author: why
    Date: 2014-06-03
    Language: Python 2.7
    Usage: enter quit to stop reading
    Function: press Enter to browse today's hot Qiushibaike posts
---------------------------------------
"""

print u"Please press Enter to browse today's Qiushibaike posts:"
raw_input()
myModel = Spider_Model()
myModel.Start()
Q&A:
1. Why did the program report for a while that Qiushibaike could not be reached?
A: Some time ago Qiushibaike added a header check, which blocked the crawler; the request has to carry a fake browser header. The code above has since been updated and works properly.
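Isolated, the header trick looks like this (the same technique the full program uses; the User-Agent string is only an example of a browser-like value):

import urllib2

# Pretend to be an ordinary browser so the site accepts the request
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
req = urllib2.Request("http://m.qiushibaike.com/hot/page/1",
                      headers={'User-Agent': user_agent})
html = urllib2.urlopen(req).read()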
2. Why do I need to create a separate thread?
A: The basic flow is this: the crawler starts a background thread that keeps two pages of jokes buffered, fetching another whenever fewer than two remain. When the user presses Enter, the program only serves the newest content from the buffer rather than going to the network, so browsing feels smooth. You could load pages on the main thread instead, but then the user would face long waits while each page is fetched.
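A stripped-down sketch of that producer/consumer pattern (fetch_page here is a hypothetical stand-in for the real GetPage above, and the sketch stops after one page):

# -*- coding: utf-8 -*-
import thread
import time

pages = []        # shared buffer of fetched pages
enable = True

def fetch_page(n):
    # Hypothetical stand-in for GetPage: pretend to fetch page n
    time.sleep(0.5)
    return ["joke from page %d" % n]

def load_pages():
    # Producer thread: keep up to two pages buffered ahead of the reader
    n = 1
    while enable:
        if len(pages) < 2:
            pages.append(fetch_page(n))
            n += 1
        else:
            time.sleep(1)

thread.start_new_thread(load_pages, ())

# Consumer (main thread): serve jokes from the buffer, never from the network
while enable:
    if pages:
        for joke in pages.pop(0):
            print joke
        enable = False   # stop after one page in this sketch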