Writing a Python Crawler from Scratch: Crawling a Baidu Tieba Thread and Saving It to a Local TXT File (Improved Version)


The Baidu Tieba crawler is built on essentially the same principle as the Qiushibaike crawler: locate the key data by viewing the page source, then store it in a local TXT file.

Project content:

Use Python to write a web crawler for Baidu Tieba.

How to use:

Create a new file named bugbaidu.py, copy the code below into it, and double-click it to run.

Program function:

Packages the content posted by the thread starter (the OP) into a TXT file saved locally.

Principle Explanation:

First, take a look at a Tieba thread: after clicking "only view the thread starter" and going to the second page, the URL changes slightly, becoming:
http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1
As you can see, see_lz=1 means "only view the thread starter", and pn=1 is the corresponding page number; remember this for later.
This is the URL we will take advantage of.
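As a quick illustration, here is a minimal Python 2 sketch (consistent with the full program below; 2296712428 is the example thread ID from above) of how the per-page "only view the thread starter" URLs are composed:

# -*- coding: utf-8 -*-
# Minimal sketch: composing the per-page "only view the thread starter" URLs.
# 2296712428 is the example thread ID from this article; any ID works the same way.
base_url = 'http://tieba.baidu.com/p/2296712428'

for pn in range(1, 4):  # pages 1 through 3
    page_url = base_url + '?see_lz=1&pn=' + str(pn)
    print page_url
# prints:
# http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1
# http://tieba.baidu.com/p/2296712428?see_lz=1&pn=2
# http://tieba.baidu.com/p/2296712428?see_lz=1&pn=3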
The next step is to view the page source.
First, dig out the title, which will be needed later when naming the stored file.
You can see that Baidu uses GBK encoding, and that the title sits in an <h1> tag:

Copy the code as follows:

<h1 class="core_title_txt" title="[Original] Fashion Chief (about fashion, fame, career, love, inspirational)">[Original] Fashion Chief (about fashion, fame, career, love, inspirational)</h1>

Similarly, the post body is wrapped in a div tag identified by a class attribute, so the next thing to do is match it with regular expressions, as sketched below.
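As a hedged illustration of that matching (the sample HTML below is an abbreviated stand-in for a real GBK-decoded page, not actual Tieba output), the title and the post bodies can be pulled out like this:

# -*- coding: utf-8 -*-
# Sketch: extracting the title and the post bodies with regular expressions.
# The sample page is a trimmed stand-in; real markup may differ.
import re

page = u'<h1 class="core_title_txt" title="...">[Original] Fashion Chief</h1>' \
       u'<div id="post_content_1">first post by the thread starter</div>' \
       u'<div id="post_content_2">second post</div>'

title = re.search(r'<h1.*?>(.*?)</h1>', page, re.S).group(1)
posts = re.findall(r'id="post_content.*?>(.*?)</div>', page, re.S)
print title   # [Original] Fashion Chief
print posts   # [u'first post by the thread starter', u'second post']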
Run Screenshots:

Generated TXT file:

Copy the code as follows:

# -*- coding: utf-8 -*-
#---------------------------------------
#   Program:  Baidu Tieba crawler
#   Version:  0.5
#   Author:   why
#   Date:     2013-05-16
#   Language: Python 2.7
#   Usage:    enter a thread URL; the crawler automatically switches to
#             "only view the thread starter" and saves the posts locally
#   Function: packages the content posted by the thread starter into a
#             local TXT file
#---------------------------------------

import urllib2
import re

#----------- handle the various tags on the page -----------
class HTML_Tool:
    # non-greedy match of \t, \n, spaces, hyperlinks or images
    BgnCharToNoneRex = re.compile("(\t|\n| |<a.*?>|<img.*?>)")

    # non-greedy match of any other <...> tag
    EndCharToNoneRex = re.compile("<.*?>")

    # non-greedy match of any <p...> tag
    BgnPartRex = re.compile("<p.*?>")
    CharToNewLineRex = re.compile("(<br/>|</p>|<tr>|<div>|</div>)")
    CharToNextTabRex = re.compile("<td>")

    # convert common HTML character entities back to their original symbols
    replaceTab = [("&lt;", "<"), ("&gt;", ">"), ("&amp;", "&"),
                  ("&quot;", "\""), ("&nbsp;", " ")]

    def Replace_Char(self, x):
        x = self.BgnCharToNoneRex.sub("", x)
        x = self.BgnPartRex.sub("\n    ", x)   # a <p> tag starts a new, indented line
        x = self.CharToNewLineRex.sub("\n", x)
        x = self.CharToNextTabRex.sub("\t", x)
        x = self.EndCharToNoneRex.sub("", x)

        for t in self.replaceTab:
            x = x.replace(t[0], t[1])
        return x

class Baidu_Spider:
    # declare the related attributes
    def __init__(self, url):
        self.myUrl = url + '?see_lz=1'
        self.datas = []
        self.myTool = HTML_Tool()
        print u'The Baidu Tieba crawler has started, crunch crunch'

    # load the first page and store it after transcoding
    def baidu_tieba(self):
        # read the raw page and decode it from GBK
        myPage = urllib2.urlopen(self.myUrl).read().decode("gbk")
        # count how many pages the thread starter has published
        endPage = self.page_counter(myPage)
        # get the title of the thread
        title = self.find_title(myPage)
        print u'Thread title: ' + title
        # fetch and save the final data
        self.save_data(self.myUrl, title, endPage)

    # count how many pages there are in total
    def page_counter(self, myPage):
        # match 'Total <span class="red">12</span> pages' to get the page count
        myMatch = re.search(r'class="red">(\d+?)</span>', myPage, re.S)
        if myMatch:
            endPage = int(myMatch.group(1))
            print u'Crawler report: the thread starter has %d pages of original content' % endPage
        else:
            endPage = 0
            print u'Crawler report: unable to determine how many pages the thread starter published!'
        return endPage

    # find the title of the thread
    def find_title(self, myPage):
        # match <h1 class="core_title_txt" title="...">xxxx</h1> to extract the title
        myMatch = re.search(r'<h1.*?>(.*?)</h1>', myPage, re.S)
        title = u'Untitled'
        if myMatch:
            title = myMatch.group(1)
        else:
            print u'Crawler report: unable to load the thread title!'
        # a file name may not contain any of: \ / : * ? " < > |
        for ch in u'\\/:*?"<>|':
            title = title.replace(ch, u'')
        return title

    # store the content posted by the thread starter
    def save_data(self, url, title, endPage):
        # load the page data into the array
        self.get_data(url, endPage)
        # open the local file
        f = open(title + '.txt', 'w+')
        f.writelines(self.datas)
        f.close()
        print u'Crawler report: the file has been downloaded and packaged into a TXT file'
        print u'Press any key to exit...'
        raw_input()

    # fetch the page source and store it in the array
    def get_data(self, url, endPage):
        url = url + '&pn='
        for i in range(1, endPage + 1):
            print u'Crawler report: crawler #%d is loading...' % i
            myPage = urllib2.urlopen(url + str(i)).read()
            # process the HTML in myPage and store it into datas
            self.deal_data(myPage.decode('gbk'))

    # pull the content out of the page source
    def deal_data(self, myPage):
        myItems = re.findall('id="post_content.*?>(.*?)</div>', myPage, re.S)
        for item in myItems:
            data = self.myTool.Replace_Char(item.replace("\n", "").encode('gbk'))
            self.datas.append(data + '\n')

#-------- program entry point ------------------
print u"""#---------------------------------------
#   Program:  Baidu Tieba crawler
#   Version:  0.5
#   Author:   why
#   Date:     2013-05-16
#   Language: Python 2.7
#   Usage:    enter a thread URL; the crawler automatically switches to
#             "only view the thread starter" and saves the posts locally
#   Function: packages the content posted by the thread starter into a
#             local TXT file
#---------------------------------------
"""
# a novel thread is used as the example
# bdurl = 'http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1'

print u'Please enter the digit string at the end of the thread address:'
bdurl = 'http://tieba.baidu.com/p/' + str(raw_input(u'http://tieba.baidu.com/p/'))

# run the crawler
mySpider = Baidu_Spider(bdurl)
mySpider.baidu_tieba()

The above is the improved code for crawling Baidu Tieba. It is simple and practical, and I hope it is helpful to everyone.
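The script above targets Python 2.7 (urllib2, raw_input, u'' string literals). For readers on Python 3, here is a minimal sketch of the same flow; it assumes the 2013-era page structure described in this article (GBK encoding, the see_lz/pn parameters, the post_content divs), which today's Tieba pages may no longer match:

# Python 3 sketch of the same flow (assumption: the 2013-era Tieba markup,
# GBK encoding and see_lz/pn parameters still apply, which may no longer hold).
import re
import urllib.request

thread_id = input('http://tieba.baidu.com/p/')
base = 'http://tieba.baidu.com/p/' + thread_id + '?see_lz=1'

first = urllib.request.urlopen(base).read().decode('gbk', 'replace')
m = re.search(r'class="red">(\d+?)</span>', first, re.S)
end_page = int(m.group(1)) if m else 1
m = re.search(r'<h1.*?>(.*?)</h1>', first, re.S)
title = re.sub(r'[\\/:*?"<>|]', '', m.group(1)) if m else 'untitled'

with open(title + '.txt', 'w', encoding='utf-8') as f:
    for pn in range(1, end_page + 1):
        html = urllib.request.urlopen(base + '&pn=' + str(pn)).read().decode('gbk', 'replace')
        for item in re.findall(r'id="post_content.*?>(.*?)</div>', html, re.S):
            # strip remaining tags instead of reusing the HTML_Tool class
            f.write(re.sub(r'<.*?>', '', item).strip() + '\n')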
