Python Crawler Introduction Tutorial: Qiushibaike Picture Crawler Code Sharing


Writing a crawler is a great way to learn Python: it not only lets you practice and apply the language, the crawler itself is also useful and fun. Many repetitive download and statistics tasks can be handled entirely by a crawler.

Writing a crawler in Python requires Python basics plus a few modules for networking, regular expressions, and file handling. Yesterday I studied some material online and wrote a crawler that automatically downloads the pictures from Qiushibaike (the "Embarrassing Encyclopedia"). The source code is as follows:


# -*- coding: utf-8 -*-
# The line above lets the source file contain Chinese text

#---------------------------------------
# Program: Qiushibaike picture crawler
# Version: 0.1
# Author: Zhao Wei
# Date: 2013-07-25
# Language: Python 2.7
# Description: the number of pages to download can be set;
#              no further abstraction or interaction polish.
#---------------------------------------

import urllib2
import urllib
import re

# Regular expression used to grab the image addresses: it captures the src of
# the <img> that follows each <div class="thumb"> (the capture group is a
# reconstruction, so adjust it if the page markup differs)
pat = re.compile(r'<div class="thumb">\s*<img src="(.*?)"')

# Pieces used to assemble the URL of each page
nexturl1 = "http://m.qiushibaike.com/imgrank/page/"
nexturl2 = "?s=4582487&slow"   # query-string suffix (leading "?" assumed)

# Page counter
count = 1

# Set the number of pages to crawl
while count < 3:

    print "Page " + str(count) + "\n"
    myurl = nexturl1 + str(count) + nexturl2
    myres = urllib2.urlopen(myurl)    # fetch the page
    mypage = myres.read()             # read the page content
    ucpage = mypage.decode("utf-8")   # decode to Unicode

    mat = pat.findall(ucpage)         # extract the image addresses with the regex

    count += 1

    if len(mat):
        for item in mat:
            print "URL: " + item + "\n"
            # The next three lines separate out the image file name
            fnp = re.compile(r'/(\w+\.\w+)$')
            fnr = fnp.findall(item)
            fname = fnr[0]
            urllib.urlretrieve(item, fname)   # download the image
    else:
        print "No data"

Usage: create a new Practice folder, save the source code as qb.py in that folder, run python qb.py from the command line, and the pictures will start downloading. To change how many pages are downloaded, modify the condition of the while statement in the source.
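Note that urllib2 and urllib.urlretrieve are Python 2 APIs. If you would rather run the same idea on Python 3, a minimal sketch using urllib.request could look like the following; the URL, query string, and regular expression are simply carried over from the script above, so they are assumptions and may no longer match the live site:

# Minimal Python 3 sketch of the same crawler; the page URL and regex are
# assumptions carried over from the Python 2 script above.
import re
import urllib.request

pat = re.compile(r'<div class="thumb">\s*<img src="(.*?)"')
fnp = re.compile(r'/(\w+\.\w+)$')

for page in range(1, 3):   # pages 1 and 2, like the while loop above
    url = "http://m.qiushibaike.com/imgrank/page/" + str(page) + "?s=4582487&slow"
    html = urllib.request.urlopen(url).read().decode("utf-8")
    for item in pat.findall(html):
        print("URL: " + item)
        fname = fnp.findall(item)[0]   # file name = part after the last slash
        urllib.request.urlretrieve(item, fname)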
