This article is an introductory Python crawler tutorial that shares the code of a Qiushibaike image crawler. The task is to capture the images posted on the Qiushibaike joke site. If you want to learn Python, writing a crawler is a good way in: you not only learn and practice Python bit by bit, but the crawler itself is useful and fun, since large volumes of repetitive downloading and statistics-gathering can be automated with one.
Writing a crawler in Python requires basic Python knowledge, a few network-related modules, regular expressions, and file operations. I read up on these yesterday and wrote a crawler that automatically downloads the images on Qiushibaike.
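Before the full listing, here is a minimal warm-up sketch of the three building blocks the crawler combines: fetching a page, picking data out of it with a regular expression, and writing to a file. The URL and pattern here are placeholders for illustration, not the ones used by the crawler below:

# -*- coding: utf-8 -*-
# Warm-up: the three primitives the crawler combines (Python 2.7).
# example.com and the href pattern are placeholders, for illustration only.
import urllib2
import re

page = urllib2.urlopen("http://example.com/").read()  # fetch a page
links = re.findall(r'href="([^"]+)"', page)           # extract data with a regex
with open("links.txt", "w") as f:                     # write results to a file
    f.write("\n".join(links))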
The full crawler code is as follows:
#-*- coding: utf-8 -*-
# The line above lets the source file contain Chinese characters
#---------------------------------------
#   Program: Qiushibaike image crawler
#   Version: 0.1
#   Author: Zhao Wei
#   Date: 2013-07-25
#   Language: Python 2.7
#   Note: the number of pages to download is configurable; no further
#         abstraction or interaction polish was done.
#---------------------------------------
import urllib2
import urllib
import re

# Regular expression used to capture the image addresses.
# NOTE: the exact pattern depends on the page markup at the time; this
# version captures the src attribute of <img> tags that end a line.
pat = re.compile(r'<img src="(http://[^"]+/\w+\.\w+)"[^>]*>\n')

# Pieces used to synthesize the URL of each page
nexturl1 = "http://m.qiushibaike.com/imgrank/page"
nexturl2 = "?s=4582487&slow"

# Page counter
count = 1

# Set the number of pages to capture
while count < 3:
    print "Page " + str(count) + "\n"
    myurl = nexturl1 + str(count) + nexturl2
    myres = urllib2.urlopen(myurl)   # fetch the web page
    mypage = myres.read()            # read the page content
    ucpage = mypage.decode("utf-8")  # decode to unicode
    mat = pat.findall(ucpage)        # extract image addresses with the regex
    count += 1
    if len(mat):
        for item in mat:
            print "url: " + item + "\n"
            # The next three lines split the image file name out of the URL
            fnp = re.compile(r'/(\w+\.\w+)$')
            fnr = fnp.findall(item)
            fname = fnr[0]
            urllib.urlretrieve(item, fname)  # download the image
    else:
        print "no data"
Usage: create a practice folder, save the source code as qb.py, and put it in that folder. Run python qb.py on the command line to download the images. You can change the condition of the while loop in the source to set how many pages are downloaded.
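If you would rather not edit the source each time, one small variation is to read the page count from the command line instead of hard-coding it in the while condition. A sketch (the usage python qb.py 5 is hypothetical, not part of the original script):

# -*- coding: utf-8 -*-
# Variation: take the page count from the command line instead of
# editing the while condition (hypothetical usage: python qb.py 5).
import sys

pages = int(sys.argv[1]) if len(sys.argv) > 1 else 2  # default: 2 pages

count = 1
while count <= pages:
    print "Page " + str(count)
    # ... fetch and download as in the listing above ...
    count += 1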