Getting Started with Python Crawlers: A Qiushibaike Image Crawler

Source: Internet
Author: User
This article is a getting-started Python crawler tutorial that shares the code of a crawler for downloading images from Qiushibaike. If you want to learn Python, writing crawlers is a good way in: you not only practice Python bit by bit, but you also end up with something useful and interesting, since a crawler can automate large numbers of repetitive downloads and simple statistics.

Writing a crawler in Python requires basic knowledge of Python, a few network-related modules, regular expressions, and file operations. I read up on these yesterday and wrote a crawler that automatically downloads the images on Qiushibaike.

The code is as follows:


# -*- coding: utf-8 -*-
# The line above lets the source file contain Chinese characters.

# ---------------------------------------
# Program: Qiushibaike image crawler
# Version: 0.1
# Author: Zhao Wei
# Date: 2013-07-25
# Language: Python 2.7
# Note: the number of pages to download is configurable; no further
#       abstraction or interaction polish was added.
# ---------------------------------------

import urllib2
import urllib
import re

# Regular expression for extracting image addresses.
# (The original pattern was lost when this article was extracted; the one
# below is a plausible reconstruction that matches <img src="..."> tags.)
pat = re.compile(r'<img src="(http[^"]+)"')

# Pieces used to synthesize each page's URL
nexturl1 = "http://m.qiushibaike.com/imgrank/page"
nexturl2 = "?s=4582487&slow"

# Page counter
count = 1

# Set the number of pages to crawl (here: pages 1 and 2)
while count < 3:
    print "Page " + str(count) + "\n"
    myurl = nexturl1 + str(count) + nexturl2
    myres = urllib2.urlopen(myurl)    # fetch the page
    mypage = myres.read()             # read the raw page content
    ucpage = mypage.decode("utf-8")   # decode the bytes to unicode

    mat = pat.findall(ucpage)         # extract the image addresses

    count += 1

    if len(mat):
        for item in mat:
            print "url: " + item + "\n"
            # The next three lines isolate the image file name
            fnp = re.compile(r'/(\w+\.\w+)$')
            fnr = fnp.findall(item)
            fname = fnr[0]
            urllib.urlretrieve(item, fname)  # download the image
    else:
        print "no data"

Usage: create a practice folder, save the source code as qb.py inside it, and run python qb.py from the command line to download the images. Modify the while condition in the source to change the number of pages downloaded.
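Since urllib2 only exists in Python 2, here is a rough Python 3 adaptation of the same crawler using urllib.request. The URL suffix is copied from the script above, and the image regex is again an assumed reconstruction, so treat this as a sketch rather than a drop-in replacement:

```python
import re
import urllib.request

BASE = "http://m.qiushibaike.com/imgrank/page"
SUFFIX = "?s=4582487&slow"

def build_page_url(page):
    """Synthesize a page URL exactly as the Python 2 script does."""
    return BASE + str(page) + SUFFIX

def crawl(pages=2):
    """Download every image found on the first `pages` pages."""
    img_pat = re.compile(r'<img src="(http[^"]+)"')  # assumed pattern
    fname_pat = re.compile(r'/(\w+\.\w+)$')
    for page in range(1, pages + 1):
        print("Page", page)
        html = urllib.request.urlopen(build_page_url(page)).read().decode("utf-8")
        for url in img_pat.findall(html):
            print("url:", url)
            fname = fname_pat.findall(url)[0]
            urllib.request.urlretrieve(url, fname)  # download the image

if __name__ == "__main__":
    crawl()
```

In Python 3, urllib2.urlopen becomes urllib.request.urlopen and urllib.urlretrieve becomes urllib.request.urlretrieve; print is a function; everything else carries over unchanged.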
