Python crawler: bulk-downloading pictures


Working overtime today, what a grind.

Out of boredom, I wrote an image-grabbing crawler in Python. It feels pretty good, haha.

First, the code (Python version: 2.7.9):

```python
# -*- coding: utf-8 -*-
__author__ = 'Bloodchilde'

import urllib2
import re
import os


class Spider:
    def __init__(self):
        self.siteURL = "http://sc.chinaz.com/biaoqing/"
        self.user_agent = ('Mozilla/5.0 (Windows NT 6.1; WOW64; '
                           'Trident/7.0; rv:11.0) like Gecko')
        self.headers = {'User-Agent': self.user_agent}

    def getPage(self, pageIndex):
        # Step 1: fetch the page source.
        url = self.siteURL + "index_" + str(pageIndex) + ".html"
        request = urllib2.Request(url, headers=self.headers)
        response = urllib2.urlopen(request)
        return response.read().decode("utf-8")

    def getContents(self, pageIndex):
        # Step 2: two capture groups -- title (folder name) and src2 (image URL).
        page = self.getPage(pageIndex)
        pattern = re.compile(r'<div.*?class="num_1".*?>.*?<p>.*?<a.*?href=".*?" '
                             r'target="_blank" title="(.*?)".*?src2="(.*?)".*?>'
                             r'.*?</a>.*?</p>.*?</div>', re.S)
        items = re.findall(pattern, page)
        contents = []
        for item in items:
            contents.append([item[0], item[1]])
        return contents

    def mkDir(self, path):
        # Create the target folder if it does not exist yet.
        isExists = os.path.exists(path)
        if not isExists:
            os.makedirs(path)
            return True
        else:
            return False

    def downImage(self, url, dirName):
        # Steps 3-5: request the image URL, derive the file name from the
        # last path segment, then write the raw bytes to disk.
        imageURL = url
        request = urllib2.Request(imageURL, headers=self.headers)
        response = urllib2.urlopen(request)
        imageContents = response.read()
        urlArr = imageURL.split(u"/")
        imageName = str(urlArr[len(urlArr) - 1])
        print imageName
        path = u"C:/Users/Bloodchilde/Desktop/image_python/" + dirName
        self.mkDir(path)
        imagePath = path + u"/" + imageName
        f = open(imagePath, 'wb')
        f.write(imageContents)
        f.close()

    def downloadAllPicture(self, pageIndex):
        contents = self.getContents(pageIndex)
        for item in contents:
            dirName = item[0]
            imageURL = item[1]
            self.downImage(imageURL, dirName)


demo = Spider()
for page in range(3, 100):
    demo.downloadAllPicture(page)
```

 

The effect is as follows:






So many pictures, downloaded in an instant. Now let's walk through the program:


First of all, my target page is:

http://sc.chinaz.com/biaoqing/index_3.html

The program's job is to download the emoticon images from this page.

Program outline:

1. Fetch the source of the target web page.

2. Parse the source with a regular expression to get the URL of each picture to download.

3. Send a request to each picture URL; the response body is the picture content, contents.

4. From that same URL we can also derive the picture's file name (with its extension), imageName.

5. Create a local file named imageName and write contents into it. Done.
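Steps 3 to 5 can be sketched in Python 3 as follows. The URL, folder name, and sample bytes below are illustrative assumptions, not values from the real site:

```python
import os
import tempfile

def image_name_from_url(url):
    # Step 4: the last path segment of the URL is the file name
    # (including its extension).
    return url.rsplit("/", 1)[-1]

def save_image(contents, dir_name, url, base):
    # Step 5: create the per-title folder if needed, then write
    # the raw image bytes to a file named after the URL.
    path = os.path.join(base, dir_name)
    os.makedirs(path, exist_ok=True)
    image_path = os.path.join(path, image_name_from_url(url))
    with open(image_path, "wb") as f:
        f.write(contents)
    return image_path

# Example (writes into a throwaway temp directory):
base = tempfile.mkdtemp()
p = save_image(b"\x89PNG", "funny", "http://example.com/pics/cat.gif", base)
print(os.path.relpath(p, base))  # funny/cat.gif on POSIX
```

In step 3, the bytes passed as `contents` would come from the HTTP response body rather than a literal.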


Open http://sc.chinaz.com/biaoqing/index_3.html, view the page source, and locate the HTML snippet to be processed.



The corresponding regular expression is:

'<div.*?class="num_1".*?>.*?<p>.*?<a.*?href=".*?" target="_blank" title="(.*?)".*?src2="(.*?)".*?>.*?</a>.*?</p>.*?</div>'

From the snippet we capture title and src2: title serves as the folder name, and src2 is the actual picture URL.
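Here is a small Python 3 demo of that two-group extraction. The HTML below is a simplified stand-in I made up for the real page markup:

```python
import re

# Simplified stand-in for the page's listing markup (illustrative only).
html = '''
<div class="num_1">
  <p><a href="/a/1.html" target="_blank" title="funny cats">
    <img src2="http://example.com/pics/cat.gif">
  </a></p>
</div>
'''

# Group 1 captures title (folder name), group 2 captures src2 (image URL);
# re.S lets . match newlines so the pattern spans the whole block.
pattern = re.compile(
    r'<div.*?class="num_1".*?title="(.*?)".*?src2="(.*?)".*?</div>',
    re.S,
)

for title, src2 in re.findall(pattern, html):
    print(title, src2)  # funny cats http://example.com/pics/cat.gif
```

Each match yields a (title, src2) pair, which is exactly what getContents stores in its contents list.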

Doesn't it feel simple?
