When we write an ordinary crawler script, we get a file's download URL from a website, download the data, and write it to a file ourselves. That logic has to be written out bit by bit each time and is hard to reuse. To avoid reinventing the wheel, Scrapy provides a very convenient way to download files (FilesPipeline); you only need a small amount of code to use it.
mat.py
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from weidashang.items import matplotlib

class MatSpider(scrapy.Spider):
    name = "mat"
    allowed_domains = ["matplotlib.org"]
    start_urls = ['https://matplotlib.org/examples']

    def parse(self, response):
        # Extract the link to each example script's page; the actual
        # download happens later, via the item's file_urls field.
        le = LinkExtractor(restrict_css='div.toctree-wrapper.compound li.toctree-l2')
        for link in le.extract_links(response):
            yield scrapy.Request(url=link.url, callback=self.example)

    def example(self, response):
        # On each script's page, grab the "source code" button's href and
        # join it with the base URL to form the complete download URL.
        href = response.css('a.reference.external::attr(href)').extract_first()
        url = response.urljoin(href)
        example = matplotlib()
        example['file_urls'] = [url]
        return example
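Before running the full spider, you can sanity-check the link extractor in scrapy shell; a quick sketch (the exact links returned depend on the live page):

scrapy shell 'https://matplotlib.org/examples'
>>> from scrapy.linkextractors import LinkExtractor
>>> le = LinkExtractor(restrict_css='div.toctree-wrapper.compound li.toctree-l2')
>>> links = le.extract_links(response)
>>> links[0].url   # the first example page's URL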
pipelines.py
from scrapy.pipelines.files import FilesPipeline
from urllib.parse import urlparse
from os.path import basename, dirname, join

class MyFilePlipeline(FilesPipeline):
    def file_path(self, request, response=None, info=None):
        # Name each saved file after the last directory and file name
        # in its URL, instead of the pipeline's default hash-based name.
        path = urlparse(request.url).path
        return join(basename(dirname(path)), basename(path))
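By default, FilesPipeline saves each file under a SHA1 hash of its URL, which is not human-readable, so the override above keeps the last directory name plus the file name instead. You can check the path logic on its own; a small sketch with an illustrative URL (not taken from the crawl):

from urllib.parse import urlparse
from os.path import basename, dirname, join

url = 'https://matplotlib.org/examples/animation/basic_example.py'
path = urlparse(url).path   # '/examples/animation/basic_example.py'
print(join(basename(dirname(path)), basename(path)))
# prints: animation/basic_example.py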
settings.py
ITEM_PIPELINES = {
    'weidashang.pipelines.MyFilePlipeline': 1,
}
FILES_STORE = 'examples_src'
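ITEM_PIPELINES registers the custom pipeline (the number is its order among pipelines; lower runs first), and FILES_STORE tells FilesPipeline which root directory to save files under. With the file_path override above, the output would look roughly like this (directory names depend on the actual URLs crawled):

examples_src/
    animation/
        basic_example.py
    api/
        ...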
items.py
from scrapy import Item, Field

class matplotlib(Item):
    file_urls = Field()
    files = Field()
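file_urls is the field FilesPipeline reads download URLs from, and files is where it writes the results. After a successful download, an exported item looks roughly like this (the values are illustrative):

{'file_urls': ['https://matplotlib.org/examples/animation/basic_example.py'],
 'files': [{'url': 'https://matplotlib.org/examples/animation/basic_example.py',
            'path': 'animation/basic_example.py',
            'checksum': '...'}]}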
run.py
from scrapy.cmdline import execute
execute(['scrapy', 'crawl', 'mat', '-o', 'example.json'])
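run.py is just a convenience wrapper around the command line; running the spider directly from a terminal is equivalent:

scrapy crawl mat -o example.json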
Python crawler: downloading files with Scrapy