Simple Learning Notes on Python's Scrapy Crawler Framework

First, a simple configuration to scrape the content of a single page.

(1) Create a Scrapy project

scrapy startproject getblog
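
For reference, this command generates a project skeleton roughly like the one below (the exact layout can differ slightly between Scrapy versions, so treat it as a sketch):

getblog/
    scrapy.cfg            # deploy/configuration file
    getblog/              # the project's Python module
        __init__.py
        items.py          # item definitions
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # put your spiders here
            __init__.py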

(2) Edit items.py

# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

from scrapy.item import Item, Field


class BlogItem(Item):
    title = Field()
    desc = Field()

(3) Under the spiders folder, create blog_spider.py

You need to familiarize yourself with XPath selection. It feels similar to the jQuery selector, but is not quite as comfortable to use (w3school tutorial: http://www.w3school.com.cn/xpath/).
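
As a rough, illustrative comparison (the HTML fragment below is made up for this note, not taken from cnblogs):

from scrapy.selector import Selector

# a tiny HTML fragment, purely for illustration
html = '<div class="post_item"><h3><a href="/p/1">Hello</a></h3>' \
       '<p class="post_item_summary">summary text</p></div>'
sel = Selector(text=html)

# jQuery: $('div.post_item h3 a').text()
print(sel.xpath('//div[@class="post_item"]/h3/a/text()').extract())

# jQuery: $('p.post_item_summary').text()
print(sel.xpath('//p[@class="post_item_summary"]/text()').extract())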

# coding=utf-8
from scrapy.spider import Spider
from getblog.items import BlogItem
from scrapy.selector import Selector


class BlogSpider(Spider):
    # identifying name
    name = 'blog'
    # start address
    start_urls = ['http://www.cnblogs.com/']

    def parse(self, response):
        sel = Selector(response)   # XPath selector
        # Select the 2nd div under every div tag whose class attribute is 'post_item'
        sites = sel.xpath('//div[@class="post_item"]/div[2]')
        items = []
        for site in sites:
            item = BlogItem()
            # Select the text of the a tag under the h3 tag: 'text()'
            item['title'] = site.xpath('h3/a/text()').extract()
            # Likewise, the text content of the p tag: 'text()'
            item['desc'] = site.xpath('p[@class="post_item_summary"]/text()').extract()
            items.append(item)
        return items
  

(4) Run

scrapy crawl blog    # that is all it takes

(5) Output to a file.

The output is configured in settings.py.

# output file location
FEED_URI = 'blog.xml'
# output file format; can be json, xml or csv
FEED_FORMAT = 'xml'

The output location is under the project root folder.

Second, the basics -- scrapy.spider.Spider

(1) Using the interactive shell

dizzy@dizzy-pc:~$ scrapy shell "http://www.baidu.com/"

2014-08-21 04:09:11+0800 [scrapy] INFO: Scrapy 0.24.4 started (bot: scrapybot)
2014-08-21 04:09:11+0800 [scrapy] INFO: Optional features available: ssl, http11, django
2014-08-21 04:09:11+0800 [scrapy] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0}
2014-08-21 04:09:11+0800 [scrapy] INFO: Enabled extensions: TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-08-21 04:09:11+0800 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-08-21 04:09:11+0800 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-08-21 04:09:11+0800 [scrapy] INFO: Enabled item pipelines:
2014-08-21 04:09:11+0800 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2014-08-21 04:09:11+0800 [scrapy] DEBUG: Web service listening on 127.0.0.1:6081
2014-08-21 04:09:11+0800 [default] INFO: Spider opened
2014-08-21 04:09:12+0800 [default] DEBUG: Crawled (200) <GET http://www.baidu.com/> (referer: None)
[s] Available scrapy objects:
[s]   crawler
[s]   item       {}
[s]   request
[s]   response   <200 http://www.baidu.com/>
[s]   settings
[s]   spider
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser
>>> # response.body returns all of the page content
>>> # response.xpath('//ul/li') can be used to test any XPath expression

More importantly, if you type response.selector you get a selector object you can use to query the response, plus convenient shortcuts like response.xpath() and response.css() that map to response.selector.xpath() and response.selector.css().

In other words, the interactive shell is a convenient way to check that an XPath selection is correct. Previously I picked selectors with Firefox's F12 developer tools, but that did not guarantee the content would be selected correctly every time.
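
For instance, a quick check inside the shell might look like this (the expressions are only illustrative; what they return depends entirely on the page):

>>> fetch('http://www.baidu.com/')
>>> response.xpath('//title/text()').extract()
>>> response.xpath('//a/@href').extract()[:5]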

You can also use:

scrapy shell 'http://scrapy.org' --nolog    # the --nolog parameter suppresses the log output

(2) Example

from scrapy import Spider
from scrapy_test.items import DmozItem


class DmozSpider(Spider):
    name = 'dmoz'
    allowed_domains = ['dmoz.org']
    start_urls = ['http://www.dmoz.org/Computers/Programming/Languages/Python/Books/',
                  'http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/',
                  ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item

(3) Save to a file

The crawl results can be saved to a file; the format can be JSON, XML or CSV.

scrapy crawl dmoz -o 'a.json' -t 'json'

(4) Creating Spiders with templates

scrapy genspider baidu baidu.com

# -*- coding: utf-8 -*-
import scrapy


class BaiduSpider(scrapy.Spider):
    name = "baidu"
    allowed_domains = ["baidu.com"]
    start_urls = (
        'http://www.baidu.com/',
    )

    def parse(self, response):
        pass

That is it for this part for now. I remember there were five of these earlier, but at the moment I can only recall four. :-(

Also, remember to click the save button, otherwise it really spoils the mood (⊙o⊙)!

Third, advanced -- scrapy.contrib.spiders.CrawlSpider

Example

# coding=utf-8
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
import scrapy


class TestSpider(CrawlSpider):
    name = 'test'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/']
    rules = (
        # a tuple of Rule objects
        Rule(LinkExtractor(allow=(r'category\.php',), deny=(r'subsection\.php',))),
        Rule(LinkExtractor(allow=(r'item\.php',)), callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('Item page: %s' % response.url)
        item = scrapy.Item()
        item['id'] = response.xpath('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
        item['name'] = response.xpath('//td[@id="item_name"]/text()').extract()
        item['description'] = response.xpath('//td[@id="item_description"]/text()').extract()
        return item

Beyond CrawlSpider, the other built-in spiders are listed below (a minimal XMLFeedSpider sketch follows the list).

    • class scrapy.contrib.spiders.XMLFeedSpider
    • class scrapy.contrib.spiders.CSVFeedSpider
    • class scrapy.contrib.spiders.SitemapSpider
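
These are not covered in the original notes; the following is only a minimal sketch of how an XMLFeedSpider might look, with a made-up feed URL and tag names:

# a minimal XMLFeedSpider sketch; the feed URL and the <item>/<title> tags are assumptions
from scrapy.contrib.spiders import XMLFeedSpider
from scrapy.item import Item, Field


class FeedItem(Item):
    title = Field()


class MyXMLFeedSpider(XMLFeedSpider):
    name = 'xmlfeed'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/feed.xml']
    iterator = 'iternodes'   # the default, fast iterator
    itertag = 'item'         # call parse_node() once per <item> node

    def parse_node(self, response, node):
        # 'node' is a selector positioned on the current <item>
        item = FeedItem()
        item['title'] = node.xpath('title/text()').extract()
        return item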

Fourth, selectors

>>> from scrapy.selector import Selector
>>> from scrapy.http import HtmlResponse

Use .css() and .xpath() flexibly to quickly select the target data.

Selectors need further study: .xpath() and .css(), plus staying familiar with regular expressions.

When selecting by class, try to use .css() to locate the elements first, and then use .xpath() to pick out the parts you need from them.
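
A small illustration of the two styles side by side (the HTML snippet is made up; it just mirrors the kind of markup used earlier):

from scrapy.selector import Selector
from scrapy.http import HtmlResponse

body = '<html><body><ul><li class="item"><a href="/a">link A</a></li></ul></body></html>'
response = HtmlResponse(url='http://www.example.com', body=body)

# the two calls below select the same data
print(Selector(response=response).css('li.item a::attr(href)').extract())
print(Selector(response=response).xpath('//li[@class="item"]/a/@href').extract())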

Fifth, Item Pipeline

Typical uses for item pipelines are:
    • cleansing HTML data  # clean up the HTML
    • validating scraped data (checking that the items contain certain fields)  # validate
    • checking for duplicates (and dropping them)  # drop duplicates
    • storing the scraped item in a database  # store
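
One detail these notes skip: a pipeline only runs once it is enabled in settings.py. A minimal sketch, assuming the project module is called getblog (adjust the dotted path to your own project and pipeline class):

# in settings.py; the dotted path below is an assumption for a project named getblog
ITEM_PIPELINES = {
    'getblog.pipelines.PricePipeline': 300,   # lower numbers run earlier
}
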
(1) Verifying data

from scrapy.exceptions import DropItem


class PricePipeline(object):
    vat_factor = 1.5

    def process_item(self, item, spider):
        if item['price']:
            if item['price_excludes_vat']:
                item['price'] *= self.vat_factor
            return item
        else:
            raise DropItem('Missing price in %s' % item)

(2) Writing a JSON file

import json


class JsonWriterPipeline(object):
    def __init__(self):
        self.file = open('json.jl', 'wb')

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + '\n'
        self.file.write(line)
        return item

(3) Checking for duplicates

from scrapy.exceptions import DropItem


class Duplicates(object):
    def __init__(self):
        self.ids_seen = set()

    def process_item(self, item, spider):
        if item['id'] in self.ids_seen:
            raise DropItem('Duplicate item found: %s' % item)
        else:
            self.ids_seen.add(item['id'])
            return item

As for writing data to a database, that should also be simple: just store the item inside the process_item method.
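
A minimal sketch of that idea, using sqlite3 purely as an illustration; the database file, table and column names are assumptions, and the title/desc fields come from the BlogItem defined at the beginning:

import sqlite3


class SqlitePipeline(object):
    """Illustrative sketch: store each item in a local SQLite database."""

    def open_spider(self, spider):
        self.conn = sqlite3.connect('blog.db')   # assumed database file name
        self.conn.execute(
            'CREATE TABLE IF NOT EXISTS blog (title TEXT, summary TEXT)')

    def close_spider(self, spider):
        self.conn.close()

    def process_item(self, item, spider):
        # extract() returns lists, so join them into plain strings before inserting
        self.conn.execute(
            'INSERT INTO blog (title, summary) VALUES (?, ?)',
            (u''.join(item.get('title', [])), u''.join(item.get('desc', []))))
        self.conn.commit()
        return item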
