No. 354: Python Distributed Crawler Builds a Search Engine with Scrapy: Data Collection (Stats Collection)



Scrapy provides a convenient mechanism for collecting stats. The data is stored as key/value pairs, and the values are usually counters. This mechanism is called the Stats Collector and can be accessed through the stats attribute of the Crawler API.
The stats collector is always available, whether stats collection is enabled or disabled, so you can import it into your own module and use its API (to increment values or set new stat keys). This is meant to keep stats collection simple: recording a stat from your spider, a Scrapy extension, or any other code that uses the stats collector should never take more than one line of code.
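
For example, here is a minimal sketch of a Scrapy extension that grabs the stats collector from the crawler and records a counter with a single line per event. The class name, stat key, and module path are illustrative, not part of the original project:

from scrapy import signals


class ItemCountExtension:
    """Counts scraped items through the stats collector."""

    def __init__(self, stats):
        self.stats = stats                                    # the crawler's stats collector

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls(crawler.stats)                              # stats is exposed on the Crawler API
        crawler.signals.connect(ext.item_scraped, signal=signals.item_scraped)
        return ext

    def item_scraped(self, item, spider):
        self.stats.inc_value('custom/items_scraped')          # one line is enough to record a stat

The extension still has to be enabled through the EXTENSIONS setting in settings.py, for example {'myproject.extensions.ItemCountExtension': 500} (the module path here is hypothetical).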

Another feature of the stats collector is that it is very efficient when enabled, and when disabled its overhead is negligible (almost unnoticeable).

The stats collector keeps one stats table per spider. The table is opened automatically when the spider starts and closed automatically when the spider is closed.
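
As a quick illustration, a spider's closed() shortcut runs right before its stats table is closed, which makes it a convenient place to look at everything collected during the crawl. The spider name, stat key, and URL below are placeholders rather than part of the article's project:

import scrapy


class StatsDemoSpider(scrapy.Spider):
    name = 'stats_demo'                                       # placeholder spider name
    start_urls = ['http://www.dict.cn/']

    def parse(self, response):
        self.crawler.stats.inc_value('demo/pages_seen')       # count every parsed page

    def closed(self, reason):
        # called when the spider closes, just before its stats table is closed
        self.logger.info('Final stats: %s', self.crawler.stats.get_stats())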

Stats collector functions

stats.set_value('key', value): set the stat to the given value
stats.inc_value('key'): increase the stat by 1
stats.max_value('key', value): set the stat only when the new value is greater than the current one
stats.min_value('key', value): set the stat only when the new value is lower than the current one
stats.get_value('key'): get the value of the stat
stats.get_stats(): get all collected stats
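
The snippet below is a small sketch of what each call does. It assumes a stats collector object such as self.crawler.stats, available inside any spider or extension; the key names are made up for the demonstration:

def exercise_stats(stats):
    """Walk through the stats collector API; stats is a crawler.stats object."""
    stats.set_value('demo/fixed', 100)            # store 100 under 'demo/fixed'
    stats.inc_value('demo/counter')               # created and incremented to 1
    stats.inc_value('demo/counter')               # now 2
    stats.max_value('demo/max', 10)               # stored, there was no previous value
    stats.max_value('demo/max', 5)                # ignored, 5 is not larger than 10
    stats.min_value('demo/min', 10)               # stored, there was no previous value
    stats.min_value('demo/min', 5)                # stored, 5 is smaller than 10
    print(stats.get_value('demo/counter'))        # 2
    print(stats.get_stats())                      # dict with every collected stat

Inside a spider callback you would call it as exercise_stats(self.crawler.stats).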

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request


class PachSpider(scrapy.Spider):                 # a spider must inherit from scrapy.Spider
    name = 'pach'                                # spider name
    allowed_domains = ['www.dict.cn']            # domains the spider is allowed to crawl

    # Use the stats collector to record every 404 URL and the number of 404 pages
    handle_httpstatus_list = [404]               # do not filter out 404 responses

    def __init__(self):
        self.fail_urls = []                      # list that stores the 404 URLs

    def start_requests(self):                    # start_requests() replaces start_urls
        return [Request(url='http://www.dict.cn/9999998888', callback=self.parse)]

    def parse(self, response):                   # callback
        if response.status == 404:               # check whether the status code is 404
            self.fail_urls.append(response.url)                   # append the URL to the list
            self.crawler.stats.inc_value('failed_url')            # increment the stat by 1 on every call
            print(self.fail_urls)                                 # print the list of 404 URLs
            print(self.crawler.stats.get_value('failed_url'))     # print the collected value
        else:
            title = response.css('title::text').extract()
            print(title)
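
Running the spider with scrapy crawl pach requests the deliberately broken URL, so the 404 branch fires: the URL is appended to fail_urls and the failed_url stat is incremented. When the crawl finishes, Scrapy dumps all collected stats to the log by default (controlled by the STATS_DUMP setting), so the failed_url counter shows up there alongside the built-in counters.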
