Scrapy-redis source code analysis


Original article, link: http://blog.csdn.net/u012150179/article/details/38226253


(I) connection.py is responsible for instantiating the Redis connection according to the configuration in settings. It is called by the dupefilter and the scheduler; in short, this module handles all access to Redis.
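A minimal sketch of what such a factory looks like, assuming the setting names REDIS_HOST and REDIS_PORT (verify the actual names and defaults against the scrapy-redis version you use):

    import redis

    def from_settings(settings):
        """Build a Redis client from the crawler settings (illustrative only)."""
        host = settings.get('REDIS_HOST', 'localhost')
        port = settings.getint('REDIS_PORT', 6379)
        return redis.StrictRedis(host=host, port=port)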

(II) dupefilter.py is responsible for request deduplication, which is implemented cleverly with a Redis set. Note, however, that the scheduler does not use the dupefilter key from this module for request scheduling; it uses the queues implemented in the queue.py module.
When a request is not a duplicate, it is stored in the queue and popped when it is scheduled.
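The idea can be sketched as follows: compute a fingerprint for each request and SADD it to a Redis set; a return value of 0 means the fingerprint was already present. The class and key names here are assumptions for illustration, not the library's exact API:

    import redis
    from scrapy.utils.request import request_fingerprint

    class RedisDupeFilter(object):
        """Illustrative dedup filter backed by a Redis set."""

        def __init__(self, server, key='dupefilter'):
            self.server = server  # redis client
            self.key = key        # name of the Redis set holding fingerprints

        def request_seen(self, request):
            fp = request_fingerprint(request)
            # SADD returns 1 if the fingerprint was newly added, 0 if it already existed.
            added = self.server.sadd(self.key, fp)
            return added == 0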

(III) queue.py serves the role described in (II), and it implements three kinds of queues:
SpiderQueue (FIFO), SpiderPriorityQueue, and SpiderStack (LIFO). The default is the second one, which is the reason for the analysis in the previous article (link:). A sketch of all three follows.
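The three variants can be approximated with a Redis list (FIFO and LIFO) and a sorted set (priority). This is a simplified sketch under those assumptions, not the library's exact serialization or key scheme:

    import pickle

    class FifoQueue(object):
        """Redis-list-backed FIFO queue (the SpiderQueue idea)."""
        def __init__(self, server, key):
            self.server = server
            self.key = key

        def push(self, request_dict):
            self.server.lpush(self.key, pickle.dumps(request_dict))

        def pop(self):
            data = self.server.rpop(self.key)   # push left, pop right -> FIFO
            return pickle.loads(data) if data else None

    class LifoQueue(FifoQueue):
        """The SpiderStack idea: pop from the same end that was pushed (LIFO)."""
        def pop(self):
            data = self.server.lpop(self.key)
            return pickle.loads(data) if data else None

    class PriorityQueue(FifoQueue):
        """The SpiderPriorityQueue idea, backed by a Redis sorted set."""
        def push(self, request_dict, priority=0):
            # Lower score pops first; negate so higher Scrapy priority pops first.
            self.server.zadd(self.key, {pickle.dumps(request_dict): -priority})

        def pop(self):
            # Atomically fetch and remove the lowest-score member.
            pipe = self.server.pipeline()
            pipe.multi()
            pipe.zrange(self.key, 0, 0).zremrangebyrank(self.key, 0, 0)
            results, _ = pipe.execute()
            return pickle.loads(results[0]) if results else None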

(IV) pipelines.py is used for distributed item processing: it stores scraped items in Redis so they can be processed elsewhere.
Note that a pipeline is also implemented here, and its coding differs from the situation analyzed in the article (link:). Since the configuration needs to be read here, the from_crawler() function is used.
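A sketch of that pattern, with the class name, key format, and JSON serialization chosen for illustration; the point is that from_crawler() exposes crawler.settings, which a bare __init__ would not:

    import json
    import redis

    class RedisItemPipeline(object):
        """Illustrative pipeline that pushes serialized items into a Redis list."""

        def __init__(self, server):
            self.server = server

        @classmethod
        def from_crawler(cls, crawler):
            # from_crawler gives access to crawler.settings, which is why the
            # pipeline is constructed here rather than with a plain __init__.
            server = redis.StrictRedis(
                host=crawler.settings.get('REDIS_HOST', 'localhost'),
                port=crawler.settings.getint('REDIS_PORT', 6379),
            )
            return cls(server)

        def process_item(self, item, spider):
            key = '%s:items' % spider.name  # one list per spider (assumed scheme)
            self.server.rpush(key, json.dumps(dict(item)))
            return item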

(V) scheduler.py is an extension that replaces Scrapy's built-in scheduler (it is specified via the SCHEDULER setting) and implements distributed scheduling of the crawler. The data structures it uses come from those implemented in queue.py.
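In practice this swap is done in the project's settings.py. The entries below follow the scrapy-redis project layout; confirm the exact class paths against the version you install:

    # settings.py additions to enable the distributed scheduler and dedup filter
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
    ITEM_PIPELINES = {
        "scrapy_redis.pipelines.RedisPipeline": 300,
    }
    REDIS_HOST = "localhost"
    REDIS_PORT = 6379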

Scrapy-redis implements two kinds of distribution: distributed crawling and distributed item processing, provided by the scheduler module and the pipelines module respectively. The other modules serve as auxiliary function modules.

(VI) spiders.py defines a spider that reads the URLs to be crawled from Redis and then crawls them. If more URLs are produced during crawling, it continues until all requests are completed, then reads new URLs from Redis and repeats the process.

Analysis: this spider connects to the signals.spider_idle signal to monitor the crawler's state. When the spider goes idle, it reads a new URL from Redis, returns the request built by make_requests_from_url(url) to the engine, and the engine then hands it to the scheduler for scheduling.
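A rough sketch of that signal handling is shown below. The class name and Redis key are assumptions; the sketch builds a Request directly because make_requests_from_url is deprecated in newer Scrapy versions, and the engine.crawl signature also varies between versions:

    import redis
    from scrapy import Request, signals
    from scrapy.exceptions import DontCloseSpider
    from scrapy.spiders import Spider

    class RedisFedSpider(Spider):
        """Illustrative spider that pulls start URLs from a Redis list."""
        name = 'redis_fed'
        redis_key = 'redis_fed:start_urls'  # assumed key name

        @classmethod
        def from_crawler(cls, crawler, *args, **kwargs):
            spider = super(RedisFedSpider, cls).from_crawler(crawler, *args, **kwargs)
            spider.server = redis.StrictRedis(
                host=crawler.settings.get('REDIS_HOST', 'localhost'),
                port=crawler.settings.getint('REDIS_PORT', 6379),
            )
            # Listen for spider_idle so the crawl keeps polling Redis instead of closing.
            crawler.signals.connect(spider.spider_idle, signal=signals.spider_idle)
            return spider

        def parse(self, response):
            # Extraction logic goes here; requests yielded from parse go back
            # through the (distributed) scheduler as usual.
            pass

        def spider_idle(self):
            # On idle, read a new URL from Redis and hand the request to the engine.
            url = self.server.lpop(self.redis_key)
            if url:
                self.crawler.engine.crawl(Request(url.decode()), spider=self)
            # Raising DontCloseSpider keeps the spider alive, waiting for more URLs.
            raise DontCloseSpider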


For the annotated source code, see: https://github.com/younghz/scrapy-redis

