Original article, link: http://blog.csdn.net/u012150179/article/details/38226253
(I) connection.py is responsible for instantiating the redis connection according to the configuration in settings. It is called by dupefilter and scheduler. In short, this module provides redis access.
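As a rough sketch of what this helper looks like (the setting names REDIS_HOST/REDIS_PORT and the defaults are assumptions modeled on how the library is usually configured, not a copy of the analyzed code):

```python
# Minimal sketch of a connection helper in the spirit of connection.py.
# REDIS_HOST / REDIS_PORT are assumed setting names; the defaults are illustrative.
import redis

def from_settings(settings):
    """Build a redis client from the Scrapy settings object."""
    host = settings.get('REDIS_HOST', 'localhost')
    port = settings.getint('REDIS_PORT', 6379)
    return redis.Redis(host=host, port=port)
```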
(II) dupefilter.py is responsible for request deduplication, and it does this quite cleverly with a redis set data structure. Note, however, that the scheduler does not use the dupefilter key maintained by this module for request scheduling; it uses the queue implemented in the queue.py module.
When a request is not a duplicate, it is stored in the queue and popped from it at scheduling time.
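A minimal sketch of set-based deduplication is given below. The key name "dupefilter" is an assumption for illustration; the real module derives its key from the settings, and the fingerprint comes from Scrapy's request_fingerprint helper.

```python
# Sketch of redis-set-based request deduplication, in the spirit of dupefilter.py.
from scrapy.utils.request import request_fingerprint

class RedisDupeFilter(object):
    def __init__(self, server, key='dupefilter'):
        self.server = server  # a redis.Redis client, e.g. from connection.from_settings()
        self.key = key

    def request_seen(self, request):
        fp = request_fingerprint(request)
        # SADD returns 1 if the fingerprint was newly added, 0 if it already existed.
        added = self.server.sadd(self.key, fp)
        return added == 0
```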
(III) The role of queue.py was mentioned under (II). Three queue implementations are provided here: SpiderQueue (FIFO), SpiderPriorityQueue, and the LIFO SpiderStack. The default is the second one, which explains the behavior analyzed in the previous article (link:).
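To give an idea of the priority variant, here is a simplified sketch of a priority queue built on a redis sorted set. The key name, the direct pickling of requests (the real module first converts Request objects to dicts), and the redis-py 3.x zadd signature are all assumptions for illustration.

```python
# Sketch of a priority queue over a redis sorted set, in the spirit of
# SpiderPriorityQueue in queue.py.
import pickle

class PriorityQueue(object):
    def __init__(self, server, key='requests'):
        self.server = server
        self.key = key

    def push(self, request, priority=0):
        data = pickle.dumps(request)
        # Lower score is popped first, so negate the priority.
        self.server.zadd(self.key, {data: -priority})

    def pop(self):
        # Fetch and remove the lowest-scored member atomically via a pipeline.
        pipe = self.server.pipeline()
        pipe.zrange(self.key, 0, 0).zremrangebyrank(self.key, 0, 0)
        results, _ = pipe.execute()
        if results:
            return pickle.loads(results[0])
```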
(IV) pipelines.py is used for distributed item processing: it stores the scraped items in redis so that they can be processed elsewhere.
It is also worth looking at how the pipeline itself is written. The implementation here differs from the one analyzed in the article (link:): since the configuration needs to be read, the from_crawler() function is used.
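A sketch of such a pipeline is shown below. The key pattern "<spider>:items" and the JSON serialization are assumptions modeled on the library's behavior; the point is how from_crawler() gives access to the settings.

```python
# Sketch of a redis-backed item pipeline using from_crawler() to read settings,
# in the spirit of pipelines.py.
import json
import redis

class RedisPipeline(object):
    def __init__(self, host, port):
        self.server = redis.Redis(host=host, port=port)

    @classmethod
    def from_crawler(cls, crawler):
        # Configuration is read here, which is why from_crawler() is needed.
        settings = crawler.settings
        return cls(
            host=settings.get('REDIS_HOST', 'localhost'),
            port=settings.getint('REDIS_PORT', 6379),
        )

    def process_item(self, item, spider):
        key = '%s:items' % spider.name
        self.server.rpush(key, json.dumps(dict(item)))
        return item
```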
(V) scheduler.py: this component replaces the scheduler built into Scrapy (specified via the SCHEDULER setting) and implements distributed scheduling of the crawl. The data structure it relies on is the queue implemented in queue.py.
scrapy-redis implements two kinds of distribution: crawl distribution and item-processing distribution, realized by the scheduler module and the pipelines module respectively. The remaining modules serve as auxiliary helpers.
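A possible project configuration enabling both kinds of distribution, following the scrapy-redis README (the exact setting names may differ in the older version analyzed here):

```python
# settings.py excerpt: enable scrapy-redis scheduling and the redis item pipeline.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"      # distributed crawl scheduling
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,    # distributed item processing
}
REDIS_HOST = 'localhost'   # assumed redis location
REDIS_PORT = 6379
```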
(VI) spider.py: the spider designed here reads the URLs to be crawled from redis and then crawls them. If more URLs are produced during crawling, it keeps going until all requests are completed, then reads URLs from redis again and repeats the process.
Analysis: this spider connects to the signals.spider_idle signal to monitor the crawler's state. When the spider becomes idle, it builds a new request with make_requests_from_url(url), returns it to the engine, and the request is then submitted to the scheduler for scheduling.
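The sketch below illustrates this idle-driven feeding loop. The spider name, the redis key "<spider>:start_urls", and the direct use of redis.Redis are assumptions modeled on the library; the exact engine.crawl() signature also varies across Scrapy versions.

```python
# Sketch of a redis-fed spider in the spirit of spider.py.
import redis
from scrapy import Spider, signals
from scrapy.exceptions import DontCloseSpider

class RedisSpiderSketch(Spider):
    name = 'redis_spider_sketch'  # hypothetical spider name

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(RedisSpiderSketch, cls).from_crawler(crawler, *args, **kwargs)
        # In the real module the client comes from connection.from_settings().
        spider.server = redis.Redis(
            host=crawler.settings.get('REDIS_HOST', 'localhost'),
            port=crawler.settings.getint('REDIS_PORT', 6379),
        )
        crawler.signals.connect(spider.spider_idle, signal=signals.spider_idle)
        return spider

    def next_request(self):
        # Pop one URL from the redis list that feeds this spider.
        url = self.server.lpop('%s:start_urls' % self.name)
        if url:
            return self.make_requests_from_url(url.decode('utf-8'))

    def spider_idle(self):
        # When the spider goes idle, schedule another request and keep it alive.
        req = self.next_request()
        if req:
            self.crawler.engine.crawl(req, spider=self)
        raise DontCloseSpider

    def parse(self, response):
        self.logger.info('crawled %s', response.url)
```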
For the commented code, see: https://github.com/younghz/scrapy-redis