The Scrapy engine is the central processor. It is connected to the scheduler, the downloader (via the downloader middleware), the spiders (via the spider middleware), and the item pipeline; all communication between these components is forwarded through the engine. First, the engine distributes the seed URLs to each spider according to the domains of that spider's start_urls. The spider generates a request for each URL to be crawled and returns it to the engine, which forwards these requests to the scheduler for scheduling. The scheduler then returns the next URL to be crawled to the engine. The engine sends that URL to the downloader, passing it through the downloader middleware, which controls download behavior: setting an HTTP proxy, disabling cookies, setting the download delay, and configuring the caching mechanism. Once the downloader has fetched the page, it returns a response to the engine, which passes it to the spider through the spider middleware; the spider middleware governs crawling behavior, such as filtering out requests that fall outside the primary (allowed) domains. After the spider parses the response, it returns the extracted items and new requests for the links still to be crawled to the engine. The engine sends the items to the item pipeline and the new requests back to the scheduler for scheduling.
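The request/response cycle described above can be sketched as a minimal event loop in plain Python. This is not Scrapy's actual implementation, only an illustration of the data flow; the names DemoSpider, fake_download, and run_engine are hypothetical, and the middlewares are omitted for brevity.

```python
from collections import deque

class DemoSpider:
    """Hypothetical spider: parses a response into items and new URLs."""
    start_urls = ["http://example.com/page1"]

    def parse(self, url, body):
        # Return (items, new_requests) extracted from the response body.
        items = [{"url": url, "length": len(body)}]
        new_urls = ["http://example.com/page2"] if url.endswith("page1") else []
        return items, new_urls

def fake_download(url):
    # Stand-in for the downloader: returns a canned response body.
    return f"<html>content of {url}</html>"

def run_engine(spider):
    scheduler = deque(spider.start_urls)  # scheduler: queue of URLs to crawl
    seen = set(scheduler)                 # simple duplicate-request filter
    pipeline = []                         # item pipeline: collects parsed items
    while scheduler:
        url = scheduler.popleft()         # engine asks scheduler for next URL
        body = fake_download(url)         # downloader fetches the response
        items, new_urls = spider.parse(url, body)  # spider parses the response
        pipeline.extend(items)            # items go to the item pipeline
        for u in new_urls:                # new requests go back to the scheduler
            if u not in seen:
                seen.add(u)
                scheduler.append(u)
    return pipeline

items = run_engine(DemoSpider())
```

Running this crawls the seed URL, discovers one new link, crawls it, and leaves two items in the pipeline, mirroring the cycle of requests flowing engine → scheduler → downloader → spider and items flowing spider → engine → pipeline.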