Scrapy
Scrapy is an application framework written to crawl websites and extract structured data. It can be used for a wide range of purposes, such as data mining, information processing, or archiving historical data.
It was originally designed for page scraping (more precisely, web crawling), but it can also be used to retrieve data returned by APIs (for example, Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy is therefore suitable for data mining, monitoring, and automated testing across a wide range of applications.
The following is the architecture of Scrapy, with an overview of its components and of the data flows that occur inside the system (shown by the green arrows).
Data flow
The data flow in Scrapy is controlled by the execution engine and proceeds roughly as follows (this is the standard sequence described in the Scrapy documentation):
1. The engine gets the initial requests to crawl from the spider.
2. The engine schedules those requests in the scheduler and asks for the next requests to crawl.
3. The scheduler returns the next requests to the engine.
4. The engine sends the requests to the downloader, passing through the downloader middlewares.
5. Once a page finishes downloading, the downloader generates a response and sends it back to the engine, again through the downloader middlewares.
6. The engine receives the response and sends it to the spider for processing, passing through the spider middleware.
7. The spider processes the response and returns scraped items and new requests to the engine.
8. The engine sends the items to the item pipelines and the new requests to the scheduler, then asks for the next requests; the cycle repeats until no requests remain in the scheduler.
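To make this flow concrete, here is a minimal spider sketch (the spider name, URL, and CSS selectors are illustrative placeholders, not from this article): the start URL becomes the initial request handed to the scheduler, each downloaded response arrives in parse(), yielded dicts travel on to the item pipelines, and yielded requests go back to the scheduler.

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    # initial request(s): the engine pulls these from the spider (step 1)
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # the downloader has fetched the page; the engine delivers it here (step 6)
        for quote in response.css("div.quote"):
            # yielded items are forwarded to the item pipelines (step 8)
            yield {"text": quote.css("span.text::text").get()}
        # a yielded request is sent back through the engine to the scheduler (step 8)
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)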
Installation of Scrapy
Installation:

linux/mac:
- pip3 install scrapy

Windows:
- Install Twisted:
  a. pip3 install wheel
  b. Download Twisted from http://www.lfd.uci.edu/~gohlke/pythonlibs/#Twisted
  c. Go to the download directory and run: pip3 install Twisted-XXXXX.whl
- Install Scrapy:
  d. pip3 install scrapy -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
- Install pywin32:
  e. pip3 install pywin32 -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
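On either platform, a quick sanity check after installation (assuming pip placed the scrapy console script on your PATH) is to ask for the version:

scrapy version                                          # prints something like "Scrapy 2.11.0"
python3 -c "import scrapy; print(scrapy.__version__)"   # the same check from inside Python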
Basic use of Scrapy
To create a project:
scrapy startproject tutorial  # this command creates a new Scrapy project
This generates the following directory structure:
tutorial/
    scrapy.cfg            # the project's configuration file
    tutorial/             # the project's Python module; you will add your code here
        __init__.py
        items.py          # the project's items file
        pipelines.py      # the project's pipelines file
        settings.py       # the project's settings file
        spiders/          # directory where spider code is placed
            __init__.py
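As a sketch of how these files fit together (the item fields and the spider below are illustrative, not part of the generated skeleton): item classes are declared in items.py, and spiders placed under spiders/ import and fill them.

# tutorial/items.py -- declare the fields you intend to scrape
import scrapy

class TutorialItem(scrapy.Item):
    title = scrapy.Field()
    url = scrapy.Field()

# tutorial/spiders/title_spider.py -- a minimal spider that fills the item
import scrapy
from tutorial.items import TutorialItem

class TitleSpider(scrapy.Spider):
    name = "titles"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        item = TutorialItem()
        item["title"] = response.css("title::text").get()
        item["url"] = response.url
        yield item

Run it from the project root with scrapy crawl titles -o titles.json; the -o flag writes the collected items to a feed file.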