First, basic commands of the Scrapy crawler
Scrapy is a professional crawler framework designed for continuous operation, and it provides a scrapy command-line tool for operating it.
- Scrapy command-line format: scrapy <command> [options] [args]
- Reasons for using the command line
The command line (rather than a graphical interface) is easier to automate and better suited to script-based control. In essence, Scrapy is a tool for programmers, so functionality matters more than the interface.
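For reference, a few of Scrapy's standard commands (all part of the stock CLI; the names in angle brackets are placeholders):

```
scrapy startproject <name>         # create a new project
scrapy genspider <name> <domain>   # create a spider from a template
scrapy settings                    # view project settings
scrapy crawl <spider>              # run a spider
scrapy list                        # list spiders in the project
scrapy shell [url]                 # start an interactive scraping shell
```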
Second, a basic example of a Scrapy crawler
Demo HTML page address: http://python123.io/ws/demo.html
Step one: Create a Scrapy crawler project
Choose a folder, such as E:\python, and then execute the following command.
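The command itself was lost with the original screenshot; given the project name mentioned next, it was presumably the standard startproject invocation:

```
scrapy startproject python123demo
```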
A project named python123demo is generated under the E:\python folder, with the following file structure:
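The structure in the original post was an image; a freshly generated Scrapy project looks roughly like this (exact file names can vary slightly between Scrapy versions):

```
python123demo/            # outer directory
    scrapy.cfg            # deployment configuration file
    python123demo/        # the project's Python module
        __init__.py
        items.py          # Item definitions
        middlewares.py    # middleware definitions
        pipelines.py      # item pipeline definitions
        settings.py       # project settings
        spiders/          # directory holding spider code
            __init__.py
```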
Step two: Create a spider in the project
Use cd to enter the E:\python\python123demo folder, and then execute the following command.
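Again the command was in a screenshot; to produce a spider named demo limited to python123.io, it would have been:

```
scrapy genspider demo python123.io
```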
This command:
(1) generates a spider named demo;
(2) adds the code file demo.py under the spiders directory.
The command only generates demo.py; the same file could also be written by hand.
Step three: Configure the generated spider
The demo.py file is a spider template created by the genspider command.
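The generated file was shown as an image in the original; the template that genspider produces looks roughly like this (exact contents depend on the Scrapy version):

```python
import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'
    allowed_domains = ['python123.io']
    start_urls = ['http://python123.io/']

    def parse(self, response):
        pass
```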
- The class inherits from scrapy.Spider
- name = 'demo' gives the spider the name demo
- allowed_domains restricts the crawl to links under the listed domain names
- start_urls lists the URL or URLs where the crawl starts
- parse() handles the response and discovers new URLs to request
Configuring the spider means specifying (1) the initial URL addresses and (2) how to parse a fetched page, as sketched below.
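The configured body was also an image in the original; a version consistent with step four (which saves the page as demo.html) would look like this reconstruction:

```python
import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'
    start_urls = ['http://python123.io/ws/demo.html']

    def parse(self, response):
        # Derive a file name from the URL: here, 'demo.html'
        fname = response.url.split('/')[-1]
        # Save the raw response body to disk
        with open(fname, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s.' % fname)
```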
Step four: Run the crawler and fetch the web page
Execute the following command:
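The command, standard for running a spider by name from inside the project directory, would be:

```
scrapy crawl demo
```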
The demo spider runs, and the captured page is stored as demo.html.
There is also an equivalent way to write the spider:
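The equivalent form was shown as an image in the original; in Scrapy, listing start_urls is shorthand for defining a start_requests() generator that yields the same requests, so the equivalent spider presumably looked like this:

```python
import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'

    def start_requests(self):
        urls = ['http://python123.io/ws/demo.html']
        # Yield a Request for each starting URL instead of using start_urls
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        fname = response.url.split('/')[-1]
        with open(fname, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s.' % fname)
```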
Third, basic usage of the Scrapy crawler
The four steps above involve three classes: the Request class, the Response class, and the Item class.
scrapy.http.Request: a Request object represents an HTTP request; it is generated by the spider and executed by the downloader.
scrapy.http.Response: a Response object represents an HTTP response; it is generated by the downloader and processed by the spider.
scrapy.item.Item: an Item object represents a piece of information extracted from an HTML page; it is generated by the spider and processed by the item pipeline. An Item behaves like a dictionary and can be manipulated with dictionary-style operations.
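As a minimal sketch of that dictionary-like behavior (the DemoItem class and its fields are invented for illustration):

```python
import scrapy


class DemoItem(scrapy.Item):
    # Fields must be declared before values can be assigned
    title = scrapy.Field()
    link = scrapy.Field()


item = DemoItem()
item['title'] = 'Demo page'                        # dict-style assignment
item['link'] = 'http://python123.io/ws/demo.html'
print(item['title'])                               # dict-style access
print(dict(item))                                  # convert to a plain dict
```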