Scrapy Crawling the Web: Basic Concepts

How do I build a project with Scrapy?
scrapy startproject xxx
How do I crawl pages with Scrapy?
import scrapy
from scrapy.spiders import CrawlSpider
from scrapy.http import Request
from scrapy.selector import Selector
xxx = selector.xpath(xxxxx).extract()
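Putting the pieces above together, a minimal spider might look like the following sketch. The spider name, start URL, and XPath expression are placeholders of my own, not from any particular project:

import scrapy
from scrapy.selector import Selector

class ExampleSpider(scrapy.Spider):
    name = "example"                       # unique spider name (placeholder)
    start_urls = ["http://example.com"]    # placeholder start page

    def parse(self, response):
        # Wrap the response in a Selector and extract all link targets
        # (response.xpath(...) works directly as well).
        selector = Selector(response)
        links = selector.xpath("//a/@href").extract()
        for link in links:
            yield {"link": link}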
File structure of a Scrapy project
The project includes the following files (the full generated layout is sketched after this list):
- items.py
- settings.py
- pipelines.py
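For reference, running scrapy startproject xxx generates a layout roughly like the following in recent Scrapy versions:

xxx/
    scrapy.cfg          # deployment configuration
    xxx/                # the project's Python module
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/        # directory where spider modules live
            __init__.py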
1. items.py
Item objects are simple containers used to collect the scraped data. They provide a dictionary-like API with a convenient syntax for declaring their available fields. --Scrapy official documentation
items.py defines the data that needs to be crawled and will be processed later.
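As a sketch, an items.py for an article crawl could declare its fields like this (the item class and field names are hypothetical):

import scrapy

class ArticleItem(scrapy.Item):
    # Hypothetical fields for an article crawl
    title = scrapy.Field()
    url = scrapy.Field()
    content = scrapy.Field()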
2. settings.py
The Scrapy settings allows you to customize the behaviour of all Scrapy components, including the core, extensions, pipelines and spiders themselves. --Scrapy official documentation
The settings.py file configures Scrapy: modify the User-Agent, set the crawl interval, set up proxies, configure the various middlewares, and so on.
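A few representative settings.py entries, with illustrative values (the pipeline path assumes the xxx project name from above):

# settings.py (excerpt)
USER_AGENT = "Mozilla/5.0 (compatible; MyCrawler/1.0)"  # custom User-Agent
DOWNLOAD_DELAY = 2        # seconds to wait between requests (crawl interval)
ROBOTSTXT_OBEY = True     # respect robots.txt
ITEM_PIPELINES = {
    "xxx.pipelines.XxxPipeline": 300,  # enable a pipeline; lower numbers run first
}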
3. pipelines.py
After an item has been scraped by a spider, it is sent to the Item Pipeline which processes it through several components that are executed sequentially. --Scrapy official documentation
pipelines.py holds the post-processing functions, which separates crawling the data from processing it.
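As a sketch, a pipeline that writes each scraped item out as a line of JSON might look like this (the output file name is an assumption; this mirrors the JsonWriterPipeline example from the Scrapy documentation):

import json

class JsonWriterPipeline:
    def open_spider(self, spider):
        # Hypothetical output file, opened once when the spider starts
        self.file = open("items.json", "w")

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # Write each scraped item as one JSON line, then pass it along
        self.file.write(json.dumps(dict(item)) + "\n")
        return item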
Those are the basic concepts of the Scrapy framework.