Python Crawler: Basic Use of the Scrapy Framework

First, the basic commands of the Scrapy crawler

Scrapy is a professional crawler framework designed for continuous operation, and it provides a scrapy command-line tool for operating it.

    • Scrapy command-line format (see the sketch after this list)

    • Common Scrapy commands (also sketched below)

    • Reasons for using the command line
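
As a sketch of the first two points, drawn from Scrapy's standard command-line interface rather than from this article: every invocation follows a single format, and six commands cover most day-to-day use.

    scrapy <command> [options] [args]

    startproject    create a new project
    genspider       create a spider based on a template
    settings        get a setting's value
    crawl           run a spider
    list            list the spiders available in the project
    shell           launch the interactive scraping shell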

The command line (rather than a graphical interface) is easier to automate and better suited to scripted control. In essence, Scrapy is a tool for programmers, and functionality (not the interface) is what matters most.

Second, a basic example of a Scrapy crawler

Demo HTML page address: http://python123.io/ws/demo.html

Step one: Create a Scrapy crawler project

Select a folder, such as E:\python, and then execute the following command:
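
Given the project name mentioned below, the command would be:

    E:\python> scrapy startproject python123demo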

A project named python123demo is generated under the E:\python folder; the file structure of the project is shown below.
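
A typical layout, as produced by recent Scrapy versions (exact files vary slightly by version):

    python123demo/              outer directory
        scrapy.cfg              deployment configuration file
        python123demo/          the project's user-defined Python module
            __init__.py         initialization script
            items.py            Items code template
            middlewares.py      Middlewares code template
            pipelines.py        Pipelines code template
            settings.py         Scrapy crawler configuration file
            spiders/            directory for spider code
                __init__.py     initial file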

Step two: Generate a Scrapy spider in the project

Use cd to enter the E:\python\python123demo folder, and then execute the following command:
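
Given the spider name demo and the demo page's domain, the command would be:

    E:\python\python123demo> scrapy genspider demo python123.io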

The command does two things:
(1) generates a spider named demo;
(2) adds the code file demo.py under the spiders directory.

This command only generates demo.py; the file can also be written by hand.

Step three: Configure the generated spider

The demo.py file is a spider template created by the genspider command.

    • It inherits from scrapy.Spider
    • name = 'demo' states that the crawler's name is demo
    • allowed_domains restricts the crawl to links under the listed domain(s)
    • start_urls lists one or more URLs from which crawling starts
    • parse() handles the response, extracting data and discovering new URLs to request

Configuration means specifying (1) the initial URL address and (2) how to parse the page after it is fetched; a minimal configured demo.py is shown below.
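
A minimal demo.py consistent with this description; the parse() method below simply writes the fetched page to a local file, matching the result reported in step four:

    import scrapy

    class DemoSpider(scrapy.Spider):
        name = 'demo'
        allowed_domains = ['python123.io']
        start_urls = ['http://python123.io/ws/demo.html']

        def parse(self, response):
            # Derive the local file name from the URL, i.e. demo.html
            fname = response.url.split('/')[-1]
            # Save the raw page content to that file
            with open(fname, 'wb') as f:
                f.write(response.body)
            self.log('Saved file %s.' % fname)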

Step four: Run the crawler to fetch the web page

Execute the following command:
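
To run the demo spider:

    E:\python\python123demo> scrapy crawl demo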

The demo crawler executes, and the captured page is stored in demo.html.

There is also an equivalent way to write the spider, shown below.
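
Presumably this refers to the fuller form of demo.py: the start_urls shortcut can be replaced by an explicit start_requests() generator, and since Scrapy's default start_requests() simply iterates start_urls, the two spiders behave identically:

    import scrapy

    class DemoSpider(scrapy.Spider):
        name = 'demo'

        def start_requests(self):
            # Equivalent to the start_urls shortcut: yield one Request per URL
            urls = ['http://python123.io/ws/demo.html']
            for url in urls:
                yield scrapy.Request(url=url, callback=self.parse)

        def parse(self, response):
            fname = response.url.split('/')[-1]
            with open(fname, 'wb') as f:
                f.write(response.body)
            self.log('Saved file %s.' % fname)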

Third, basic usage of the Scrapy crawler

The four steps above involve three classes: the Request class, the Response class, and the Item class.

    • Request class

class scrapy.http.Request: a Request object represents an HTTP request; it is generated by the spider and executed by the downloader (see the sketch below).
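
A short sketch of a Request's main attributes and methods; the URL is just the demo page, and during a real crawl Requests are normally yielded from spider callbacks rather than built standalone:

    import scrapy

    # Build a Request by hand to inspect its main attributes
    req = scrapy.Request(url='http://python123.io/ws/demo.html')
    print(req.url)      # the URL to fetch
    print(req.method)   # the HTTP method, 'GET' by default
    print(req.headers)  # request headers (dict-like)
    print(req.meta)     # user-defined metadata carried along with the request
    req2 = req.copy()   # .copy() returns a duplicate of the request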

    • Response class

class scrapy.http.Response: a Response object represents an HTTP response; it is generated by the downloader and processed by the spider (see the sketch below).
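
A similar sketch for Response; the HtmlResponse here is constructed by hand purely to show the attributes, since in a real crawl the downloader creates it for you:

    from scrapy.http import HtmlResponse

    resp = HtmlResponse(
        url='http://python123.io/ws/demo.html',
        status=200,
        body=b'<html><head><title>demo</title></head></html>',
        encoding='utf-8',
    )
    print(resp.url)      # URL of the response
    print(resp.status)   # HTTP status code, e.g. 200
    print(resp.headers)  # response headers
    print(resp.body)     # raw response content as bytes
    resp2 = resp.copy()  # .copy() returns a duplicate of the response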

    • Item class

class scrapy.item.Item: an Item object represents a piece of information extracted from an HTML page; it is generated by the spider and processed by the item pipeline. An Item is similar to a dictionary type and can be manipulated like a dictionary (see the sketch below).
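
A minimal sketch of the dictionary-like behavior; the PageItem class and its fields are illustrative, not part of the demo project:

    import scrapy

    class PageItem(scrapy.Item):
        # Fields are declared with scrapy.Field()
        title = scrapy.Field()
        url = scrapy.Field()

    item = PageItem()
    item['title'] = 'This is a python demo page'    # set a value like a dict
    item['url'] = 'http://python123.io/ws/demo.html'
    print(item['title'])                            # read a value like a dict
    print(dict(item))                               # convert to a plain dict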
