Python Crawler Tutorial - 30 - Scrapy Crawler Framework Introduction

Source: Internet
Author: User

Starting with this article, we begin learning the Scrapy crawler framework.

    • Framework: a framework implements the parts that are the same across similar projects, so that the code is less likely to go wrong and we can focus on the part that is our own
    • Common crawler frameworks:
      • Scrapy
      • Pyspider
      • Crawley
    • Scrapy is an application framework written to crawl website data and extract structured data. It can be used in a wide range of programs, including data mining, information processing, and storing historical data.
    • Scrapy official documentation:
      • https://doc.scrapy.org/en/latest/
      • http://scrapy-chs.readthedocs.io/zh_CN/latest/index.html
Installation of Scrapy
    • Scrapy can be installed directly in PyCharm
      • "PyCharm" > "File" > "Settings" > "Project Interpreter" > "+" > "scrapy" > "Install"
    • Click Install in the lower-left corner and wait quietly for it to finish; alternatively, install from the command line as shown below
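If you prefer working outside PyCharm, the same package can be installed with pip (a minimal sketch, assuming pip points at the Python environment you want to use):

    # install Scrapy into the current Python environment
    pip install scrapy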
Test whether Scrapy is installed successfully
    • Open a terminal in your current environment
    • Enter the scrapy command (a quick check is shown below)
    • If version and usage information are printed, the installation was successful
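For example, the version sub-command should print the installed version number (the exact number depends on your installation):

    # prints the installed Scrapy version
    scrapy version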
Scrapy Overview
    • Scrapy consists of the following components
      • Scrapy Engine: the nerve center, brain, and core of the framework
      • Scheduler: responsible for queuing requests; it accepts the requests sent by the engine, schedules them, and hands them back to the engine when asked
      • Downloader: takes the requests sent by the engine, downloads them, and gets the responses
      • Spider: responsible for breaking the downloaded pages/results down into data (items) + links (new requests)
      • Item Pipeline: performs further processing on the extracted items
      • Downloader Middleware: extension components that customize the download functionality
      • Spider Middleware: extension components that extend spider functionality
    • Data flow diagram (architecture diagram omitted; see the official documentation)
    • The green arrows are the flow of data
    • The flow starts from the Spider's initial requests and cycles through requests, responses, and items
General Workflow of a Crawler Project
    • 1. Create a new project: scrapy startproject xxx (xxx is the project name; a console sketch follows this list)
    • 2. Define the target data to be crawled: write items.py
    • 3. Write the crawler: spiders/xxspider.py is responsible for parsing and extracting the downloaded data
    • 4. Store the content: pipelines.py
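A minimal console sketch of steps 1 and 3, assuming a placeholder project called mycrawler and a placeholder spider named example (neither name comes from the original article):

    # step 1: create a new project
    scrapy startproject mycrawler
    cd mycrawler
    # step 3: generate a spider skeleton under spiders/ for a given domain
    scrapy genspider example example.com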
Module Introduction
  • ItemPipeline
    • Corresponds to the pipelines.py file
    • After the crawler extracts data into an item, the data stored in the item needs further processing, such as cleaning, de-duplication, and storage
    • A pipeline needs to implement the process_item function (a minimal sketch follows this list)
    • process_item
      • The item extracted by the spider is passed in as a parameter, together with the spider itself
      • This method must be implemented
      • It must return an item object; a dropped item will not be processed by further pipeline components
  • __init__: constructor
    • Performs any necessary parameter initialization
  • open_spider(self, spider):
    • Called when the spider is opened
  • close_spider(self, spider):
    • Called when the spider is closed
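A minimal pipeline sketch putting these methods together; it assumes the items can be converted to dicts and writes them to a JSON-lines file whose name is chosen here purely for illustration:

    # pipelines.py -- illustrative only; the output file name is a placeholder
    import json

    class JsonWriterPipeline:
        def open_spider(self, spider):
            # called once when the spider is opened: prepare the output file
            self.file = open('items.jl', 'w', encoding='utf-8')

        def close_spider(self, spider):
            # called once when the spider is closed: release the resource
            self.file.close()

        def process_item(self, item, spider):
            # called for every item the spider yields; must return the item
            # (raise scrapy.exceptions.DropItem here to discard an item)
            self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
            return item

Note that a pipeline only takes effect after it is enabled in settings.py via the ITEM_PIPELINES setting.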
  • Spider
    • Corresponds to the files under the spiders folder (a minimal spider sketch follows this list)
    • __init__: initializes the crawler name and the start_urls list
    • start_requests: generates the Request objects that Scrapy downloads, returning responses
    • parse: parses the returned response into the corresponding items, which automatically enter the pipeline; if needed, it also parses out URLs, which are automatically handed back as new requests, and the cycle continues
    • start_requests: this method is called only once; it reads the start_urls contents and starts the loop
    • name: sets the crawler name
    • start_urls: sets the URLs for the first crawl
    • allowed_domains: list of domain names the spider is allowed to crawl
    • start_requests(self): called only once
    • parse: the code that parses downloaded responses
    • log: records log messages
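A minimal spider sketch pulling these attributes and methods together (the spider name, domain, and URL are placeholders, not from the original article):

    # spiders/example_spider.py -- illustrative only; names and URLs are placeholders
    import scrapy

    class ExampleSpider(scrapy.Spider):
        name = 'example'                       # crawler name
        allowed_domains = ['example.com']      # domains the spider may crawl
        start_urls = ['https://example.com/']  # URLs for the first crawl

        def parse(self, response):
            # parse the response into items; yielded items enter the pipeline
            yield {'title': response.css('title::text').get()}

            # parse out further URLs; yielded requests go back to the scheduler
            for href in response.css('a::attr(href)').getall():
                yield response.follow(href, callback=self.parse)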
Middleware (DownloaderMiddleware)
  • What is middleware?
  • Middleware is a layer of components sitting between the engine and the downloader; there can be more than one
  • Referring to the flow diagram above, middleware can be understood as a channel: put simply, it sets up checkpoints along the way while requests/responses are being transmitted, for example:
    • 1. Disguising identity (User-Agent): instead of setting the identity when the request is first created, we set up a middleware along the request path; when an outgoing request is detected, it intercepts the request headers and modifies the User-Agent value
    • 2. Filtering response data: what we get back first is the whole page; suppose an operation requires us to filter out all the images, we can set up a middleware along the response path to do so
    • This may sound abstract and hard to grasp, but the process is actually very simple
  • Middleware lives in the middlewares.py file
  • It must be enabled in settings to take effect
  • Generally, one middleware completes one function
  • One or more of the following methods must be implemented (a minimal sketch follows this list)
    • process_request(self, request, spider)
      • Called as each request goes out
      • Must return None, a Response, or a Request, or raise IgnoreRequest
        • If it returns None: Scrapy will continue processing the request
        • If it returns a Request: Scrapy stops calling the process_request methods and reschedules the returned Request
        • If it returns a Response: Scrapy will not call any other process_request or process_exception method; it takes that Response directly as the result, and the process_response methods are invoked
    • process_response(self, request, response, spider)
      • Called automatically each time a response is returned
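A minimal downloader middleware sketch illustrating example 1 above (the class name and User-Agent string are placeholders chosen for illustration):

    # middlewares.py -- illustrative only; the User-Agent string is a placeholder
    class CustomUserAgentMiddleware:
        def process_request(self, request, spider):
            # checkpoint on the outgoing request: intercept the headers
            # and modify the User-Agent value
            request.headers['User-Agent'] = 'Mozilla/5.0 (compatible; MyCrawler/1.0)'
            return None  # None lets Scrapy continue processing the request

        def process_response(self, request, response, spider):
            # checkpoint on the returned response: filtering or rewriting
            # could happen here; returning it unchanged passes it on
            return response

As noted above, the middleware only takes effect after it is enabled in settings.py, which is done through the DOWNLOADER_MIDDLEWARES setting.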
  • Next: Python Crawler Tutorial - 31 - Creating a Scrapy Crawler Framework Project
  • Bye
    • This note may not be reprinted by any person or organization
