Basic use of Rules in the Python crawler Scrapy


Link extractors

Link extractors are objects whose only purpose is to extract links from web pages (scrapy.http.Response objects) that will eventually be followed.

Scrapy provides two link extractors by default, but you can create your own custom link extractor by implementing a simple interface to suit your needs.

Every link extractor has a single public method, extract_links, which receives a Response object and returns a list of scrapy.link.Link objects. Link extractors are meant to be instantiated once, with their extract_links method then called multiple times on different responses to extract the links to follow.
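As a minimal sketch of that pattern (the URL and HTML body below are made up for illustration), a link extractor can be instantiated once and its extract_links method called on any number of responses:

from scrapy.http import HtmlResponse
from scrapy.linkextractors import LinkExtractor

# Instantiate once; reuse for many responses.
extractor = LinkExtractor(allow=r'subject/\d+')

# A hand-built response for illustration; in a real spider the
# Response object arrives in your callback.
response = HtmlResponse(
    url='https://book.douban.com/',
    body=b'<a href="https://book.douban.com/subject/1084336/">a book</a>',
    encoding='utf-8',
)

for link in extractor.extract_links(response):
    print(link.url, link.text)  # each item is a scrapy.link.Link object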

Link extractors are used in the CrawlSpider class (available in Scrapy) through a set of rules, but you can also use them in your own spiders, even ones that do not subclass CrawlSpider, since their purpose is simple: to extract links.

The above is the explanation from the official documentation. In practice, Rules are how you write a spider that crawls an entire site. First of all, we inherit not from scrapy.Spider but from CrawlSpider; reading the source shows that CrawlSpider is itself a subclass of scrapy.Spider.
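You can verify that inheritance directly with a quick check (nothing project-specific here):

import scrapy
from scrapy.spiders import CrawlSpider

# CrawlSpider extends scrapy.Spider, so a CrawlSpider is still a Spider.
print(issubclass(CrawlSpider, scrapy.Spider))  # True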

Specific parameters:

LinkExtractor: as the name implies, this is the link filter; it is what picks out the links we want to crawl in the first place.

allow: a regular-expression filter; starting from start_urls, we only crawl the content of links that match this pattern.

deny: the opposite of the parameter above; it defines the links we do not want to crawl.

follow: defaults to False, in which case only URLs matching the rule on the start_urls pages are crawled. If True, matching links found in the content of every crawled page are followed as well.

restrict_xpaths: XPath expressions used together with allow to filter links; restrict_css is similar, using CSS selectors instead.

callback: the method to execute once we have a crawlable URL; it is passed the response of each link (that is, the page content). A sketch combining these parameters follows below.
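Putting these parameters together, here is a hedged sketch of a Rule; the domain, regexes, and XPath are invented for illustration:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ExampleSpider(CrawlSpider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = ["https://example.com/"]

    rules = [
        Rule(
            LinkExtractor(
                allow=r'/articles/\d+',                  # regex: links we want
                deny=r'/articles/\d+/comments',          # regex: links we skip
                restrict_xpaths='//div[@id="content"]',  # only search this region of the page
            ),
            callback='parse_article',  # run on each matched page's response
            follow=True,               # keep following matching links found on those pages
        ),
    ]

    def parse_article(self, response):
        yield {'title': response.xpath('//h1/text()').extract_first()}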

Note: every Rule, whether or not it has a callback, is handled by the same _parse_response function, which simply checks whether follow and callback are set.
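For reference, here is a simplified paraphrase of what _parse_response does; this is the gist, not the exact Scrapy source:

def _parse_response(self, response, callback, cb_kwargs, follow=True):
    # Run the rule's callback, if one was given, and yield its results.
    if callback:
        for item in callback(response, **cb_kwargs) or ():
            yield item
    # Independently decide whether to keep following links from this page.
    if follow and self._follow_links:
        for request in self._requests_to_follow(response):
            yield request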

from scrapy.spiders.crawl import Rule, CrawlSpider
from scrapy.linkextractors import LinkExtractor

Example:

from whole_website.items import Doubanspider_book
from scrapy.spiders.crawl import Rule, CrawlSpider
from scrapy.linkextractors import LinkExtractor

class DoubanSpider(CrawlSpider):
    name = "douban"
    allowed_domains = ["book.douban.com"]
    start_urls = ['https://book.douban.com/']
    rules = [
        # Follow only book detail pages, e.g. /subject/1084336/
        Rule(LinkExtractor(allow=r'subject/\d+'), callback='parse_items')
    ]

    def parse_items(self, response):
        items = Doubanspider_book()
        items['name'] = response.xpath('//*[@id="wrapper"]/h1/span/text()').extract_first()
        items['author'] = response.xpath('//*[@id="info"]//a/text()').extract()
        data = {'book_name': items['name'],
                'book_author': items['author']}
        print(data)
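Assuming this snippet lives in a standard Scrapy project (with Doubanspider_book defined in whole_website/items.py), it is run from the project root with scrapy crawl douban, where douban is the spider's name attribute.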

  

Reference: http://scrapy-chs.readthedocs.io/zh_CN/0.24/topics/link-extractors.html

