Spider Crawl Process
The crawl begins with Requests built from the initial URLs, each with a callback function attached. When a request has been downloaded, the Response it produced is generated and passed as the argument to that callback function.
The initial requests of a spider are obtained by calling its start_requests() method. By default, start_requests() reads the URLs in start_urls and generates a Request for each of them, with parse() as the callback function.
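Conceptually, the default start_requests() behaves like the pure-Python sketch below. Scrapy itself is not imported here; the Request dataclass is a stand-in for scrapy.Request, used only to illustrate the idea:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Request:
    # Stand-in for scrapy.Request, for illustration only.
    url: str
    callback: Callable


class SpiderSketch:
    start_urls = [
        'http://www.example.com/1.html',
        'http://www.example.com/2.html',
    ]

    def parse(self, response):
        pass  # a real spider would analyze the response here

    def start_requests(self):
        # One Request per start URL, with parse() as the callback.
        for url in self.start_urls:
            yield Request(url, callback=self.parse)
```

Calling list(SpiderSketch().start_requests()) yields one Request per URL in start_urls, each pointing back at parse().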
The callback function analyzes the returned (web page) content and returns Item objects, Requests, or an iterable containing both. Any returned Request is processed by Scrapy in turn: its content is downloaded and its callback function is invoked (which may be the same function).
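This request/response cycle can be sketched as a simple loop in pure Python. Everything here is a hypothetical stand-in (no Scrapy): items are plain dicts, requests are (url, callback) tuples, and the downloader just echoes the URL:

```python
from collections import deque


def crawl(start_requests, download):
    """Minimal sketch of the engine loop: each callback may yield
    items (dicts here) or further (url, callback) requests."""
    items = []
    queue = deque(start_requests)
    seen = set()
    while queue:
        url, callback = queue.popleft()
        if url in seen:          # skip already-visited pages
            continue
        seen.add(url)
        response = download(url)  # fetch the page (stand-in)
        for result in callback(response):
            if isinstance(result, dict):
                items.append(result)      # an item: collect it
            else:
                queue.append(result)      # a request: re-queue it
    return items


# Hypothetical two-page site: page 'a' links to page 'b'.
pages = {'a': ['b'], 'b': []}


def fake_download(url):
    return url  # the "response" is just the URL here


def parse(response):
    yield {'page': response}
    for next_url in pages[response]:
        yield (next_url, parse)
```

Running crawl([('a', parse)], fake_download) visits both pages and collects one item per page, mirroring how returned Requests feed back into the crawl.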
Within the callback function, you can use selectors (Scrapy's Selector, BeautifulSoup, lxml, etc.) to analyze the content of the web page and generate items from the parsed data.
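As a dependency-free illustration of this parsing step, the sketch below pulls the text of every h3 element out of a page body with Python's standard-library HTMLParser, then builds items from the results (Scrapy's own Selector would do the same with an XPath such as //h3/text()):

```python
from html.parser import HTMLParser


class H3Extractor(HTMLParser):
    """Collects the text content of every <h3> element."""

    def __init__(self):
        super().__init__()
        self.in_h3 = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == 'h3':
            self.in_h3 = True

    def handle_endtag(self, tag):
        if tag == 'h3':
            self.in_h3 = False

    def handle_data(self, data):
        if self.in_h3:
            self.titles.append(data.strip())


def extract_titles(html):
    parser = H3Extractor()
    parser.feed(html)
    return parser.titles


# Build items (plain dicts here) from the parsed titles.
body = '<html><h3>First</h3><p>x</p><h3>Second</h3></html>'
items = [{'title': t} for t in extract_titles(body)]
```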
Finally, the items returned by the spider are saved to a database (processed by an item pipeline) or stored in a file using Feed exports.
Example of a Spider
The code is as follows:
import scrapy
from myproject.items import MyItem

class MySpider(scrapy.Spider):
    """Returns multiple Requests and items from a callback function."""
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = [
        'http://www.example.com/1.html',
        'http://www.example.com/2.html',
        'http://www.example.com/3.html',
    ]

    def parse(self, response):
        for h3 in response.xpath('//h3').extract():
            yield MyItem(title=h3)
        for url in response.xpath('//a/@href').extract():
            yield scrapy.Request(url, callback=self.parse)
Example of a CrawlSpider
The code is as follows:
import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        # The first rule matches category.php but not subsection.php;
        # with no callback, follow defaults to True, so matched links are followed.
        Rule(LinkExtractor(allow=('category\.php',), deny=('subsection\.php',))),
        # The second rule matches item.php and analyzes each matched page
        # with the spider's parse_item() method.
        Rule(LinkExtractor(allow=('item\.php',)), callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)

        item = scrapy.Item()
        item['id'] = response.xpath("//td[@id='item_id']/text()").re(r'ID: (\d+)')
        item['name'] = response.xpath("//td[@id='item_name']/text()").extract()
        item['description'] = response.xpath("//td[@id='item_description']/text()").extract()
        return item