Examples of automatic crawling with a Scrapy spider


Spider Crawl Process

Scrapy initializes Requests for the initial URLs and sets their callback function. When a Request finishes downloading, a Response is generated and passed as a parameter to that callback.

The initial Requests are obtained by calling the spider's start_requests() method, which reads the URLs in start_urls and generates one Request per URL with parse as the callback function.

Inside the callback, the returned (web page) content is analyzed, and the callback returns Item objects, Requests, or an iterable containing both. Any returned Request is scheduled by Scrapy, the corresponding content is downloaded, and the designated callback is invoked (it may be the same function).

Within the callback you can use selectors (Scrapy's Selector, BeautifulSoup, lxml, etc.) to analyze the page content and build items from the parsed data.

Finally, the items returned by the spider are saved to a database (processed by an item pipeline) or stored in a file using Feed exports.
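The last step can be sketched with a minimal, hypothetical item pipeline (JsonWriterPipeline is an illustrative name, not part of Scrapy); real pipelines implement the same process_item hook and are enabled via ITEM_PIPELINES in settings.py:

```python
import json

class JsonWriterPipeline:
    """Minimal sketch of an item pipeline, assuming items behave like dicts."""

    def open_spider(self, spider):
        # Called once when the spider opens; set up storage here.
        self.lines = []

    def process_item(self, item, spider):
        # Serialize each item as one JSON line; returning the item lets
        # any later pipelines keep processing it.
        self.lines.append(json.dumps(dict(item)))
        return item

    def close_spider(self, spider):
        # Placeholder: a real project would persist self.lines to a
        # file or database here.
        pass

pipeline = JsonWriterPipeline()
pipeline.open_spider(None)
pipeline.process_item({'title': 'First title'}, None)
print(pipeline.lines)  # ['{"title": "First title"}']
```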
Spider example

The code is as follows:

import scrapy
from myproject.items import MyItem

class MySpider(scrapy.Spider):
    """
    Returns multiple Request objects and items from a single callback function.
    """
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = [
        'http://www.example.com/1.html',
        'http://www.example.com/2.html',
        'http://www.example.com/3.html',
    ]

    def parse(self, response):
        # Yield one item per <h3> element on the page
        for h3 in response.xpath('//h3').extract():
            yield MyItem(title=h3)

        # Follow every link, reusing this method as the callback
        for url in response.xpath('//a/@href').extract():
            yield scrapy.Request(url, callback=self.parse)
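The way parse() interleaves items and follow-up requests can be sketched without Scrapy, using plain generators and stdlib XML parsing (MockRequest and the HTML snippet below are illustrative stand-ins; xml.etree only handles well-formed markup, unlike Scrapy's Selector):

```python
import xml.etree.ElementTree as ET

class MockRequest:
    """Illustrative stand-in for scrapy.Request."""
    def __init__(self, url, callback=None):
        self.url = url
        self.callback = callback

def parse(response_body):
    root = ET.fromstring(response_body)
    # Yield a dict per <h3>, standing in for MyItem(title=...)
    for h3 in root.findall('.//h3'):
        yield {'title': h3.text}
    # Yield a request per link, reusing this function as the callback
    for a in root.findall('.//a'):
        yield MockRequest(a.get('href'), callback=parse)

body = ('<html><body><h3>A title</h3>'
        '<a href="http://www.example.com/2.html">next</a></body></html>')
results = list(parse(body))
print(results[0])      # {'title': 'A title'}
print(results[1].url)  # http://www.example.com/2.html
```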

CrawlSpider example

The code is as follows:

import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        # The first rule matches category.php but not subsection.php; with no
        # callback given, follow defaults to True, so matched links are followed.
        Rule(LinkExtractor(allow=(r'category\.php',), deny=(r'subsection\.php',))),

        # The second rule matches item.php; matching responses are analyzed
        # with the spider's parse_item method.
        Rule(LinkExtractor(allow=(r'item\.php',)), callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)

        item = scrapy.Item()
        item['id'] = response.xpath('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
        item['name'] = response.xpath('//td[@id="item_name"]/text()').extract()
        item['description'] = response.xpath('//td[@id="item_description"]/text()').extract()
        return item
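The allow/deny filtering that LinkExtractor performs for the rules above boils down to regular-expression matching on URLs, which can be sketched with the stdlib (match_rules and the sample URLs are illustrative, not Scrapy API):

```python
import re

def match_rules(urls):
    """Illustrative sketch of the two rules above: category pages are
    followed, item pages go to parse_item, subsection pages are excluded."""
    followed, parsed = [], []
    for url in urls:
        if re.search(r'category\.php', url) and not re.search(r'subsection\.php', url):
            followed.append(url)   # Rule 1: no callback, so follow the link
        elif re.search(r'item\.php', url):
            parsed.append(url)     # Rule 2: callback='parse_item'
    return followed, parsed

urls = [
    'http://www.example.com/category.php?id=1',
    'http://www.example.com/subsection.php?id=2',
    'http://www.example.com/item.php?id=3',
]
followed, parsed = match_rules(urls)
print(followed)  # ['http://www.example.com/category.php?id=1']
print(parsed)    # ['http://www.example.com/item.php?id=3']
```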
