[Scrapy] create the first project

1) create a project with the command:

scrapy startproject tutorial

This command will create the tutorial folder in the current directory.
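The generated project has roughly the following layout (typical of classic Scrapy releases; exact contents can vary by version):

tutorial/
    scrapy.cfg            # deploy configuration file
    tutorial/             # the project's Python module
        __init__.py
        items.py          # item definitions go here
        pipelines.py      # item pipelines go here
        settings.py       # project settings
        spiders/          # directory for your spiders
            __init__.py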

2) define an item

Items are containers that will be loaded with the scraped data; they are declared by creating a scrapy.Item subclass and defining its attributes as scrapy.Field objects.

import scrapy

class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()
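Once defined, an item behaves much like a dictionary. A quick illustration (the values are hypothetical, not part of the original tutorial):

item = DmozItem(title="Example title", link="http://example.com/")
item['desc'] = "An example description"    # fields are read and written like dict keys
print(item['title'])                       # -> Example title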

3) define a spider

To create a spider, you must subclass scrapy.Spider and define three main mandatory attributes:

name: identifies the spider.

start_urls: a list of URLs where the spider will begin to crawl from.

parse(): a method of the spider, which will be called with the downloaded Response object of each start URL. The response is passed to the method as the first and only argument.

The parse() method is in charge of processing the response and returning scraped data (as Item objects) and more URLs to follow (as Request objects).

import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        with open(filename, 'wb') as f:
            f.write(response.body)
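The spider above simply saves each page to a file. To actually return scraped data and follow more URLs, parse() can yield Item and Request objects instead. Here is a minimal sketch, assuming the DmozItem from step 2 and the project module name tutorial; the XPath expressions and the "next page" selector are assumptions about the page markup, not part of the original tutorial:

import scrapy
from tutorial.items import DmozItem

class DmozItemSpider(scrapy.Spider):
    name = "dmoz_items"
    allowed_domains = ["dmoz.org"]
    start_urls = ["http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"]

    def parse(self, response):
        # Yield one item per listing (the selectors are illustrative).
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item
        # Follow further links as new Requests (hypothetical pagination selector).
        for href in response.xpath('//a[@class="next"]/@href').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse)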

4) crawling

Command: scrapy crawl dmoz

5) storing the scraped data

Command: scrapy crawl dmoz -o items.json

That will generate an items.json file containing all scraped items, serialized in JSON.
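Assuming the spider yields DmozItem objects (as in the sketch above), the file contents have roughly this shape, with values shown as placeholders:

[
    {"title": ["..."], "link": ["..."], "desc": ["..."]},
    {"title": ["..."], "link": ["..."], "desc": ["..."]}
]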

If you want to perform more complex things with the scraped items, you can write an item pipeline.
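A minimal sketch of one, assuming we want to drop items that lack a title (the class name and rule are invented for illustration; process_item() is the hook Scrapy calls for each scraped item):

from scrapy.exceptions import DropItem

class DropEmptyTitlePipeline(object):
    """Hypothetical pipeline: discard items without a title."""
    def process_item(self, item, spider):
        if not item.get('title'):
            raise DropItem("missing title: %r" % item)
        return item  # items returned here continue to the next pipeline stage

The pipeline is then enabled in the project's settings.py, e.g. ITEM_PIPELINES = {'tutorial.pipelines.DropEmptyTitlePipeline': 300}.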

Introduction to selectors

There are several ways to extract data from web pages. Scrapy uses a mechanism based on XPath or CSS expressions called Scrapy Selectors.

You can see selectors as objects that represent nodes in the document structure.

Selectors have four basic methods (all four are illustrated in the sketch after this list):

xpath(): returns a list of selectors

css(): returns a list of selectors

extract(): returns a unicode string with the selected data

re(): returns a list of unicode strings extracted by applying the regular expression given as an argument.
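A small self-contained illustration of the four methods (the HTML snippet is invented for the example):

from scrapy.selector import Selector

html = "<html><body><h1>Scrapy</h1><a href='http://example.com/'>Example</a></body></html>"
sel = Selector(text=html)

sel.xpath('//h1/text()')                    # xpath(): returns a list of selectors
sel.css('a::attr(href)')                    # css(): returns a list of selectors
sel.xpath('//h1/text()').extract()          # extract(): [u'Scrapy']
sel.xpath('//a/@href').re(r'http://(.*)/')  # re(): [u'example.com']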

Trying selectors in the shell:

Start a shell:

scrapy shell "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"

After the shell loads, you will have the fetched response in a local response variable, so if you type response.body you will see the body of the response, or you can type response.headers to see its headers.
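For example, a session against the fetched page might look like this (output abbreviated to placeholders, since the exact values depend on the live page):

>>> response.xpath('//title/text()').extract()
[u'...']
>>> response.css('title::text').extract()
[u'...']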
