Crawler 7: Scrapy - Crawling Web Pages


Building a crawler with Scrapy takes four steps.

    • New project (Project): create a new crawler project
    • Clear goals (Items): define the targets you want to crawl
    • Make the spider (Spider): write the crawler that crawls the web pages
    • Store the content (Pipeline): design a pipeline to store the crawled content

The previous section created the project; this section uses that project to crawl pages.
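For reference, a Scrapy project like the one from the previous section is created with the startproject command (the project name tutorial is assumed here from the Tutorial directory used below):

scrapy startproject tutorial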

Many online tutorials use dmoz.org as the example site, so I will use it for this experiment as well.

Clear Goals

In Scrapy, Items are the containers used to hold the crawled content.

The content we want is

    • Title (title)
    • Link (link)
    • Description (desc)

In the Tutorial directory there is an items.py file; add our code after the default code.

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

from scrapy.item import Item, Field
import scrapy

class TutorialItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass

# The part below is what I added myself
class DmozItem(Item):
    title = Field()
    link = Field()
    desc = Field()
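As a quick aside (not part of the tutorial's files), a Scrapy Item behaves much like a dictionary, so the fields declared above can be set and read like this:

# Illustrative only: Items allow dict-style access to their declared fields
item = DmozItem()
item['title'] = 'Example title'
item['link'] = 'http://example.com'
item['desc'] = 'Example description'
print(item['title'])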

Making Crawlers

A crawler does the usual two things: crawl, then extract. In other words, download the whole page and then take out the parts you need.

Create a Python file under the Tutorial\spiders directory named dmoz_spider.py

The current code is as follows

from scrapy.spiders import Spider

class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        open(filename, 'wb').write(response.body)

name is the name of the crawler and must be unique.

allowed_domains limits the crawl: only content under those domain names will be crawled.

start_urls is the list of starting URLs; subsequent URLs to crawl are derived from them.

parse() can roughly be understood as the method that processes each downloaded response.
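The parse() above only dumps raw HTML to disk. As a rough sketch of where this tutorial is heading, the method inside DmozSpider could later yield the DmozItem defined earlier; the XPath expressions below assume an illustrative <ul><li><a> page layout rather than the real dmoz markup:

# Sketch only: an alternative parse() for DmozSpider that yields items.
# Requires "from tutorial.items import DmozItem" at the top of dmoz_spider.py
# (the module name tutorial is an assumption about the project layout).
def parse(self, response):
    for li in response.xpath('//ul/li'):   # assumed layout, adjust to the real page
        item = DmozItem()
        item['title'] = li.xpath('a/text()').extract_first()
        item['link'] = li.xpath('a/@href').extract_first()
        item['desc'] = li.xpath('text()').extract_first()
        yield item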

Well, the crawler is written. Now run it: open a cmd window in the Tutorial directory

Input

scrapy crawl dmoz

Oh, no, it's an error.

A quick search on Baidu shows it is because the win32api module is missing.

I am using 32-bit Python 2.7, so I download the file pywin32-219.win32-py2.7.exe.

Be careful not to mix up 32-bit and 64-bit: downloading the 64-bit build will keep producing the error

DLL load failed: %1 is not a valid Win32 application
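If you are not sure which build of Python you are running (a quick check, not part of the original post), ask the interpreter for its pointer size and pick the matching pywin32 installer:

# Prints 32 or 64 depending on the Python build
import struct
print(struct.calcsize("P") * 8)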

Once the environment is configured, re-run

Success

The Tutorial directory now contains two new files, Books and Resources, which are the pages that were crawled.
