Using Scrapy with MongoDB --- Crawling a Novel Site

  Scrapy is a fast, high-level screen-scraping and web-crawling framework written in Python, used to crawl websites and extract structured data from their pages. Its most appealing trait is that anyone can easily modify it to fit their own needs.
  MongoDB is a very popular open-source non-relational (NoSQL) database. It stores data as key-value pairs and has clear advantages for large data volumes, high concurrency, and scenarios with weak transaction requirements.
  What sparks fly when Scrapy and MongoDB meet? Let's find out with a simple test that crawls a novel site.
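  To give a feel for the data model, each chapter scraped in this test ends up as one MongoDB document, i.e. a plain set of key-value pairs. A record might look roughly like the dict below (the field names come from items.py later in the article; the values are made up purely for illustration):

{
    'bookName': 'Some Book',          # illustrative values, not real crawl output
    'bookTitle': 'Volume One',
    'chapterNum': 'Chapter 2',
    'chapterName': 'Some Chapter',
    'chapterURL': 'http://www.daomubiji.com/...'
}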

   1. Install Scrapy
        pip install scrapy

   2. Download and install MongoDB and the MongoVUE GUI client
        [MongoDB](https://www.mongodb.org/)
        The download and installation steps are omitted here; just create a data folder under the bin directory to hold the database files.
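
        Once the data folder exists, start MongoDB with that folder as its data directory so the crawler has something to write to. A minimal command, assuming MongoDB was unpacked to C:\mongodb (adjust the path to your own install):

        mongod --dbpath C:\mongodb\bin\data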


        [MongoVUE](http://www.mongovue.com/)

   After the installation is complete, we need to create a database.
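
   Strictly speaking, MongoDB creates a database lazily on the first write, so the zzl database used below will appear either when you create it in MongoVUE or when the pipeline inserts its first document. A quick sketch to pre-create it by hand, assuming pymongo 3.x is installed:

import pymongo

client = pymongo.MongoClient('127.0.0.1', 27017)
db = client['zzl']                          # database name used in settings.py below
db['Book'].insert_one({'placeholder': 1})   # first write makes 'zzl' and 'Book' visible
db['Book'].delete_one({'placeholder': 1})   # remove the placeholder document again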

  

   3. Create a Scrapy project
        scrapy startproject novelspider
    Directory structure: novspider.py is the file we have to create by hand (contrloDB can be ignored).
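
    For reference, the layout produced by scrapy startproject, with the hand-written spider added, looks roughly like this (minor differences between Scrapy versions are possible):

novelspider/
    scrapy.cfg
    novelspider/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            novspider.py    # created by hand in step 4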

  

  4. Write the code

    Target site: http://www.daomubiji.com/

    

  settings.py

BOT_NAME = 'novelspider'

SPIDER_MODULES = ['novelspider.spiders']
NEWSPIDER_MODULE = 'novelspider.spiders'

ITEM_PIPELINES = ['novelspider.pipelines.NovelspiderPipeline']   # enable the pipeline defined in pipelines.py

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0'
COOKIES_ENABLED = True

MONGODB_HOST = '127.0.0.1'
MONGODB_PORT = 27017
MONGODB_DBNAME = 'zzl'     # database name
MONGODB_DOCNAME = 'Book'   # collection ("table") name
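
  One note: the list form of ITEM_PIPELINES above matches the older Scrapy releases this article was written against. Recent Scrapy versions expect a dict that maps the pipeline path to an order value, e.g.:

ITEM_PIPELINES = {
    'novelspider.pipelines.NovelspiderPipeline': 300,
}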

  pipelines.py

from scrapy.conf import settings
import pymongo


class NovelspiderPipeline(object):
    def __init__(self):
        host = settings['MONGODB_HOST']
        port = settings['MONGODB_PORT']
        dbName = settings['MONGODB_DBNAME']
        client = pymongo.MongoClient(host=host, port=port)
        tdb = client[dbName]
        self.post = tdb[settings['MONGODB_DOCNAME']]

    def process_item(self, item, spider):
        bookInfo = dict(item)
        self.post.insert(bookInfo)
        return item
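
  A caveat on the pipeline: collection.insert() is the legacy pymongo call; it still works on old releases but was removed in pymongo 4.x, and from scrapy.conf import settings is likewise deprecated in newer Scrapy (reading settings through the crawler in from_crawler() is the current approach). On a recent pymongo, process_item would be written as:

    def process_item(self, item, spider):
        self.post.insert_one(dict(item))   # insert_one replaces the removed insert()
        return item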

  items.py

from scrapy import Item, Field


class NovelspiderItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    bookName = Field()
    bookTitle = Field()
    chapterNum = Field()
    chapterName = Field()
    chapterURL = Field()

  Create novspider.py under the spiders directory

from scrapy.spiders import CrawlSpider
from scrapy.selector import Selector
from novelspider.items import NovelspiderItem


class novSpider(CrawlSpider):
    name = "novspider"
    redis_key = 'novspider:start_urls'
    start_urls = ['http://www.daomubiji.com/']

    def parse(self, response):
        selector = Selector(response)
        table = selector.xpath('//table')
        for each in table:
            bookName = each.xpath('tr/td[@colspan="3"]/center/h2/text()').extract()[0]
            content = each.xpath('tr/td/a/text()').extract()
            url = each.xpath('tr/td/a/@href').extract()
            for i in range(len(url)):
                item = NovelspiderItem()
                item['bookName'] = bookName
                item['chapterURL'] = url[i]
                try:
                    item['bookTitle'] = content[i].split(' ')[0]
                    item['chapterNum'] = content[i].split(' ')[1]
                except Exception:
                    continue
                try:
                    item['chapterName'] = content[i].split(' ')[2]
                except Exception:
                    item['chapterName'] = content[i].split(' ')[1][-3:]
                yield item
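
  The try/except blocks above assume each link text is roughly "volume chapter-number chapter-name" separated by single spaces, and fall back to the last three characters of the second field when no third field exists. With a hypothetical link text (not taken from the site), the parsing works like this:

text = u'VolumeOne Chapter2 OpeningChapter'   # hypothetical link text
parts = text.split(' ')
bookTitle = parts[0]                          # 'VolumeOne'
chapterNum = parts[1]                         # 'Chapter2'
chapterName = parts[2] if len(parts) > 2 else parts[1][-3:]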

  5. Start the crawl with: scrapy crawl novspider

     Crawl results:
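
  Besides browsing with MongoVUE, a short pymongo query is a quick way to confirm what landed in the Book collection. A minimal sketch, assuming the connection settings above and pymongo 3.7+ (for count_documents):

import pymongo

client = pymongo.MongoClient('127.0.0.1', 27017)
book = client['zzl']['Book']

print(book.count_documents({}))        # total number of chapter records stored
for doc in book.find().limit(5):       # peek at a few of them
    print(doc['bookName'], doc['chapterNum'], doc['chapterURL'])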

  

  
