Scraping proxy IPs from the xicidaili site with Scrapy


The spider walks the rows of the `#ip_list` table on http://www.xicidaili.com/nn/ and yields one IP:port pair per row:

```python
# -*- coding: utf-8 -*-
# Scrape proxy IPs from the xicidaili site
import scrapy
from xici.items import XiciItem

class XicispiderSpider(scrapy.Spider):
    name = "xicispider"
    # allowed_domains takes bare domains, not URL paths
    allowed_domains = ["www.xicidaili.com"]
    start_urls = ['http://www.xicidaili.com/nn/']

    def parse(self, response):
        for each in response.css('#ip_list tr'):
            ip = each.css('td:nth-child(2)::text').extract_first()
            port = each.css('td:nth-child(3)::text').extract_first()
            if ip:  # the header row has no IP cell, so skip it
                # create a fresh item per row; reusing one mutable
                # item across yields would overwrite earlier results
                item = XiciItem()
                item['ip_port'] = ip + ':' + port
                yield item
```
The pipeline stores each item in MongoDB, pulling the connection settings from the crawler:

```python
# xici/pipelines.py -- persist scraped items to MongoDB
import pymongo

class XiciPipeline(object):
    collection_name = 'scrapy_items'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    # "from_crawler" is easy to misspell
    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DB'),
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # insert_one replaces the deprecated Collection.insert
        self.db[self.collection_name].insert_one(dict(item))
        return item
```
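`from_crawler` reads `MONGO_URI` and `MONGO_DB` from the project settings, and the pipeline itself must be registered there to run. A sketch of the relevant `settings.py` entries (the URI and database name are placeholder assumptions; 300 is an arbitrary pipeline priority):

```python
# xici/settings.py -- settings assumed by the pipeline above
MONGO_URI = 'mongodb://localhost:27017'
MONGO_DB = 'xici'

# enable the pipeline; lower numbers run earlier (0-1000)
ITEM_PIPELINES = {
    'xici.pipelines.XiciPipeline': 300,
}
```

With this in place, `scrapy crawl xicispider` will write each scraped `ip_port` into the `scrapy_items` collection.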

 

