Scrapy crawler example: crawling Douban group information and saving it to MongoDB


I had been keeping an eye on this framework for a long time, but only recently found the time to take a careful look at the Scrapy 0.24 release.

Let's start with a finished product to get a feel for how convenient the framework is; when I find the time I'll organize my notes and post what I've been learning about it to this blog, piece by piece.


First, let me explain what this toy crawler does.

It crawls the groups listed on the seed URL pages and, for each group, extracts its name, its member count, and the links to its related groups.

The scraped data looks roughly like this:

{'RelativeGroups': [u'http://www.douban.com/group/10127/',
                    u'http://www.douban.com/group/seventy/',
                    u'http://www.douban.com/group/lovemuseum/',
                    u'http://www.douban.com/group/486087/',
                    u'http://www.douban.com/group/lovesh/',
                    u'http://www.douban.com/group/Noastrology/',
                    u'http://www.douban.com/group/shanghaijianzhi/',
                    u'http://www.douban.com/group/12658/',
                    u'http://www.douban.com/group/shanghaizufang/',
                    u'http://www.douban.com/group/gogo/',
                    u'http://www.douban.com/group/117546/',
                    u'http://www.douban.com/group/159755/'],
 'GroupName': u'\u4e0a\u6d77\u8c46\u74e3',
 'GroupURL': 'http://www.douban.com/group/Shanghai/',
 'TotalNumber': u'209957'}

What is this data good for? You can analyze the relationships between groups and so on, or crawl more fields if you want to. I won't expand on that here; the point of this article is just to get something working quickly and get a feel for the framework.
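As a rough illustration of that kind of analysis, here is a minimal sketch. It assumes the MongoDB setup described later in this post (localhost, database douban, collection DoubanGroup, field names from items.py); the group URL below is only an example value.

# a minimal sketch, not part of the crawler itself; assumes the crawl results
# are already stored in the douban/DoubanGroup collection configured later on
import pymongo

collection = pymongo.MongoClient('localhost', 27017)['douban']['DoubanGroup']

# pick one group (the URL below is just an example value)
group = collection.find_one({'GroupURL': 'http://www.douban.com/group/Shanghai/'})

if group:
    # groups whose RelativeGroups list contains this group's URL,
    # i.e. the incoming edges of the group relationship graph
    for g in collection.find({'RelativeGroups': group['GroupURL']}):
        print g['GroupName'], g['GroupURL']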


The first step is to start a new project called douban:

# scrapy startproject douban

# cd douban

This is the complete directory tree of the project:

~/student/py/douban$ tree .
.
├── douban
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── items.py
│   ├── items.pyc
│   ├── pipelines.py
│   ├── pipelines.pyc
│   ├── settings.py
│   ├── settings.pyc
│   └── spiders
│       ├── basicgroupspider.py
│       ├── basicgroupspider.pyc
│       ├── __init__.py
│       └── __init__.pyc
├── nohup.out
├── scrapy.cfg
├── start.sh
├── stop.sh
└── test.log


Next, write the item definitions in items.py; this mainly makes the scraped data easy to persist later.

~/student/py/douban$ cat douban/items.py
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

from scrapy.item import Item, Field


class DoubanItem(Item):
    # define the fields for your item here like:
    # name = Field()
    GroupName = Field()
    GroupURL = Field()
    TotalNumber = Field()
    RelativeGroups = Field()
    ActiveUsers = Field()


Then write the spider, with the crawl rules and the data extraction logic:

~/student/py/douban$ cat douban/spiders/basicgroupspider.py
# -*- coding: utf-8 -*-
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from douban.items import DoubanItem
import re


class GroupSpider(CrawlSpider):
    # spider name
    name = "Group"
    allowed_domains = ["douban.com"]
    # seed URLs
    start_urls = [
        "http://www.douban.com/group/explore?tag=%E8%B4%AD%E7%89%A9",
        "http://www.douban.com/group/explore?tag=%E7%94%9F%E6%B4%BB",
        "http://www.douban.com/group/explore?tag=%E7%A4%BE%E4%BC%9A",
        "http://www.douban.com/group/explore?tag=%E8%89%BA%E6%9C%AF",
        "http://www.douban.com/group/explore?tag=%E5%AD%A6%E6%9C%AF",
        "http://www.douban.com/group/explore?tag=%E6%83%85%E6%84%9F",
        "http://www.douban.com/group/explore?tag=%E9%97%B2%E8%81%8A",
        "http://www.douban.com/group/explore?tag=%E5%85%B4%E8%B6%A3"
    ]

    # crawl rules; matching pages are handled by the callback named in each rule
    rules = [
        Rule(SgmlLinkExtractor(allow=('/group/[^/]+/$', )),
             callback='parse_group_home_page', process_request='add_cookie'),
        Rule(SgmlLinkExtractor(allow=('/group/explore\?tag', )),
             follow=True, process_request='add_cookie'),
    ]

    def __get_id_from_group_url(self, url):
        m = re.search("^http://www.douban.com/group/([^/]+)/$", url)
        if m:
            return m.group(1)
        else:
            return 0

    def add_cookie(self, request):
        request.replace(cookies=[])
        return request

    def parse_group_topic_list(self, response):
        self.log("Fetch group topic list page: %s" % response.url)
        pass

    def parse_group_home_page(self, response):
        self.log("Fetch group home page: %s" % response.url)

        # use an XPath selector to pull the data out of the page
        hxs = HtmlXPathSelector(response)
        item = DoubanItem()

        # get group name
        item['GroupName'] = hxs.select('//h1/text()').re("^\s+(.*)\s+$")[0]

        # get group id
        item['GroupURL'] = response.url
        groupid = self.__get_id_from_group_url(response.url)

        # get group members number
        members_url = "http://www.douban.com/group/%s/members" % groupid
        members_text = hxs.select('//a[contains(@href, "%s")]/text()' % members_url).re("\((\d+)\)")
        item['TotalNumber'] = members_text[0]

        # get relative groups
        item['RelativeGroups'] = []
        groups = hxs.select('//div[contains(@class, "group-list-item")]')
        for group in groups:
            url = group.select('div[contains(@class, "title")]/a/@href').extract()[0]
            item['RelativeGroups'].append(url)

        return item


Next, write a pipeline for data processing; I store the data the crawler collects in MongoDB.

~/student/py/douban$ cat douban/pipelines.py
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import pymongo

from scrapy import log
from scrapy.conf import settings
from scrapy.exceptions import DropItem


class DoubanPipeline(object):
    def __init__(self):
        self.server = settings['MONGODB_SERVER']
        self.port = settings['MONGODB_PORT']
        self.db = settings['MONGODB_DB']
        self.col = settings['MONGODB_COLLECTION']
        connection = pymongo.Connection(self.server, self.port)
        db = connection[self.db]
        self.collection = db[self.col]

    def process_item(self, item, spider):
        self.collection.insert(dict(item))
        log.msg('Item written to MongoDB database %s/%s' % (self.db, self.col),
                level=log.DEBUG, spider=spider)
        return item
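One caveat: newer pymongo releases (3.x and later) removed Connection in favour of MongoClient, and insert() was later superseded by insert_one(). With a recent pymongo the connection setup would look roughly like this sketch:

# sketch of the same connection setup with a newer pymongo (3.x+),
# where Connection has been replaced by MongoClient
import pymongo

connection = pymongo.MongoClient('localhost', 27017)
collection = connection['douban']['DoubanGroup']

# and in process_item, insert_one(dict(item)) instead of insert(dict(item))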


In settings.py, register the item pipeline, configure the MongoDB connection parameters, and set a User-Agent so the crawler is less likely to get banned.

~/student/py/douban$ cat douban/settings.py
# -*- coding: utf-8 -*-

# Scrapy settings for douban project
#
# For simplicity, this file contains only the most important settings by
# default. All the other settings are documented here:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#

BOT_NAME = 'douban'

SPIDER_MODULES = ['douban.spiders']
NEWSPIDER_MODULE = 'douban.spiders'

# set a download delay to relieve pressure on the server and keep a low profile
DOWNLOAD_DELAY = 2
RANDOMIZE_DOWNLOAD_DELAY = True
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5'
COOKIES_ENABLED = True

# configure the item pipeline
ITEM_PIPELINES = ['douban.pipelines.DoubanPipeline']

# MongoDB connection parameters
MONGODB_SERVER = 'localhost'
MONGODB_PORT = 27017
MONGODB_DB = 'douban'
MONGODB_COLLECTION = 'DoubanGroup'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'douban (+http://www.yourdomain.com)'


OK, that's the toy crawler done.

Start it with:

nohup scrapy crawl Group --logfile=test.log &
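Once it has been running for a while, a quick sanity check shows whether items are landing in MongoDB. This is a minimal sketch, assuming MongoDB is on localhost with the database and collection names from settings.py above:

# quick check that scraped items are arriving in MongoDB
import pymongo

collection = pymongo.MongoClient('localhost', 27017)['douban']['DoubanGroup']

print collection.count()      # number of groups stored so far
print collection.find_one()   # one sample document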

This article comes from the blog "Some People Say I'm a Tech Geek"; please keep the source link when reposting: http://1992mrwang.blog.51cto.com/3265935/1583539
