Scrapy crawler learning and practice projects.


As a beginner, I am posting an example taken from a tutorial I have followed, along with a description of the project I have completed.

My own project: a Scrapy crawler project.

Project Description:

Crawls a popular fashion page on a website, performs a second crawl to fetch the content of specific items, concatenates the crawled content into a new static HTML page, stores it on the project's FTP server, and submits the information to an interface. (The interface handles the data operations; the interface part is not uploaded.)
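The FTP-upload step is not shown in this article; a minimal sketch of what it might look like using Python's standard ftplib (the host, credentials and file names below are placeholders, not the project's real values):

from ftplib import FTP

def upload_html(local_path, remote_name):
    # Placeholder host and credentials -- replace with the real FTP server details.
    ftp = FTP('ftp.example.com')
    ftp.login('user', 'password')
    with open(local_path, 'rb') as f:
        # STOR uploads the local file under the given remote file name.
        ftp.storbinary('STOR ' + remote_name, f)
    ftp.quit()

# Example usage:
# upload_html('output/page.html', 'page.html')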
 
 
 
 
 
 
Example
 

After Scrapy crawls a link, how can it go on to crawl the content behind that link? parse() can return a list of Requests or a list of items. If Requests are returned, they are placed in the queue to be crawled next; if items are returned, they are passed to the pipelines for processing (or saved directly if the default FEED exporter is used). But if parse() returns the next links as Requests, how are the items returned and saved? The Request object accepts a callback parameter that specifies the function used to parse the page that Request fetches (in fact, the default callback for start_urls is the parse method). So you can have parse() return Requests and specify another method, say parse_item, as their callback to return the items:
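Here is a distilled, self-contained sketch of this pattern, written against the same old-style Scrapy API as the full example that follows (the site URL, XPath expressions and item fields here are placeholders, not part of the project):

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy.item import Item, Field

class PageItem(Item):
    link = Field()
    title = Field()

class PatternSpider(BaseSpider):
    name = "pattern"
    start_urls = ["http://example.com/list"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        for url in hxs.select('//a/@href').extract():
            item = PageItem()
            item['link'] = url
            # Carry the half-filled item to the detail page via meta and let
            # parse_item complete it and return it.
            yield Request(url, meta={'item': item}, callback=self.parse_item)

    def parse_item(self, response):
        item = response.meta['item']
        item['title'] = HtmlXPathSelector(response).select('//title/text()').extract()[0]
        return item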

Taking the Nanjing University BBS as an example:

 

1. The spider file:

 

# -*- coding: utf-8 -*-
import chardet
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.utils.url import urljoin_rfc
from scrapy.http import Request
from tutorial.items import bbsItem

class bbsSpider(BaseSpider):
    name = "boat"
    allowed_domains = ["bbs.nju.edu.cn"]
    start_urls = ["http://bbs.nju.edu.cn/bbstop10"]

    def parseContent(self, content):
        # Pull the author, board and post time out of the raw post header by
        # locating marker strings. (The markers and offsets below appear
        # translated in the source article; the live BBS page header is in
        # Chinese, so they may need adjusting against the real page.)
        content = content[0].encode('utf-8')
        # print chardet.detect(content)
        # print content
        authorIndex = content.index('channel')
        author = content[11:authorIndex - 2]
        boardIndex = content.index('title')
        board = content[authorIndex + 8:boardIndex - 2]
        timeIndex = content.index('nanjing University Lily station (')
        time = content[timeIndex + 26:timeIndex + 50]
        return (author, board, time)
        # content = content[timeIndex + 58:]
        # return (author, board, time, content)

    def parse2(self, response):
        # Callback for each post page: fill in the item passed along via meta.
        hxs = HtmlXPathSelector(response)
        item = response.meta['item']
        items = []
        content = hxs.select('/html/body/center/table[1]/tr[2]/td/textarea/text()').extract()
        parseTuple = self.parseContent(content)
        item['author'] = parseTuple[0].decode('utf-8')
        item['board'] = parseTuple[1].decode('utf-8')
        item['time'] = parseTuple[2]
        # item['content'] = parseTuple[3]
        items.append(item)
        return items

    def parse(self, response):
        # Parse the top-10 list page: collect each post's title and link, then
        # request the post page itself with parse2 as the callback.
        hxs = HtmlXPathSelector(response)
        items = []
        title = hxs.select('/html/body/center/table/tr[position()>1]/td[3]/a/text()').extract()
        url = hxs.select('/html/body/center/table/tr[position()>1]/td[3]/a/@href').extract()
        for i in range(0, 10):
            item = bbsItem()
            item['link'] = urljoin_rfc('http://bbs.nju.edu.cn/', url[i])
            item['title'] = title[i][:]
            items.append(item)
        for item in items:
            yield Request(item['link'], meta={'item': item}, callback=self.parse2)
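The bbsItem class imported by the spider lives in the project's items.py, which the article does not show; a minimal sketch, with the field list inferred from the spider and pipeline code:

# tutorial/items.py -- minimal sketch; fields inferred from the code above.
from scrapy.item import Item, Field

class bbsItem(Item):
    link = Field()
    title = Field()
    author = Field()
    board = Field()
    time = Field()
    content = Field()  # only needed if the commented-out content extraction is enabled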


2. The pipelines file:
 
 
# -*- coding: utf-8 -*-
# Define your item pipelines here
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/topics/item-pipeline.html
from scrapy import log
from twisted.enterprise import adbapi
from scrapy.http import Request
from scrapy.exceptions import DropItem
from scrapy.contrib.pipeline.images import ImagesPipeline
import time
import MySQLdb
import MySQLdb.cursors
import socket
import select
import sys
import os
import errno

class MySQLStorePipeline(object):
    def __init__(self):
        # Twisted's asynchronous connection pool to the local MySQL "test" database.
        self.dbpool = adbapi.ConnectionPool('MySQLdb',
                                            db='test',
                                            user='root',
                                            passwd='root',
                                            cursorclass=MySQLdb.cursors.DictCursor,
                                            charset='utf8',
                                            use_unicode=False)

    def process_item(self, item, spider):
        # Run the insert asynchronously; the item is returned unchanged so that
        # any later pipelines still receive it.
        query = self.dbpool.runInteraction(self._conditional_insert, item)
        return item

    def _conditional_insert(self, tx, item):
        tx.execute('insert into info values (%s, %s, %s)',
                   (item['author'], item['board'], item['time']))
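As the header comment in the pipeline reminds us, the pipeline must also be registered in the project's settings.py, and the insert assumes a matching MySQL table exists; a minimal sketch (the module path assumes the project is named tutorial, and the table definition is only illustrative):

# tutorial/settings.py -- minimal sketch; registers the pipeline defined above.
# The list form matches the old Scrapy releases this article uses; newer releases
# expect a dict such as {'tutorial.pipelines.MySQLStorePipeline': 300}.
ITEM_PIPELINES = ['tutorial.pipelines.MySQLStorePipeline']

# The pipeline's insert statement assumes a MySQL table roughly like:
#   CREATE TABLE info (author VARCHAR(64), board VARCHAR(64), time VARCHAR(64));
# (column names and sizes are illustrative)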
