Tags: scrapy mysql
Create a Scrapy project:
scrapy startproject weather2
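This generates Scrapy's standard project skeleton, which the following steps fill in:

weather2/
    scrapy.cfg            # deploy configuration file
    weather2/             # the project's Python module
        __init__.py
        items.py          # item definitions (edited below)
        pipelines.py      # item pipelines (edited below)
        settings.py       # project settings (edited below)
        spiders/          # directory for spider code
            __init__.py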
Define the Items (items.py):
import scrapy

class Weather2Item(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    weatherDate = scrapy.Field()
    weatherDate2 = scrapy.Field()
    weatherWea = scrapy.Field()
    weatherTem1 = scrapy.Field()
    weatherTem2 = scrapy.Field()
    weatherWin = scrapy.Field()
Write the spider (spiders/weatherSpider.py):
import scrapy
from weather2.items import Weather2Item

class CatchWeatherSpider(scrapy.Spider):
    name = 'CatchWeather2'
    allowed_domains = ['weather.com.cn']
    start_urls = [
        "http://www.weather.com.cn/weather/101280101.shtml"
    ]

    def parse(self, response):
        # each li under the "7d" list is one day of the 7-day forecast
        for sel in response.xpath('//*[@id="7d"]/ul/li'):
            item = Weather2Item()
            item['weatherDate'] = sel.xpath('h1/text()').extract()
            item['weatherDate2'] = sel.xpath('h2/text()').extract()
            item['weatherWea'] = sel.xpath('p[@class="wea"]/text()').extract()
            # the temperature text is split across <span> and <i>, so concatenate both
            item['weatherTem1'] = sel.xpath('p[@class="tem tem1"]/span/text()').extract() + \
                                  sel.xpath('p[@class="tem tem1"]/i/text()').extract()
            item['weatherTem2'] = sel.xpath('p[@class="tem tem2"]/span/text()').extract() + \
                                  sel.xpath('p[@class="tem tem2"]/i/text()').extract()
            item['weatherWin'] = sel.xpath('p[@class="win"]/i/text()').extract()
            yield item
The data source is http://www.weather.com.cn/weather/101280101.shtml, where 101280101 is the city code for Guangzhou.
XPath is used here to pick the fields out of the HTML, and it turns out to be pleasantly simple.
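Before committing the expressions to the spider, you can experiment with them interactively in Scrapy's shell:

scrapy shell "http://www.weather.com.cn/weather/101280101.shtml"

# then, at the >>> prompt, try one of the selectors from parse(), e.g. the dates:
>>> response.xpath('//*[@id="7d"]/ul/li/h1/text()').extract()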
Test run:
scrapy crawl CatchWeather2
A fragment of the results:
[Screenshot: console output from the crawl, showing the scraped item fields]
We already have the data we want.
Create the database table:
CREATE TABLE `yunweiApp_weather` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `weatherDate` varchar(10) DEFAULT NULL,
  `weatherDate2` varchar(10) NOT NULL,
  `weatherWea` varchar(10) NOT NULL,
  `weatherTem1` varchar(10) NOT NULL,
  `weatherTem2` varchar(10) NOT NULL,
  `weatherWin` varchar(10) NOT NULL,
  `updateTime` datetime NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=15 DEFAULT CHARSET=utf8;
Create the pipeline (pipelines.py):
import MySQLdb
import datetime

DEBUG = True

if DEBUG:
    dbuser = 'lihuipeng'
    dbpass = 'lihuipeng'
    dbname = 'game_main'
    dbhost = '192.168.1.100'
    dbport = '3306'
else:
    dbuser = 'root'
    dbpass = 'lihuipeng'
    dbname = 'game_main'
    dbhost = '127.0.0.1'
    dbport = '3306'

class MySQLStorePipeline(object):
    def __init__(self):
        # note: dbport is defined above but not passed to connect(),
        # so the connection uses MySQL's default port 3306
        self.conn = MySQLdb.connect(user=dbuser, passwd=dbpass, db=dbname,
                                    host=dbhost, charset="utf8", use_unicode=True)
        self.cursor = self.conn.cursor()
        # empty the table first, so each crawl stores a fresh forecast
        self.cursor.execute("truncate table yunweiApp_weather;")
        self.conn.commit()

    def process_item(self, item, spider):
        curTime = datetime.datetime.now()
        try:
            self.cursor.execute(
                """INSERT INTO yunweiApp_weather
                   (weatherDate, weatherDate2, weatherWea, weatherTem1,
                    weatherTem2, weatherWin, updateTime)
                   VALUES (%s, %s, %s, %s, %s, %s, %s)""",
                (
                    item['weatherDate'][0].encode('utf-8'),
                    item['weatherDate2'][0].encode('utf-8'),
                    item['weatherWea'][0].encode('utf-8'),
                    item['weatherTem1'][0].encode('utf-8'),
                    item['weatherTem2'][0].encode('utf-8'),
                    item['weatherWin'][0].encode('utf-8'),
                    curTime,
                )
            )
            self.conn.commit()
        except MySQLdb.Error, e:  # Python 2 syntax, matching the MySQLdb driver
            print "Error %d: %s" % (e.args[0], e.args[1])
        return item
Modify settings.py to enable the pipeline:
ITEM_PIPELINES = {
    #'weather2.pipelines.Weather2Pipeline': 300,
    'weather2.pipelines.MySQLStorePipeline': 400,
}
The number after each pipeline is its order value; any integer in the 0-1000 range works, and items pass through pipelines from lower values to higher ones.
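For example, if the commented-out Weather2Pipeline were re-enabled, every item would flow through both pipelines in ascending order of these values:

ITEM_PIPELINES = {
    'weather2.pipelines.Weather2Pipeline': 300,    # runs first
    'weather2.pipelines.MySQLStorePipeline': 400,  # runs second
}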
Re-run the test:
scrapy crawl CatchWeather2
Result:
[Screenshot: result of the second run]
And here it is, casually displayed in the ops backend:
[Screenshot: the weather data shown in the ops backend]
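You can also verify the rows directly in MySQL; a minimal check against the table created above:

SELECT weatherDate, weatherWea, weatherTem1, weatherTem2, weatherWin, updateTime
FROM yunweiApp_weather
ORDER BY id;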
All done~~
This article is from the "Ops Notes" blog; please keep this attribution: http://lihuipeng.blog.51cto.com/3064864/1711852
Scrapy + MySQL: crawling weather forecasts into the database