Description: I used the Scrapy framework to crawl attraction data from the Tuniu travel site. Since I am just starting to practice, I only crawled four fields for testing: the attraction's name, its location, its opening hours, and its description. The crawl output is saved in JSON format.
Partial data:
Part of the code:
Problems encountered: start_urls cannot have URLs added to it dynamically; this still needs more study. Here I simply threw every URL to be crawled into start_urls. This works, but pre-building the URL list is time-consuming. The other issue was encoding: the JSON data that Scrapy exported always came out as \uXXXX escapes, which has to be fixed in pipelines.py and settings.py, as follows:
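One common way around a static start_urls list is to generate the URLs programmatically and yield them from an overridden start_requests() method. The sketch below builds page URLs from a numeric pattern; the base URL and the "?page=N" query format are assumptions for illustration, not the site's real pagination scheme.

```python
def build_start_urls(base_url, page_count):
    """Generate listing-page URLs from a page-number pattern.

    The '?page=N' format is a hypothetical example; substitute the
    target site's actual pagination scheme.
    """
    return ["%s?page=%d" % (base_url, page)
            for page in range(1, page_count + 1)]

# In a Scrapy spider these URLs would be yielded as requests instead of
# being listed statically in start_urls, for example:
#
#     def start_requests(self):
#         for url in build_start_urls("http://example.com/attractions", 10):
#             yield scrapy.Request(url, callback=self.parse)

print(build_start_urls("http://example.com/attractions", 3))
```

This keeps the URL-construction logic in one place, so changing the page range no longer means regenerating a long hand-written list.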
In pipelines.py, modify the code as follows:
import codecs
import json

class BdlvSpiderPipeline(object):
    def __init__(self):
        self.file = codecs.open('items.json', 'wb', encoding='utf-8')

    def process_item(self, item, spider):
        # ensure_ascii=False writes non-ASCII characters as-is
        # instead of \uXXXX escapes
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item

    def close_spider(self, spider):
        # Scrapy calls close_spider() on the pipeline automatically
        # when the spider finishes
        self.file.close()
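The key is json.dumps(..., ensure_ascii=False): by default the json module escapes every non-ASCII character, which is exactly what produces the \uXXXX output. A quick standalone check (the dictionary is sample data for illustration):

```python
import json

item = {"name": u"故宫", "location": u"北京"}  # sample data for illustration

print(json.dumps(item))                      # escaped \uXXXX form
print(json.dumps(item, ensure_ascii=False))  # characters written as-is
```

The first call shows the escaped output the pipeline was originally producing; the second shows the readable UTF-8 form that ends up in items.json once ensure_ascii=False is set.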
In settings.py, add the following code:
ITEM_PIPELINES = {
    'bdlv_spider.pipelines.BdlvSpiderPipeline': 300,  # 300 is an example priority (valid range 0-1000)
}
where BdlvSpiderPipeline is the class name defined in pipelines.py.
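As an aside, if you export with Scrapy's built-in feed exporter (e.g. scrapy crawl <spider> -o items.json) rather than a custom pipeline, Scrapy 1.2 and later can produce readable UTF-8 output with a single settings.py line, making the custom pipeline unnecessary for this purpose:

```python
# settings.py
# Available in Scrapy 1.2+; makes the feed exporter emit UTF-8
# instead of \uXXXX escapes.
FEED_EXPORT_ENCODING = 'utf-8'
```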