Learning Scrapy notes (6) - Scrapy processes JSON APIs and AJAX pages
Abstract: This article introduces how to use Scrapy to process JSON APIs and AJAX pages.
Sometimes you will find that the data you want to crawl is not in the page's HTML source. For example, open http://localhost:9312/static/ in a browser, right-click a blank area, and choose View Page Source: the content you see rendered does not appear there, because it is loaded dynamically by JavaScript (AJAX) from a JSON API. In that case it is usually easier to crawl the JSON API directly, so point start_urls at it:
start_urls = ('http://web:9312/properties/api.json',)
If you need to log in before you can reach the JSON API, use the start_requests() method (see Learning Scrapy notes (5) - Scrapy logs in to websites).
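As a rough sketch of that login-then-crawl flow, the spider below logs in first and only then requests the API; the login URL, form field names, and credentials are placeholders for illustration, not taken from this article:

import scrapy
from scrapy.http import FormRequest, Request


class ApiSpider(scrapy.Spider):
    name = "api"  # hypothetical spider name

    def start_requests(self):
        # Placeholder login endpoint and form fields; adjust them to the real site.
        return [FormRequest("http://web:9312/dynamic/login",
                            formdata={"user": "user", "pass": "pass"},
                            callback=self.after_login)]

    def after_login(self, response):
        # Once logged in, request the JSON API used in this article.
        yield Request("http://web:9312/properties/api.json", callback=self.parse)

    def parse(self, response):
        pass  # see the parse() implementation below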
- Modify the parse function
import json

from scrapy.http import Request


def parse(self, response):
    base_url = "http://web:9312/properties/"
    js = json.loads(response.body)
    for item in js:
        id = item["id"]
        url = base_url + "property_%06d.html" % id  # build the full detail-page URL from the id
        yield Request(url, callback=self.parse_item)
The js variable above is a list, and each of its elements represents one entry. You can verify this with the scrapy shell:
scrapy shell http://web:9312/properties/api.json
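Inside the shell, a quick check along these lines confirms the structure; the id and title fields are the ones used by the code in this article, and the exact values depend on your data:

>>> import json
>>> js = json.loads(response.body)
>>> type(js)                      # a list, one element per entry
>>> len(js)                       # number of entries returned by the API
>>> js[0]["id"], js[0]["title"]   # the fields used above and below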
Title = item ["title"] yield Request (url, meta = {"title": title}, callback = self. parse_item) # The meta variable is a dictionary used to pass data to the callback function.
In the parse_item() function, you can then read this value from response.meta and feed it to the item loader:
l.add_value('title', response.meta['title'], MapCompose(unicode.strip, unicode.title))
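Putting the pieces together, parse_item() might look roughly like the sketch below; PropertiesItem and the commented-out XPath are illustrative assumptions, and only the title line comes from this article:

from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose


def parse_item(self, response):
    # PropertiesItem is a hypothetical Item class defined in items.py.
    l = ItemLoader(item=PropertiesItem(), response=response)
    # Reuse the title passed through meta instead of re-extracting it from the HTML.
    l.add_value('title', response.meta['title'],
                MapCompose(unicode.strip, unicode.title))
    # Other fields would still come from the page itself, for example:
    # l.add_xpath('price', '//*[@itemprop="price"]/text()',
    #             MapCompose(lambda s: float(s.replace(',', ''))))
    return l.load_item()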