1. When crawling, a site will sometimes block us (for example, by banning our IP) and return a response with status code 403. In that case we want to be able to raise the CloseSpider exception and stop the crawl.
2. However, as the Scrapy documentation notes, Scrapy's default behavior is to filter out problematic HTTP responses (that is, any response whose status code is outside the 200-300 range). A 403 response is therefore dropped before our callback ever sees it, so even a check like response.status == 403 has no effect: only responses with a status in the 200-300 range get processed.
3. If we want to capture or handle a 403 (or any other status such as 404 or 500), we add 403 to the spider class's handle_httpstatus_list attribute, like this:
from scrapy.spiders import CrawlSpider

class MySpider(CrawlSpider):
    handle_httpstatus_list = [403]
Alternatively, add 403 to the HTTPERROR_ALLOWED_CODES setting. That is, add HTTPERROR_ALLOWED_CODES = [403] to settings.py; its default value is an empty list []. See:
http://doc.scrapy.org/en/1.0/topics/spider-middleware.html#httperror-allowed-codes
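For reference, a minimal sketch of the settings.py variant; unlike the per-spider attribute above, this applies to every spider in the project:

# settings.py
# Let 403 responses pass through the HttpError spider middleware
# for all spiders in the project (the default is an empty list).
HTTPERROR_ALLOWED_CODES = [403]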
4. With handle_httpstatus_list or HTTPERROR_ALLOWED_CODES set, the 403 response reaches our callback, where we can check response.status == 403, raise the CloseSpider exception, and end the crawl.
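Putting it all together, here is a minimal, runnable sketch. The spider name and start URL are hypothetical placeholders, and a plain scrapy.Spider is used instead of CrawlSpider so that parse() can be overridden safely (CrawlSpider reserves the parse method for its own rule handling):

import scrapy
from scrapy.exceptions import CloseSpider

class MySpider(scrapy.Spider):
    name = 'myspider'                     # hypothetical spider name
    start_urls = ['http://example.com/']  # hypothetical start URL
    handle_httpstatus_list = [403]        # let 403 responses reach parse()

    def parse(self, response):
        # Without handle_httpstatus_list (or HTTPERROR_ALLOWED_CODES),
        # a 403 would be filtered out and this branch would never run.
        if response.status == 403:
            raise CloseSpider('received 403, probably banned')
        # ... normal parsing of successful responses goes here ...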