This article shows how to use the Python framework Scrapy to crawl a website's sitemap: the spider fetches sitemap.xml, extracts every URL it lists, and then requests each page for parsing. The details are as follows:
import re
from scrapy.spider import BaseSpider
from scrapy import log
from scrapy.utils.response import body_or_str
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector

class SitemapSpider(BaseSpider):
    name = "SitemapSpider"
    start_urls = ["http://www.domain.com/sitemap.xml"]

    def parse(self, response):
        # Pull every URL out of the sitemap's <loc> elements
        nodename = 'loc'
        text = body_or_str(response)
        r = re.compile(r"(<%s[\s>])(.*?)(</%s>)" % (nodename, nodename), re.DOTALL)
        for match in r.finditer(text):
            url = match.group(2)
            yield Request(url, callback=self.parse_page)

    def parse_page(self, response):
        hxs = HtmlXPathSelector(response)
        # Mock item -- replace Item() with your own scrapy.Item subclass
        blah = Item()
        # Do all your page parsing here, selecting the elements you want
        blah.pText = hxs.select('//p/text()').extract()[0]
        yield blah
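The heart of the spider is the regular expression that captures each URL between `<loc>` and `</loc>` in the sitemap. That extraction step can be tried on its own, without running Scrapy at all. The following is a minimal sketch; the helper name `extract_sitemap_urls` and the sample XML and URLs are made up for illustration:

```python
import re

def extract_sitemap_urls(text, nodename="loc"):
    # Same pattern as the spider: group 2 is whatever sits
    # between the opening <loc ...> tag and the closing </loc>
    pattern = re.compile(r"(<%s[\s>])(.*?)(</%s>)" % (nodename, nodename), re.DOTALL)
    return [match.group(2) for match in pattern.finditer(text)]

# A tiny hand-written sitemap fragment for demonstration
sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.domain.com/page1.html</loc></url>
  <url><loc>http://www.domain.com/page2.html</loc></url>
</urlset>"""

print(extract_sitemap_urls(sitemap))
# ['http://www.domain.com/page1.html', 'http://www.domain.com/page2.html']
```

Note that newer Scrapy versions also ship a built-in `SitemapSpider` class that handles sitemap parsing (including sitemap index files) for you, so the hand-rolled regex above is mainly useful when you want full control over the extraction.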
I hope this article will help you with Python programming.