My website recently moved from HTTPS to HTTP, so the URLs changed and the old URLs were 301-redirected, which was quite a hassle. The new URLs then had to be submitted on the Baidu Webmaster platform, and whether you use active push or manual submission, you first need a complete list of the site's links. Collecting them by hand is tedious and inefficient, so I wrote a script to crawl all of the site's links and export them. This article shows how to do that with Python 3.
Two prerequisites: first, the site needs a sitemap.xml file with a known address; second, I am using Python 3 here. If your environment is Python 2 you will need to adjust the code, because Python 2 and Python 3 differ in quite a few places.
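For example, the urllib calls used below only exist under these names in Python 3; the rough Python 2 equivalent is shown here for comparison only:

# Python 3 (used in this article)
import urllib.request
html = urllib.request.urlopen('http://www.ranzhi.org/sitemap.xml').read()

# Python 2 equivalent (for reference only)
# import urllib2
# html = urllib2.urlopen('http://www.ranzhi.org/sitemap.xml').read()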
Here is the Python 3 code; just replace the link addresses inside with your own URLs:
# coding=utf-8
import urllib.request
import re

url = 'http://www.ranzhi.org/sitemap.xml'

# Download the sitemap and decode it as UTF-8
html = urllib.request.urlopen(url).read()
html = html.decode('utf-8')

# Match every page URL in the sitemap
r = re.compile(r'(http://www.ranzhi.org.*?\.html)')
big = re.findall(r, html)

# Print each link and append it to xml.txt, one URL per line
with open('xml.txt', 'a') as op_xml_txt:
    for i in big:
        print(i)
        op_xml_txt.write('%s\n' % i)
Running the script prints each link to the console and writes them, one per line, into xml.txt.
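The regex above works because every page URL on this site ends in .html. If you prefer not to rely on a regex, here is a minimal alternative sketch that parses the sitemap with the standard library's XML parser, assuming the sitemap follows the standard sitemap.org schema with <loc> elements:

# coding=utf-8
import urllib.request
import xml.etree.ElementTree as ET

url = 'http://www.ranzhi.org/sitemap.xml'
root = ET.fromstring(urllib.request.urlopen(url).read())

# Standard sitemaps wrap each page URL in a <loc> element under this namespace
ns = {'sm': 'http://www.sitemaps.org/schemas/sitemap/0.9'}
with open('xml.txt', 'a') as f:
    for loc in root.findall('.//sm:loc', ns):
        print(loc.text)
        f.write('%s\n' % loc.text)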
With the links exported to a txt file, manually submitting them on the Baidu Webmaster platform becomes much more convenient. Of course, we can also use the faster active push mode. Since my site is built with PHP + MySQL, I use a PHP script to take the links crawled above and push them to Baidu actively, which further shortens the time it takes Baidu's crawler to pick up the pages.
The push script needs two things: your site's active push API address, which you can obtain from the Baidu Webmaster platform, and the page URLs to push, which is where the whole-site links we just crawled come in. Put the link addresses into an array, run the PHP script, and you are done: one-click submission is efficient and convenient, shortens the crawler's crawl time, and helps get the site's pages indexed.
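My push script is written in PHP, but for readers who want to stay in Python, here is a minimal sketch of the same active push request, assuming Baidu's standard link-submission endpoint; the site and token values are placeholders you would replace with your own from the Baidu Webmaster platform:

# coding=utf-8
import urllib.request

# Placeholder values: replace site and token with your own from the Baidu Webmaster platform
api = 'http://data.zz.baidu.com/urls?site=http://www.ranzhi.org&token=YOUR_TOKEN'

# Read the links exported by the crawl script, one URL per line
with open('xml.txt', 'r') as f:
    urls = f.read().strip()

# Baidu's active push expects the URLs as a plain-text POST body
req = urllib.request.Request(
    api,
    data=urls.encode('utf-8'),
    headers={'Content-Type': 'text/plain'},
)
print(urllib.request.urlopen(req).read().decode('utf-8'))

The response is a small JSON document reporting how many URLs were accepted, which makes it easy to confirm the push worked.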
In day-to-day SEO or server operations we often automate repetitive work and simplify complex tasks to improve efficiency. If you have run into similar problems in your own work, feel free to share and discuss them.