Python Development: A Tool for Automatically Submitting Website Links to Baidu

Source: Internet
Author: User

My website has quite a lot of pages, so one idle evening I wrote a small crawler in Python that collects the site's URLs and submits them to Baidu automatically. The idea behind the site-submission workflow is simple, and so is the code; since it was written in a rush, some bugs are inevitable. Deployed on a server as a scheduled task, it can crawl and submit on a regular basis.
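For example, a single crontab line is enough to schedule the job. This is only a minimal sketch: the paths below are hypothetical and stand for whatever wrapper script you deploy for the two steps (crawl, then submit):

# Run the crawl-and-submit job every night at 02:00 (hypothetical paths).
0 2 * * * /opt/baidu-push/run.sh >> /var/log/baidu-push.log 2>&1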

import os
import re
import shutil
import subprocess

# File types the crawler should not download.
REJECT_FILETYPES = 'rar,7z,css,js,jpg,jpeg,gif,bmp,png,swf,exe'


def get_info(webaddress):
    """Mirror the given site with wget and collect the crawled URLs."""
    # Build the URL from the domain the user typed in.
    url = 'http://' + webaddress + '/'
    print('getting>>>>> ' + url)

    # wget mirrors the site into a folder named after the domain under the
    # current directory; delete any leftover copy first, or the crawl fails.
    websitefilepath = os.path.join(os.path.abspath('.'), webaddress)
    if os.path.exists(websitefilepath):
        shutil.rmtree(websitefilepath)

    # Temporary file that receives wget's log output (-o) during the crawl.
    outputfilepath = os.path.join(os.path.abspath('.'), 'output.txt')
    command = ('wget -r -m -nv --reject=' + REJECT_FILETYPES
               + ' -o ' + outputfilepath + ' ' + url)
    subprocess.call(command, shell=True)  # crawl the site with wget

    with open(outputfilepath) as fobj:
        allinfo = fobj.read()

    # Every fetched page appears quoted in the log; pull the quoted
    # strings out with a regular expression.
    target_urls = re.compile(r'\".*?\"', re.DOTALL).findall(allinfo)
    print(target_urls)

    # result.txt stores the harvested entries, one per line.
    with open('result.txt', 'w') as fobj1:
        for target in target_urls:
            if len(target[1:-1]) < 70:  # skip matches longer than 70 chars
                print(target[1:-1], file=fobj1)
            else:
                print('NO')

    if os.path.exists(outputfilepath):  # remove the temporary log file
        os.remove(outputfilepath)


if __name__ == '__main__':
    webaddress = input('Input the website address (without "http://") > ')
    get_info(webaddress)
    print('Well done.')
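With -nv, wget logs one line per fetched page and quotes the name of the local file it saved, so the regex simply harvests every quoted string from the log, and the 70-character cap is a crude filter against spurious matches. Note that those quoted entries are mirrored local paths (e.g. starting with the domain) rather than full URLs, so you may need to prepend http:// before submitting them to Baidu.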

Then go to the active submission section of Baidu Webmaster Tools, find the push API interface for your site, and submit the collected data to it.
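Here is a minimal sketch of that step, assuming Baidu's active-push endpoint at data.zz.baidu.com/urls; the site and token query parameters are placeholders to be replaced with the values shown in your own Baidu webmaster account:

import requests

# Placeholder endpoint: substitute your own site and token from Baidu Webmaster Tools.
BAIDU_PUSH_API = 'http://data.zz.baidu.com/urls?site=www.example.com&token=YOUR_TOKEN'

def submit_to_baidu(result_file='result.txt'):
    # Baidu expects a plain-text POST body with one URL per line,
    # which is the format result.txt was written in.
    with open(result_file) as f:
        urls = f.read()
    resp = requests.post(BAIDU_PUSH_API,
                         data=urls.encode('utf-8'),
                         headers={'Content-Type': 'text/plain'})
    # The response reports how many URLs were accepted and the remaining daily quota.
    print(resp.status_code, resp.text)

if __name__ == '__main__':
    submit_to_baidu()

Running this after the crawler finishes pushes everything in result.txt to Baidu in a single request.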
