Multi-threaded web crawler: Python implementation

Source: Internet
Author: User

This article implements a breadth-first web crawler in Python using multiple threads and a lock to synchronize access to shared state.

A web crawler that downloads pages breadth-first works like this (a minimal sketch of the loop follows the list):
1. Download the first page from a given entry URL
2. Extract all new page addresses from that page and put them in the download list
3. Download every page whose address is in the download list
4. From the newly downloaded pages, extract the addresses that have not yet been downloaded and use them to update the download list
5. Repeat steps 3 and 4 until the updated download list is empty, then stop
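
Before the full multi-threaded implementation below, here is a minimal single-threaded sketch of those five steps (written for Python 3). The function name crawl_breadth_first and the variable names are illustrative assumptions and do not appear in the implementation that follows; the link-extraction regex mirrors the one used there.

# Minimal single-threaded sketch of steps 1-5 (Python 3); illustrative names only.
import re
import urllib.request

def crawl_breadth_first(entry_url):
    queue = [entry_url]             # step 1: seed the download list with the entry URL
    downloaded = set()
    while queue:                    # step 5: stop when the download list is empty
        new_links = []
        for url in queue:           # step 3: download every page in the list
            try:
                html = urllib.request.urlopen(url).read().decode('utf-8', 'ignore')
            except Exception:
                downloaded.add(url) # treat failed URLs as visited so they are not retried
                continue
            downloaded.add(url)
            # step 2: extract the addresses found on the page
            new_links += re.findall(r'"(http://.+?)"', html)
        # step 4: keep only addresses that have not been downloaded yet
        queue = list(set(new_links) - downloaded)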

The Python implementation code is as follows:

#!/usr/bin/env python
# coding=utf-8
import threading
import urllib
import re

g_mutex = threading.Condition()
g_pages = []        # downloaded page contents, from which all URL links are parsed
g_queueURL = []     # URLs waiting to be crawled
g_existURL = []     # URLs that have already been crawled
g_failedURL = []    # URLs that failed to download
g_totalcount = 0    # number of pages downloaded so far


class Crawler:
    def __init__(self, crawlername, url, threadnum):
        self.crawlername = crawlername
        self.url = url
        self.threadnum = threadnum
        self.threadpool = []
        self.logfile = open("log.txt", 'w')

    def craw(self):
        global g_queueURL
        g_queueURL.append(self.url)
        depth = 0
        print self.crawlername + " start..."
        while len(g_queueURL) != 0:
            depth += 1
            print 'Searching depth', depth, '...\n'
            self.logfile.write("URL:" + g_queueURL[0] + "........")
            self.downloadAll()
            self.updateQueueURL()
            content = '\n>>>Depth ' + str(depth) + ':\n'
            self.logfile.write(content)
            i = 0
            while i < len(g_queueURL):
                content = str(g_totalcount + i) + '->' + g_queueURL[i] + '\n'
                self.logfile.write(content)
                i += 1

    def downloadAll(self):
        global g_queueURL
        global g_totalcount
        i = 0
        while i < len(g_queueURL):
            j = 0
            # start at most self.threadnum download threads per batch
            while j < self.threadnum and i + j < len(g_queueURL):
                g_totalcount += 1
                threadresult = self.download(g_queueURL[i + j],
                                             str(g_totalcount) + '.html', j)
                if threadresult != None:
                    print 'Thread started:', i + j, '--file number =', g_totalcount
                j += 1
            i += j
            # wait for the current batch of threads to finish
            for thread in self.threadpool:
                thread.join(30)
            self.threadpool = []
        g_queueURL = []

    def download(self, url, filename, tid):
        crawthread = CrawlerThread(url, filename, tid)
        self.threadpool.append(crawthread)
        crawthread.start()
        return crawthread

    def updateQueueURL(self):
        global g_queueURL
        global g_existURL
        newUrlList = []
        for content in g_pages:
            newUrlList += self.getUrl(content)
        # keep only the links that have not been crawled yet
        g_queueURL = list(set(newUrlList) - set(g_existURL))

    def getUrl(self, content):
        reg = r'"(http://.+?)"'
        regob = re.compile(reg, re.DOTALL)
        urllist = regob.findall(content)
        return urllist


class CrawlerThread(threading.Thread):
    def __init__(self, url, filename, tid):
        threading.Thread.__init__(self)
        self.url = url
        self.filename = filename
        self.tid = tid

    def run(self):
        global g_mutex
        global g_failedURL
        global g_queueURL
        try:
            page = urllib.urlopen(self.url)
            html = page.read()
            fout = open(self.filename, 'w')
            fout.write(html)
            fout.close()
        except Exception as e:
            g_mutex.acquire()
            g_existURL.append(self.url)
            g_failedURL.append(self.url)
            g_mutex.release()
            print 'Failed downloading and saving', self.url
            print e
            return None
        # update the shared lists under the lock
        g_mutex.acquire()
        g_pages.append(html)
        g_existURL.append(self.url)
        g_mutex.release()


if __name__ == "__main__":
    url = raw_input("Please enter the entry URL:\n")
    threadnum = int(raw_input("Set the number of threads: "))
    crawlername = "little crawler"
    crawler = Crawler(crawlername, url, threadnum)
    crawler.craw()
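
The implementation above targets Python 2 (urllib.urlopen, raw_input, print statements). As a rough sketch only, not the author's code, the same breadth-first crawl with a lock could be adapted to Python 3 along the following lines; the class name ThreadedCrawler, the max_depth limit, and the timeout values are assumptions introduced for illustration.

# A rough Python 3 sketch of the same idea; illustrative names, not the original code.
import re
import threading
import urllib.request

class ThreadedCrawler:
    def __init__(self, start_url, threadnum, max_depth=3):
        self.queue = [start_url]        # URLs waiting to be crawled
        self.seen = set()               # URLs already crawled
        self.pages = []                 # downloaded page contents
        self.lock = threading.Lock()    # protects the shared lists above
        self.threadnum = threadnum
        self.max_depth = max_depth      # assumed depth limit, not in the original

    def fetch(self, url):
        try:
            html = urllib.request.urlopen(url, timeout=30).read().decode('utf-8', 'ignore')
        except Exception:
            with self.lock:
                self.seen.add(url)      # mark failed URLs as visited, like the original
            return
        with self.lock:                 # same role as g_mutex in the code above
            self.pages.append(html)
            self.seen.add(url)

    def crawl(self):
        depth = 0
        while self.queue and depth < self.max_depth:
            depth += 1
            # download the current list in batches of threadnum threads
            for i in range(0, len(self.queue), self.threadnum):
                threads = [threading.Thread(target=self.fetch, args=(u,))
                           for u in self.queue[i:i + self.threadnum]]
                for t in threads:
                    t.start()
                for t in threads:
                    t.join(30)
            # collect new links and keep only those not yet crawled
            links = set()
            for html in self.pages:
                links.update(re.findall(r'"(http://.+?)"', html))
            self.pages = []
            self.queue = list(links - self.seen)

if __name__ == "__main__":
    ThreadedCrawler(input("Please enter the entry URL:\n"), 4).crawl()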
