Multi-threaded web crawler based on Python and multi-threading

Source: Internet
Author: User

Generally, there are two ways to use a Thread. One is to create a function to be executed by the thread and pass that function to a Thread object to run; the other is to inherit from Thread directly, create a new class, and put the code the thread should execute into that class. Both approaches are sketched below.
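
Here is a minimal sketch of both approaches (written in Python 3 syntax; fetch and FetchThread are illustrative names and are not part of the crawler listed later):

import threading

def fetch(url):
    # Illustrative worker function.
    print("fetching", url)

# Way 1: pass the function to a Thread object as its target.
t1 = threading.Thread(target=fetch, args=("http://example.com",))
t1.start()
t1.join()

# Way 2: subclass threading.Thread and put the code in run().
class FetchThread(threading.Thread):
    def __init__(self, url):
        threading.Thread.__init__(self)
        self.url = url

    def run(self):
        print("fetching", self.url)

t2 = FetchThread("http://example.com")
t2.start()
t2.join()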

The goal is to implement a multi-threaded web crawler, using multiple threads and a lock mechanism to crawl pages with a breadth-first algorithm.
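
As a rough illustration of the lock mechanism (the full listing below uses a threading.Condition object as its mutex; this sketch uses a plain threading.Lock, and g_lock and record_page are hypothetical names):

import threading

g_lock = threading.Lock()   # protects the shared lists below
g_existURL = []             # URLs that have already been crawled
g_pages = []                # downloaded page contents

def record_page(url, html):
    # Every thread updates the shared lists only while holding the lock,
    # so concurrent crawler threads cannot corrupt them.
    g_lock.acquire()
    try:
        g_existURL.append(url)
        g_pages.append(html)
    finally:
        g_lock.release()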

Let me briefly introduce my implementation idea:

For a web crawler that downloads pages in breadth-first order, the procedure looks like this (a minimal sketch of the loop follows the list):

1. Download the first web page from the given entry URL.

2. Extract all new web page addresses from the first page and put them in the download list.

3. Download all new web pages from the download list.

4. Collect the addresses that have not yet been downloaded from all of the new pages and update the download list.

5. Repeat steps 3 and 4 until the updated download list is empty.
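
As promised above, here is a minimal single-threaded sketch of this loop (Python 3; download and extract_links stand in for functions that fetch a page and parse its links, and are not part of the listing below):

def bfs_crawl(entry_url, download, extract_links):
    queue = [entry_url]          # the download list
    seen = set()                 # addresses already downloaded
    while queue:                 # step 5: repeat until the list is empty
        pages = []
        for url in queue:        # step 3: download everything in the list
            seen.add(url)
            pages.append(download(url))
        new_urls = set()
        for page in pages:       # step 4: extract the new addresses
            new_urls.update(extract_links(page))
        queue = list(new_urls - seen)   # keep only undownloaded addresses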

The Python code is as follows:

#!/usr/bin/env python
# coding=utf-8
import threading
import urllib
import re
import time

g_mutex = threading.Condition()   # lock protecting the shared lists below
g_pages = []                      # page contents from which new URLs are parsed
g_queueURL = []                   # URLs waiting to be crawled
g_existURL = []                   # URLs that have already been crawled
g_failedURL = []                  # URLs that failed to download
g_totalcount = 0                  # number of pages downloaded so far


class Crawler:
    def __init__(self, crawlername, url, threadnum):
        self.crawlername = crawlername
        self.url = url
        self.threadnum = threadnum
        self.threadpool = []
        self.logfile = file("log.txt", 'w')

    def craw(self):
        global g_queueURL
        g_queueURL.append(self.url)
        depth = 0
        print self.crawlername + " start..."
        while len(g_queueURL) != 0:
            depth += 1
            print 'Searching depth ', depth, '...\n\n'
            self.logfile.write("URL:" + g_queueURL[0] + "........")
            self.downloadAll()
            self.updateQueueURL()
            content = '\n>>> Depth ' + str(depth) + ':\n'
            self.logfile.write(content)
            i = 0
            while i < len(g_queueURL):
                content = str(g_totalcount + i) + '->' + g_queueURL[i] + '\n'
                self.logfile.write(content)
                i += 1

    def downloadAll(self):
        # Download every URL in the queue, threadnum pages at a time.
        global g_queueURL
        global g_totalcount
        i = 0
        while i < len(g_queueURL):
            j = 0
            while j < self.threadnum and i + j < len(g_queueURL):
                g_totalcount += 1
                threadresult = self.download(g_queueURL[i + j], str(g_totalcount) + '.html', j)
                if threadresult != None:
                    print 'Thread started:', i + j, '--File number =', g_totalcount
                j += 1
            i += j
            for thread in self.threadpool:
                thread.join(30)
            self.threadpool = []
        g_queueURL = []

    def download(self, url, filename, tid):
        crawthread = CrawlerThread(url, filename, tid)
        self.threadpool.append(crawthread)
        crawthread.start()

    def updateQueueURL(self):
        # Replace the queue with the URLs found on this round's pages,
        # minus those that have already been crawled.
        global g_queueURL
        global g_existURL
        newUrlList = []
        for content in g_pages:
            newUrlList += self.getUrl(content)
        g_queueURL = list(set(newUrlList) - set(g_existURL))

    def getUrl(self, content):
        reg = r'"(http://.+?)"'
        regob = re.compile(reg, re.DOTALL)
        urllist = regob.findall(content)
        return urllist


class CrawlerThread(threading.Thread):
    def __init__(self, url, filename, tid):
        threading.Thread.__init__(self)
        self.url = url
        self.filename = filename
        self.tid = tid

    def run(self):
        # Fetch one URL, save it to a file, and record the result under the lock.
        global g_mutex
        global g_failedURL
        global g_queueURL
        try:
            page = urllib.urlopen(self.url)
            html = page.read()
            fout = file(self.filename, 'w')
            fout.write(html)
            fout.close()
        except Exception, e:
            g_mutex.acquire()
            g_existURL.append(self.url)
            g_failedURL.append(self.url)
            g_mutex.release()
            print 'Failed downloading and saving', self.url
            print e
            return None
        g_mutex.acquire()
        g_pages.append(html)
        g_existURL.append(self.url)
        g_mutex.release()


if __name__ == "__main__":
    url = raw_input("Enter the entry url:\n")
    threadnum = int(raw_input("Set thread count:"))
    crawlername = ""
    crawler = Crawler(crawlername, url, threadnum)
    crawler.craw()
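
Note that the listing above is Python 2 code (print statements, urllib.urlopen, file() and raw_input). On Python 3 the worker thread would look roughly like the sketch below, which uses urllib.request, open() and a threading.Lock in place of the Condition object; it is an approximation of the same idea, not the author's code:

import threading
import urllib.request

g_mutex = threading.Lock()
g_pages = []
g_existURL = []
g_failedURL = []

class CrawlerThread(threading.Thread):
    # Python 3 sketch of the worker thread: fetch one URL and save it to a file.
    def __init__(self, url, filename):
        threading.Thread.__init__(self)
        self.url = url
        self.filename = filename

    def run(self):
        try:
            html = urllib.request.urlopen(self.url).read()
            with open(self.filename, 'wb') as fout:
                fout.write(html)
        except Exception as e:
            with g_mutex:                      # update shared state under the lock
                g_existURL.append(self.url)
                g_failedURL.append(self.url)
            print('Failed downloading and saving', self.url, e)
            return
        with g_mutex:
            g_pages.append(html)
            g_existURL.append(self.url)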

The code above is a multi-threaded web crawler written in Python. I hope you find it useful.
