Python supports multithreading, mainly through the thread and threading modules. This article shares how to implement a multi-threaded web crawler in Python. There are two ways to use a Thread: one is to create a function to be executed by the thread and pass that function into a Thread object; the other is to inherit from Thread directly, create a new subclass, and put the code to be run by the thread into that class.
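As a quick illustration of the two styles (a minimal Python 3 sketch, not taken from the crawler code later in this article):

import threading

# Way 1: pass a function to Thread as the target.
def work(name):
    print("worker", name, "running")

t1 = threading.Thread(target=work, args=("A",))

# Way 2: subclass Thread and put the code to run inside run().
class Worker(threading.Thread):
    def __init__(self, name):
        super().__init__()
        self.worker_name = name

    def run(self):
        print("worker", self.worker_name, "running")

t2 = Worker("B")
t1.start()
t2.start()
t1.join()
t2.join()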
The crawler below uses multiple threads plus a lock mechanism to implement a breadth-first web crawler.
Let me briefly introduce the implementation idea.
For a crawler that downloads pages in breadth-first order, the process looks like this (a minimal single-threaded sketch follows the list):
1. Download the first webpage from the given entry URL.
2. Extract all new webpage addresses from that page and put them in the download list.
3. Download all pages in the download list.
4. Find the not-yet-downloaded addresses among the newly downloaded pages and update the download list.
5. Repeat steps 3 and 4 until the updated download list is empty.
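To make these steps concrete before introducing threads, here is a minimal single-threaded sketch of the breadth-first loop in Python 3; the regex-based get_links helper and the example seed URL are placeholders, not the original article's parsing code:

import re
import urllib.request

def get_links(html):
    # Crude link extraction; a real crawler would use an HTML parser.
    return re.findall(r'href="(http[^"]+)"', html)

def crawl_bfs(seed, max_depth=2):
    visited = set()
    queue = [seed]                         # step 1: the download list starts with the entry URL
    for depth in range(max_depth):
        new_queue = []
        for url in queue:                  # step 3: download every page in the list
            try:
                html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
            except Exception:
                continue
            visited.add(url)
            for link in get_links(html):   # steps 2/4: collect addresses not seen before
                if link not in visited and link not in new_queue:
                    new_queue.append(link)
        queue = new_queue                  # step 5: repeat until the list is empty
        if not queue:
            break

crawl_bfs("http://example.com")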
The Python code (written for Python 2) is as follows:
#!/usr/bin/env python
# coding=utf-8
import threading
import urllib
import re
import time

g_mutex = threading.Condition()  # condition/lock protecting the shared lists below
g_pages = []       # downloaded pages from which new url links are parsed
g_queueURL = []    # list of url links waiting to be crawled
g_existURL = []    # list of url links that have already been crawled
g_failedURL = []   # list of url links that failed to download
g_totalcount = 0   # number of pages downloaded so far

class Crawler:
    def __init__(self, crawlername, url, threadnum):
        self.crawlername = crawlername
        self.url = url
        self.threadnum = threadnum
        self.threadpool = []
        self.logfile = file("log.txt", 'w')  # Python 2 file() builtin

    def craw(self):
        global g_queueURL
        g_queueURL.append(self.url)
        depth = 0
        print self.crawlername + " start..."
        while len(g_queueURL) != 0:
            depth += 1
            print 'searching depth ', depth, '...\n\n'
            self.logfile.write("URL:" + g_queueURL[0] + "........")
            self.downloadAll()        # download every page currently queued
            self.updateQueueURL()     # refill the queue with not-yet-crawled links
            content = '\n>>>Depth ' + str(depth) + ':\n'
            self.logfile.write(content)
            i = 0
            while i < len(g_queueURL):
                content = str(g_totalcount + i) + '->' + g_queueURL[i] + '\n'
                self.logfile.write(content)
                i += 1

    def downloadAll(self):
        global g_queueURL
        global g_totalcount
        i = 0
        while i < len(g_queueURL):
            # ... (the body of downloadAll and the remaining methods are
            # truncated in the original post)
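The listing above targets Python 2 and is cut off partway through downloadAll. As a rough, self-contained Python 3 sketch of the same idea (breadth-first traversal, a batch of worker threads per depth level, and a lock protecting shared state), assuming a simple regex-based link extractor rather than the author's exact parsing code:

import re
import threading
import urllib.request

class SimpleCrawler:
    def __init__(self, seed_url, threadnum=4, max_depth=2):
        self.seed_url = seed_url
        self.threadnum = threadnum
        self.max_depth = max_depth
        self.lock = threading.Lock()   # protects the shared state below
        self.queue = [seed_url]        # URLs waiting to be crawled at the current depth
        self.seen = {seed_url}         # URLs already queued or crawled
        self.failed = []               # URLs that failed to download
        self.totalcount = 0            # number of pages downloaded

    def crawl(self):
        for depth in range(self.max_depth):
            if not self.queue:
                break
            print("searching depth", depth + 1, "- queue size", len(self.queue))
            next_queue = []
            threads = []
            # Split the current queue among a batch of worker threads.
            for i in range(self.threadnum):
                chunk = self.queue[i::self.threadnum]
                t = threading.Thread(target=self._worker, args=(chunk, next_queue))
                t.start()
                threads.append(t)
            for t in threads:
                t.join()
            self.queue = next_queue

    def _worker(self, urls, next_queue):
        for url in urls:
            try:
                html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
            except Exception:
                with self.lock:
                    self.failed.append(url)
                continue
            links = re.findall(r'href="(http[^"]+)"', html)
            with self.lock:
                self.totalcount += 1
                for link in links:
                    if link not in self.seen:
                        self.seen.add(link)
                        next_queue.append(link)

if __name__ == "__main__":
    crawler = SimpleCrawler("http://example.com", threadnum=4, max_depth=2)
    crawler.crawl()
    print("downloaded", crawler.totalcount, "pages,", len(crawler.failed), "failures")

Each depth level is handled by a fresh batch of worker threads, and the lock guards the shared seen/failed/totalcount state, mirroring the role of the g_mutex condition in the original listing.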
The above code implements a multi-threaded web crawler in Python. I hope you find it useful.