Multi-threaded web crawler using Python

Python supports multithreading, mainly through the thread and threading modules. This article shows how to implement a multi-threaded web crawler in Python. There are two ways to use a Thread: one is to create a function to be executed by the thread and pass that function into a Thread object; the other is to inherit from Thread directly, create a new subclass, and put the code to be executed by the thread into that subclass.
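Both ways can be sketched as follows (a minimal Python 2 sketch; the work function and the Worker class are placeholder names, not part of the crawler below):

import threading

def work(name):
    print 'worker %s running' % name

# Way 1: pass a target function into a Thread object
t1 = threading.Thread(target=work, args=('t1',))

# Way 2: subclass Thread and put the code to run into run()
class Worker(threading.Thread):
    def __init__(self, name):
        threading.Thread.__init__(self)
        self.workername = name

    def run(self):
        print 'worker %s running' % self.workername

t2 = Worker('t2')

t1.start()
t2.start()
t1.join()
t2.join()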

The crawler below uses multiple threads and a lock mechanism to crawl web pages with a breadth-first algorithm.

Let me briefly introduce my implementation idea:

For a web crawler that downloads pages in breadth-first order, the procedure looks like this (a simple single-threaded sketch of the loop follows the list):

1. Download the first webpage from the given portal URL

2. Extract all new webpage addresses from the first webpage and put them in the download list.

3. Download all new webpages from the download list.

4. From all newly downloaded webpages, find the addresses that have not yet been downloaded and update the download list.

5. Repeat Steps 3 and 4 until the updated download list is empty.
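Before adding threads, these steps can be written as a plain single-threaded loop (a minimal sketch; the get_links helper and its crude href regex are illustrative, not the article's code):

import urllib
import re

def get_links(html):
    # crude link extraction; a real crawler would use a proper HTML parser
    return re.findall(r'href="(http[^"]+)"', html)

def bfs_crawl(start_url, max_depth):
    queue = [start_url]        # download list for the current depth
    seen = set([start_url])    # addresses already queued or downloaded
    for _ in range(max_depth):
        if not queue:
            break
        next_queue = []
        for url in queue:
            try:
                html = urllib.urlopen(url).read()
            except IOError:
                continue
            for link in get_links(html):
                if link not in seen:
                    seen.add(link)
                    next_queue.append(link)
        queue = next_queue     # repeat with the newly found addresses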

The Python code is as follows:

#!/usr/bin/env python
# coding=utf-8
import threading
import urllib
import re
import time

g_mutex = threading.Condition()   # lock protecting the shared global lists
g_pages = []      # downloaded pages, from which new URL links are parsed
g_queueURL = []   # list of URL links awaiting crawling
g_existURL = []   # list of URL links that have been crawled
g_failedURL = []  # list of URL links whose download failed
g_totalcount = 0  # number of downloaded pages

class Crawler:
    def __init__(self, crawlername, url, threadnum):
        self.crawlername = crawlername
        self.url = url
        self.threadnum = threadnum
        self.threadpool = []
        self.logfile = file("log.txt", 'w')

    def craw(self):
        global g_queueURL
        g_queueURL.append(self.url)
        depth = 0
        print self.crawlername + " start..."
        while len(g_queueURL) != 0:
            depth += 1
            print 'searching depth ', depth, '...\n\n'
            self.logfile.write("URL:" + g_queueURL[0] + "........")
            self.downloadAll()
            self.updateQueueURL()
            content = '\n>>>Depth ' + str(depth) + ':\n'
            self.logfile.write(content)
            i = 0
            while i < len(g_queueURL):
                content = str(g_totalcount + i) + '->' + g_queueURL[i] + '\n'
                self.logfile.write(content)
                i += 1
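The listing above ends inside the Crawler class, just after craw. Below is a minimal sketch, in the same Python 2 style, of how the remaining pieces might look: the rest of the Crawler methods (downloadAll, download, updateQueueURL) and a CrawlerThread worker that updates the shared lists under g_mutex. The method bodies, the CrawlerThread class, the link-extraction regex, and the usage lines at the end are illustrative assumptions, not the article's original code.

    def downloadAll(self):
        # download every URL at the current depth, threadnum at a time
        global g_queueURL
        global g_totalcount
        i = 0
        while i < len(g_queueURL):
            j = 0
            while j < self.threadnum and i + j < len(g_queueURL):
                g_totalcount += 1
                self.download(g_queueURL[i + j], str(g_totalcount) + '.html')
                j += 1
            for thread in self.threadpool:
                thread.join()
            self.threadpool = []
            i += j
        g_queueURL = []

    def download(self, url, filename):
        # start one worker thread for the given URL
        cthread = CrawlerThread(url, filename)
        self.threadpool.append(cthread)
        cthread.start()

    def updateQueueURL(self):
        # rebuild the queue from links found in the pages just downloaded
        global g_queueURL
        global g_pages
        newURLs = []
        for page in g_pages:
            newURLs += re.findall(r'href="(http[^"]+)"', page)
        g_pages = []
        g_queueURL = []
        for u in newURLs:
            if u not in g_existURL and u not in g_failedURL and u not in g_queueURL:
                g_queueURL.append(u)

class CrawlerThread(threading.Thread):
    def __init__(self, url, filename):
        threading.Thread.__init__(self)
        self.url = url
        self.filename = filename

    def run(self):
        global g_pages, g_existURL, g_failedURL
        try:
            page = urllib.urlopen(self.url).read()
            open(self.filename, 'w').write(page)
        except Exception:
            # record the failure under the lock and give up on this URL
            g_mutex.acquire()
            g_failedURL.append(self.url)
            g_mutex.release()
            return
        # update the shared lists under the lock
        g_mutex.acquire()
        g_pages.append(page)
        g_existURL.append(self.url)
        g_mutex.release()

# hypothetical usage; the crawler name, entry URL, and thread count are placeholders
crawler = Crawler("crawler1", "http://www.example.com", 10)
crawler.craw()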

That is a multi-threaded web crawler written in Python. I hope you find it useful.
