Multi-threaded crawler based on Thread+queue

Source: Internet
Author: User
Tags: message queue, xpath

Thread is Python's multithreading class; we can use it either by passing a target function to a Thread instance or by subclassing Thread ourselves. Queue is Python's message queue: it lets threads share data safely and removes the manual locking and unlocking that traditional multithreaded code needs for shared data, which greatly simplifies multithreaded programming. With Thread + Queue we can build a multi-threaded crawler on the producer/consumer model. Taking an article forum as an example, the producer is responsible for extracting the article URLs and putting them into the queue. The consumer is the multi-threaded crawler itself: several threads share the queue, and each one repeatedly gets an article URL from it, telling the queue after every get that one more task is done. When the queue is empty, each thread exits and the process ends.
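Before looking at the crawler, here is a minimal, self-contained sketch of that producer/consumer pattern; the names (producer, consumer, worker-N) are illustrative and not part of the crawler below:

import queue
import threading

q = queue.Queue()

def producer():
    # the producer only puts work items into the shared queue
    for n in range(10):
        q.put(n)

def consumer():
    while True:
        try:
            item = q.get(block=False)   # Queue is thread-safe: no manual locking
        except queue.Empty:
            break                       # queue drained, let this thread exit
        print(threading.current_thread().name, "processed", item)
        q.task_done()                   # tell the queue one task is finished

producer()                              # fill the queue first, in the main thread
workers = [threading.Thread(target=consumer, name="worker-%d" % i) for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()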

The producer extracts the article URLs. Because there are relatively few articles, I extract all of the column's article URLs in the main thread and put them into the queue before starting the multi-threaded consumer crawler (the part that parses the article fields). The producer code is as follows:

import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent


class CrawlUrls:
    total_urls = []
    # random User-Agent
    headers = {"User-Agent": UserAgent().random}

    def __init__(self, queue):
        self.queue = queue

    def run(self):
        self.get_urls()
        print(str(self.queue.qsize()) + " urls is put!")

    def get_urls(self, url="http://python.jobbole.com/all-posts/"):
        results = requests.get(url, headers=self.headers, timeout=30)
        soup = BeautifulSoup(results.text, "lxml")
        # every article link on the current list page
        links = soup.find_all("a", class_="archive-title")
        for link in links:
            link = link.attrs["href"]
            self.queue.put(link)          # share the URL with the consumer threads
            self.total_urls.append(link)
        # follow the "next page" link recursively until the last page
        next_urls = soup.select('a[class="next page-numbers"]')
        for next_url in next_urls:
            next_url = next_url.attrs["href"]
            if next_url:
                self.get_urls(next_url)

At this point the article URLs from the 80-odd list pages of the Python column have all been put into the queue, and we just need to start the multi-threaded consumer crawler to parse the URLs in the queue. The consumer code is as follows:

import queue
import threading

import MySQLdb
import requests
from lxml import etree


class ParseUrls(threading.Thread):
    def __init__(self, queue, t_name):
        self.queue = queue
        # one MySQL connection per thread, so threads never share a cursor;
        # MYSQL_HOST, MYSQL_USER, MYSQL_PASSWORD and MYSQL_DBNAME are the
        # connection settings defined elsewhere in the project
        self.conn = MySQLdb.connect(MYSQL_HOST, MYSQL_USER, MYSQL_PASSWORD,
                                    MYSQL_DBNAME, charset="utf8", use_unicode=True)
        self.cursor = self.conn.cursor()
        threading.Thread.__init__(self, name=t_name)

    def run(self):
        self.parse_urls()

    def parse_urls(self):
        while True:
            try:
                url = self.queue.get(block=False)
                self.queue.task_done()          # one more task taken off the queue
                result = requests.get(url=url, timeout=10)
                selector = etree.HTML(result.text)
                title = selector.xpath(r'//*[@class="entry-header"]/h1/text()')
                title = title[0] if title else None
                author = selector.xpath(r'//*[@class="copyright-area"]/a/text()')
                author = author[0] if author else None
                items = dict(title=title, author=author, url=url)
                self.insert_mysql(items)
            except queue.Empty:
                # nothing left to get: this worker is done
                print("Crawl done!")
                break

    def insert_mysql(self, value):
        insert_sql = "INSERT INTO article (title, author, url) VALUES (%s, %s, %s)"
        self.cursor.execute(insert_sql, (value["title"], value["author"], value["url"]))
        self.conn.commit()
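The insert_mysql() method assumes an article table already exists in the database. A minimal sketch of creating it is shown below; the column types and sizes are my assumption, not from the original article, and the connection settings are the same placeholders used above:

import MySQLdb

# assumed schema for the `article` table written by insert_mysql();
# the column sizes are illustrative only
conn = MySQLdb.connect(MYSQL_HOST, MYSQL_USER, MYSQL_PASSWORD, MYSQL_DBNAME,
                       charset="utf8", use_unicode=True)
cursor = conn.cursor()
cursor.execute(
    "CREATE TABLE IF NOT EXISTS article ("
    " id INT AUTO_INCREMENT PRIMARY KEY,"
    " title VARCHAR(255),"
    " author VARCHAR(100),"
    " url VARCHAR(255)"
    ")"
)
conn.commit()
conn.close()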

Finally, we only need to start our threads in the main function:

if __name__ == '__main__':
    q = queue.Queue()

    # producer: run in the main thread until every article URL is in the queue
    cw = CrawlUrls(q)
    cw.run()

    # consumers: start the parsing threads, then wait for them all to finish
    threads = []
    thread_nums = 10
    for i in range(0, thread_nums + 1):
        bt = ParseUrls(q, "thread" + str(i))
        threads.append(bt)
    for i in range(0, thread_nums + 1):
        threads[i].start()
    for i in range(0, thread_nums + 1):
        threads[i].join()
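Because each worker calls task_done() after every get(), a possible variant (my sketch, not the article's approach) is to make the workers daemon threads and let the main thread block on q.join() instead of keeping a list of threads to join:

# alternative shutdown sketch (an assumption, not the original article's code)
for i in range(0, thread_nums + 1):
    bt = ParseUrls(q, "thread" + str(i))
    bt.daemon = True   # daemon threads will not keep the process alive on their own
    bt.start()
q.join()               # returns once every put() URL has had a matching task_done()
# note: for this variant to be safe, task_done() should be moved to after
# insert_mysql(), otherwise join() can return while a worker is still parsing
# its last URL and that daemon thread would be killed mid-work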

The above is a simple Thread+Queue-based multi-threaded crawler!
