Using multithreaded queues (Queue) in Ruby to download blog posts and save them to local files


Ruby: complete code for a multithreaded download of blog posts to local files

The code is as follows:

#encoding: utf-8
require 'net/http'
require 'thread'
require 'open-uri'
require 'nokogiri'
require 'date'

$queue = Queue.new
# Article list pages
page_nums = 8
page_nums.times do |num|
  $queue.push("http://www.cnblogs.com/hongfei/default.html?page=" + num.to_s)
end

threads = []

# Get webpage source code
def get_html(url)
  html = ""
  # Kernel#open handles URLs via open-uri; on Ruby 3.0+ use URI.open(url) instead
  open(url) do |f|
    html = f.read
  end
  return html
end

def fetch_links(html)
  doc = Nokogiri::HTML(html)
  # Extract article links
  doc.xpath('//div[@class="postTitle"]/a').each do |link|
    href = link['href'].to_s
    if href.include?("html")
      # Add the article URL to the work queue
      $queue.push(link['href'])
    end
  end
end

def save_to(save_to, content)
  f = File.new("./" + save_to + ".html", "w+")
  f.write(content)
  f.close()
end

# Program start time
$total_time_begin = Time.now.to_i

# Spawn worker threads
threadNums = 10
threadNums.times do
  threads << Thread.new do
    until $queue.empty?
      # Non-blocking pop: nil if another thread emptied the queue first
      url = $queue.pop(true) rescue nil
      next unless url
      html = get_html(url)
      fetch_links(html)
      if !url.include?("?page")
        title = Nokogiri::HTML(html).css('title').text
        puts "[" + Time.now.strftime("%H:%M:%S") + "] " + title + " " + url
        # Strip slashes so the title is a valid file name; the pages/ directory must already exist
        save_to("pages/" + title.gsub(/\//, ""), html) if url.include?(".html")
      end
    end
  end
end
threads.each { |t| t.join }

# Program end time
$total_time_end = Time.now.to_i
puts "number of threads: " + threadNums.to_s
puts "execution time: " + ($total_time_end - $total_time_begin).to_s + " seconds"
Multithreading part explained

The code is as follows:

$queue = Queue.new
# Article list pages
page_nums = 8
page_nums.times do |num|
  $queue.push("http://www.cnblogs.com/hongfei/default.html?page=" + num.to_s)
end
First, declare a Queue and push the article list pages onto it, so that article links can later be extracted from those list pages. The queue is declared as a global variable ($queue) so that it can be accessed inside the functions.

The blog's article list has a total of 8 pages, so page_nums needs to be set to 8.
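
As a side note, Queue is thread-safe: several worker threads can push and pop concurrently without any extra locking. A minimal sketch of that pattern, using placeholder URLs:

queue = Queue.new
3.times { |i| queue.push("http://example.com/page" + i.to_s) }

workers = 2.times.map do
  Thread.new do
    until queue.empty?
      url = queue.pop(true) rescue nil  # non-blocking pop, explained below
      next unless url
      puts "#{Thread.current.object_id} got #{url}"
    end
  end
end
workers.each(&:join)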
The code is as follows:

# Spawn worker threads
threadNums = 10
threadNums.times do
  threads << Thread.new do
    until $queue.empty?
      url = $queue.pop(true) rescue nil
      next unless url
      html = get_html(url)
      fetch_links(html)
      if !url.include?("?page")
        title = Nokogiri::HTML(html).css('title').text
        puts "[" + Time.now.strftime("%H:%M:%S") + "] " + title + " " + url
        save_to("pages/" + title.gsub(/\//, ""), html) if url.include?(".html")
      end
    end
  end
end
threads.each { |t| t.join }
Each worker thread is created with Thread.new.

After a thread starts, it enters the until $queue.empty? loop and keeps running until the task queue is empty (i.e., there are no more URLs to collect).
On each iteration, the thread takes a URL from the task queue with a non-blocking pop (illustrated in the sketch below) and fetches the page source with the get_html function.
The queue holds two kinds of URLs, pagination URLs and article URLs, so the two must be distinguished:
if it is a pagination URL (the URL contains "?page"), only the article links are extracted from it;
if it is an article URL, the page is saved locally with save_to(), using the article title as the file name.
Outside the loop, after the threads are created, Thread#join is called on each one so that the main thread waits
and does not exit until all worker threads have finished.
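
The non-blocking pop deserves a note: $queue.pop(true) raises a ThreadError when the queue is already empty (for example, when another thread drained it between the empty? check and the pop), and rescue nil turns that error into nil instead of killing the thread. A small sketch of the difference:

q = Queue.new
q.push(1)
p q.pop                         # => 1; an item is available, so this returns immediately

value = q.pop(true) rescue nil  # queue is now empty: ThreadError is raised and rescued
p value                         # => nil

# A plain q.pop on an empty queue would instead block the calling thread
# until some other thread pushes a new item.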

Code execution time statistics

The code is as follows:

# Program start time
$total_time_begin = Time.now.to_i
# ... the work being measured runs here ...
# Program end time
$total_time_end = Time.now.to_i
puts "execution time: " + ($total_time_end - $total_time_begin).to_s + " seconds"
Time.now returns the current time, and to_i converts it to the number of seconds elapsed since 00:00:00 UTC on January 1, 1970 (the Unix epoch). Subtracting the two timestamps therefore gives the program's running time in whole seconds.
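
A minimal sketch of this timing pattern, with a sleep standing in for the real scraping work:

t0 = Time.now.to_i  # whole seconds since the Unix epoch
sleep 2             # the scraping work would run here
t1 = Time.now.to_i
puts "execution time: " + (t1 - t0).to_s + " seconds"  # => execution time: 2 seconds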

Get webpage source code

The code is as follows:

# Get webpage source code
def get_html(url)
  html = ""
  open(url) do |f|
    html = f.read
  end
  return html
end
In Ruby, web pages can be fetched with either the Net::HTTP library or the open-uri library. open-uri is the simplest: it lets you read the specified web page as if it were an ordinary local file.
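
For comparison, here is a sketch of the same fetch done both ways. Note that on Ruby 3.0 and later the open-uri call must be written as URI.open rather than the bare open used in this article's code:

require 'net/http'
require 'open-uri'

url = "http://www.cnblogs.com/hongfei/default.html?page=1"

# open-uri: read the remote URL like a local file
html_a = URI.open(url) { |f| f.read }

# Net::HTTP: lower-level, but gives more control over the request
html_b = Net::HTTP.get(URI(url))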

Execution result: using multiple threads, collecting more than 130 articles took about 15 seconds (single-threaded: about 47 seconds).
