In Ruby, the thread-safe Queue class can be used to download blog articles with multiple threads and save them to local files.
Ruby: complete code for downloading blog articles locally using multiple threads
The code is as follows:
# encoding: utf-8
require 'net/http'
require 'thread'
require 'open-uri'
require 'nokogiri'
require 'date'

$queue = Queue.new
# Number of pages in the article list
page_nums = 8
page_nums.times do |num|
  $queue.push("http://www.cnblogs.com/hongfei/default.html?page=" + num.to_s)
end

threads = []

# Fetch the source of a web page
def get_html(url)
  html = ""
  open(url) do |f|
    html = f.read
  end
  return html
end

# Extract article links from a list page
def fetch_links(html)
  doc = Nokogiri::HTML(html)
  doc.xpath('//div[@class="postTitle"]/a').each do |link|
    href = link['href'].to_s
    if href.include?("html")
      # Add work to the queue
      $queue.push(link['href'])
    end
  end
end

# Save content under ./<save_to>.html
def save_to(save_to, content)
  f = File.new("./" + save_to + ".html", "w+")
  f.write(content)
  f.close
end

# Program start time
$total_time_begin = Time.now.to_i

# Number of threads to start
thread_nums = 10
thread_nums.times do
  threads << Thread.new do
    until $queue.empty?
      url = $queue.pop(true) rescue nil
      next if url.nil?
      html = get_html(url)
      fetch_links(html)
      if !url.include?("?page")
        title = Nokogiri::HTML(html).css('title').text
        puts "[" + Time.now.strftime("%H:%M:%S") + "] " + title + " " + url
        save_to("pages/" + title.gsub(/\//, ""), html) if url.include?(".html")
      end
    end
  end
end
threads.each { |t| t.join }

# Program end time
$total_time_end = Time.now.to_i
puts "Thread count: " + thread_nums.to_s
puts "Execution time: " + ($total_time_end - $total_time_begin).to_s + " seconds"
Multithreading
The code is as follows:
$queue = Queue.new
# Number of pages in the article list
page_nums = 8
page_nums.times do |num|
  $queue.push("http://www.cnblogs.com/hongfei/default.html?page=" + num.to_s)
end
First declare a Queue, then push the article-list pages onto it so that article links can later be extracted from those list pages. The queue is declared as a global variable (prefixed with $) so that it is also accessible inside the functions.
The article list of my blog (a civil engineer's blog) has 8 pages in total, so page_nums is set to 8.
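Queue from Ruby's standard library is thread-safe, so multiple threads can push and pop concurrently without extra locking, which is what makes this producer/consumer setup work. A minimal sketch of the pattern, using placeholder URLs rather than the real blog addresses:

```ruby
# Queue is built into Ruby's core; no extra require is needed on modern Rubies.
queue = Queue.new
8.times { |num| queue.push("http://example.com/list?page=#{num}") }

drained = Queue.new  # collect results in a second thread-safe queue
workers = 4.times.map do
  Thread.new do
    # pop(true) is non-blocking: it raises ThreadError when the queue is
    # empty, which the rescue turns into nil, ending the loop.
    while (url = (queue.pop(true) rescue nil))
      drained << url
    end
  end
end
workers.each(&:join)

puts drained.size  # => 8: every pushed URL was consumed exactly once
```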
The code is as follows:
# Number of threads to start
thread_nums = 10
thread_nums.times do
  threads << Thread.new do
    until $queue.empty?
      url = $queue.pop(true) rescue nil
      next if url.nil?
      html = get_html(url)
      fetch_links(html)
      if !url.include?("?page")
        title = Nokogiri::HTML(html).css('title').text
        puts "[" + Time.now.strftime("%H:%M:%S") + "] " + title + " " + url
        save_to("pages/" + title.gsub(/\//, ""), html) if url.include?(".html")
      end
    end
  end
end
threads.each { |t| t.join }
Thread.new is used to create each thread.
After a thread is created, it enters the `until $queue.empty?` loop and runs until the task queue is empty (that is, until there are no more URLs to collect).
Each thread takes a URL from the task queue and fetches the page source through the get_html function.
The URLs in the task queue fall into two kinds: paging URLs and article URLs.
If it is a paging URL (the URL contains "?page"), the article links are extracted from it.
If it is an article URL, the page is saved locally with save_to, using the article title as the file name.
After creating the threads, Thread#join must be called on each of them so that the main thread waits:
the main thread does not terminate until all worker threads have finished.
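This classification can be exercised without any network access. The following sketch uses hypothetical helper names and placeholder URLs to mirror the worker's checks, including the gsub that strips "/" from a title so it can safely be used as a file name. Note that an article URL also contains ".html", which is why the worker tests for "?page" first:

```ruby
# Hypothetical helpers mirroring the checks in the worker loop.
def page_url?(url)
  url.include?("?page")          # list pages carry a ?page parameter
end

def article_url?(url)
  !page_url?(url) && url.include?(".html")
end

def safe_filename(title)
  title.gsub(/\//, "")           # "/" would be read as a directory separator
end

puts page_url?("http://example.com/default.html?page=3")              # true
puts article_url?("http://example.com/archive/2013/01/01/post.html")  # true
puts safe_filename("C/C++ multithreading notes")                      # "CC++ multithreading notes"
```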
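Without Thread#join, the main thread could reach the end of the script and exit while workers are still running. A minimal, self-contained sketch of the create-then-join pattern (the squaring stands in for the real download work):

```ruby
results = Queue.new   # thread-safe, so workers can share it without a mutex
threads = 3.times.map do |i|
  Thread.new do
    results << i * i  # stand-in for fetching and saving a page
  end
end
threads.each { |t| t.join }  # block until every worker has finished

puts results.size  # => 3: all workers completed before we got here
```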
Code execution time statistics
The code is as follows:
# Program start time
$total_time_begin = Time.now.to_i
# ... the download work happens here ...
# Program end time
$total_time_end = Time.now.to_i
puts "Execution time: " + ($total_time_end - $total_time_begin).to_s + " seconds"
The Time.now method returns the current time, and to_i converts it to the number of seconds elapsed since January 1, 1970 00:00:00 UTC.
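The same pattern in isolation; the sleep stands in for the download work. (Process.clock_gettime offers finer resolution, but integer seconds via to_i is what the article uses.)

```ruby
t0 = Time.now.to_i   # whole seconds since 1970-01-01 00:00:00 UTC
sleep 1              # stand-in for the multithreaded download
t1 = Time.now.to_i

puts "execution time: " + (t1 - t0).to_s + " seconds"
```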
Obtain webpage source code
The code is as follows:
# Fetch the source of a web page
def get_html(url)
  html = ""
  open(url) do |f|
    html = f.read
  end
  return html
end
In Ruby, both the Net::HTTP module and the OpenURI module can be used to fetch web pages. OpenURI is the simplest: it lets you operate on a given web page directly, as if it were an ordinary local file.
Execution result: more than 130 articles were collected with multiple threads in about 15 seconds (the single-threaded version takes several times longer).