A detailed look at concurrency, parallelism, and the global lock in Ruby, with example code

I have been studying Ruby recently and wanted to share some of what I have learned. This article is mainly about concurrency, parallelism, and the global lock in Ruby, explained in detail through sample code; anyone interested is welcome to follow along.

Objective

This article introduces concurrency, parallelism, and the global lock in Ruby, shared here for reference and study. Without further ado, let's look at the details.

Concurrency and parallelism

In development we are constantly exposed to two concepts: concurrency and parallelism, and almost every article on the subject makes the same point: concurrency is not the same as parallelism. How should we understand that statement? Consider a kitchen:

    • Concurrency: the chef receives orders from two guests at the same time and has to handle both.

    • Sequential execution: if there is only one chef, he can only finish one order after the other.

    • Parallel execution: if there are two chefs, the two orders can be cooked at the same time.

Extending this example to web development:

    • Concurrency: the server receives two client requests at the same time.

    • Sequential execution: the server has only one process (thread) handling requests; it must finish the first request before starting the second, so the second request has to wait.

    • Parallel execution: the server has two processes (threads) handling requests, so both requests can be served at once and neither has to wait for the other.

Based on the examples above, how do we simulate this kind of behavior in Ruby? Look at the following code:

1. Sequential execution:

Simulate the operation when there is only one thread.


require 'benchmark'

def f1
  puts "sleep 3 seconds in f1\n"
  sleep 3
end

def f2
  puts "sleep 2 seconds in f2\n"
  sleep 2
end

Benchmark.bm do |b|
  b.report do
    f1
    f2
  end
end

##       user     system      total        real
## sleep 3 seconds in f1
## sleep 2 seconds in f2
##   0.000000   0.000000   0.000000 (  5.009620)

The code is simple: it uses sleep to simulate time-consuming operations. Executed sequentially, the elapsed time is the sum of the two operations, about 5 seconds.

2. Parallel execution

Simulate the operation with multiple threads.


# continuing from the code above
Benchmark.bm do |b|
  b.report do
    threads = []
    threads << Thread.new { f1 }
    threads << Thread.new { f2 }
    threads.each(&:join)
  end
end

##       user     system      total        real
## sleep 3 seconds in f1
## sleep 2 seconds in f2
##   0.000000   0.000000   0.000000 (  3.005115)

We can see that the multi-threaded version takes about as long as f1 alone (roughly 3 seconds), which is what we expected: with multiple threads the two operations overlap.

Ruby's multithreading is good at dealing with IO blocking: while one thread is blocked on IO, other threads can keep executing, which reduces the overall processing time significantly.

Threads in Ruby

The code examples above use Ruby's Thread class, which makes it easy to write multithreaded programs. Ruby threads are a lightweight and efficient way to add concurrency to your code.
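As a quick illustration (a minimal sketch of my own, not taken from the article's examples), a thread is created with Thread.new, and join or value waits for it to finish:

t = Thread.new do
  # do some work in the background
  (1..100).reduce(:+)
end

puts "the main thread keeps running while t works"
puts t.value # value joins the thread and returns the block's result
## the main thread keeps running while t works
## 5050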

Next, let's describe a scenario where concurrency occurs:


def thread_test
  time = Time.now
  threads = 3.times.map do
    Thread.new do
      sleep 3
    end
  end
  puts "No need to wait 3 seconds to see me: #{Time.now - time}"
  threads.map(&:join)
  puts "Now you have to wait 3 seconds to see me: #{Time.now - time}"
end

thread_test

## No need to wait 3 seconds to see me: 8.6e-05
## Now you have to wait 3 seconds to see me: 3.003699

Thread creation is non-blocking, so the first line of text is printed immediately. This simulates concurrent behavior: each thread sleeps for 3 seconds, and because sleep blocks, the threads overlap and all finish after about 3 seconds rather than 9.

So have we achieved true parallelism at this point?

Unfortunately not. The examples above only show that threads can overlap while they are blocked. Let's look at another example:


require 'benchmark'

def multiple_threads
  count = 0
  threads = 4.times.map do
    Thread.new do
      2500000.times { count += 1 }
    end
  end
  threads.map(&:join)
end

def single_threads
  count = 0
  Thread.new do
    10000000.times { count += 1 }
  end.join
end

Benchmark.bm do |b|
  b.report { multiple_threads }
  b.report { single_threads }
end

##       user     system      total        real
##   0.600000   0.010000   0.610000 (  0.607230)
##   0.610000   0.000000   0.610000 (  0.623237)

As you can see, even though we split the same task across 4 threads, the total time does not decrease. Why is that?

Because a global lock (the GIL) exists!

Global lock

The Ruby we typically use (MRI) has a mechanism called the GIL (Global Interpreter Lock).

Even if we want to use multithreading to run code in parallel, this global lock means that only one thread can execute Ruby code at any given moment; which thread gets to run depends on the scheduling of the underlying operating system.

Even if we have multiple CPUs, they only give the scheduler more choices for where a thread runs; they do not let several threads execute Ruby code at the same time.

In the code above, only one thread at a time can execute count += 1.

Ruby multithreading cannot take advantage of multiple CPU cores, so the overall time is not shortened by using more threads; it may even increase slightly because of the overhead of thread switching.

But in the sleep example earlier, we clearly saw things running in parallel!

This is where Ruby's design is clever: blocking operations can run in parallel, including reading and writing files and making network requests.


require 'benchmark'
require 'net/http'

# simulate network requests
def multiple_threads
  uri = URI("http://www.baidu.com")
  threads = 4.times.map do
    Thread.new do
      25.times { Net::HTTP.get(uri) }
    end
  end
  threads.map(&:join)
end

def single_threads
  uri = URI("http://www.baidu.com")
  Thread.new do
    100.times { Net::HTTP.get(uri) }
  end.join
end

Benchmark.bm do |b|
  b.report { multiple_threads }
  b.report { single_threads }
end

##       user     system      total        real
##   0.240000   0.110000   0.350000 (  3.659640)
##   0.270000   0.120000   0.390000 ( 14.167703)

The program spends most of its time blocked on network requests, and in Ruby those blocking periods can overlap across threads, so the multi-threaded version finishes much sooner.
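The same applies to the file reads and writes mentioned above. Here is a minimal sketch of my own applying the same pattern to file IO (the paths are purely hypothetical):

# Hypothetical sketch: overlapping file reads across threads
paths = ["data/a.log", "data/b.log", "data/c.log"] # illustrative paths only

threads = paths.map do |path|
  Thread.new do
    # File.read blocks on disk IO; while it blocks, other threads can run
    File.read(path).lines.count
  end
end

puts threads.map(&:value).inspect # number of lines in each file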

Thoughts on the GIL

So, given that the GIL exists, does that mean our code is automatically thread-safe?

Unfortunately not. At certain points during execution the GIL is handed to another thread, and if your threads share variables, you can still run into trouble.

So when, during the execution of Ruby code, does the GIL switch to another thread?

There are several well-defined switching points:

    • On method call and method return, the interpreter checks whether the current thread's hold on the GIL has timed out and whether another thread should be dispatched to work.

    • All IO-related operations release the GIL so that other threads can work.

    • C extension code can manually release the GIL.

    • A subtler case: when execution crosses from the Ruby stack into the C stack, the GIL check is also triggered.

An example


@a = 1
r = []
10.times do |e|
  Thread.new {
    @c = 1
    @c += @a
    r << [e, @c]
  }
end
r
## [[3, 2], [1, 2], [2, 2], [0, 2], [5, 2], [6, 2], [7, 2], [8, 2], [9, 2], [4, 2]]

In the resulting r above, the order of e varies, but the value of @c is always 2. In other words, each thread ran from @c = 1 through the addition without being interrupted; no thread switch happened in between.

If we add an operation inside the thread that can trigger a GIL switch, such as puts printing to the screen:


@a = 1
r = []
10.times do |e|
  Thread.new {
    @c = 1
    puts @c
    @c += @a
    r << [e, @c]
  }
end
r
## [[2, 2], [0, 2], [4, 3], [5, 4], [7, 5], [9, 6], [1, 8], [6, 9], [8, 10]]

This triggers a GIL switch between the assignment and the addition, and the data becomes inconsistent.
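A common way to guard shared state against this kind of interleaving, regardless of where the GIL decides to switch, is a Mutex. The following is a minimal sketch of my own (not from the original article) applied to the example above:

@a = 1
r = []
lock = Mutex.new

threads = 10.times.map do |e|
  Thread.new do
    # synchronize makes the read-modify-write sequence a critical section
    # that no other thread can interleave with
    lock.synchronize do
      @c = 1
      puts @c
      @c += @a
      r << [e, @c]
    end
  end
end
threads.each(&:join)
r
## every pair in r now ends with 2, because the critical section runs atomically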

Summary

Web applications are mostly IO-intensive, so the Ruby multi-process + multi-threaded model can significantly improve system throughput: while one thread is blocked on IO, other threads can keep executing, reducing the overall impact of IO blocking. However, because of the GIL (Global Interpreter Lock), MRI Ruby cannot really use multithreading for parallel computation.
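As a minimal sketch of the multi-process half of that model (my own illustration, assuming a Unix-like system where Process.fork is available), CPU-bound work can be split across processes, because each process has its own interpreter and therefore its own GIL:

# Hypothetical sketch: CPU-bound work split across processes rather than threads
pids = 4.times.map do
  Process.fork do
    count = 0
    2500000.times { count += 1 } # each child counts in its own interpreter
  end
end

pids.each { |pid| Process.wait(pid) } # wait for every child to finish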

PS: It is said that JRuby has removed the GIL and offers true multithreading; it can not only handle IO blocking but also make full use of multi-core CPUs to speed things up. I plan to look into it further.
