5. Using Other Synchronization Techniques
Another synchronization mechanism is the monitor, which Ruby implements in the monitor.rb library. This technique is somewhat more advanced than the mutex; in particular, mutex locks cannot be nested, but monitor locks can.
This trivial case would never happen, simply because no one would write code like the following:

$mutex = Mutex.new
$mutex.synchronize do
  $mutex.synchronize do
    # ...
  end
end
But a variation of it might well happen, for example through an indirect or recursive method call, as in the code below. In any such case, the result is deadlock. Avoiding deadlock in these circumstances is one advantage of the MonitorMixin module.
$mutex = Mutex.new

def some_method
  $mutex.synchronize do
    # ...
    some_other_method     # deadlock!
  end
end

def some_other_method
  $mutex.synchronize do
    # ...
  end
end
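As a quick contrast (this sketch is ours, not part of the original text), the same nested locking is harmless when a Monitor is used instead of a Mutex, because a monitor lock may be re-acquired by the thread that already holds it:

require 'monitor'

$monitor = Monitor.new

def some_method
  $monitor.synchronize do
    # ...
    some_other_method     # no deadlock: monitor locks can be nested
  end
end

def some_other_method
  $monitor.synchronize do
    # ...
  end
end

some_method               # runs to completion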
MonitorMixin is typically mixed into (or used to extend) an arbitrary object. The new_cond method can then be used to instantiate a condition variable.
The ConditionVariable class defined in monitor.rb is an enhanced version of the one in the thread library. It has the methods wait_until and wait_while, which block a thread based on a condition. It also allows a timeout while waiting, because the wait method takes a timeout parameter, a number of seconds (nil by default).
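Here is a minimal sketch of how these pieces fit together; the buffer object and its producer/consumer threads are our own invention for illustration, not part of the listing that follows:

require 'monitor'

buffer = []                               # an ordinary object...
buffer.extend(MonitorMixin)               # ...extended with monitor behavior
nonempty = buffer.new_cond                # condition variable tied to this monitor

consumer = Thread.new do
  buffer.synchronize do
    nonempty.wait_while { buffer.empty? } # block until an item arrives
    puts "Consumed #{buffer.shift}"
  end
end

producer = Thread.new do
  sleep 1
  buffer.synchronize do
    buffer << "an item"
    nonempty.signal                       # wake one waiting thread
  end
end

[consumer, producer].each(&:join)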
Because we are running low on thread examples, Listing 7.5 uses monitor techniques to rewrite the Queue and SizedQueue classes. The code was written by Shugo Maeda and is used with his permission.
Listing 7.5 Implementing a Queue with a Monitor

# Author: Shugo Maeda

require 'monitor'

class Queue
  def initialize
    @que = []
    @monitor = Monitor.new
    @empty_cond = @monitor.new_cond
  end

  def enq(obj)
    @monitor.synchronize do
      @que.push(obj)
      @empty_cond.signal
    end
  end

  def deq
    @monitor.synchronize do
      while @que.empty?
        @empty_cond.wait
      end
      return @que.shift
    end
  end
end
class SizedQueue < Queue
  attr :max

  def initialize(max)
    super()
    @max = max
    @full_cond = @monitor.new_cond
  end

  def enq(obj)
    @monitor.synchronize do
      while @que.length >= @max
        @full_cond.wait
      end
      super(obj)
    end
  end

  def deq
    @monitor.synchronize do
      obj = super
      if @que.length < @max
        @full_cond.signal
      end
      return obj
    end
  end

  def max=(max)
    @monitor.synchronize do
      @max = max
      @full_cond.broadcast
    end
  end
end
The sync.rb library is one more way to synchronize threads. For what it's worth, it implements a two-phase lock with a counter. At the time of this writing, its only documentation is the library source itself.
6. Allowing an Operation to Time Out
There are many situations in which we want to allow a maximum length of time for an action to complete. This avoids infinite loops and allows an additional layer of control over processing. Such a feature is useful in a networking environment, where we may or may not get a response from a remote server.
The timeout.rb library offers a thread-based solution to this problem. The timeout method executes the block associated with the method call; when the specified number of seconds has elapsed, it raises a Timeout::Error, which can be caught with a rescue clause (see Listing 7.6).
Listing 7.6 Timeout Example

require 'timeout'

flag = false
answer = nil

begin
  Timeout.timeout(5) do
    puts "I want a cookie!"
    answer = gets.chomp
    flag = true
  end
rescue Timeout::Error
  flag = false
end

if flag
  if answer == "cookie"
    puts "Thank you! Chomp, chomp, ..."
  else
    puts "That's not a cookie!"
    exit
  end
else
  puts "Hey, too slow!"
  exit
end

puts "Bye now..."
7. Waiting for an Event
Frequently one or more threads need to monitor the "outside world" while other threads are doing other things. The examples here are somewhat contrived, but they illustrate the general principle.
Here, we see three threads doing the "work" of an application. Another thread simply wakes up every five seconds, checks the global variable $flag, and wakes up two other threads when it sees the flag set. This saves the three worker threads from having to interact directly with the two other threads and from possibly making multiple attempts to wake them up.
$flag = false

work1 = Thread.new { job1() }
work2 = Thread.new { job2() }
work3 = Thread.new { job3() }

thread5 = Thread.new { Thread.stop; job4() }
thread6 = Thread.new { Thread.stop; job5() }

watcher = Thread.new do
  loop do
    sleep 5
    if $flag
      thread5.wakeup
      thread6.wakeup
      Thread.exit
    end
  end
end
If the variable $flag becomes true at any time while the job methods are running, thread5 and thread6 are guaranteed to start within five seconds. After that, the watcher thread terminates itself.
In the next example, we are waiting for a file to be created. We check every 30 seconds, and if we see the file, we start another thread; in the meantime, the other threads can be doing anything at all. Actually, we are watching for three separate files here.
def waitfor(filename)
  loop do
    if File.exist? filename
      file_processor = Thread.new do
        process_file(filename)
      end
      Thread.exit
    else
      sleep 30
    end
  end
end

waiter1 = Thread.new { waitfor("Godot") }
sleep 10
waiter2 = Thread.new { waitfor("Guffman") }
sleep 10
headwaiter = Thread.new { waitfor("head") }

# Main thread goes off to do other things...
There are many other situations in which a thread might wait for an outside event, such as a networked application where the server on the other end of a socket is slow or unreliable.
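As a small sketch of that idea (the host name and request here are purely illustrative assumptions), one thread can block on a socket read while the main thread keeps working:

require 'socket'

reader = Thread.new do
  TCPSocket.open("example.com", 80) do |sock|      # possibly slow server
    sock.write("HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    puts "Server said: #{sock.gets}"               # blocks until data arrives
  end
end

# Meanwhile the main thread continues with other work...
5.times { print "."; sleep 1 }
reader.join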
8. Continuing Processing During I/O
Applications frequently have one or more lengthy I/O operations to perform. This is certainly true of user input, because a user typing at a keyboard is slower than any disk operation. We can put that time to use with threads.
Consider a chess program that must wait for the human player to make a move. Of course, we present only a skeleton of this concept.
Let's assume the existence of an iterator called predict_move, which repeatedly generates likely moves the human might make (and then determines the program's own response to each of them). Then, when the human actually moves, the response may already have been prepared.
scenario = {}                 # move-response hash
humans_turn = true

thinking_ahead = Thread.new(board) do
  predict_move do |m|
    scenario[m] = my_response(board, m)
    Thread.exit if humans_turn == false
  end
end

human_move = get_human_move(board)
humans_turn = false           # Stop the thread gracefully

# Now we can access scenario, which may contain
# the move the person just made...
We should point out that a real chess program generally doesn't work this way; more often it is concerned with searching as fast as possible to a certain depth. In real life, a better approach would be to save whatever partial state the thinking thread has gathered and then continue in the same way until the program either finds a good response or runs out of time.
9. Implementing Parallel Iterators
Suppose you want to iterate over more than one object in parallel. That is to say, you want to take the first item of each of n objects, then the second of each, then the third, and so on.
To make this more concrete, look at the following example. Here we assume that compose is the name of the method providing this composition of iterators. We also assume that each object has a default iterator named each that is to be used, and that each object contributes one item at a time.
arr1 = [1, 2, 3, 4]
arr2 = [5, 10, 15, 20]

compose(arr1, arr2) do |a, b|
  puts "#{a} and #{b}"
end

# Should output:
# 1 and 5
# 2 and 10
# 3 and 15
# 4 and 20
We could do this in a more naive way, iterating over each object in turn and storing the results. But if we want a more elegant solution that doesn't require gathering every item up front in a single pass, threads are really the only easy answer. Ours is in Listing 7.7.
Listing 7.7 Iterating in Parallel

def compose(*objects)
  threads = []
  for obj in objects do
    threads << Thread.new(obj) do |myobj|
      me = Thread.current
      me[:queue] = []
      myobj.each do |element|
        me[:queue].push element
      end
    end
  end
  threads.each { |t| t.join }             # let each producer fill its queue
  list = [0]                              # Dummy non-nil value
  while list.compact.size > 0 do          # Still some non-nils
    list = []
    for thr in threads
      list << thr[:queue].shift           # Remove one from each
    end
    yield list if list.compact.size > 0   # Don't yield all nils
  end
end
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = "first\nsecond\nthird\nfourth\nfifth\n".each_line  # String has no default each; use a line enumerator
z = %w[a b c d e f]

compose(x, y, z) do |a, b, c|
  p [a, b, c]
end

# Output:
#
# [1, "first\n", "a"]
# [2, "second\n", "b"]
# [3, "third\n", "c"]
# [4, "fourth\n", "d"]
# [5, "fifth\n", "e"]
# [6, nil, "f"]
# [7, nil, nil]
# [8, nil, nil]
Note that we don't assume all the objects have the same number of items to iterate over. If one iterator runs out before the others, it simply produces nil values until the longest-running iterator is exhausted.
Of course, it is possible to write a more general method that grabs more than one value from each iteration. (After all, not all iterators return just one value at a time.) We could let the first parameter specify the number of values per iterator.
It would also be possible to use arbitrary iterators rather than the default each; we could pass in their names and use send to invoke them, as sketched below. Other tricks are possible as well.
However, we think the example presented here is adequate for most purposes. We'll leave the variations as an exercise for the reader.
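For instance, here is one rough way such a variation might look; compose_with, its argument pairs, and the joining strategy are our own assumptions for illustration, not code from the text:

# Hypothetical variant: each object is paired with the name of the
# iterator to use, and that iterator is invoked with send.
def compose_with(*pairs)
  threads = pairs.map do |obj, iterator|
    Thread.new do
      Thread.current[:queue] = []
      obj.send(iterator) { |element| Thread.current[:queue] << element }
    end
  end
  threads.each(&:join)                       # wait for every producer
  loop do
    row = threads.map { |t| t[:queue].shift }
    break if row.compact.empty?              # all iterators exhausted
    yield row
  end
end

compose_with([[10, 20, 30], :each], ["a\nb\n", :each_line]) do |x, y|
  p [x, y]
end

# Prints:
# [10, "a\n"]
# [20, "b\n"]
# [30, nil]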
10. Parallel Recursive Deletion
Just for fun, let's take the recursive deletion code from the external data manipulation material in Chapter 4 and parallelize it. (No, we're not talking about parallelism in the sense of multiple processors.) Here we present a recursive delete routine built around threads: whenever a directory entry turns out to be a directory itself, we start a new thread to traverse it and delete its contents.
As we go, we keep track of the threads we have created in an array called threads; because this is a local variable, each thread has its own copy of the array. It is accessed by only one thread at a time, so there is no need to synchronize access to it here.
Note also that we pass the full filename into the thread block, so that we don't have to worry about the thread accessing a variable that may change out from under it. The thread uses fn as a local copy of that variable.
Notice, too, that when we have finished traversing a directory, we wait on the threads we created before deleting the directory we have just finished processing.
def delete_all(dir)
  threads = []
  Dir.foreach(dir) do |e|
    # Don't bother with . and ..
    next if [".", ".."].include? e
    fullname = dir + File::SEPARATOR + e
    if FileTest.directory?(fullname)
      threads << Thread.new(fullname) do |fn|
        delete_all(fn)
      end
    else
      File.delete(fullname)
    end
  end
  threads.each { |t| t.join }
  Dir.delete(dir)
end

delete_all("/tmp/stuff")
Is this actually faster than the nonthreaded version? We've found that the answer isn't consistent. It probably depends on your operating system and on the directory structure actually being deleted: its depth, the size of its files, and so on.
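If you want to check on your own system, a rough measurement is easy to set up; the scratch paths and the shape of the throwaway tree below are arbitrary choices of ours:

require 'benchmark'
require 'fileutils'

# Build a small throwaway tree to delete.
100.times do |i|
  FileUtils.mkdir_p("/tmp/stuff/dir#{i}")
  5.times { |j| File.write("/tmp/stuff/dir#{i}/file#{j}", "x" * 100) }
end

puts Benchmark.measure { delete_all("/tmp/stuff") }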
III. Summary
Threads are useful in many situations, but they can make code tricky to write and to debug. This is especially true when synchronization is required to obtain correct results.