In short, a program has at least one process, and a process has at least one thread.
A thread is a smaller unit of execution than a process, which is what allows a multithreaded program to achieve a high degree of concurrency.
In addition, the process has a separate memory unit during execution, and multiple threads share memory, which greatly improves the efficiency of the program operation.
Threads still differ from processes during execution. Each thread has its own entry point, an ordered execution sequence, and an exit point. However, a thread cannot execute on its own; it depends on an application, which provides execution control over its multiple threads.
From a logical point of view, the meaning of multithreading is that in an application, multiple execution parts can be executed concurrently. However, the operating system does not consider multiple threads as separate applications to implement scheduling and management of processes and resource allocation. This is the important difference between processes and threads.
A process is a single run of a program, with some independent function, on a data set; it is the independent unit of resource allocation and scheduling in the system.
A thread is an entity within a process and is the basic unit of CPU scheduling and dispatch; it is a unit smaller than a process that can run independently. A thread itself owns essentially no system resources, only what is indispensable to its execution (such as a program counter, a set of registers, and a stack). However, it shares all of the resources owned by the process with the other threads belonging to the same process.
One thread can create and destroy another thread, and multiple threads within the same process can execute concurrently.
The main difference between processes and threads lies in how the operating system manages their resources. A process has its own address space, and in protected mode a crashed process does not affect other processes. A thread is just one execution path within a process: it has its own stack and local variables, but threads share the process's address space, and the death of one thread can bring down the entire process. Multi-process programs are therefore more robust than multithreaded ones, but switching between processes costs more resources and is less efficient. For concurrent operations that must share variables, only threads can be used, not processes. Readers interested in going deeper should consult "Modern Operating Systems" or "Operating Systems: Design and Implementation". Let us now state the problem a little more clearly.
5.1 Introduction
A process is an area of memory that contains some resources. The operating system uses the process to divide its work into functional units.
One or more execution units contained in a process are called threads. The process also has a private virtual address space that can be accessed only by the threads it contains.
When running a .NET program, the process also includes in its memory space the software layer called the CLR, described in detail in the previous chapter. This layer is loaded by the runtime host during process creation (see section 4.2.3).
A thread belongs to exactly one process and can only access resources owned by that process. When the operating system creates a process, the process automatically gets a thread called the main thread (or primary thread). The main thread executes the runtime host, which is responsible for loading the CLR.
An application is made up of one or more cooperating processes. For example, the Visual Studio development environment is an application that uses one process to edit source files and another to compile them.
Under the Windows NT/2000/XP operating system, we can view all applications and processes at any time through the Task Manager. Although only a few applications are open, there are typically about 30 processes running at the same time. In fact, in order to manage the current session and taskbar and some other tasks, the system executes a lot of processes.
5.2 Processes
5.2.1 Introduction
In a 32-bit Windows operating system running on a 32-bit processor, a process can be viewed as a linear memory space of 4GB (2^32 bytes), starting at address 0x00000000 and ending at 0xFFFFFFFF. This memory space cannot be accessed by other processes, which is why it is called the process's private space. It is divided into two parts: 2GB reserved for the system and the remaining 2GB for the user.
If n processes are running on the same machine, one might conclude that n×4GB of physical RAM is needed, which is obviously not the case:
- Windows allocates memory to each process on demand; 4GB is only the upper limit on the space a process can occupy on a 32-bit system.
- The memory required by a process is divided into 4KB memory pages which, depending on how they are used, are stored on the hard disk or loaded into RAM. Through this virtual memory mechanism, the system effectively reduces the actual amount of memory required. All of this is, of course, transparent to users and developers.
5.2.2 System.Diagnostics.Process Class
An instance of the System.Diagnostics.Process class references a process, which can be any of the following:
- The current process (the process in which the instance lives).
- Another process on the same machine.
- A process on a remote machine.
Through the methods and properties of this class, you can create or destroy a process and obtain information about it. Some common tasks implemented with this class are discussed below.
5.2.3 Creating and destroying child processes
The following program creates a new process, called a child process. The initial process is then called the parent process. The child process launches the Notepad application. The parent process's thread waits one second and then destroys the child process. The net effect of the program is to open and close Notepad.
Example 5-1
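The original listing is not reproduced in this extraction. A minimal sketch of what Example 5-1 describes (assuming notepad.exe is reachable on the path) might be:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class Program
{
    static void Main()
    {
        // The parent process creates a child process running Notepad.
        Process childProcess = Process.Start("notepad.exe");

        // The parent process's thread waits one second...
        Thread.Sleep(1000);

        // ...then destroys the child process.
        childProcess.Kill();
    }
}
```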
The static Start() method can take advantage of Windows' existing file-name extension association mechanism. For example, we can perform the same operation with the following code.
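The code in question is missing from this extraction; it presumably resembles the following sketch, where SomeFile.txt is a hypothetical document whose .txt extension is associated with Notepad:

```csharp
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Starting a document rather than an executable launches the
        // application associated with its extension (Notepad for .txt
        // by default), achieving the same effect as Example 5-1.
        Process childProcess = Process.Start("SomeFile.txt");
    }
}
```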
By default, a child process inherits the security context of its parent. However, an overloaded version of the Process.Start() method lets you launch the child process in the security context of any user, by supplying that user's name and password through an instance of the System.Diagnostics.ProcessStartInfo class.
5.2.4 Avoid running multiple instances of the same application at the same time on a single machine
Some applications require this constraint. Indeed, it generally makes little sense to run several instances of the same application concurrently on one machine.
Until now, the most common way for Windows developers to satisfy this constraint has been the well-known named-mutex technique (see section 5.7.2). However, this technique has the following drawbacks:
- There is a small but real risk that the mutex name is already used by another application. In that case the technique silently stops working and causes bugs that are hard to detect.
- It does not solve the more general problem of allowing at most n instances of an application.
Fortunately, the System.Diagnostics.Process class provides static methods such as GetCurrentProcess() (which returns the current process) and GetProcesses() (which returns all processes on the machine). In the following program, they yield an elegant and simple solution to the problem.
Example 5-2
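The original listing is not shown here; a sketch of the kind of solution the text describes, based on GetCurrentProcess() and GetProcesses(), could be:

```csharp
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        Process current = Process.GetCurrentProcess();
        foreach (Process p in Process.GetProcesses())
        {
            // Another process with the same name but a different ID
            // means another instance of this application is running.
            if (p.Id != current.Id && p.ProcessName == current.ProcessName)
            {
                Console.WriteLine("An instance is already running; exiting.");
                return;
            }
        }
        Console.WriteLine("No other instance detected; continuing.");
        // ... the application's real work ...
    }
}
```

The same loop can easily be adapted to count instances and thus enforce a limit of n instances rather than one.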
By passing the name of a remote machine as a parameter, the GetProcesses() method can also return all the processes running on that machine.
5.2.5 Terminating the current process
You can terminate the current process by calling the static methods Exit(int exitCode) or FailFast(string message) of the System.Environment class. The Exit() method is the preferred choice: it terminates the process cleanly and returns the specified exit code to the operating system. The termination is called clean because pending cleanup work (object finalization, execution of finally blocks) is carried out, by different threads. Naturally, terminating the process this way takes some time.
As its name implies, the FailFast() method terminates a process quickly, ignoring the precautions taken by Exit(). Only a critical error entry containing the specified message is logged by the operating system. You may want to use this method when a problem is detected and a clean termination of the program could itself be a cause of data corruption.
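As an illustration (ours, not from the original text), the two methods are invoked like this; fatalProblemDetected is a hypothetical condition:

```csharp
using System;

class Program
{
    static void Main()
    {
        bool fatalProblemDetected = false;   // hypothetical condition

        if (fatalProblemDetected)
        {
            // Brutal termination: cleanup is skipped; the message is
            // logged by the operating system as a critical error.
            Environment.FailFast("Unrecoverable error detected.");
        }

        // Clean termination, returning exit code 0 to the operating system.
        Environment.Exit(0);
    }
}
```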
5.3 Threads
5.3.1 Introduction
A thread consists of the following elements:
- A pointer to the currently executing instruction;
- A stack;
- A set of register values defining part of the state of the processor executing the thread;
- A private data area.
All of these elements are grouped under the name of the thread's execution context. All the threads of a process can access the process's address space and, of course, all the resources stored in it.
We will not discuss here whether threads execute in kernel mode or user mode. Although Windows used both modes before .NET, and they still exist, the distinction is not visible to the .NET Framework.
Using several threads in parallel is often the natural way to implement an algorithm, since an algorithm frequently consists of a series of tasks that can be executed concurrently. Note, however, that using a large number of threads causes excessive context switching, which ultimately hurts performance.
Also, a few years ago we noticed that Moore's law, which predicted a doubling of processor speed every 18 months, no longer holds. Processor frequencies have plateaued at around 3GHz~4GHz, because of physical limits that will take a while to overcome. Meanwhile, the major processor manufacturers, such as AMD and Intel, have turned to multicore chips to keep competing on performance, so we can expect this type of architecture to become widespread in the next few years. In this context, the only way to improve application performance is to make sensible use of multithreading.
5.3.2 Managed threads and Windows threads
It must be understood that the threads executing a .NET application are still Windows threads. However, when a thread is known to the CLR, we say it is a managed thread. Specifically, a thread created by managed code is a managed thread. A thread created by unmanaged code is an unmanaged thread; however, it becomes a managed thread as soon as it executes managed code.
What distinguishes a managed thread from an unmanaged one is that the CLR creates an instance of the System.Threading.Thread class to represent and manipulate it. Internally, the CLR keeps the list of all managed threads in a structure called the ThreadStore.
The CLR ensures that at any given time each managed thread is executing inside one AppDomain, but this does not mean a thread stays in the same AppDomain forever: it can move to other AppDomains over time. The notion of AppDomain is covered in section 4.1.
From a security standpoint, the principal of a managed thread is independent of the Windows principal of the underlying unmanaged thread.
5.3.3 Preemptive multitasking
We may well ask ourselves the following question: my computer has only one processor, yet in Task Manager I can see hundreds of threads apparently running on the machine at the same time. How is this possible?
This is made possible by preemptive multitasking and its scheduling of threads. The scheduler, part of the Windows kernel, divides processor time into time slices. These intervals are on the order of milliseconds and are not of fixed length. On each processor, a time slice serves only a single thread at a time. The rapid switching between threads gives us the illusion that they run simultaneously. A context switch takes place between two time slices. One advantage of this approach is that a thread waiting for a Windows resource does not waste any time slice until the resource becomes available.
This style of multitasking management is qualified as preemptive because threads are forcibly interrupted by the system. The curious should know that during a context switch, the operating system inserts, into the code the next thread will execute, an instruction that jumps to the next context switch. This instruction is a soft interrupt; if the thread yields before reaching it (for example because it has to wait for a resource), the instruction is removed and the context switch happens early.
The main disadvantage of preemptive multitasking is that synchronization mechanisms must be used to protect resources from anarchic access. There is another multitasking model, called cooperative multitasking, in which switching from one thread to another is done by the threads themselves. This model is generally considered too dangerous, because the risk that a thread never hands control over to another is too high. As explained in section 4.2.8, this mechanism is used internally to improve the performance of some servers, such as SQL Server 2005, but the Windows operating system itself implements only preemptive multitasking.
5.3.4 Process and thread priorities
Some tasks have a higher priority than others, and they require more processing time from the operating system. For example, the drivers of certain peripherals served by the main processor must not be interrupted. Another class of high-priority task is the graphical user interface: users do not like waiting for the interface to repaint.
Developers coming from the Win32 world know that at the level below the CLR, that is, in the Windows operating system, each thread is given a priority between 0 and 31. These values cannot be used in the .NET world because:
- By themselves, the numeric values convey no meaning.
- These values may easily change over time.
1. Priority of the process
You can assign a priority to a process with the PriorityClass{get;set;} property of type ProcessPriorityClass in the Process class. The System.Diagnostics.ProcessPriorityClass enumeration contains the following values: Idle, BelowNormal, Normal, AboveNormal, High, and RealTime.
If the PriorityBoostEnabled property of a Process instance is true (the default), the priority of the process is raised by one unit whenever it owns the foreground window. This property is accessible only when the Process instance references a local process.
You can change the priority of a process in Task Manager by right-clicking the process > Set Priority > and choosing one of the six values offered (the same ones described above).
The Windows operating system has an idle process whose priority is 0; this priority may not be used by any other process. By definition, machine activity is expressed as a percentage equal to 100% minus the time consumed by the idle process.
2. Priority of Threads
In addition to the priority of the process it belongs to, each thread can define its own priority with the Priority{get;set;} property of type ThreadPriority in the System.Threading.Thread class. The System.Threading.ThreadPriority enumeration contains the following values: Lowest, BelowNormal, Normal, AboveNormal, and Highest.
In most applications you do not need to modify process and thread priorities; their default values are Normal.
5.3.5 System.Threading.Thread Class
The CLR automatically associates an instance of the System.Threading.Thread class with each managed thread. You can use this object to manipulate a thread, either from the thread itself or from another thread. The object for the current thread is available through the static CurrentThread property of the System.Threading.Thread class.
The Thread class has a feature that makes debugging multithreaded applications easier: it lets us give a thread a name, in the form of a string:
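The snippet itself is missing from this extraction; naming a thread presumably looks like this:

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // The name shows up, for example, in Visual Studio's Threads window.
        Thread.CurrentThread.Name = "Main thread";
        Console.WriteLine(Thread.CurrentThread.Name);
    }
}
```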
5.3.6 Creating a thread; the Join() method
You can create a new thread in the current process simply by creating an instance of the Thread class. This class has several constructors, which accept as a parameter a delegate object of type System.Threading.ThreadStart or System.Threading.ParameterizedThreadStart referencing the method that the new thread will execute first. A ParameterizedThreadStart delegate lets you pass an object as an argument to that method. Some constructors of the Thread class also accept an integer parameter to set the maximum size of the thread's stack, which must be at least 128KB (that is, 131072 bytes). After creating the Thread instance, you must call the Thread.Start() method for the thread to actually start.
Example 5-3
The program outputs:
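Neither the listing nor its output survives in this extraction. A minimal sketch matching the description (a thread created from a ThreadStart delegate, started with Start(), then awaited with Join()) might be:

```csharp
using System;
using System.Threading;

class Program
{
    static void Secondary()
    {
        Console.WriteLine("Secondary thread: working...");
        Thread.Sleep(500);   // simulate some work
        Console.WriteLine("Secondary thread: done.");
    }

    static void Main()
    {
        Thread thread = new Thread(new ThreadStart(Secondary));
        thread.Start();   // the thread actually starts here
        thread.Join();    // block until Secondary() returns
        Console.WriteLine("Main thread: the secondary thread has terminated.");
    }
}
```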
In this example, we use the Join() method to block the current thread until the thread on which Join() is called finishes executing. The method also has an overloaded version taking a parameter that specifies the maximum number of milliseconds to wait for the thread to end (that is, a timeout). This version of Join() returns the Boolean true if the thread finished its work within the specified timeout.
5.3.7 Suspending a thread
You can use the Sleep() method of the Thread class to suspend the executing thread for a given length of time, expressed either as an integer number of milliseconds or as an instance of the System.TimeSpan structure. An instance of this structure can express a duration with a precision of 100ns, but the Sleep() method has an actual resolution of only about 1ms.
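For instance, both overloads can be called as follows (an illustration of ours, not from the original):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        Thread.Sleep(100);                           // suspend for 100ms
        Thread.Sleep(TimeSpan.FromMilliseconds(1));  // same idea via TimeSpan;
                                                     // actual resolution ~1ms
    }
}
```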
We can also suspend a thread's activity with the Suspend() method of the Thread class, called either from the thread to be suspended or from another thread. In both cases the thread is blocked until another thread calls the Resume() method. Unlike Sleep(), Suspend() does not suspend the thread immediately: the CLR suspends it when it reaches the next safe point. The notion of safe points is described in section 4.7.11.
5.3.8 Terminating a thread
A thread terminates in one of the following situations:
- It returns from the method it started executing (the Main() method for the main thread, the method referenced by the ThreadStart delegate object for other threads).
- It terminates itself.
- It is terminated by another thread.
The first case needs little comment, so we will focus on the two others. In both cases the Abort() method can be used (called either on the current thread or from another thread). This method throws an exception of type ThreadAbortException in the target thread. Because the thread is then in a special state named AbortRequested, this exception has a remarkable property: when it is caught by an exception handler, it is automatically re-thrown. Only a call to the static method Thread.ResetAbort() inside the handler (given sufficient permissions) prevents it from propagating.
Example 5-4 Suicide of the main thread
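The listing is absent from this extraction; a sketch of a "suicide of the main thread" consistent with the text's description of Abort() and ThreadAbortException could be:

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        try
        {
            Thread.CurrentThread.Abort();   // the main thread aborts itself
        }
        catch (ThreadAbortException)
        {
            Console.WriteLine("ThreadAbortException caught.");
            // Without a call to Thread.ResetAbort() here, the exception
            // is automatically re-thrown when this handler exits.
        }
        Console.WriteLine("Reached only if Thread.ResetAbort() was called.");
    }
}
```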
When a thread A calls the Abort() method on a thread B, it is advisable for A to then call B's Join() method, so that A waits until B has actually terminated. The Interrupt() method can also terminate a thread that is in the blocked state (that is, blocked by a call to one of the Wait(), Sleep(), or Join() methods). Its behavior depends on whether the thread to be terminated is blocked:
- If the method is called by another thread while the thread to be terminated is blocked, a ThreadInterruptedException exception is raised in the target thread.
- If the method is called by another thread while the thread to be terminated is not blocked, the exception is raised as soon as the thread enters a blocked state. The behavior is the same when a thread calls Interrupt() on itself.
5.3.9 Foreground and background threads
The Thread class provides a Boolean property IsBackground{get;set;}. A foreground thread prevents the process from terminating as long as it is still running. A background thread, on the other hand, is automatically terminated by the CLR (with a call to the Abort() method) as soon as the process no longer contains any foreground thread. The default value of IsBackground is false, which means that threads are foreground by default.
5.3.10 State diagram of a managed thread
The Thread class has a member named ThreadState of the enumeration type System.Threading.ThreadState, which contains the following values: Running, StopRequested, SuspendRequested, Background, Unstarted, Stopped, WaitSleepJoin, Suspended, AbortRequested, and Aborted.
A detailed description of each state can be found in the MSDN article "ThreadState Enumeration". This enumeration is a bit field, which means that an instance of the type can represent several values at the same time. For example, a thread can simultaneously be in the Running, AbortRequested, and Background states. The notion of bit fields is described in section 10.11.3.
Based on what we have learned in the previous chapters, we can draw the simplified state diagram shown in Figure 5-1.
Figure 5-1 Simplified managed thread state diagram
5.4 Introduction to synchronizing access to resources
Synchronization comes into play in multithreaded computations (on one or more processors). Such applications are characterized by several execution units that may conflict when accessing a resource. Synchronization objects are shared among threads; their purpose is to block one or more threads until another thread satisfies a particular condition.
We will see that there are several synchronization classes and mechanisms, each targeting one or more specific needs. Mastering the content of this chapter is a prerequisite for building complex multithreaded applications that use synchronization. In what follows we will try to differentiate these mechanisms, pointing out even the subtlest differences between them.
Synchronizing a program correctly is one of the most delicate tasks in software development; the subject alone could fill several books. Before diving into the details, you should first make sure that using synchronization is unavoidable: in general, a few simple rules can keep us away from synchronization problems altogether. Among them, the thread/resource affinity rules are described later.
It should be recognized that the difficulty of synchronizing access to resources comes from the dilemma between fine-grained and coarse-grained locking. Coarse-grained synchronization simplifies the code but exposes you to contention bottlenecks. If the granularity is too fine, the code becomes so complex that maintaining it is a chore, and you will run into deadlocks and race conditions. These are the issues covered in the following sections.
So before we start talking about synchronization mechanisms, it's important to understand the concepts of race conditions and deadlocks.
5.4.1 Race condition
A race condition is a situation in which the execution units perform actions in an order that violates the intended logic, leading to unexpected results.
For example, a thread T modifies a resource R, releases its write access to R, and later reacquires read access to R, assuming that R is still in the state T left it in. But in the interval between releasing write access and regaining read access, another thread may have modified R's state.
Another classic example of a race condition is the producer/consumer model. The producer typically uses the same physical memory space to store the information being produced. We generally remember to protect this space against concurrent accesses by producer and consumer. What is easier to forget is that the producer must make sure the old information has been read by the consumer before producing the new information. Without this precaution, some produced information risks never being consumed.
Improperly managed race conditions can open security holes in a system: a sequence of events unforeseen by the developers can be provoked. In general, write access to a Boolean that stores the result of an authentication check must be protected. Otherwise, between the moment the authentication mechanism sets its value and the moment it is read to guard access to a resource, it may have been modified. Many known security vulnerabilities are due to poorly managed race conditions; one of them even affected the kernel of the UNIX operating system.
5.4.2 Deadlock
Deadlock describes situations where two or more execution units block, each waiting for the others to finish. For example:
- A thread T1 gains access to a resource R1.
- A thread T2 gains access to a resource R2.
- T1 requests access to R2 but must wait, because that access is held by T2.
- T2 requests access to R1 but must wait, because that access is held by T1.
T1 and T2 will remain waiting forever: we are in a deadlock situation! This problem is more insidious than most of the bugs you will meet. There are three main ways to address it:
- One thread is not allowed to access multiple resources at the same time.
- Define an order on the acquisition of resource access rights. For instance, a thread may request access to R2 only if it already holds access to R1, and never the other way around. Naturally, releases must happen in the reverse order.
- Define a maximum wait time (timeout) for every resource-access request, and handle request failures gracefully. Almost all of the synchronization mechanisms of .NET offer this functionality.
The first two techniques are more efficient but harder to implement. They impose strong constraints that become increasingly difficult to maintain as the application evolves. However, these techniques never fail.
Large projects typically use the third approach. Indeed, a large project generally uses many resources, so the probability of a conflict on any one resource is low and failures are rare. This can be described as an optimistic approach. In the same spirit, an optimistic database access model is described in section 19.5.
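To make the last two strategies concrete, here is a small sketch of ours (the names are not from the book) showing a fixed acquisition order and a bounded wait with Monitor.TryEnter():

```csharp
using System;
using System.Threading;

class Program
{
    static readonly object lockR1 = new object();
    static readonly object lockR2 = new object();

    // Strategy 2: every thread acquires R1 before R2, never the reverse,
    // so the circular wait of the example above cannot occur.
    static void OrderedAccess()
    {
        lock (lockR1)
        {
            lock (lockR2)
            {
                // ... work with both resources ...
            }
        }   // releases happen in the reverse order automatically
    }

    // Strategy 3: bound the wait and handle failure gracefully.
    static void TimedAccess()
    {
        if (Monitor.TryEnter(lockR1, 1000))   // wait at most one second
        {
            try { /* ... work with R1 ... */ }
            finally { Monitor.Exit(lockR1); }
        }
        else
        {
            Console.WriteLine("Could not obtain the resource; giving up.");
        }
    }

    static void Main()
    {
        OrderedAccess();
        TimedAccess();
    }
}
```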
5.5 Synchronizing with volatile fields and the Interlocked class
5.5.1 Using volatile fields
A volatile field is a field that may be accessed by several threads without any synchronization on these accesses. In such a situation, the CLR's internal mechanisms for managing code and memory take care of consistency, but there is no guarantee that a read of an ordinary field always returns the most recent value; a field declared as volatile does provide this guarantee. In C#, a field is declared volatile by prefixing its declaration with the volatile keyword.
Not every field can be volatile; a field must satisfy a condition to qualify. The type of a volatile field must be one of the following:
- A reference type (only accesses to the reference itself are synchronized, not accesses to the referenced object's members).
- A pointer (in an unsafe code block).
- sbyte, byte, short, ushort, int, uint, char, float, or bool (plus double, long, and ulong when running on a 64-bit processor).
- An enumeration type whose underlying type is byte, sbyte, short, ushort, int, or uint (plus long and ulong when running on a 64-bit processor).
As you may have noticed, only values and references whose size does not exceed that of the native word (4 or 8 bytes, depending on the underlying processor) can be volatile. Concurrent accesses to larger value types must therefore be synchronized, as we discuss below.
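As a small illustration of ours (not from the book), a volatile Boolean is the typical way for one thread to signal a worker thread to stop:

```csharp
using System;
using System.Threading;

class Worker
{
    // 'volatile' guarantees that every read of stopRequested sees the
    // most recent value written by any thread.
    private volatile bool stopRequested;

    public void Stop() { stopRequested = true; }

    public void Run()
    {
        while (!stopRequested)
        {
            // ... do some work ...
        }
        Console.WriteLine("Worker: stop request observed.");
    }
}

class Program
{
    static void Main()
    {
        Worker worker = new Worker();
        Thread thread = new Thread(worker.Run);
        thread.Start();
        Thread.Sleep(100);
        worker.Stop();
        thread.Join();
    }
}
```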
5.5.2 System.Threading.Interlocked Class
Experience shows that the resources needing protection in multithreaded situations are usually integers, and that the most common operations on these shared integers are incrementing/decrementing and adding. The .NET Framework provides a dedicated mechanism for these specific operations through the System.Threading.Interlocked class. This class provides the static methods Increment(), Decrement(), and Add(), which respectively increment, decrement, and add to an int or long variable passed by reference. The Interlocked class makes these operations atomic.
The following program shows how two threads can concurrently access an integer variable named counter. One thread increments it 5 times and the other decrements it 5 times.
Example 5-5
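The listing is missing from this extraction; a sketch consistent with the description (Interlocked operations plus a 10ms sleep after each modification) might be:

```csharp
using System;
using System.Threading;

class Program
{
    static long counter = 0;

    static void F1()
    {
        for (int i = 0; i < 5; i++)
        {
            Interlocked.Increment(ref counter);   // atomic increment
            Console.WriteLine("F1: counter = " + Interlocked.Read(ref counter));
            Thread.Sleep(10);
        }
    }

    static void F2()
    {
        for (int i = 0; i < 5; i++)
        {
            Interlocked.Decrement(ref counter);   // atomic decrement
            Console.WriteLine("F2: counter = " + Interlocked.Read(ref counter));
            Thread.Sleep(10);
        }
    }

    static void Main()
    {
        Thread t1 = new Thread(F1);
        Thread t2 = new Thread(F2);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine("Final counter = " + counter);
    }
}
```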
The program's output is non-deterministic, meaning that it differs from one run to the next:
Note that if we did not put the threads to sleep for 10 milliseconds after each modification, each would have enough time to complete its task within a single time slice; there would then be no interleaving of operations, let alone concurrent accesses.
5.5.3 Additional features of the Interlocked class
The Interlocked class also provides the static Exchange() method, which sets a variable to a new value as an atomic operation, and the static CompareExchange() method, which performs the exchange atomically only when a specific condition is satisfied.
5.6 Synchronizing with the System.Threading.Monitor class and the C# lock keyword
Being able to perform simple operations atomically is certainly important, but it is far from covering every situation that requires synchronization. The System.Threading.Monitor class allows almost any piece of code to be executed by at most one thread at a time. Such a region of code is called a critical section.
5.6.1 The Enter() and Exit() methods
The Monitor class provides the two static methods Enter(object) and Exit(object). Both take an object as a parameter, which gives a simple way to uniquely identify the resource to be accessed in a synchronized way. When a thread calls Enter(), it waits to gain exclusive access to the referenced object (it waits only if another thread already holds that access). Once the access has been acquired and used, the thread calls Exit() on the same object to release it.
A thread can call Enter() several times on the same object; it releases exclusive access only after calling Exit() the same number of times on that object.
A thread can also hold exclusive access to several objects at the same time, but this can create deadlock situations.
The Enter() and Exit() methods must never be called on an instance of a value type.
Exit() should be called from within a finally block, so that exclusive access rights are always released, regardless of what happens.
Suppose that in Example 5-5 one thread wanted to square counter while the other multiplied it by 2. The Interlocked class offers no such operations, so we would have to replace it with the Monitor class. The code of F1() and F2() becomes:
Example 5-6[1]
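The original listing is not reproduced here; the following is a sketch of what F1() and F2() might look like, assuming (as the surrounding text suggests) a Program class with a static counter field:

```csharp
using System;
using System.Threading;

class Program
{
    static long counter = 1;

    static void F1()
    {
        for (int i = 0; i < 3; i++)
        {
            Monitor.Enter(typeof(Program));
            try
            {
                counter = counter * counter; // square the counter
            }
            finally
            {
                Monitor.Exit(typeof(Program)); // always release, even on exception
            }
        }
    }

    static void F2()
    {
        for (int i = 0; i < 3; i++)
        {
            Monitor.Enter(typeof(Program));
            try
            {
                counter = counter * 2; // multiply the counter by 2
            }
            finally
            {
                Monitor.Exit(typeof(Program));
            }
        }
    }

    static void Main()
    {
        Thread t1 = new Thread(F1);
        Thread t2 = new Thread(F2);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine(counter); // value depends on the thread interleaving
    }
}
```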
It is tempting to synchronize on counter instead of typeof(Program), but counter is a static member of a value type, so it cannot be used. Note also that squaring and multiplying by 2 do not commute with each other, so the final value of counter is non-deterministic.
5.6.2 C#'s lock keyword
With the lock keyword, the C# language provides a more concise alternative to the Enter() and Exit() methods. Our program can be rewritten as follows:
Example 5-7
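A sketch of the rewritten methods, under the same assumption as before (a Program class with a static counter field):

```csharp
using System.Threading;

class Program
{
    static long counter = 1;

    // lock acquires exclusive access on entry to the block and
    // releases it on exit, replacing explicit Enter()/Exit() calls.
    static void F1()
    {
        lock (typeof(Program))
        {
            counter = counter * counter;
        }
    }

    static void F2()
    {
        lock (typeof(Program))
        {
            counter = counter * 2;
        }
    }
}
```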
As with the for and if keywords, the curly braces can be omitted if the block governed by lock contains only a single statement. We can rewrite the code again as:
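A sketch of the brace-less form (same assumed Program class with a static counter field):

```csharp
using System.Threading;

class Program
{
    static long counter = 1;

    static void F1()
    {
        // A single statement, so the braces around the lock body are omitted.
        lock (typeof(Program))
            counter = counter * counter;
    }
}
```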
Using the lock keyword directs the C# compiler to generate a corresponding try/finally block, so that exclusive access is released even when an exception is thrown. You can verify this with Reflector or the Ildasm.exe tool.
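Conceptually, the compiler expands the lock block into something like the following fragment (newer compilers actually use a Monitor.Enter overload that sets a lockTaken flag, but the shape is the same):

```csharp
// Expansion of: lock (typeof(Program)) { counter = counter * counter; }
object obj = typeof(Program);
Monitor.Enter(obj);
try
{
    counter = counter * counter;
}
finally
{
    Monitor.Exit(obj); // guaranteed to run, even if an exception is thrown
}
```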
5.6.3 The SyncRoot pattern
As in the previous example, in a static method we typically use the Monitor class with the Type object of the class (obtained with typeof); similarly, in a non-static method we often synchronize on the this keyword. In both cases, we synchronize on an object that is visible outside the class. If other parts of the code also use these objects for their own synchronization, problems will arise. To avoid this potential problem, we recommend using a private member named syncRoot of type object; whether the member is static or non-static depends on the need.
Example 5-8
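The listing is not reproduced; a sketch of the pattern, reusing the assumed Program class with a static counter field:

```csharp
using System.Threading;

class Program
{
    // Private synchronization object: invisible outside the class, so no
    // external code can lock on it and interfere with our synchronization.
    private static readonly object syncRoot = new object();
    private static long counter = 1;

    static void F1()
    {
        lock (syncRoot)
        {
            counter = counter * counter;
        }
    }

    static void F2()
    {
        lock (syncRoot)
        {
            counter = counter * 2;
        }
    }
}
```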
The System.Collections.ICollection interface provides an object SyncRoot { get; } property, and most of the collection classes (generic or not) implement this interface. You can therefore use this property to synchronize access to the elements of a collection. Strictly speaking, however, the SyncRoot pattern is not really applied here, because the object we synchronize on is not private.
Example 5-9
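A sketch of synchronizing on a collection's SyncRoot property (the collection and its contents are illustrative):

```csharp
using System;
using System.Collections;

class Program
{
    static void Main()
    {
        ArrayList list = new ArrayList();
        list.Add(1);
        list.Add(2);

        // SyncRoot is publicly visible, which is why this is not,
        // strictly speaking, the SyncRoot pattern.
        lock (list.SyncRoot)
        {
            foreach (object item in list)
                Console.WriteLine(item);
        }
    }
}
```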
5.6.4 Thread-safe classes
If no instance of a class can be accessed by more than one thread at a time, the class is called a thread-safe class. To create a thread-safe class, we only need to apply the SyncRoot pattern we have just seen to each of its methods. If a class is to be made thread-safe without burdening its own code, a good approach is to provide a thread-safe derived class, as follows.
Example 5-10
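The listing is not reproduced; a sketch of the technique with an illustrative Counter class (the class and member names are assumptions):

```csharp
class Counter
{
    protected int count;

    public virtual void Increment() { count++; }

    public virtual int Count { get { return count; } }
}

// Thread-safe derived class: each member is overridden to wrap the
// base implementation in a lock on a private syncRoot object, so the
// base class itself carries no synchronization code.
class ThreadSafeCounter : Counter
{
    private readonly object syncRoot = new object();

    public override void Increment()
    {
        lock (syncRoot) { base.Increment(); }
    }

    public override int Count
    {
        get { lock (syncRoot) { return base.Count; } }
    }
}
```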
Another approach is to use System.Runtime.Remoting.Contexts.SynchronizationAttribute, which we will discuss later in this chapter.
5.6.5 The Monitor.TryEnter() method
This method is similar to Enter() except that it is non-blocking: if exclusive access to the resource is already held by another thread, the method returns false immediately. We can also call TryEnter() with a timeout in milliseconds, making it block for at most that long. Because the result of the method is not deterministic, and because the access must be released in a finally clause once it has been granted, it is recommended that the called function exit immediately when TryEnter() fails:
Example 5-11[2]
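A sketch of this recommendation, again assuming a Program class with a static counter field:

```csharp
using System.Threading;

class Program
{
    static long counter = 1;

    static void F1()
    {
        // Wait at most 10 milliseconds for exclusive access.
        if (!Monitor.TryEnter(typeof(Program), 10))
            return; // access not granted: exit the function immediately

        try
        {
            counter = counter * counter;
        }
        finally
        {
            // TryEnter() returned true, so the access must be released.
            Monitor.Exit(typeof(Program));
        }
    }
}
```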
5.6.6 The Monitor class's Wait(), Pulse(), and PulseAll() methods
The Wait(), Pulse(), and PulseAll() methods must be used together, and a small scenario is needed to understand them correctly. The idea is this: a thread that holds exclusive access to an object decides to wait (by calling Wait()) until the object's state changes. To allow that, the thread must temporarily give up its exclusive access so that another thread can modify the object's state. The thread that modifies the state must then call Pulse() to notify the waiting thread that the modification is complete. Here is a small scenario that illustrates this:
- The thread T1, which holds exclusive access to the object obj, calls Wait(obj) to register itself in obj's passive wait list.
- Because of this call, T1 loses its exclusive access to obj. Another thread, T2, can therefore gain exclusive access to obj by calling Enter(obj).
- T2 eventually modifies the state of obj and calls Pulse(obj) to signal the modification. This call moves the first thread in obj's passive wait list (here, T1) to the head of obj's active wait list. Once exclusive access to obj is released, the first thread in obj's active wait list is guaranteed to acquire it; that thread then returns from the Wait(obj) method.
- In our scenario, T2 calls Exit(obj) to release its exclusive access to obj; T1 then regains access and returns from Wait(obj).
- PulseAll() moves all the threads in the passive wait list to the active wait list. Note that these threads will become unblocked in the order in which they called Wait().
If Wait(obj) is called by a thread that has called Enter(obj) multiple times, that thread must call Exit(obj) the same number of times to release access to obj. Even in this case, a single call to Pulse(obj) by another thread is enough to unblock the first thread.
The following program demonstrates this mechanism with two threads, ping and pong, that alternately acquire exclusive access to a ball object.
Example 5-12
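The original listing is not reproduced; a sketch of the ping/pong mechanism it describes (class and variable names are assumptions, except ball, ping, and pong, which the text mentions):

```csharp
using System;
using System.Threading;

class PingPong
{
    static readonly object ball = new object();

    static void Main()
    {
        Thread ping = new Thread(Player);
        ping.Name = "ping";
        Thread pong = new Thread(Player);
        pong.Name = "pong";
        ping.Start();
        pong.Start();
    }

    static void Player()
    {
        lock (ball)
        {
            for (int i = 0; i < 4; i++)
            {
                Console.WriteLine(Thread.CurrentThread.Name);
                Monitor.Pulse(ball); // let the other player take the ball
                Monitor.Wait(ball);  // wait until the ball comes back
            }
        }
        // The thread that acquired the ball second never returns from its
        // last Wait(): nobody pulses it again, so as written the process
        // does not terminate on its own.
    }
}
```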
The program outputs (in a non-deterministic order):
The pong thread does not terminate; it remains blocked in the Wait() method. This happens because the pong thread was the second to gain exclusive access to the ball object.
The difference between a process and a thread