Multithreading: C# thread synchronization with lock, Monitor, Mutex, synchronization events, and wait handles


From: http://www.cnblogs.com/freshman0216/archive/2008/07/29/1252253.html
This article starts from the class relationship diagram of Monitor, Mutex, ManualResetEvent, AutoResetEvent, and WaitHandle. The aim is to give a general picture of the common thread synchronization mechanisms; the finer application details of each one are not covered here. Let's first look at the relationships between these classes:

[Class relationship diagram from the original article: WaitHandle is the base class of Mutex and EventWaitHandle; ManualResetEvent and AutoResetEvent derive from EventWaitHandle; Monitor stands alone.]

1. Lock keyword

lock is a C# keyword. It marks a statement block as a critical section, ensuring that while one thread is inside that section of code, no other thread can enter it. If another thread attempts to enter the locked code, it waits until the object is released. The mechanism is to acquire the mutual-exclusion lock on a given object, execute the statements, and then release the lock.

MSDN gives some considerations for using lock. In general, avoid locking on a public type or on an instance that is beyond the control of your code. Common constructs such as lock (this), lock (typeof (MyType)), and lock ("myLock") violate this guideline:

1) lock (this) is a problem if the instance can be accessed publicly.

2) lock (typeof (MyType)) is a problem if MyType is publicly accessible. Because all instances of a class share a single Type object (the object returned by typeof), locking it effectively locks every instance of that type. Microsoft no longer recommends lock (typeof (MyType)): acquiring a lock on a Type object is slow, and other code in your class, or even other programs running in the same application domain, can access the same Type object. They may therefore lock the Type object before you do, completely blocking your execution and causing your own code to hang.

3) lock ("mylock") is a problem because any other code in the process that uses the same string shares the same lock. This is a consequence of the way the .NET Framework creates strings (string interning): if two string values are both "mylock", they point to the same string object in memory.

 

The best practice is to lock on a private object instance, or on a private static object variable when protecting data shared by all instances.
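For example, a minimal sketch of this pattern might look like the following (the class and field names are illustrative, not taken from the original article):

public class Counter
{
    // Instance-level lock object: protects state belonging to one instance.
    private readonly object _syncLock = new object();
    // Static lock object: protects data shared by all instances.
    private static readonly object _staticLock = new object();

    private int _count;
    private static int _totalCount;

    public void Increment()
    {
        lock (_syncLock)
        {
            _count++;
        }

        lock (_staticLock)
        {
            _totalCount++;
        }
    }
}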

Let's look at the essence of the lock keyword through IL DASM. Below is a simple piece of test code:

lock (lockObject)
{
    int i = 5;
}

 

Open the compiled assembly with IL DASM. The statement block above generates the following IL code:

IL_0045:  call       void [mscorlib]System.Threading.Monitor::Enter(object)
IL_004a:  nop
.try
{
    IL_004b:  nop
    IL_004c:  ldc.i4.5
    IL_004d:  stloc.1
    IL_004e:  nop
    IL_004f:  leave.s    IL_0059
}  // end .try
finally
{
    IL_0051:  ldloc.3
    IL_0052:  call       void [mscorlib]System.Threading.Monitor::Exit(object)
    IL_0057:  nop
    IL_0058:  endfinally
}  // end handler

From the code above we can clearly see that the lock keyword is really just an encapsulation of the Monitor class's Enter() and Exit() methods, and that a try...finally block guarantees that Monitor.Exit() runs after the lock block ends, releasing the mutual-exclusion lock.
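In other words, the lock block above is roughly equivalent to the following hand-written sketch (this is the pattern emitted by older C# compilers; from C# 4.0 the compiler uses the Monitor.Enter(object, ref bool lockTaken) overload instead, but the idea is the same):

Monitor.Enter(lockObject);
try
{
    int i = 5;
}
finally
{
    // The finally block guarantees the lock is released even if an exception is thrown.
    Monitor.Exit(lockObject);
}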

2. Monitor class

The Monitor class grants an object lock to a single thread, controlling access to that object. Object locks provide the ability to restrict access to a critical section. While one thread owns the object lock, no other thread can acquire it. You can also use Monitor to ensure that no other thread can access the section of application code being executed by the lock owner, unless the other thread executes that code using a different lock object.

From the analysis of the lock keyword we know that lock is an encapsulation of Monitor.Enter and Monitor.Exit, and is more concise to use. Therefore, the Enter()/Exit() combination of the Monitor class can usually be replaced by the lock keyword.

In addition, the Monitor class has several other commonly used methods:

TryEnter() can effectively deal with long waits. Using TryEnter in a highly concurrent, long-running environment can help prevent deadlocks or long blocking. For example, you can specify a timeout, bool gotLock = Monitor.TryEnter(myObject, 1000), so that after waiting at most 1000 milliseconds the current thread can decide, based on the returned bool value, whether to continue with the subsequent operations.
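A minimal sketch of TryEnter with a timeout (myObject stands for a private lock object defined elsewhere):

bool gotLock = Monitor.TryEnter(myObject, 1000);   // wait at most 1000 milliseconds
if (gotLock)
{
    try
    {
        // ... work with the protected resource ...
    }
    finally
    {
        Monitor.Exit(myObject);                    // release the lock we acquired
    }
}
else
{
    // Could not acquire the lock within one second; give up or retry later.
}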

Wait() releases the lock on the object so that other threads can lock and access it. The calling thread then waits while other threads use the object; a pulse signal is used to notify waiting threads that the object's state has changed.

Pulse() and PulseAll() send a signal to one or more waiting threads. The signal notifies a waiting thread that the state of the locked object has changed and that the lock owner is ready to release the lock. The waiting thread is placed in the object's ready queue so that it can eventually acquire the object lock. Once the thread holds the lock, it can check the new state of the object to see whether the desired state has been reached.

Note: Pulse, PulseAll, and Wait must be called from within a synchronized block of code.

Let's assume a scenario: Mom bakes cakes and the child is eager to eat them. The child wants to eat every piece of cake as soon as Mom has prepared it, and after Mom finishes a piece she tells the child the cake is ready. The following example uses the Wait and Pulse methods of the Monitor class to simulate the child eating cakes.
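The original sample code is not reproduced here; the following is a minimal sketch of the same idea under hypothetical names (Mom as the producer, Child as the consumer):

using System;
using System.Threading;

class CakeDemo
{
    private static readonly object _locker = new object();
    private static int _cakes = 0;
    private const int Total = 3;

    static void Mom()
    {
        for (int i = 1; i <= Total; i++)
        {
            lock (_locker)
            {
                _cakes++;
                Console.WriteLine("Mom: cake {0} is ready.", i);
                Monitor.Pulse(_locker);        // notify the waiting child
            }
            Thread.Sleep(500);                 // time to prepare the next cake
        }
    }

    static void Child()
    {
        for (int i = 1; i <= Total; i++)
        {
            lock (_locker)
            {
                while (_cakes == 0)
                    Monitor.Wait(_locker);     // release the lock and wait for a pulse
                _cakes--;
                Console.WriteLine("Child: ate cake {0}.", i);
            }
        }
    }

    static void Main()
    {
        var child = new Thread(Child);
        var mom = new Thread(Mom);
        child.Start();
        mom.Start();
        child.Join();
        mom.Join();
    }
}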

 

 

The purpose of this example is to understand how Wait and Pulse achieve thread synchronization. Also pay attention to the difference between the Wait(object) and Wait(object, int) overloads. The key to understanding this difference is knowing that the synchronization object holds several references: a reference to the thread that currently owns the lock, a reference to the ready queue (the threads ready to acquire the lock), and a reference to the waiting queue (the threads waiting to be notified of a change in the object's state).

This part continues by introducing the WaitHandle class and its subclasses Mutex, ManualResetEvent, and AutoResetEvent. The thread synchronization options in .NET can be dazzling; how should we make sense of them? In fact, in the .NET environment thread synchronization boils down to two kinds of operations: one is mutual exclusion/locking, whose purpose is to guarantee the atomicity of the code in a critical section; the other is signaling, whose purpose is to guarantee that multiple threads execute in a certain order, for example that a producer thread runs before a consumer thread. The thread synchronization classes in .NET are simply different encapsulations of these two mechanisms; in the end the goal is always mutual exclusion/locking or signaling, but each class fits some situations better than others. The following describes WaitHandle and its subclasses according to the class hierarchy.

1. WaitHandle

WaitHandle is the ancestor of Mutex, Semaphore, EventWaitHandle, AutoResetEvent, and ManualResetEvent. It encapsulates the Win32 synchronization kernel objects, i.e. it is the managed version of those kernel objects.

A thread can block on a single wait handle by calling the instance method WaitOne. In addition, the WaitHandle class has static overloads that wait until all of the specified wait handles have been signaled (WaitAll), or until any one of the specified wait handles has been signaled (WaitAny). These methods also provide the option of a wait timeout interval, and of exiting the synchronization context before the wait so that other threads can use it. WaitHandle is an abstract class in C# and cannot be instantiated directly.
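A minimal sketch of the static helpers (the event names are illustrative):

using System;
using System.Threading;

class WaitHandleDemo
{
    static void Main()
    {
        var first = new ManualResetEvent(false);
        var second = new ManualResetEvent(false);

        ThreadPool.QueueUserWorkItem(_ => { Thread.Sleep(300); first.Set(); });
        ThreadPool.QueueUserWorkItem(_ => { Thread.Sleep(600); second.Set(); });

        // Block until any one handle is signaled; returns the index of that handle.
        int index = WaitHandle.WaitAny(new WaitHandle[] { first, second }, 1000);
        Console.WriteLine("WaitAny released by handle {0}", index);

        // Block until all handles are signaled, or until the 1000 ms timeout expires.
        bool all = WaitHandle.WaitAll(new WaitHandle[] { first, second }, 1000);
        Console.WriteLine("WaitAll completed: {0}", all);
    }
}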

2. EventWaitHandle, ManualResetEvent, and AutoResetEvent (synchronization events)

Let's take a look at how ManualResetEvent and AutoResetEvent are implemented in the .NET Framework:
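The original article showed them with a decompiler; a simplified sketch of their shape (not the exact BCL source) is:

public sealed class ManualResetEvent : EventWaitHandle
{
    public ManualResetEvent(bool initialState)
        : base(initialState, EventResetMode.ManualReset) { }
}

public sealed class AutoResetEvent : EventWaitHandle
{
    public AutoResetEvent(bool initialState)
        : base(initialState, EventResetMode.AutoReset) { }
}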

 

So both ManualResetEvent and AutoResetEvent inherit from EventWaitHandle; the only difference between them is the EventResetMode argument they pass to the EventWaitHandle constructor. Since EventWaitHandle controls the synchronization behavior of threads differently depending on that EventResetMode value, the two subclasses make the intent clear. For ease of description we will not discuss the two modes on the parent class, but introduce the subclasses directly.

ManualResetEvent and AutoResetEvent have the following in common:
1) The Set method sets the event state to signaled, allowing one or more waiting threads to continue; the Reset method sets the event state to non-signaled, causing threads to block; WaitOne blocks the current thread until the wait handle receives a signal.
2) The initial state is determined by the constructor argument. If it is true, the event starts signaled, so threads are not blocked; if it is false, threads are blocked.
3) If a thread calls WaitOne while the event state is signaled, the thread receives the signal and continues executing.
Differences between ManualResetEvent and AutoResetEvent:
1) AutoResetEvent.WaitOne() lets only one waiting thread through at a time. When a thread receives the signal, the AutoResetEvent automatically resets the state to non-signaled, so the other threads calling WaitOne keep waiting; in other words, an AutoResetEvent wakes up only one thread per Set.
2) A ManualResetEvent can wake up multiple threads: after a thread calls ManualResetEvent.Set(), all other threads that call WaitOne receive the signal and continue executing, because a ManualResetEvent does not automatically reset the state to non-signaled.
3) In other words, unless ManualResetEvent.Reset() is called explicitly, the ManualResetEvent remains signaled, so it can wake up multiple threads to continue execution at the same time.

Example scenario: Zhang San and Li Si, two good friends, go to a restaurant for dinner. They order a kung pao chicken, which takes the kitchen some time to prepare. Not wanting to sit there doing nothing, Zhang San and Li Si each play games on their phones, figuring the waiter will surely call them when the dish is ready. After the waiter serves the dish, Zhang San and Li Si start to enjoy the food, and once they have finished eating they ask the waiter for the bill. We can abstract three threads from this scenario: the Zhang San thread, the Li Si thread, and the waiter thread. They need to be synchronized as follows: the waiter serves the dish -> Zhang San and Li Si start enjoying the kung pao chicken -> after the meal they ask the waiter for the bill. Which should we use for this synchronization, ManualResetEvent or AutoResetEvent? From the analysis above, we should use ManualResetEvent. The program code follows:
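The original code is not reproduced here; the following is a minimal sketch of the scenario with hypothetical names, written so that its output is broadly similar to the result shown below:

using System;
using System.Threading;

class Restaurant
{
    // Signaled when the dish is served; ManualReset, so BOTH diners are released.
    static readonly ManualResetEvent dishServed = new ManualResetEvent(false);
    // To observe the AutoResetEvent behavior described later, replace the line above with:
    // static readonly AutoResetEvent dishServed = new AutoResetEvent(false);

    static void Diner(string name)
    {
        Console.WriteLine("{0}: waiting for the dish, playing a phone game.", name);
        dishServed.WaitOne();                    // block until the waiter signals
        Console.WriteLine("{0}: starts eating kung pao chicken.", name);
        Thread.Sleep(1000);
        Console.WriteLine("{0}: finished eating.", name);
    }

    static void Main()
    {
        var zhangSan = new Thread(() => Diner("Zhang San"));
        var liSi = new Thread(() => Diner("Li Si"));
        zhangSan.Start();
        liSi.Start();

        Console.WriteLine("Waiter: the cook is cooking, please wait...");
        Thread.Sleep(2000);                      // cooking takes a while
        Console.WriteLine("Waiter: kung pao chicken is served.");
        dishServed.Set();                        // wake up both diners at once

        zhangSan.Join();
        liSi.Join();
        Console.WriteLine("Waiter: here is the bill.");
    }
}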

 


After compiling and running, the console output is as follows:
Waiter: the cook is cooking. Please wait...
Zhang San: waiting for the dish to be served.
Li Si: waiting for the dish, so bored.
Zhang San: waiting for the dish to be served.
Li Si: waiting for the dish, so bored.
Waiter: kung pao chicken is served.
Zhang San: starts eating kung pao chicken.
Li Si: starts eating kung pao chicken.
Zhang San: finished the kung pao chicken.
Li Si: finished the kung pao chicken.
Waiter: here is the bill.

What would happen if AutoResetEvent were used for the synchronization instead? I'm afraid Zhang San and Li Si would come to blows: one of them would be enjoying the delicious kung pao chicken while the other would still be playing games when the bill arrived. If you are interested, try replacing the ManualResetEvent with an AutoResetEvent (in the sketch above, comment out the ManualResetEvent declaration and uncomment the AutoResetEvent line) and observe the execution order that results.

3. Mutex

Mutex and EventWaitHandle share the same parent class, WaitHandle, and their synchronization usage is similar. The distinguishing feature of Mutex is exclusive access to a resource across application domain and process boundaries, i.e. it can be used to synchronize threads in different processes; of course, this capability comes at the cost of more system resources.
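A minimal sketch of a named mutex that can synchronize threads across processes (the mutex name is illustrative):

using System;
using System.Threading;

class MutexDemo
{
    static void Main()
    {
        // A named mutex is visible to other processes in the same session.
        using (var mutex = new Mutex(false, "MyAppSingleResource"))
        {
            Console.WriteLine("Waiting for the mutex...");
            if (mutex.WaitOne(TimeSpan.FromSeconds(5)))
            {
                try
                {
                    Console.WriteLine("Got the mutex; using the shared resource.");
                    Thread.Sleep(2000);
                }
                finally
                {
                    mutex.ReleaseMutex();        // always release the mutex we own
                }
            }
            else
            {
                Console.WriteLine("Timed out waiting for the mutex.");
            }
        }
    }
}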

The previous two parts briefly introduced the basic usage of the lock statement, Monitor, the synchronization events (EventWaitHandle), and Mutex. Building on that, this part compares how they are used and offers some suggestions on when locking is needed and when it is not. Finally, it introduces several thread-safe classes in the FCL and the locking approach of the collection classes, to round out this thread synchronization series.

1. Differences between several Synchronization Methods

lock and Monitor are implemented with a .NET-specific construct. The Monitor object is fully managed and portable, and may be more efficient in terms of operating-system resource requirements and synchronization speed; however, it cannot be used for cross-process synchronization. lock (an encapsulation of Monitor.Enter and Monitor.Exit) is mainly used to lock a critical section so that the code inside it can only be executed by the thread that holds the lock. Monitor.Wait and Monitor.Pulse are used for thread signaling, similar to semaphore-style operations. Personally, I find them more complicated to use and easier to get wrong, possibly causing deadlocks.

Mutex and the event object EventWaitHandle are kernel objects. When a kernel object is used for thread synchronization, the thread must switch between user mode and kernel mode, so the efficiency is generally lower; however, kernel objects such as mutexes and events can synchronize threads across multiple processes.

A Mutex is like a baton: the thread that obtains the baton gets to run, and of course the baton belongs to only one thread at a time (thread affinity). If that thread does not release the baton (Mutex.ReleaseMutex), there is nothing to be done; all the other threads that need the baton to run can only wait and watch.

The EventWaitHandle class allows threads to communicate with each other by signaling. Typically, one or more threads block on an EventWaitHandle until an unblocked thread calls the Set method, releasing one or more of the blocked threads.
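A minimal sketch contrasting the two reset modes (all names are illustrative): with EventResetMode.AutoReset each call to Set releases exactly one blocked thread, whereas EventResetMode.ManualReset would release all of them at once.

using System;
using System.Threading;

class EventWaitHandleDemo
{
    static readonly EventWaitHandle gate =
        new EventWaitHandle(false, EventResetMode.AutoReset);

    static void Worker(object id)
    {
        gate.WaitOne();                          // block until signaled
        Console.WriteLine("Worker {0} released at {1:HH:mm:ss.fff}", id, DateTime.Now);
    }

    static void Main()
    {
        for (int i = 1; i <= 3; i++)
            new Thread(Worker).Start(i);

        Thread.Sleep(500);
        gate.Set();                              // AutoReset: exactly one worker wakes up
        Thread.Sleep(500);
        gate.Set();                              // the next Set releases another worker
        Thread.Sleep(500);
        gate.Set();                              // and the last one
    }
}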

2. When to lock

First, we must understand that locking solves race conditions, i.e. multiple threads accessing a resource at the same time and producing unexpected results. The simplest case: with one counter and two threads incrementing it at the same time, one of the increments may be lost. But frequent locking has a performance cost, and there is also the dreaded deadlock. So under what circumstances do we need locks, and under what circumstances can we do without them?

1) Only shared resources need to be locked
Only shared resources that can be accessed by multiple threads need locking, such as static variables and some cached values. Variables local to a thread do not need to be locked.

2) Use lock more and Mutex less
If you must lock, try not to use the kernel-level locking mechanisms, such as .NET's Mutex, Semaphore, AutoResetEvent, and ManualResetEvent. Using them involves switching between user mode and kernel mode, which costs a lot of performance; their advantage is that they can synchronize threads across processes, so be clear about their differences and where each applies.

3) Understand how your program runs
In fact, most of the logic in web development unfolds on a single thread: each request is handled on its own thread and most variables belong to that thread, so there is no need to consider locking. Of course, for ASP.NET we do need to consider locking the data in the Application object.

4) Hand the locking over to the database
Besides storing data, a database has another important purpose: synchronization. The database itself uses complex mechanisms to guarantee data reliability and consistency, which saves us a great deal of effort. With synchronization ensured at the data source, most of our effort can be concentrated on synchronizing access to caches and other resources. Generally, locking only needs to be considered when multiple threads may modify the same database record.

5) The business logic's requirements for transactions and thread safety
This is the most fundamental point. Developing a fully thread-safe program is time-consuming and laborious. In domains such as e-commerce and financial systems, much of the logic must be strictly thread-safe, so we have to sacrifice some performance and a lot of development time to achieve it. In ordinary applications, even though the program may be exposed to race conditions in many places, we can often still do without locking; for example, if losing a count or two in some counter is harmless, we can leave it alone.

3. Interlocked class

The Interlocked class provides synchronized access to variables shared by multiple threads. If the variable is in shared memory, threads of different processes can also use this mechanism. An interlocked operation is atomic, i.e. the whole operation cannot be interrupted by another interlocked operation on the same variable. This matters on preemptive multithreaded operating systems, where a thread can be suspended after it has loaded a value from a memory address but before it has had the chance to change and store it.

Let's look at an example of Interlocked.Increment(). This method atomically increments the specified variable and stores the result. The example is as follows:


using System;
using System.Threading;

class InterlockedTest
{
    public static long i = 0;

    public static void Add()
    {
        for (int n = 0; n < 100000000; n++)
        {
            Interlocked.Increment(ref InterlockedTest.i);
            // InterlockedTest.i = InterlockedTest.i + 1;
        }
    }

    public static void Main(string[] args)
    {
        Thread t1 = new Thread(new ThreadStart(InterlockedTest.Add));
        Thread t2 = new Thread(new ThreadStart(InterlockedTest.Add));

        t1.Start();
        t2.Start();

        t1.Join();
        t2.Join();

        Console.WriteLine(InterlockedTest.i.ToString());
        Console.Read();
    }
}

 

The output is 200000000. If, inside InterlockedTest.Add(), the Interlocked.Increment() call is replaced by the commented-out statement, the result becomes unpredictable and differs on every run. Interlocked.Increment() guarantees that the increment is atomic; it is equivalent to implicitly taking a lock around the addition. At the same time, we also notice that Interlocked.Increment() takes considerably longer than incrementing directly with the + operator, so the cost of locking a resource is obvious.

In addition, the Interlocked class has several other commonly used methods (Exchange, CompareExchange, Add, Decrement, and so on). For detailed usage, refer to the documentation on MSDN.
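For instance, a minimal sketch of Interlocked.CompareExchange used as a run-once flag (the names are illustrative):

using System;
using System.Threading;

class RunOnce
{
    private static int _started = 0;             // 0 = not started, 1 = started

    public static void Start()
    {
        // Atomically set _started to 1 only if it is still 0.
        // CompareExchange returns the ORIGINAL value, so exactly one caller sees 0.
        if (Interlocked.CompareExchange(ref _started, 1, 0) == 0)
        {
            Console.WriteLine("Initialization runs exactly once.");
        }
        else
        {
            Console.WriteLine("Already started; skipping.");
        }
    }
}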

4. Synchronization of collection classes

.NET provides a lock object, SyncRoot, in some collection classes such as Queue, ArrayList, Hashtable, and Stack. Viewing the source of the SyncRoot property with Reflector (Stack.SyncRoot is slightly different) gives the following:


public virtual object SyncRoot
{
    get
    {
        if (this._syncRoot == null)
        {
            // If _syncRoot is still null, assign a new object to it.
            // Interlocked.CompareExchange keeps this thread-safe when
            // several threads request SyncRoot at the same time.
            Interlocked.CompareExchange(ref this._syncRoot, new object(), null);
        }
        return this._syncRoot;
    }
}

It is important to note that enumerating a collection from start to end is not intrinsically a thread-safe operation. Even when a collection is synchronized, other threads can still modify it, which causes the enumerator to throw an exception. To guarantee thread safety during enumeration, you can either lock the collection for the duration of the enumeration or catch the exceptions caused by changes made by other threads. For example:


Queue q = new Queue();
lock (q.SyncRoot)
{
    foreach (object item in q)
    {
        // Do something
    }
}

Another point to note is that the collection classes provide a synchronization-related method, Synchronized, which returns a thread-safe wrapper around the collection; most of the wrapper's methods use the lock keyword internally. For example, Hashtable.Synchronized returns a new thread-safe Hashtable instance. The code is as follows:


// In a multithreaded environment, we only need to instantiate the Hashtable in the following way.
Hashtable ht = Hashtable.Synchronized(new Hashtable());

// The following code is the .NET Framework class library's implementation, shown to deepen the understanding of Synchronized.
[HostProtection(SecurityAction.LinkDemand, Synchronization = true)]
public static Hashtable Synchronized(Hashtable table)
{
    if (table == null)
    {
        throw new ArgumentNullException("table");
    }
    return new SyncHashtable(table);
}


// Several common SyncHashtable methods. We can see that the lock keyword is used in the internal implementation to ensure thread safety.
public override void Add(object key, object value)
{
    lock (this._table.SyncRoot)
    {
        this._table.Add(key, value);
    }
}

public override void Clear()
{
    lock (this._table.SyncRoot)
    {
        this._table.Clear();
    }
}

public override void Remove(object key)
{
    lock (this._table.SyncRoot)
    {
        this._table.Remove(key);
    }
}

Thread synchronization is a very complex topic. Here I have only organized the relevant knowledge around a company project, as a summary of my own work. What are the application scenarios of these synchronization mechanisms, and what are the differences between them? Further study and practice are still needed.

Reference from: http://www.haogongju.net/art/630286
