C# Multithreading Practices - Locks and Thread Safety


Locks enforce mutually exclusive access, ensuring that only one thread can enter a particular section of code at a time. Consider the following class:

    class ThreadUnsafe
    {
        static int val1, val2;

        static void Go()
        {
            if (val2 != 0) Console.WriteLine (val1 / val2);
            val2 = 0;
        }
    }

This is not thread-safe: if the Go method were called by two threads at the same time, one thread could get a division-by-zero error, because val2 could be set to zero by one thread just as the other thread was between the if test and the Console.WriteLine statement.

Here is how a lock fixes the problem in C#:

    class ThreadSafe
    {
        static object locker = new object();
        static int val1, val2;

        static void Go()
        {
            lock (locker)
            {
                if (val2 != 0) Console.WriteLine (val1 / val2);
                val2 = 0;
            }
        }
    }

Only one thread can lock the synchronization object (here, locker) at a time; any other competing threads are blocked until the lock is released. If more than one thread contends for the lock, they queue up in a "ready queue" and are granted the lock on a first-come, first-served basis. Because one thread's access cannot overlap with another's, exclusive locking is sometimes said to enforce serialized access to whatever the lock protects. In this example, it protects the logic of the Go method as well as the val1 and val2 fields.

A thread blocked while waiting for a contended lock has a ThreadState of WaitSleepJoin. Later we discuss how a blocked thread can be forcibly released by another thread calling its Interrupt or Abort method; this technique can be used to end a worker thread.

The lock statement in C# is in fact shorthand for calling Monitor.Enter and Monitor.Exit with a try/finally block in between. Here is what actually happens inside the Go method of the previous example:

    Monitor.Enter (locker);
    try
    {
        if (val2 != 0) Console.WriteLine (val1 / val2);
        val2 = 0;
    }
    finally
    {
        Monitor.Exit (locker);
    }

Calling Monitor.Exit on an object before Monitor.Enter has been called on it throws an exception. Monitor also provides a TryEnter method that accepts a timeout, either in milliseconds or as a TimeSpan; it returns true if the lock was obtained and false if it was not. TryEnter can also be called with no timeout argument, to "test" the lock and return immediately if it cannot be obtained.
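As a minimal sketch of this (the method name, field name, and 500 ms timeout are illustrative, not from the original), Monitor.TryEnter might be used like so:

    static readonly object locker = new object();

    static void GoWithTimeout()
    {
        // Try to acquire the lock, but give up after 500 ms.
        if (Monitor.TryEnter (locker, TimeSpan.FromMilliseconds (500)))
        {
            try
            {
                // ... work with the protected state here ...
            }
            finally
            {
                Monitor.Exit (locker);   // release only what we actually acquired
            }
        }
        else
        {
            Console.WriteLine ("Could not get the lock within 500 ms.");
        }
    }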

Selecting a synchronization object

Any object visible to all of the threads involved can be used as a synchronization object, subject to one hard rule: it must be a reference type. It is also recommended that the synchronization object be privately scoped to the class (for example, a private instance field), to prevent outside code from inadvertently locking on the same object. Subject to these rules, the synchronization object can double as the object it protects. For example, the list field below:

    class ThreadSafe
    {
        List<string> list = new List<string>();

        void Test()
        {
            lock (list)
            {
                list.Add ("Item 1");
                ...

A dedicated lock field (such as locker in the earlier example) is the most common approach, because it allows precise control over the scope and granularity of the lock. Using the object itself, or its type, as the synchronization object, i.e.:

    lock (this) { ... }

Or:

    lock (typeof (Widget)) { ... }    // For protecting access to statics

is discouraged, because these objects are potentially accessible in public scope, so unrelated code could lock on them as well.

A lock does not restrict access to the synchronization object itself in any way. In other words, x.ToString() will not block just because another thread has called lock(x); both threads must call lock(x) for blocking to occur.
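A minimal sketch of this point (the thread bodies and timings are illustrative):

    static readonly object x = new object();

    static void Main()
    {
        // One thread holds the lock on x for a few seconds.
        new Thread (() => { lock (x) Thread.Sleep (3000); }).Start();

        Thread.Sleep (100);                  // give that thread time to acquire the lock

        Console.WriteLine (x.ToString());    // does NOT block: we never asked for the lock
        lock (x) Console.WriteLine ("This line waited until the lock was free.");
    }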

Nested locks

A thread can repeatedly lock the same object, either through nested lock statements or through repeated calls to Monitor.Enter. The object is unlocked only when the corresponding number of Monitor.Exit calls have been made, or when the outermost lock statement has exited. This allows the simplest semantics when one method that takes a lock calls another method that takes the same lock:

    static object x = new object();

    static void Main()
    {
        lock (x)
        {
            Console.WriteLine ("I have the lock");
            Nest();
            Console.WriteLine ("I still have the lock");
        }
        // Here the lock is released.
    }

    static void Nest()
    {
        lock (x)
        {
            ...
        }
        // Released the lock? Not completely - the outer lock is still held!
    }

A thread can block only on the first, outermost lock.
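The same reentrant counting applies when calling Monitor.Enter directly. Here is a minimal sketch (the method names are illustrative), assuming each Enter is paired with one Exit:

    static readonly object x = new object();

    static void Outer()
    {
        Monitor.Enter (x);                  // lock count on x becomes 1
        try
        {
            Inner();
        }
        finally { Monitor.Exit (x); }       // count back to 0: other threads may now enter
    }

    static void Inner()
    {
        Monitor.Enter (x);                  // same thread re-enters: count becomes 2
        try
        {
            // ... protected work ...
        }
        finally { Monitor.Exit (x); }       // count drops to 1: still held by this thread
    }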

When to lock

As a basic rule, you should lock around access to any field that is read and written by multiple threads. Even the most mundane operation, assignment to a single field, must be considered for synchronization. In the following class, neither Increment nor Assign is thread-safe:

    class ThreadUnsafe
    {
        static int x;
        static void Increment() { x++; }
        static void Assign()    { x = 123; }
    }

Here are thread-safe versions of Increment and Assign:

    class ThreadSafe
    {
        static object locker = new object();
        static int x;

        static void Increment() { lock (locker) x++; }
        static void Assign()    { lock (locker) x = 123; }
    }

As an alternative to locking, in simple cases like these you can use non-blocking synchronization, which is discussed later (as is the question of why even statements like these need synchronizing at all).
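As a hint of what that looks like, here is a minimal sketch using the Interlocked class, one form of non-blocking synchronization (this example is an illustration, not taken from the original text):

    using System.Threading;

    class NonBlocking
    {
        static int x;

        // Atomically increments x without taking a lock.
        static void Increment() { Interlocked.Increment (ref x); }

        // Atomically overwrites x with 123.
        static void Assign()    { Interlocked.Exchange (ref x, 123); }
    }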

Lock and atomic operations

If a group of variables is always read and written within the same lock, you can say that they are read and written atomically. Suppose fields x and y are only ever read and assigned within a lock on locker:

    lock (locker) { if (x != 0) y /= x; }

Then x and y are accessed atomically, because the code block cannot be divided or preempted by another thread in a way that changes x or y and invalidates the result. You will never get a division-by-zero error, provided x and y are always accessed within this same exclusive lock.

Performance considerations

Locking itself is very fast: acquiring an uncontended lock typically takes only tens of nanoseconds (billionths of a second). If blocking does occur, the resulting task switch costs on the order of microseconds (millionths of a second), although it may be milliseconds (thousandths of a second) before the blocked thread is actually rescheduled. Conversely, careless use of locks can cost far more than it saves: locking is counterproductive if too much code is placed inside the lock statement, causing other threads to be blocked unnecessarily, and it can produce deadlocks and lock races. A deadlock occurs when two threads each wait for a lock held by the other, so neither can proceed. A lock race occurs when either of two threads may acquire a lock first, and the program breaks if the "wrong" thread wins.

Deadlocks become much more likely as the number of synchronization objects grows. A good rule is to start with fewer, coarser locks, and to increase lock granularity only when excessive blocking is genuinely suspected.
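To make the deadlock scenario concrete, here is a minimal sketch (the field names and timings are illustrative) of two threads acquiring two locks in opposite order. Each thread holds its first lock while waiting for the other's, so neither can proceed; agreeing on a single acquisition order avoids the problem.

    using System;
    using System.Threading;

    class DeadlockSketch
    {
        static readonly object lockA = new object();
        static readonly object lockB = new object();

        static void Main()
        {
            // Thread 1: takes lockA, then lockB.
            new Thread (() =>
            {
                lock (lockA)
                {
                    Thread.Sleep (50);      // widen the window in which the deadlock occurs
                    lock (lockB) Console.WriteLine ("Thread 1 got both locks");
                }
            }).Start();

            // Main thread: takes lockB, then lockA - the opposite order.
            lock (lockB)
            {
                Thread.Sleep (50);
                lock (lockA) Console.WriteLine ("Main thread got both locks");
            }
        }
    }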

Thread Safety

Thread-safe code is code that behaves deterministically in the face of any multithreading scenario. Thread safety is achieved primarily with locking, and by reducing the opportunities for threads to interact in the first place.

A method that is thread-safe in any scenario is called reentrant. General-purpose types are rarely thread-safe in their entirety, for the following reasons:

      • The development burden of full thread safety can be significant, particularly if a type has many fields (each field is a potential point of interaction in an arbitrary multithreaded context).
      • Thread safety carries a performance cost (paid, in part, whether or not the type is actually used by multiple threads).
      • A thread-safe type does not necessarily make the program that uses it thread-safe, and the work involved in the latter often makes the former redundant.

Thread safety is therefore usually implemented only where it is actually needed, to handle a specific multithreading scenario.

There are, however, a few ways to "cheat" and have large, complex classes run safely in a multithreaded environment. One is to sacrifice granularity by wrapping large sections of code, even access to an entire object, in a single exclusive lock, enforcing serialized access at a high level. This tactic is also essential if you want to use thread-unsafe objects from thread-safe code: the trick is to use the same exclusive lock to protect access to every property, method, and field of the thread-unsafe object. Primitive types aside, few .NET Framework types are thread-safe, once instantiated, for anything more than concurrent read-only access; the onus is on the developer to superimpose thread safety, typically with exclusive locks.

Another way to cheat is to minimize thread interaction by minimizing shared data. This is an excellent approach, used implicitly in "stateless" middle-tier applications and web servers. Since multiple client requests can arrive simultaneously, each on its own thread (via ASP.NET, a web server, or a remoting architecture), the methods they call must be thread-safe. A stateless design, popular for its scalability, intrinsically limits the possibility of interaction because classes do not persist data between requests. Thread interaction is then limited to whatever static fields one chooses to create, for purposes such as caching commonly used data in memory and providing infrastructure services such as authentication and auditing.
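As a minimal sketch of the first "cheat" (the class and member names are made up for illustration), every operation on a thread-unsafe object is funnelled through one exclusive lock:

    using System.Collections.Generic;

    class CoarseLockedStore
    {
        // The dictionary itself is not thread-safe; the single locker below
        // serializes every access to it at a high level.
        readonly Dictionary<string, string> data = new Dictionary<string, string>();
        readonly object locker = new object();

        public void Set (string key, string value)
        {
            lock (locker) data[key] = value;
        }

        public bool TryGet (string key, out string value)
        {
            lock (locker) return data.TryGetValue (key, out value);
        }

        public int Count
        {
            get { lock (locker) return data.Count; }
        }
    }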

Thread safety and .NET Framework types

Locking can be used to turn thread-unsafe code into thread-safe code. A good example is the .NET Framework itself: almost no non-primitive type is thread-safe when instantiated, yet any such type can be used in multithreaded code if all access to a given object is protected by a lock. To illustrate, two threads below simultaneously add items to the same List and then enumerate it:

    class ThreadSafe
    {
        static List<string> list = new List<string>();

        static void Main()
        {
            new Thread (AddItems).Start();
            new Thread (AddItems).Start();
        }

        static void AddItems()
        {
            for (int i = 0; i < 100; i++)
                lock (list) list.Add ("Item " + list.Count);

            string[] items;
            lock (list) items = list.ToArray();
            foreach (string s in items) Console.WriteLine (s);
        }
    }

In this case we lock on the list object itself, and in a simple scenario like this that works fine. If we had two interrelated lists, however, we would need to lock on a common object, perhaps a separate field, if neither list stood out as the obvious candidate. Enumerating .NET collections is also not thread-safe: an exception is thrown if the list is changed by another thread during enumeration. Rather than locking for the duration of the enumeration, in this example we first copy the items into an array. This avoids holding the lock for too long when the enumeration itself is potentially time-consuming.

Here is an interesting supposition: imagine that the List class really were thread-safe. How much would that solve? Very little! To illustrate, suppose we want to add an item to our hypothetical thread-safe list, as follows:

    if (!myList.Contains (newItem)) myList.Add (newItem);

Whether or not the list is thread-safe, this statement certainly is not! (This is why general-purpose collection classes that are safe for this kind of compound use essentially do not exist. .NET 4.0 does add a set of thread-safe concurrent collection classes, but they are specially designed and their access methods are constrained accordingly.) For the statement to be thread-safe, the whole if statement would have to be wrapped in a lock, to prevent preemption between the containment test and the Add. That same lock would then have to be used everywhere the list is modified; for instance, the following statement would need to be wrapped in the identical lock:

    myList.Clear();

to ensure that it does not preempt the former statement. In other words, we would have to lock exactly as we do with the thread-unsafe collection classes, which would make the list's built-in thread safety largely a waste of time!
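To spell that out, here is a minimal sketch (myList, newItem, and the lock field are illustrative names) showing the same lock guarding both the test-and-add and the Clear:

    static readonly object listLock = new object();
    static List<string> myList = new List<string>();

    static void AddIfMissing (string newItem)
    {
        lock (listLock)                          // guards the whole test-and-add as one unit
        {
            if (!myList.Contains (newItem)) myList.Add (newItem);
        }
    }

    static void Reset()
    {
        lock (listLock) myList.Clear();          // same lock, so it cannot preempt the test-and-add
    }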

When writing custom components, then, you might well argue that building in thread safety is pointless, because it can easily end up being redundant.

There is a counter-argument: wrapping a custom lock around an object works only if all concurrent threads know about the lock and use it, and this may not be the case if the lock object is visible over a wide scope. The worst case is static members in a public type. Imagine, for instance, that the static property DateTime.Now were not thread-safe, and that two concurrent calls could produce garbled output or an exception. The only way to remedy this from the outside would be to lock around every call, perhaps on the type itself, lock (typeof (DateTime)), around each use of DateTime.Now. That would work, but only if every programmer agreed to do it, which is unreliable, and locking a type is itself considered a very bad practice. For these reasons, the static members on DateTime are guaranteed to be thread-safe. This is a common pattern throughout the .NET Framework: static members are thread-safe, while instance members are not. Following this pattern when writing custom types also helps you avoid creating thread-safety problems that callers cannot solve for themselves!
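A minimal sketch of that Framework pattern (the Widget type and its members are hypothetical, not from the original): static members take an internal lock, while instance members leave safety to the caller:

    class Widget
    {
        // Static state is protected internally, so the static members are thread-safe.
        static readonly object staticLock = new object();
        static int instancesCreated;

        public static int InstancesCreated
        {
            get { lock (staticLock) return instancesCreated; }
        }

        public Widget()
        {
            lock (staticLock) instancesCreated++;
        }

        // Instance state is NOT protected: callers who share a Widget across
        // threads must add their own locking, as with most Framework types.
        int value;
        public void SetValue (int v) { value = v; }
        public int  GetValue()       { return value; }
    }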

When writing components for general consumption, a good habit is at least not to preclude thread safety. That means being particularly careful with static members, whether they are internal or public.
