Multi-thread lock system (II): volatile, Interlocked, ReaderWriterLockSlim



 

Introduction

The previous chapter mainly covered the direct use of exclusive locks. In practice, however, taking a full exclusive lock for every shared access is wasteful: the lock granularity is often far coarser than needed. This time we will talk about upgradeable locks and atomic operations.

 

Directory

1: volatile

2: Interlocked

3: ReaderWriterLockSlim

4: Conclusion

 

I. volatile

To put it simply, the volatile keyword tells the C# compiler and the JIT compiler not to cache a field marked volatile in a register: every access goes straight to memory, so each read observes the latest written value.

Isn't that just a lock? No, it is not a lock: it is non-blocking, and the atomicity comes from the CPU itself. A 32-bit CPU transfers at most 4 bytes in a single store instruction.

Therefore, on a 32-bit CPU, any read or write of 4 bytes or less is already atomic. volatile relies on this hardware property.

Why is the keyword needed at all? Because otherwise the compiler and the JIT are free to cache field values in registers (even when multiple threads are involved) to improve performance.
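To see why this caching matters, here is a minimal sketch (the class, field, and timing values are illustrative, not from the original article): a volatile bool is a common way to signal a spinning worker to stop, because without volatile the JIT could hoist the flag read out of the loop and the worker might never observe the change.

```csharp
using System;
using System.Threading;

public class Worker
{
    // volatile: every read of _stop goes to memory, so the loop sees the latest value
    private volatile bool _stop = false;
    public int Iterations;

    public void Run()
    {
        while (!_stop)          // without volatile, this read could be cached in a register
        {
            Iterations++;
        }
    }

    public void Stop() => _stop = true;
}

public class Program
{
    public static void Main()
    {
        var w = new Worker();
        var t = new Thread(w.Run);
        t.Start();
        Thread.Sleep(100);      // let the worker spin for a while
        w.Stop();               // the volatile write is observed by the worker thread
        t.Join();               // returns because the loop saw _stop == true
        Console.WriteLine("stopped after {0} iterations", w.Iterations);
    }
}
```

Note that the loop terminates promptly after Stop() precisely because the flag is volatile; the program would still be correct with a lock, but the volatile read is far cheaper.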

 

    public volatile Int32 score1 = 1;
    // public volatile Int64 score2 = 1;   // does not compile

Looking at the example above: score2 cannot be declared volatile. An 8-byte value on a 32-bit CPU is written in two separate instructions, so atomicity naturally cannot be guaranteed.

Rather than track which platform can handle which width, Microsoft forbids it across the board: volatile may only be applied to fields of 4 bytes or less. See MSDN for the exact list of permitted types.

 

Knowing the reason, one might think: if I change the compilation platform to 64-bit and only run on 64-bit CPUs, can I use volatile Int64? No, the compiler still reports an error; as said, the rule is enforced across the board.

(^._.^) Okay, but you can actually use volatile IntPtr, which is 8 bytes wide on a 64-bit platform.

 

volatile is very useful in many scenarios. Since the performance overhead of a full lock is substantial, we can regard volatile as a lightweight lock; applied appropriately to the right scenarios, it can noticeably improve a program's performance.

Thread.VolatileRead and Thread.VolatileWrite are the method-based versions of the volatile keyword, applying the same semantics to individual reads and writes.
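A minimal sketch of the method form (the field name and values here are illustrative): the field itself is not declared volatile, but individual accesses get volatile semantics.

```csharp
using System;
using System.Threading;

public class VolatileMethods
{
    public static int flag = 0;   // note: the field itself is NOT declared volatile

    public static void Main()
    {
        // VolatileWrite makes the store immediately visible to all threads
        Thread.VolatileWrite(ref flag, 1);

        // VolatileRead always reads the latest published value
        int v = Thread.VolatileRead(ref flag);
        Console.WriteLine(v);  // 1
    }
}
```

This form is handy when only a few accesses need volatile semantics and you do not want to pay the cost on every read and write of the field.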

 

II. Interlocked

MSDN description: provides atomic operations for variables shared by multiple threads. The main methods are:

Interlocked.Increment: atomically increments the value of a variable and stores the result.
Interlocked.Decrement: atomically decrements the value of a variable and stores the result.
Interlocked.Add: atomically adds two integers and replaces the first integer with their sum.

Interlocked.CompareExchange(ref a, b, c): atomically compares a with c; if they are equal, b replaces a, otherwise a is left unchanged. In both cases the original value of a is returned.
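A quick sketch of these calls in action (the variable names and values are illustrative):

```csharp
using System;
using System.Threading;

public class InterlockedDemo
{
    public static void Main()
    {
        int counter = 0;

        Interlocked.Increment(ref counter);   // counter: 0 -> 1, atomically
        Interlocked.Add(ref counter, 5);      // counter: 1 -> 6
        Interlocked.Decrement(ref counter);   // counter: 6 -> 5

        // counter == 5, so it is replaced by 10; the old value (5) is returned
        int old = Interlocked.CompareExchange(ref counter, 10, 5);

        // counter != 99, so nothing is replaced
        Interlocked.CompareExchange(ref counter, 0, 99);

        Console.WriteLine("old={0}, counter={1}", old, counter);  // old=5, counter=10
    }
}
```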

 

Basic usage needs no elaboration, so here is the "Interlocked Anything" pattern from CLR via C#:

    public static int Maximum(ref int target, int value)
    {
        int currentVal = target, startVal, desiredVal;
        do
        {
            startVal = currentVal;                  // record the value at the start of this iteration
            desiredVal = Math.Max(startVal, value); // compute the desired value from startVal and value

            // If another thread preempted us and changed target, target no longer equals startVal.
            // If target still equals startVal, it is replaced by desiredVal atomically.
            currentVal = Interlocked.CompareExchange(ref target, desiredVal, startVal);
        } while (startVal != currentVal);           // not equal: target was changed by another thread; spin and retry
        return desiredVal;
    }
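A usage sketch of this pattern (the task count and value ranges are arbitrary): many threads race to publish values, and the shared maximum comes out correct without any lock.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class MaximumDemo
{
    static int sharedMax = int.MinValue;

    // the same lock-free "Interlocked Anything" pattern described in the article
    public static int Maximum(ref int target, int value)
    {
        int currentVal = target, startVal, desiredVal;
        do
        {
            startVal = currentVal;
            desiredVal = Math.Max(startVal, value);
            currentVal = Interlocked.CompareExchange(ref target, desiredVal, startVal);
        } while (startVal != currentVal);
        return desiredVal;
    }

    public static void Main()
    {
        // 8 tasks each publish 1000 values; the largest is 7*1000 + 999 = 7999
        Task[] tasks = new Task[8];
        for (int t = 0; t < 8; t++)
        {
            int id = t;
            tasks[t] = Task.Run(() =>
            {
                for (int i = 0; i < 1000; i++)
                    Maximum(ref sharedMax, id * 1000 + i);
            });
        }
        Task.WaitAll(tasks);
        Console.WriteLine(sharedMax);  // 7999
    }
}
```

The spin only retries when a competing thread actually changed target between the read and the CompareExchange, so under low contention it usually completes in a single pass.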

 

 

III. ReaderWriterLockSlim

Suppose we have cached data A, and we take a full lock for every operation regardless of its kind: then all reads and writes of A are serialized onto a single thread at a time, which is intolerable under high web concurrency.

Is there a way to enter the exclusive lock only when writing, with no limit on the number of threads during read operations? The answer is ReaderWriterLockSlim.

The key to lock upgrading is one of ReaderWriterLockSlim's modes, EnterUpgradeableReadLock.

It lets a thread first enter in read mode, and on finding that cache A needs to change, upgrade to write mode, then drop back to read mode.

PS: note that ReaderWriterLock, the class available before .NET 3.5, has poor performance; the upgraded ReaderWriterLockSlim is recommended instead.

    // instantiate a read/write lock
    ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);

The example above creates a read/write lock, passing a recursion-policy enumeration to the constructor:

LockRecursionPolicy.NoRecursion: recursion is not supported; an exception is thrown when recursion is detected.
LockRecursionPolicy.SupportsRecursion: recursive mode; a thread that already holds the lock may enter it again.

    cacheLock.EnterReadLock();
    // do ...
    cacheLock.EnterReadLock();     // recursive entry: allowed under SupportsRecursion
    // do ...
    cacheLock.ExitReadLock();
    cacheLock.ExitReadLock();

This mode makes deadlock easy to cause, for example when the write lock is requested inside a read lock:

    cacheLock.EnterReadLock();
    // do ...
    cacheLock.EnterWriteLock();    // write lock requested while holding a read lock
    // do ...
    cacheLock.ExitWriteLock();
    cacheLock.ExitReadLock();

 

The following example is the cache from MSDN, with simple comments added.

    public class SynchronizedCache
    {
        private ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();
        private Dictionary<int, string> innerCache = new Dictionary<int, string>();

        public string Read(int key)
        {
            // Enter the read lock: all other readers are allowed, writers are blocked.
            cacheLock.EnterReadLock();
            try
            {
                return innerCache[key];
            }
            finally
            {
                cacheLock.ExitReadLock();
            }
        }

        public void Add(int key, string value)
        {
            // Enter the write lock: all other access is blocked, i.e. an exclusive write lock.
            cacheLock.EnterWriteLock();
            try
            {
                innerCache.Add(key, value);
            }
            finally
            {
                cacheLock.ExitWriteLock();
            }
        }

        public bool AddWithTimeout(int key, string value, int timeout)
        {
            // Timeout version: if the write lock is not acquired within the timeout
            // (because another writer has not released it), give up the operation.
            if (cacheLock.TryEnterWriteLock(timeout))
            {
                try
                {
                    innerCache.Add(key, value);
                }
                finally
                {
                    cacheLock.ExitWriteLock();
                }
                return true;
            }
            else
            {
                return false;
            }
        }

        public AddOrUpdateStatus AddOrUpdate(int key, string value)
        {
            // Enter the upgradeable read lock. Only one thread may hold it at a time;
            // writers and other upgradeable threads are blocked, but plain readers are allowed.
            cacheLock.EnterUpgradeableReadLock();
            try
            {
                string result = null;
                if (innerCache.TryGetValue(key, out result))
                {
                    if (result == value)
                    {
                        return AddOrUpdateStatus.Unchanged;
                    }
                    else
                    {
                        // Upgrade to the write lock: all other threads are blocked.
                        cacheLock.EnterWriteLock();
                        try
                        {
                            innerCache[key] = value;
                        }
                        finally
                        {
                            // Exit the write lock, allowing readers again.
                            cacheLock.ExitWriteLock();
                        }
                        return AddOrUpdateStatus.Updated;
                    }
                }
                else
                {
                    cacheLock.EnterWriteLock();
                    try
                    {
                        innerCache.Add(key, value);
                    }
                    finally
                    {
                        cacheLock.ExitWriteLock();
                    }
                    return AddOrUpdateStatus.Added;
                }
            }
            finally
            {
                // Exit the upgradeable read lock.
                cacheLock.ExitUpgradeableReadLock();
            }
        }

        public enum AddOrUpdateStatus { Added, Updated, Unchanged };
    }

 

IV. Conclusion

In real multithreaded development, everything usually looks fine under testing; once the code reaches production, high concurrency exposes problems easily. Be sure to pay attention.

 

Reference resources

1: CLR via C #

2: MSDN

 

Author: Mr. mushroom

Source: http://www.cnblogs.com/mushroom/p/4197409.html
