Introduction
The previous chapter mainly covered the direct use of exclusive locks. In practice, though, a plain exclusive lock is often wasteful, or its granularity is too coarse. This time we will talk about upgradeable locks and atomic operations.
Directory
1: volatile
2: Interlocked
3: ReaderWriterLockSlim
4: Summary
One: Volatile
To put it simply: the volatile keyword tells the C# compiler and the JIT compiler not to cache any field marked volatile, so that reads and writes of the field are atomic and always see the latest value.
Isn't that just a lock, then? No, it is not a lock at all: its atomicity comes from the CPU itself and is non-blocking. Because a 32-bit CPU moves at most 4 bytes per assignment instruction, any read or write of 4 bytes or less is already atomic on a 32-bit CPU. volatile exploits exactly this feature.
The cruel truth behind it: to improve performance, the JIT is otherwise free to cache a field's value (in a CPU register, say), so under multiple threads another thread can keep reading a stale copy.
```csharp
public volatile int  score1 = 1;   // correct: int is 4 bytes
public volatile long score2 = 1;   // compile error: long is 8 bytes
```
Looking at the example above, score2 is 8 bytes long. Because of those 8 bytes, a 32-bit CPU splits the assignment into two instructions, so atomicity naturally cannot be guaranteed.
Rather than leave such a trap for the programmer to remember, Microsoft simply forbids it: volatile is restricted to field types of 4 bytes or less (plus reference types and a few others). For the exact list, see MSDN.
Well then, what if I compile the platform as 64-bit, so only 64-bit CPUs run my volatile Int64? Still no: the compiler reports an error regardless.
(^._.^)? Well, you can actually use IntPtr for that, since it is 8 bytes on a 64-bit platform and is an allowed volatile type.
volatile is useful in most cases; after all, the performance cost of a lock is still considerable. We can think of it as a lightweight lock and, used reasonably in the right scenario, it can improve program performance quite a bit.
Thread.VolatileRead and Thread.VolatileWrite on the Thread class are the method-call versions of the volatile keyword.
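As a small illustration of the stale-read problem described above, here is a minimal sketch (the class and field names are made up for this example): a worker thread spins until a flag set by the main thread becomes true. Without volatile, the JIT could cache `_stop` in a register and the worker might never observe the change.

```csharp
using System;
using System.Threading;

class StopFlagDemo
{
    // Without volatile, the JIT may cache this field and the worker could spin forever.
    private static volatile bool _stop = false;

    static void Main()
    {
        var worker = new Thread(() =>
        {
            while (!_stop) { /* spin until another thread sets the flag */ }
            Console.WriteLine("worker observed stop");
        });
        worker.Start();

        Thread.Sleep(100);   // let the worker enter its loop
        _stop = true;        // volatile write: visible to the worker
        worker.Join();
    }
}
```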
Two: Interlocked
The MSDN description: provides atomic operations for variables shared by multiple threads. The main methods are as follows:
Interlocked.Increment: atomically increments the value of the specified variable and stores the result.
Interlocked.Decrement: atomically decrements the value of the specified variable and stores the result.
Interlocked.Add: atomically adds two integers and replaces the first integer with their sum.
Interlocked.CompareExchange(ref a, b, c): atomically compares a with c; if they are equal, b replaces a; if not, a is left unchanged.
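A minimal sketch of the methods listed above (the counter and variable names are just for this example): ten threads each increment a shared counter 100,000 times. With Interlocked.Increment the final count is exact, where a plain `counter++` would lose updates.

```csharp
using System;
using System.Threading;

class InterlockedDemo
{
    static int _counter = 0;

    static void Main()
    {
        var threads = new Thread[10];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 100_000; j++)
                    Interlocked.Increment(ref _counter);   // atomic ++
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();

        Console.WriteLine(_counter);   // always 1000000

        // CompareExchange: replace only if the current value matches the expected one.
        int a = 5;
        Interlocked.CompareExchange(ref a, 10, 5);   // a == 5, so a becomes 10
        Console.WriteLine(a);
    }
}
```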
The basic usage needs little explanation. Here is the Maximum example, taken straight from CLR via C#:
```csharp
public static int Maximum(ref int target, int value)
{
    int currentVal = target, startVal, desiredVal;   // values before and after

    do
    {
        startVal = currentVal;                       // record the value at the start of this iteration
        desiredVal = Math.Max(startVal, value);      // compute the desired value from startVal and value

        // Under high concurrency, target may be changed while this thread is preempted.
        // If target still equals startVal, nothing changed and desiredVal replaces it directly.
        currentVal = Interlocked.CompareExchange(ref target, desiredVal, startVal);
    } while (startVal != currentVal);                // not equal: another thread altered target, so spin and retry

    return desiredVal;
}
```
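To see the spin loop above in action, a quick usage sketch (the candidate values and thread count are made up for this example): several threads race to push their own value into a shared maximum, and whatever the interleaving, the final value is the largest one offered.

```csharp
using System;
using System.Threading;

class MaximumDemo
{
    static int _max = 0;

    // Same lock-free CompareExchange pattern as the Maximum method above.
    public static int Maximum(ref int target, int value)
    {
        int currentVal = target, startVal, desiredVal;
        do
        {
            startVal = currentVal;
            desiredVal = Math.Max(startVal, value);
            currentVal = Interlocked.CompareExchange(ref target, desiredVal, startVal);
        } while (startVal != currentVal);
        return desiredVal;
    }

    static void Main()
    {
        var threads = new Thread[8];
        for (int i = 0; i < threads.Length; i++)
        {
            int candidate = (i + 1) * 7;   // 7, 14, ..., 56
            threads[i] = new Thread(() => Maximum(ref _max, candidate));
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();

        Console.WriteLine(_max);   // always 56, regardless of interleaving
    }
}
```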
Three: ReaderWriterLockSlim
Suppose we have a cached piece of data A. If every operation takes the lock regardless, then cache A can only ever be read and written single-threaded, which is intolerable under high web concurrency.
Is there a way to take an exclusive lock only when writing, without limiting the number of reading threads? The answer is today's lead, ReaderWriterLockSlim, the reader-writer lock.
Among its lock methods, EnterUpgradeableReadLock, the upgradeable lock, is the most critical. It lets you enter a read lock first and, on finding that cache A is out of date, upgrade to the write lock, then drop back to read-lock mode after writing.
PS: note that before .NET 3.5 there was ReaderWriterLock, which performs poorly; the upgraded ReaderWriterLockSlim is recommended instead.
```csharp
// create a reader-writer lock
var cacheLock = new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);
```
The example above creates a reader-writer lock; note the enumeration passed to the constructor:
LockRecursionPolicy.NoRecursion: recursion is not supported; if recursive acquisition is detected, an exception is thrown.
LockRecursionPolicy.SupportsRecursion: recursive mode; a thread already holding the lock may continue to acquire it.
```csharp
cacheLock.EnterReadLock();
cacheLock.EnterReadLock();    // recursive read lock: allowed under SupportsRecursion
// do something
cacheLock.ExitReadLock();
cacheLock.ExitReadLock();
```
This mode makes deadlock extremely easy, for example taking a write lock while holding a read lock:
```csharp
cacheLock.EnterReadLock();
cacheLock.EnterWriteLock();   // deadlock: a read lock cannot be upgraded to a write lock this way
// do something
cacheLock.ExitWriteLock();
cacheLock.ExitReadLock();
```
Here is a cache example taken directly from MSDN, with simple comments added.
```csharp
public class SynchronizedCache
{
    private ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();
    private Dictionary<int, string> innerCache = new Dictionary<int, string>();

    public string Read(int key)
    {
        // Enter the read lock: other read threads are allowed; write threads are blocked.
        cacheLock.EnterReadLock();
        try
        {
            return innerCache[key];
        }
        finally
        {
            cacheLock.ExitReadLock();
        }
    }

    public void Add(int key, string value)
    {
        // Enter the write lock: all other threads are blocked, i.e. writing is exclusive.
        cacheLock.EnterWriteLock();
        try
        {
            innerCache.Add(key, value);
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }

    public bool AddWithTimeout(int key, string value, int timeout)
    {
        // Timeout setting: if the write lock is not acquired within the timeout, give up the operation.
        if (cacheLock.TryEnterWriteLock(timeout))
        {
            try
            {
                innerCache.Add(key, value);
            }
            finally
            {
                cacheLock.ExitWriteLock();
            }
            return true;
        }
        else
        {
            return false;
        }
    }

    public AddOrUpdateStatus AddOrUpdate(int key, string value)
    {
        // Enter the upgradeable read lock. Only one thread may hold it at a time;
        // write locks and other upgradeable locks are blocked, but readers are still allowed.
        cacheLock.EnterUpgradeableReadLock();
        try
        {
            string result = null;
            if (innerCache.TryGetValue(key, out result))
            {
                if (result == value)
                {
                    return AddOrUpdateStatus.Unchanged;
                }
                else
                {
                    // Upgrade to the write lock; all other threads are blocked.
                    cacheLock.EnterWriteLock();
                    try
                    {
                        innerCache[key] = value;
                    }
                    finally
                    {
                        // Exit the write lock, allowing other read threads again.
                        cacheLock.ExitWriteLock();
                    }
                    return AddOrUpdateStatus.Updated;
                }
            }
            else
            {
                cacheLock.EnterWriteLock();
                try
                {
                    innerCache.Add(key, value);
                }
                finally
                {
                    cacheLock.ExitWriteLock();
                }
                return AddOrUpdateStatus.Added;
            }
        }
        finally
        {
            // Exit the upgradeable read lock.
            cacheLock.ExitUpgradeableReadLock();
        }
    }

    public enum AddOrUpdateStatus { Added, Updated, Unchanged };
}
```
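To exercise the same lock choreography as the MSDN class above, here is a compact, self-contained usage sketch (not the full class, just the check-then-write pattern inlined; the cache contents are made up for this example): write under an upgraded lock only if the key is missing, then read under a shared read lock.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class CacheUsageDemo
{
    static ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    static Dictionary<int, string> _cache = new Dictionary<int, string>();

    static void Main()
    {
        // Check-then-write under an upgradeable read lock.
        _lock.EnterUpgradeableReadLock();
        try
        {
            if (!_cache.ContainsKey(1))
            {
                _lock.EnterWriteLock();        // upgrade: now exclusive
                try { _cache[1] = "hello"; }
                finally { _lock.ExitWriteLock(); }
            }
        }
        finally { _lock.ExitUpgradeableReadLock(); }

        // Plain read under a shared read lock.
        _lock.EnterReadLock();
        try { Console.WriteLine(_cache[1]); }
        finally { _lock.ExitReadLock(); }
    }
}
```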
Four: Summary
In actual development, multithreaded code often tests fine, but once it reaches a production environment, problems surface easily under high concurrency. Be sure to pay attention.
Five: Reference Resources
1: CLR via C#
2: MSDN
Locking system in multi-threading (II): volatile, Interlocked, ReaderWriterLockSlim