Analysis of Thread Synchronization Mechanisms in .NET


There are two well-known parallel programming models: message passing and shared memory. Some time ago, a project of mine ran into the second kind of problem: multiple threads in a parallel program sharing resources. Using real cases, this article summarizes the thread synchronization mechanisms that .NET provides for the shared-memory model. Since the class libraries involved are application fundamentals, no introductory explanation is given here; for basic usage, MSDN is the best tutorial.

I. The volatile keyword

Basic introduction: The volatile keyword encapsulates what Thread.VolatileWrite() and Thread.VolatileRead() implement. Its main function is to force reads and writes of a field to go to memory rather than a cached copy, so every thread observes the latest value.

Use cases: Suitable for keeping a variable synchronized between main memory and the per-core caches on multi-core or multi-CPU machines.

Case: Refer to the two atomic-operation examples below, or look at how ConcurrentQueue and ConcurrentDictionary are implemented in the System.Collections.Concurrent namespace.
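As a minimal sketch of the cache-visibility problem volatile solves, the hypothetical class below (names are mine, not from the original) has a worker thread that spins on a flag written by another thread. Declaring the flag volatile guarantees the worker sees the write:

```csharp
using System;
using System.Threading;

class VolatileFlagDemo
{
    // 'volatile' forces every read/write of this field to hit memory, so the
    // worker's loop condition cannot be optimized into a stale cached read.
    private static volatile bool stopRequested;

    // Starts a spinning worker, signals it to stop, and reports whether it
    // actually stopped within the timeout.
    public static bool RunOnce()
    {
        stopRequested = false;
        var worker = new Thread(() =>
        {
            // Each iteration performs a fresh volatile read of stopRequested.
            while (!stopRequested) { }
        });
        worker.Start();

        Thread.Sleep(50);       // let the worker spin for a moment
        stopRequested = true;   // volatile write: visible to the worker
        return worker.Join(5000);
    }

    static void Main()
    {
        Console.WriteLine(VolatileFlagDemo.RunOnce()); // True
    }
}
```

Without volatile, the JIT is allowed to hoist the flag read out of the loop, and the worker could spin forever.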

II. Atomic operations (Interlocked)

Basic introduction: Atomic operations are the foundation on which the SpinLock, Monitor, and ReaderWriterLock locks are built. The implementation principle is that the CPU asserts a signal on the bus to mark the resource as occupied; any other instruction that wants to modify it must wait until the operation completes. Because atomic operations are implemented in hardware, they are very fast, on the order of 50 clock cycles. In fact, an atomic operation can itself be seen as a lock.

Use cases: Scenarios with high performance requirements where you need to quickly synchronize a field or perform an atomic read-modify-write on a variable. For example, int b = 0; b = b + 1 actually compiles into multiple machine instructions, and interleaving those instructions across threads can produce incorrect results, so the instruction sequence generated for b = b + 1 must be made to execute atomically. Typical uses are implementing a parallel queue or an asynchronous queue.
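The lost-update problem described above can be sketched as follows (the class and method names are illustrative, not from the original). Interlocked.Increment turns the read-modify-write into a single atomic instruction, so the final count is exact no matter how the tasks interleave:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class InterlockedDemo
{
    // Increments a shared counter from many tasks concurrently using
    // Interlocked.Increment; no updates are lost.
    public static int CountWithInterlocked(int taskCount, int perTask)
    {
        int counter = 0;
        var tasks = new Task[taskCount];
        for (int i = 0; i < taskCount; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                for (int j = 0; j < perTask; j++)
                {
                    // A plain counter++ here would typically lose updates
                    // under contention; this is one atomic instruction.
                    Interlocked.Increment(ref counter);
                }
            });
        }
        Task.WaitAll(tasks);
        return counter;
    }

    static void Main()
    {
        Console.WriteLine(CountWithInterlocked(10, 100000)); // 1000000
    }
}
```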

Case: Implementation of a queue with an event-based trigger mechanism.

    /// <summary>
    /// Represents a real-time processing queue.
    /// </summary>
    public class ProcessQueue<T>
    {
        #region [Members]
        private ConcurrentQueue<IEnumerable<T>> queue;
        private Action<IEnumerable<T>> PublishHandler;
        // Number of processing threads allowed (CPU core count).
        private int core = Environment.ProcessorCount;
        // Number of currently running worker threads.
        private int runningCore = 0;
        public event Action<Exception> OnException;
        // Whether a worker is currently processing the queue (0 = idle, 1 = busy).
        private int isProcessing = 0;
        // Whether the queue is enabled.
        private bool enabled = true;
        #endregion

        #region [Constructor]
        public ProcessQueue(Action<IEnumerable<T>> handler)
        {
            queue = new ConcurrentQueue<IEnumerable<T>>();
            PublishHandler = handler;
            // ProcessException is an external helper from the original project.
            this.OnException += ProcessException.OnProcessException;
        }
        #endregion

        #region [Methods]
        /// <summary>
        /// Enqueue a batch of items.
        /// </summary>
        /// <param name="items">the data set</param>
        public void Enqueue(IEnumerable<T> items)
        {
            if (items != null)
            {
                queue.Enqueue(items);
            }
            // Check atomically whether a worker is already processing the queue;
            // if not, claim the processing flag and start one.
            if (enabled && Interlocked.CompareExchange(ref isProcessing, 1, 0) == 0)
            {
                if (!queue.IsEmpty)
                {
                    ThreadPool.QueueUserWorkItem(ProcessItemLoop);
                }
                else
                {
                    Interlocked.Exchange(ref isProcessing, 0);
                }
            }
        }

        /// <summary>
        /// Start the queue's monitoring thread.
        /// </summary>
        public void Start()
        {
            Thread process_Thread = new Thread(ProcessItem);
            process_Thread.IsBackground = true;
            process_Thread.Start();
        }

        /// <summary>
        /// Process data items recursively: when the current batch is done,
        /// schedule the next one on the thread pool. The recursion terminates
        /// when the queue is empty. Data may still arrive while no worker is
        /// active; the monitoring thread started by Start() covers that gap.
        /// </summary>
        private void ProcessItemLoop(object state)
        {
            if (!enabled && queue.IsEmpty)
            {
                Interlocked.Exchange(ref isProcessing, 0);
                return;
            }
            // Only run up to twice as many workers as there are CPU cores.
            if (Thread.VolatileRead(ref runningCore) <= core * 2)
            {
                IEnumerable<T> publishFrame;
                if (queue.TryDequeue(out publishFrame))
                {
                    Interlocked.Increment(ref runningCore);
                    try
                    {
                        PublishHandler(publishFrame);
                        if (enabled && !queue.IsEmpty)
                        {
                            ThreadPool.QueueUserWorkItem(ProcessItemLoop);
                        }
                        else
                        {
                            Interlocked.Exchange(ref isProcessing, 0);
                        }
                    }
                    catch (Exception ex)
                    {
                        OnProcessException(ex);
                    }
                    finally
                    {
                        Interlocked.Decrement(ref runningCore);
                    }
                }
            }
        }

        /// <summary>
        /// Monitoring loop run by the dedicated thread: wakes up periodically
        /// and restarts processing if items arrived while no worker was active.
        /// Sleeps longer the more often it finds the queue empty.
        /// </summary>
        private void ProcessItem(object state)
        {
            int sleepCount = 0;
            int sleepTime = 1000;
            while (enabled)
            {
                if (queue.IsEmpty)
                {
                    // Back off based on how many times the queue was found empty.
                    if (sleepCount == 0) { sleepTime = 1000; }
                    else if (sleepCount == 3) { sleepTime = 1000 * 3; }
                    else if (sleepCount == 5) { sleepTime = 1000 * 5; }
                    else if (sleepCount == 8) { sleepTime = 1000 * 8; }
                    else if (sleepCount == 10) { sleepTime = 1000 * 10; }
                    else { sleepTime = 1000 * 50; }
                    sleepCount++;
                    Thread.Sleep(sleepTime);
                }
                else
                {
                    // Claim the processing flag if no worker holds it.
                    if (enabled && Interlocked.CompareExchange(ref isProcessing, 1, 0) == 0)
                    {
                        if (!queue.IsEmpty)
                        {
                            ThreadPool.QueueUserWorkItem(ProcessItemLoop);
                        }
                        else
                        {
                            Interlocked.Exchange(ref isProcessing, 0);
                        }
                        sleepCount = 0;
                        sleepTime = 1000;
                    }
                }
            }
        }

        /// <summary>
        /// Stop the queue.
        /// </summary>
        public void Stop()
        {
            this.enabled = false;
        }

        /// <summary>
        /// Raise the exception handling event.
        /// </summary>
        /// <param name="ex">the exception</param>
        private void OnProcessException(Exception ex)
        {
            // Take a local snapshot of the delegate so a concurrent unsubscribe
            // cannot null it between the check and the invocation.
            var tempException = OnException;
            Interlocked.CompareExchange(ref tempException, null, null);
            if (tempException != null)
            {
                tempException(ex);
            }
        }
        #endregion
    }

 

III. Spin lock (SpinLock)

Basic introduction: A user-mode lock implemented on top of atomic operations. Its drawback is that a waiting thread never releases its CPU time slice. For comparison, switching a thread from user mode to kernel mode costs the operating system about 500 clock cycles; that cost is the trade-off to weigh when deciding whether a thread should spin in user mode or block and switch into the kernel.

Use cases: When threads wait only a very short time for a resource.

Case: SpinLock is used the same way as the far more common Monitor. In real scenarios, prefer Monitor unless threads wait for the resource only very briefly.
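A minimal sketch of that usage pattern (names are illustrative): SpinLock's Enter/Exit brackets a very short critical section, with the lockTaken flag checked in finally as the SpinLock API requires.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class SpinLockDemo
{
    // Increments a shared counter under a SpinLock; the critical section is
    // tiny, which is exactly the case where spinning beats blocking.
    public static int Count(int taskCount, int perTask)
    {
        var spinLock = new SpinLock();
        int total = 0;
        var tasks = new Task[taskCount];
        for (int i = 0; i < taskCount; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                for (int j = 0; j < perTask; j++)
                {
                    bool lockTaken = false;
                    try
                    {
                        // Busy-waits in user mode; no kernel transition.
                        spinLock.Enter(ref lockTaken);
                        total++;
                    }
                    finally
                    {
                        if (lockTaken) spinLock.Exit();
                    }
                }
            });
        }
        Task.WaitAll(tasks);
        return total;
    }

    static void Main()
    {
        Console.WriteLine(Count(4, 250000)); // 1000000
    }
}
```

Note that SpinLock is a struct: never copy it into another variable, or the two copies will lock independently.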

 

IV. Monitor (lock)

Basic introduction: A lock implemented on top of atomic operations. It starts out in user mode, spinning for a while, and then transitions into the kernel, releasing its CPU time slice while it waits. Its drawback is that improper use can cause deadlocks. C#'s lock keyword is implemented with Monitor.

Use cases: Any scenario that requires locking.

Case: There are too many cases, which will not be listed here.
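For completeness, here is the canonical shape most of those cases take (a sketch with hypothetical names): a private lock object guarding mutable state, with lock compiling down to Monitor.Enter/Monitor.Exit in a try/finally.

```csharp
using System;
using System.Threading.Tasks;

class Account
{
    // A dedicated private lock object; never lock on 'this' or a public object.
    private readonly object sync = new object();
    private decimal balance;

    public void Deposit(decimal amount)
    {
        // Equivalent to Monitor.Enter(sync); try { ... } finally { Monitor.Exit(sync); }
        lock (sync)
        {
            balance += amount;
        }
    }

    public decimal Balance
    {
        get { lock (sync) { return balance; } }
    }
}

class MonitorDemo
{
    // Runs concurrent deposits of 1 each; the lock keeps the total exact.
    public static decimal RunDeposits(int taskCount, int perTask)
    {
        var account = new Account();
        var tasks = new Task[taskCount];
        for (int i = 0; i < taskCount; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                for (int j = 0; j < perTask; j++) account.Deposit(1m);
            });
        }
        Task.WaitAll(tasks);
        return account.Balance;
    }

    static void Main()
    {
        Console.WriteLine(RunDeposits(8, 10000)); // 80000
    }
}
```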

V. Reader-writer lock (ReaderWriterLockSlim)

Principle analysis: A lock implemented on top of atomic operations that lets multiple readers enter concurrently while giving writers exclusive access.

Use cases: When writes are infrequent and reads are frequent.

Case: A thread-safe cache implementation (on .NET 4.0 you can use ConcurrentDictionary<K, V> from the base class library instead). Note: the old ReaderWriterLock has been superseded; use the newer ReaderWriterLockSlim.

    class CacheManager<K, V>
    {
        #region [Members]
        private ReaderWriterLockSlim readerWriterLockSlim;
        private Dictionary<K, V> container;
        #endregion

        #region [Constructor]
        public CacheManager()
        {
            this.readerWriterLockSlim = new ReaderWriterLockSlim();
            this.container = new Dictionary<K, V>();
        }
        #endregion

        #region [Methods]
        public void Add(K key, V value)
        {
            readerWriterLockSlim.EnterWriteLock(); // writers get exclusive access
            try
            {
                container.Add(key, value);
            }
            finally
            {
                readerWriterLockSlim.ExitWriteLock();
            }
        }

        public V Get(K key)
        {
            V value;
            readerWriterLockSlim.EnterReadLock(); // multiple readers may enter concurrently
            try
            {
                container.TryGetValue(key, out value);
            }
            finally
            {
                // Bug fix: the original called ExitWriteLock here, which throws
                // because only a read lock is held; it also retried in a
                // do/while loop that spins forever when the key is absent.
                readerWriterLockSlim.ExitReadLock();
            }
            return value; // default(V) if the key is not present
        }
        #endregion
    }

.NET also has other thread synchronization mechanisms: ManualResetEventSlim, AutoResetEvent, and SemaphoreSlim. They are explained in detail in CLR via C#, but I have not needed them in actual development.
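As a brief taste of the last of those, here is a sketch (hypothetical names) of SemaphoreSlim used as a concurrency throttle: it admits at most a fixed number of workers into a section at once, which plain locks cannot express.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class SemaphoreDemo
{
    // Runs 'workers' tasks through a SemaphoreSlim that admits at most
    // 'limit' of them concurrently, and returns the peak concurrency observed.
    public static int MaxConcurrency(int workers, int limit)
    {
        var gate = new SemaphoreSlim(limit, limit);
        int inside = 0, maxInside = 0;
        object sync = new object();

        var tasks = new Task[workers];
        for (int i = 0; i < workers; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                gate.Wait(); // blocks once 'limit' workers are inside
                try
                {
                    lock (sync)
                    {
                        inside++;
                        if (inside > maxInside) maxInside = inside;
                    }
                    Thread.Sleep(50); // simulate work
                    lock (sync) { inside--; }
                }
                finally
                {
                    gate.Release();
                }
            });
        }
        Task.WaitAll(tasks);
        return maxInside;
    }

    static void Main()
    {
        // Never exceeds the limit of 2, however many workers are started.
        Console.WriteLine(MaxConcurrency(6, 2));
    }
}
```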

The best thread synchronization mechanism is no synchronization at all, which comes down to good design. Of course, in some cases locks cannot be avoided. When performance requirements are modest, the basic locks are sufficient; when performance requirements are harsh, measure and choose the mechanism that best fits the workload.
