Introduction to Multithreading and Parallel Computing in .NET (4): Thread Synchronization

Source: Internet
Author: User

Let's review what we discussed last time: lock, AutoResetEvent, ManualResetEvent, and Semaphore. These constructs used for thread synchronization are called synchronization primitives. Synchronization primitives can be divided into three types: locking, notification, and interlocked operations. lock is obviously a lock, and an exclusive one: it cannot be acquired by other threads until it is released. Semaphore is also a lock, but not an exclusive one; you can specify how many threads may enter the code block at once. AutoResetEvent and ManualResetEvent are of course notification mechanisms; the former resets automatically after letting one waiter through, while the latter must be reset manually. We also saw that even with a synchronization mechanism, we may not be able to guarantee that threads execute as planned, because we basically cannot predict the operating system's thread scheduling: unless we use blocking or lock waits, it is hard to predict which of two unrelated threads will execute first (even when priorities are set). We also need to consider performance when using these mechanisms, otherwise the program may run slower than a single-threaded one: for example, if we start multiple threads that just block waiting on each other with no parallel work at all, or if the scope of a lock is so large that it effectively turns a multi-threaded environment into a single-threaded one. We will discuss performance issues later; this time we look at some other synchronization primitives.
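To make the "non-exclusive lock" category concrete, here is a minimal sketch of the Semaphore mentioned above, configured to admit at most 3 of 10 threads into a code block at once. The class and member names here are illustrative, not from the original series; the sketch also tracks the highest concurrency it observes so the limit is visible:

```csharp
using System;
using System.Threading;

class SemaphoreDemo
{
    // Non-exclusive lock: at most 3 threads may run the protected block at once.
    static Semaphore pool = new Semaphore(3, 3);
    static object sync = new object();
    static int current = 0;      // threads currently inside the protected block
    static int observedMax = 0;  // highest concurrency we ever saw

    public static int Run()
    {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < 10; i++)
        {
            threads[i] = new Thread(() =>
            {
                pool.WaitOne();          // block until one of the 3 slots is free
                try
                {
                    lock (sync)
                    {
                        current++;
                        if (current > observedMax) observedMax = current;
                    }
                    Thread.Sleep(50);    // simulate work inside the code block
                    lock (sync) current--;
                }
                finally
                {
                    pool.Release();      // free the slot for the next waiter
                }
            });
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();
        return observedMax;              // never exceeds 3
    }

    static void Main()
    {
        Console.WriteLine(SemaphoreDemo.Run());
    }
}
```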

The examples in this article build on a few basic static objects, defined as follows:

static int result = 0;
static object locker = new object();
static EventWaitHandle are = new AutoResetEvent(false);
static EventWaitHandle mre = new ManualResetEvent(false);

Using lock to protect a shared resource from being modified by multiple threads at once is common practice. In fact, lock is implemented on top of Monitor, and using Monitor directly gives us more options; for example, we can give up waiting once a timeout has elapsed:

for (int i = 0; i < 10; i++)
{
    new Thread(() =>
    {
        if (Monitor.TryEnter(locker, 2000))
        {
            Thread.Sleep(1000);
            Console.WriteLine(DateTime.Now.ToString("mm:ss"));
            Monitor.Exit(locker);
        }
    }).Start();
}

In this code we start 10 threads, each trying to acquire the exclusive lock on locker. The output shows that the program prints only three times, because we set a 2-second timeout:

 

The first thread takes the lock and releases it after one second; the second thread then takes it and releases it after another second; the third thread just manages to take it as well. The remaining threads have by then waited more than two seconds, so TryEnter returns false and they end without printing.

Besides TryEnter, Monitor has another set of useful methods: Wait and Pulse (and PulseAll). Normally our thread takes the exclusive lock, accesses some resource that must be kept thread-safe, and releases the lock. With Wait, we can block the current thread and temporarily give up the lock until another thread notifies the waiting thread (or threads, with PulseAll) that the state protected by the lock has changed:

for (int i = 0; i < 2; i++)
{
    Thread reader = new Thread(() =>
    {
        Console.WriteLine(string.Format("reader #{0} started", Thread.CurrentThread.ManagedThreadId));
        while (true)
        {
            lock (locker)
            {
                if (data.Count == 0)
                {
                    Console.WriteLine(string.Format("#{0} can not get result, wait", Thread.CurrentThread.ManagedThreadId));
                    Monitor.Wait(locker); // release the lock and block until pulsed
                }
                Console.WriteLine(string.Format("#{0} get result: {1}", Thread.CurrentThread.ManagedThreadId, data.Dequeue()));
            }
        }
    });
    reader.Start();
}
Thread writer = new Thread(() =>
{
    Console.WriteLine(string.Format("writer #{0} started", Thread.CurrentThread.ManagedThreadId));
    while (true)
    {
        lock (locker)
        {
            int s = DateTime.Now.Second;
            Console.WriteLine(string.Format("#{0} set result: {1}", Thread.CurrentThread.ManagedThreadId, s));
            data.Enqueue(s);
            Monitor.Pulse(locker); // wake one waiting reader
        }
        Thread.Sleep(1000);
    }
});
writer.Start();
 
Data is defined as follows:
 
static Queue<int> data = new Queue<int>();

The output result is as follows:

Here we simulate two reader threads and one writer thread. The writer writes the current second into the queue once per second; the readers keep taking values out of the queue. When a reader finds the queue empty, it gives up the exclusive lock and blocks; the writer then takes the lock, writes a value, and sends a notification, which wakes the first reader in the waiting queue. Because Pulse notifies only one thread, the two readers take turns reading values from the queue.
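One caveat worth adding to the pattern above: after Monitor.Wait returns, another reader may already have emptied the queue again, so robust code re-checks the condition in a while loop rather than an if. A minimal sketch of that guarded pattern (a simplified producer/consumer with illustrative names, not the article's exact code):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class GuardedQueue
{
    static readonly object locker = new object();
    static readonly Queue<int> data = new Queue<int>();

    // Consumer: re-check the condition in a loop after every wake-up.
    public static int Take()
    {
        lock (locker)
        {
            while (data.Count == 0)     // 'while', not 'if': another consumer may win the race
                Monitor.Wait(locker);   // releases the lock while blocked, reacquires on wake
            return data.Dequeue();
        }
    }

    // Producer: enqueue under the lock, then wake one waiting consumer.
    public static void Put(int value)
    {
        lock (locker)
        {
            data.Enqueue(value);
            Monitor.Pulse(locker);
        }
    }

    static void Main()
    {
        new Thread(() => Put(42)).Start();
        Console.WriteLine(Take());      // prints 42 once the producer has run
    }
}
```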

As mentioned at the beginning of this article, the synchronization primitives also include interlocked operations (the Interlocked class), which perform atomic operations on variables shared between threads. They are both faster and more concise than lock:

Stopwatch sw = Stopwatch.StartNew();
Thread t1 = new Thread(() =>
{
    for (int j = 0; j < 500; j++)
    {
        Interlocked.Increment(ref result);
        Thread.Sleep(10);
    }
});
Thread t2 = new Thread(() =>
{
    for (int j = 0; j < 500; j++)
    {
        Interlocked.Add(ref result, 9);
        Thread.Sleep(10);
    }
});
t1.Start(); t2.Start();
t1.Join(); t2.Join();
Console.WriteLine(sw.ElapsedMilliseconds);
Console.WriteLine(result);

The running result is as follows:

The first thread performs 500 increments and the second thread performs 500 additions of 9, so the final value is (1 + 9) * 500 = 5000, and the total time is a little over 5 seconds (each thread sleeps 10 ms per iteration).
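The performance claim can be checked with a rough micro-benchmark along these lines (illustrative names; absolute timings vary by machine, but both variants must finish with the counter at exactly 2 * n):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class AtomicVsLock
{
    static int counter;
    static readonly object locker = new object();

    // Run 'work' on two threads, 'n' iterations each; returns the final counter value.
    static int Run(int n, Action work)
    {
        counter = 0;
        ThreadStart body = () => { for (int i = 0; i < n; i++) work(); };
        Thread t1 = new Thread(body), t2 = new Thread(body);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        return counter;
    }

    public static int RunInterlocked(int n)
    {
        return Run(n, () => Interlocked.Increment(ref counter));
    }

    public static int RunLocked(int n)
    {
        return Run(n, () => { lock (locker) counter++; });
    }

    static void Main()
    {
        const int n = 1000000;

        Stopwatch sw = Stopwatch.StartNew();
        int a = RunInterlocked(n);
        Console.WriteLine("Interlocked: {0} ms, counter = {1}", sw.ElapsedMilliseconds, a);

        sw = Stopwatch.StartNew();
        int b = RunLocked(n);
        Console.WriteLine("lock:        {0} ms, counter = {1}", sw.ElapsedMilliseconds, b);
        // both variants end at exactly 2 * n; Interlocked is usually noticeably faster
    }
}
```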

We introduced AutoResetEvent and ManualResetEvent last time. WaitHandle provides two static methods, WaitAll and WaitAny, which let us wait for several EventWaitHandles to all be signaled, or for any one of them:

Stopwatch sw = Stopwatch.StartNew();
ManualResetEvent[] wh = new ManualResetEvent[10];
for (int i = 0; i < 10; i++)
{
    wh[i] = new ManualResetEvent(false);
    new Thread(j =>
    {
        int d = ((int)j + 1) * 100;
        Thread.Sleep(d);
        Interlocked.Exchange(ref result, d);
        wh[(int)j].Set();
    }).Start(i);
}
WaitHandle.WaitAny(wh);
Console.WriteLine(sw.ElapsedMilliseconds);
Console.WriteLine(result);

The program output is as follows:

Here we associate 10 ManualResetEvents with 10 threads whose execution times range from 100 ms and 200 ms up to 1000 ms. Because the main thread waits for only one of the events to be signaled, the result is printed after about 100 milliseconds (note, though, that the program only finishes after 1 second, because these threads are foreground threads by default). If we change WaitAny to WaitAll, the result is as follows:

In the previous article we used a ManualResetEvent to implement one signal answered by multiple threads; this time we have implemented multiple signals awaited by a single thread.

In the multi-threaded environment of a real application, we usually have many threads reading a value from a cache, but only one in ten, or even one in ten thousand, modifying it. In that case, if we take a lock on the cache object for reads and writes alike, we effectively turn our multi-threaded environment back into a single-threaded one. Let's run an experiment.

(Suppose we have defined static List<int> list = new List<int>();):

Stopwatch sw = Stopwatch.StartNew();
ManualResetEvent[] wh = new ManualResetEvent[30];
for (int i = 1; i <= 20; i++)
{
    wh[i - 1] = new ManualResetEvent(false);
    new Thread(j =>
    {
        lock (locker)
        {
            var sum = list.Count;
            Thread.Sleep(100);
            wh[(int)j].Set();
        }
    }).Start(i - 1);
}
for (int i = 21; i <= 30; i++)
{
    wh[i - 1] = new ManualResetEvent(false);
    new Thread(j =>
    {
        lock (locker)
        {
            list.Add(1);
            Thread.Sleep(100);
            wh[(int)j].Set();
        }
    }).Start(i - 1);
}
WaitHandle.WaitAll(wh);
Console.WriteLine(sw.ElapsedMilliseconds);
Console.WriteLine(list.Count);

The output result is as follows:

We start 30 threads at once, 20 readers and 10 writers, and the main thread prints the elapsed time after all of them finish. We can see the 30 threads took about 3 seconds: the 10 write threads understandably need their combined 1 second of exclusive access, but the read threads get no concurrency at all. .NET provides a ready-made reader/writer lock, the ReaderWriterLockSlim type, to let read operations proceed concurrently.

(Suppose we have defined static ReaderWriterLockSlim rw = new ReaderWriterLockSlim();):

Stopwatch sw = Stopwatch.StartNew();
ManualResetEvent[] wh = new ManualResetEvent[30];
for (int i = 1; i <= 20; i++)
{
    wh[i - 1] = new ManualResetEvent(false);
    new Thread(j =>
    {
        rw.EnterReadLock();
        var sum = list.Count;
        Thread.Sleep(100);
        wh[(int)j].Set();
        rw.ExitReadLock();
    }).Start(i - 1);
}
for (int i = 21; i <= 30; i++)
{
    wh[i - 1] = new ManualResetEvent(false);
    new Thread(j =>
    {
        rw.EnterWriteLock();
        list.Add(1);
        Thread.Sleep(100);
        wh[(int)j].Set();
        rw.ExitWriteLock();
    }).Start(i - 1);
}
WaitHandle.WaitAll(wh);
Console.WriteLine(sw.ElapsedMilliseconds);
Console.WriteLine(list.Count);

The output result is just as good as we hoped:

The read threads run concurrently, with hardly any extra waiting.
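One caveat about the code above: if anything between EnterReadLock/EnterWriteLock and the matching exit throws, the lock is never released. Production code typically puts the exit in a finally block, for example (illustrative names, a sketch rather than the article's code):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class SafeCache
{
    static readonly ReaderWriterLockSlim rw = new ReaderWriterLockSlim();
    static readonly List<int> list = new List<int>();

    public static int Read()
    {
        rw.EnterReadLock();
        try
        {
            return list.Count;       // many readers may be here concurrently
        }
        finally
        {
            rw.ExitReadLock();       // released even if the body throws
        }
    }

    public static void Write(int value)
    {
        rw.EnterWriteLock();
        try
        {
            list.Add(value);         // writers hold the lock exclusively
        }
        finally
        {
            rw.ExitWriteLock();
        }
    }

    static void Main()
    {
        Write(1);
        Write(2);
        Console.WriteLine(Read());   // prints 2
    }
}
```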

Finally, let's introduce an even more convenient way to implement thread synchronization:

[MethodImpl(MethodImplOptions.Synchronized)]
static void M()
{
    Thread.Sleep(1000);
    Console.WriteLine(DateTime.Now.ToString("mm:ss"));
}

[MethodImpl(MethodImplOptions.Synchronized)]
void N()
{
    Thread.Sleep(1000);
    Console.WriteLine(DateTime.Now.ToString("mm:ss"));
}

The M method is static and the N method is an instance method. We have applied the MethodImpl attribute to both, marking them as synchronized (only one thread can execute them at a time). Then we write the following code to verify:

for (int i = 0; i < 10; i++)
{
    new Thread(() => { M(); }).Start();
}

The program output result is as follows:

We can see that although 10 threads call the M method at the same time, only one executes at a time.

Now let's test the N method:

 
Program p = new Program();
for (int i = 0; i < 10; i++)
{
    new Thread(() => { p.N(); }).Start();
}

The program output result is the same as the previous one:

Note, however, that marking the static method M as synchronized essentially locks on the class (as if on typeof(Program)), while marking the instance method N as synchronized locks on the class instance (as if on this). That means that if we call N on a new instance every time, we get no synchronization at all:

for (int i = 0; i < 10; i++)
{
    new Thread(() => { new Program().N(); }).Start();
}

The result is as follows:
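To our understanding, the behavior just described can be made explicit with plain lock statements: Synchronized on a static method behaves roughly like locking the type object, and on an instance method roughly like locking this. A sketch of that equivalence (illustrative class name; the timing comment assumes the 200 ms sleeps used here):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class SyncEquivalent   // illustrative stand-in for the article's Program class
{
    // roughly what [MethodImpl(MethodImplOptions.Synchronized)] means on a static method:
    // every caller contends for one lock on the type object
    static void M()
    {
        lock (typeof(SyncEquivalent))
        {
            Thread.Sleep(200);
        }
    }

    // roughly what it means on an instance method: a per-instance lock,
    // so calls on different instances do not contend at all
    void N()
    {
        lock (this)
        {
            Thread.Sleep(200);
        }
    }

    // start three threads calling M and return the total elapsed milliseconds
    public static long Run()
    {
        Stopwatch sw = Stopwatch.StartNew();
        Thread[] threads = new Thread[3];
        for (int i = 0; i < 3; i++)
        {
            threads[i] = new Thread(() => M());
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();
        return sw.ElapsedMilliseconds;   // about 600 ms: the three calls are serialized
    }

    static void Main()
    {
        Console.WriteLine(SyncEquivalent.Run());
    }
}
```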

Over these two articles we have introduced the basic methods and constructs for thread synchronization. As we have seen, there are many of them, yet it is still not easy to have threads compute in parallel, aggregate their results, notify other threads to continue computing, and then aggregate again: get it wrong in one direction and the multithreading buys us nothing, get it wrong in the other and the threads make the data dirty. In fact, some new features in .NET 4.0 simplify these patterns so much that you can write multi-threaded programs without touching these synchronization primitives at all, but the basics are still good to understand.
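For a taste of the kind of .NET 4.0 simplification hinted at here, this minimal sketch (illustrative names, not part of the original series) sums a range with Parallel.ForEach and thread-local partial sums, with no explicit lock managed by our code:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ParallelSketch
{
    // sum 1..n across threads without managing any synchronization primitive ourselves
    public static long Sum(int n)
    {
        int[] numbers = new int[n];
        for (int i = 0; i < n; i++) numbers[i] = i + 1;

        long total = 0;
        Parallel.ForEach(
            numbers,
            () => 0L,                                        // thread-local partial sum
            (value, state, partial) => partial + value,      // accumulate without locks
            partial => Interlocked.Add(ref total, partial)); // merge once per thread
        return total;
    }

    static void Main()
    {
        Console.WriteLine(Sum(1000));   // 1 + 2 + ... + 1000 = 500500
    }
}
```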
