C# multithreaded synchronization with the Monitor class, the lock keyword, and the Mutex class

Source: Internet
Author: User

In multithreaded code, the data (or the functions that access it) must be locked to keep it consistent. This is routine in databases, but since most programs are single-threaded, they have no need for locks. In a multithreaded program, however, locking is required to keep data synchronized. Fortunately, the framework provides three locking mechanisms: the Monitor class, the lock keyword, and the Mutex class.
The lock keyword is the simplest to use, and the Monitor class works much the same way: both lock the data or function being called. A Mutex, by contrast, is used to synchronize calls between multiple threads. Put simply, Monitor and lock are used to lock the callee, while a Mutex locks the calling side.
Consider the following program. Because it measures at the millisecond level, running it may give different results on different machines, and different results at different times on the same machine. My test environment was VS2005, Windows XP, a 3.0 GHz CPU, and 1 GB of memory.
The program has two threads, Thread1 and Thread2, and a TestFunc function. TestFunc prints the name of the thread that called it and the time of the call (in milliseconds). The two threads call TestFunc every 30 ms and 100 ms respectively, and TestFunc takes 50 ms to execute. The program is as follows:
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;

namespace MonitorLockMutex
{
    class Program
    {
        #region variables
        Thread thread1 = null;
        Thread thread2 = null;
        Mutex mutex = null;
        #endregion

        static void Main(string[] args)
        {
            Program p = new Program();
            p.RunThread();
            Console.ReadLine();
        }

        public Program()
        {
            mutex = new Mutex();
            thread1 = new Thread(new ThreadStart(Thread1Func));
            thread2 = new Thread(new ThreadStart(Thread2Func));
        }

        public void RunThread()
        {
            thread1.Start();
            thread2.Start();
        }

        private void Thread1Func()
        {
            for (int count = 0; count < 10; count++)
            {
                TestFunc("Thread1 has run " + count.ToString() + " times");
                Thread.Sleep(30);
            }
        }

        private void Thread2Func()
        {
            for (int count = 0; count < 10; count++)
            {
                TestFunc("Thread2 has run " + count.ToString() + " times");
                Thread.Sleep(100);
            }
        }

        private void TestFunc(string str)
        {
            Console.WriteLine("{0} {1}", str, DateTime.Now.Millisecond.ToString());
            Thread.Sleep(50);
        }
    }
}
The results of the operation are as follows:


This shows that, without locking, each thread calls TestFunc at an interval equal to its own sleep time plus TestFunc's execution time (50 ms). Because a thread needs to allocate memory when it starts, the 0th call is inaccurate; from the 1st through the 9th call, you can see that Thread1's interval is about 80 ms and Thread2's is about 150 ms.
Now modify TestFunc as follows:
private void TestFunc(string str)
{
    lock (this)
    {
        Console.WriteLine("{0} {1}", str, DateTime.Now.Millisecond.ToString());
        Thread.Sleep(50);
    }
}
Or, equivalently, with Monitor:
private void TestFunc(string str)
{
    Monitor.Enter(this);
    Console.WriteLine("{0} {1}", str, DateTime.Now.Millisecond.ToString());
    Thread.Sleep(50);
    Monitor.Exit(this);
}
Both Enter and Exit are static methods of the Monitor class.
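The relationship between the two forms is worth seeing directly: the lock statement expands to a Monitor.Enter/Monitor.Exit pair wrapped in try/finally, so Exit runs even if the body throws. The following is a minimal, self-contained sketch (the counter, the sync object, and the iteration count are all illustrative, not from the article) showing the idiomatic pattern with two threads incrementing a shared counter:

```csharp
using System;
using System.Threading;

class MonitorSketch
{
    // A private object dedicated to locking; the shared counter it protects.
    static readonly object sync = new object();
    static int counter = 0;

    static void Increment()
    {
        for (int i = 0; i < 100000; i++)
        {
            // "lock (sync) { counter++; }" expands to roughly this:
            // try/finally guarantees Exit runs even if the body throws.
            Monitor.Enter(sync);
            try
            {
                counter++;
            }
            finally
            {
                Monitor.Exit(sync);
            }
        }
    }

    static void Main()
    {
        Thread t1 = new Thread(Increment);
        Thread t2 = new Thread(Increment);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        // With the lock held, no increments are lost.
        Console.WriteLine(counter); // prints 200000
    }
}
```

Without the Enter/Exit pair, the two threads' read-modify-write cycles interleave and the final count is typically lower than 200000.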
The results with the lock in place are as follows:

Let's analyze the results again, starting from the 1st call. The interval between two successive calls from the same thread is that thread's sleep time plus TestFunc's execution time, while the interval between calls from different threads is just TestFunc's execution time. For example, the interval between two successive Thread1 calls is about 30 + 50 = 80 ms, and the interval between Thread2 calls is about 100 + 50 = 150 ms; the interval between a Thread1 call and a Thread2 call is 50 ms. Because TestFunc is locked, when one thread is executing it and other threads call it at the same time, those threads queue up and wait until the thread holding the lock releases it.
This is the characteristic of locking the called function: it guarantees that only one thread executes it at a time. A higher-priority thread gets more calls in and a lower-priority one fewer; this is preemptive scheduling.
Now let's look at how the Mutex class is used, and how it differs from Monitor and lock.
Modify the code as follows:
private void Thread1Func()
{
    for (int count = 0; count < 10; count++)
    {
        mutex.WaitOne();
        TestFunc("Thread1 has run " + count.ToString() + " times");
        mutex.ReleaseMutex();
    }
}

private void Thread2Func()
{
    for (int count = 0; count < 10; count++)
    {
        mutex.WaitOne();
        TestFunc("Thread2 has run " + count.ToString() + " times");
        mutex.ReleaseMutex();
    }
}

private void TestFunc(string str)
{
    Console.WriteLine("{0} {1}", str, DateTime.Now.Millisecond.ToString());
    Thread.Sleep(50);
}
The results of the operation are as follows:

It can be seen that a Mutex only excludes other threads; it does not exclude repeated acquisition by the same thread. That is, WaitOne() in Thread1 only excludes WaitOne() in Thread2. Thread1 itself is not blocked by its own WaitOne() and can call it multiple times, as long as it calls ReleaseMutex() the same number of times afterwards.
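This reentrancy can be demonstrated in isolation. The following minimal sketch (not from the article; a single-threaded demo I constructed) acquires the same Mutex twice on one thread without deadlocking, then balances each WaitOne with a ReleaseMutex:

```csharp
using System;
using System.Threading;

class MutexReentrancy
{
    static void Main()
    {
        Mutex mutex = new Mutex();

        // A Mutex is reentrant for its owning thread: the second WaitOne
        // does not block, it just deepens the ownership count.
        mutex.WaitOne();
        mutex.WaitOne();
        Console.WriteLine("acquired twice on one thread");

        // Each WaitOne must be balanced by a ReleaseMutex before any
        // other thread can acquire the mutex.
        mutex.ReleaseMutex();
        mutex.ReleaseMutex();
        Console.WriteLine("released twice");
    }
}
```

If a second thread had called WaitOne() between the two acquisitions, it would have blocked until both ReleaseMutex() calls completed, which is exactly the behavior observed between Thread1 and Thread2 above.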
So how do you make the threads execute strictly in the order they are called? Use lock and Mutex together; modify the code as follows:
private void Thread1Func()
{
    for (int count = 0; count < 10; count++)
    {
        lock (this)
        {
            mutex.WaitOne();
            TestFunc("Thread1 has run " + count.ToString() + " times");
            mutex.ReleaseMutex();
        }
    }
}

private void Thread2Func()
{
    for (int count = 0; count < 10; count++)
    {
        lock (this)
        {
            mutex.WaitOne();
            TestFunc("Thread2 has run " + count.ToString() + " times");
            mutex.ReleaseMutex();
        }
    }
}

