Multithreading states in C#

Source: Internet
Author: User
Tags: bitwise, semaphore

Figure 1: Thread state diagram

You can query a thread's execution state via its ThreadState property. Figure 1 shows ThreadState as "layers." ThreadState is awkwardly designed, in that it bitwise-combines three state "layers," where the members within each layer are mutually exclusive. Here are all three layers:

The running / blocking / aborting status (as shown in Figure 1)
The background / foreground status (ThreadState.Background)
The progress toward suspension via the deprecated Suspend method (ThreadState.SuspendRequested and ThreadState.Suspended)

In total, a ThreadState is a bitwise combination of zero or one member from each layer. Here are some sample ThreadStates:

Unstarted
Running
WaitSleepJoin
Background, Unstarted
SuspendRequested, Background, WaitSleepJoin
(The enumeration has two members that are never used, at least in the current CLR implementation: StopRequested and Aborted.)

There is a further complication: ThreadState.Running has an underlying value of 0, so the following test does not work:

if ((t.ThreadState & ThreadState.Running) > 0) ...
You must instead test for a running thread by exclusion, bitwise-ANDing against the other members, or use the thread's IsAlive property. IsAlive, however, may not be what you want: it returns true even when the thread is blocked or suspended (it is false only before the thread starts and after it ends).

Assuming you avoid the deprecated Suspend and Resume methods, you can write a helper method that strips out all members except those of the first state layer, allowing simple equality tests to be performed. A thread's background status can be obtained independently via IsBackground, so only the first layer actually carries useful information:

public static ThreadState SimpleThreadState (ThreadState ts)
{
  return ts & (ThreadState.Aborted | ThreadState.AbortRequested |
               ThreadState.Stopped | ThreadState.Unstarted |
               ThreadState.WaitSleepJoin);
}
ThreadState is invaluable for debugging or profiling, but unsuited to coordinating multiple threads, because no mechanism exists by which you can test a ThreadState and then act upon that information without the ThreadState potentially changing in the meantime.
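To make the masking technique concrete, here is a minimal, self-contained sketch (the class name is my own, and the helper is re-declared so the sketch runs standalone). Note that a running thread would report Running, since masking everything else away leaves the value 0:

```csharp
using System;
using System.Threading;

class ThreadStateDemo
{
    // Same masking technique as above: keep only the run/block/terminate layer.
    static ThreadState SimpleThreadState (ThreadState ts)
    {
        return ts & (ThreadState.Aborted | ThreadState.AbortRequested |
                     ThreadState.Stopped | ThreadState.Unstarted |
                     ThreadState.WaitSleepJoin);
    }

    static void Main()
    {
        Thread t = new Thread (() => Thread.Sleep (200));

        // Before Start, the first layer reads Unstarted:
        Console.WriteLine (SimpleThreadState (t.ThreadState));   // Unstarted

        t.Start();
        t.Join();

        // After the thread finishes, it reads Stopped:
        Console.WriteLine (SimpleThreadState (t.ThreadState));   // Stopped
    }
}
```

As the section warns, this is fine for diagnostics, but the state may change between the moment you read it and the moment you act on it.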

Wait Handles
The lock statement (in other words, Monitor.Enter/Monitor.Exit) is one example of a thread synchronization construct. While lock is suitable for enforcing exclusive access to a section of code or a resource, some synchronization tasks are clumsy or impossible with it, such as signaling a waiting worker thread to begin a task.

The Win32 API has a richer set of synchronization constructs, and these are exposed in the .NET Framework via the EventWaitHandle, Mutex, and Semaphore classes. Some are more useful than others: the Mutex class, for instance, mostly doubles up on what lock already provides, while EventWaitHandle offers unique signaling functionality.

All three classes depend on the WaitHandle class, although functionally they are quite different. One thing they do have in common is that they can be "named," allowing them to work across all operating system processes, rather than just across the threads of the current process.

EventWaitHandle has two subclasses: AutoResetEvent and ManualResetEvent (neither is related to C# events or delegates). Both classes derive all their functionality from their base class; their only difference is that they call the base class's constructor with different arguments.

In terms of performance, the overhead of using a wait handle runs in the low microseconds, which is of no consequence in the contexts in which wait handles are typically used.

AutoResetEvent is the most useful of the WaitHandle classes; together with the lock statement, it forms a major synchronization construct.

AutoResetEvent
An AutoResetEvent is like a ticketed turnstile: inserting a ticket lets exactly one person through. The "auto" in the class's name refers to the fact that the open turnstile automatically closes, or "resets," after someone is let through. A thread waits, or blocks, at the turnstile by calling WaitOne (wait at this "one" turnstile until it opens), and a ticket is inserted by calling the Set method. If a number of threads call WaitOne, a queue builds up at the turnstile. A ticket can come from any thread; in other words, any (unblocked) thread with access to the AutoResetEvent object can call Set on it to release one blocked thread.

If Set is called when no thread is waiting, the handle stays open until some thread calls WaitOne. This behavior avoids a race between a thread heading for the turnstile and a thread inserting a ticket ("oops, the ticket was inserted a microsecond too soon; bad luck, now you'll have to wait indefinitely!"). However, calling Set repeatedly on a turnstile at which no one is waiting doesn't let a whole party through when they arrive: only the next single person is let through; the extra tickets are "wasted."

WaitOne accepts an optional timeout parameter: the method then returns false if the wait ends because of a timeout rather than because the handle was signaled. WaitOne can also be told to exit the current synchronization context for the duration of the wait, in order to avoid excessive blocking.

A Reset method provides the option of closing the turnstile, should it be open, without any waiting or blocking.
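The timeout and Reset behaviors can be seen in a minimal sketch (the class name is my own; the second argument to this WaitOne overload is the exit-context flag mentioned above):

```csharp
using System;
using System.Threading;

class TimeoutDemo
{
    static EventWaitHandle wh = new AutoResetEvent (false);

    static void Main()
    {
        // No one calls Set, so this wait gives up after 100 ms:
        bool signaled = wh.WaitOne (100, false);
        Console.WriteLine (signaled);                  // False

        wh.Set();     // Open the turnstile...
        wh.Reset();   // ...then close it again before anyone waits

        // The Set was undone by Reset, so this also times out:
        Console.WriteLine (wh.WaitOne (100, false));   // False
    }
}
```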

AutoResetEvent can be created in one of two ways. The first is via its constructor:

EventWaitHandle wh = new AutoResetEvent (false);
If the boolean argument is true, the handle is created in a signaled state, as if Set had been called immediately after construction. The other way is via the base EventWaitHandle class:

EventWaitHandle wh = new EventWaitHandle (false, EventResetMode.AutoReset);
EventWaitHandle's constructor also allows a ManualResetEvent to be created (by specifying EventResetMode.ManualReset).

You should call Close on a wait handle to release operating system resources once it is no longer needed. However, if a wait handle will be used for the life of the application (as in most of the examples in this section), you can be lazy and omit this step: it will be taken care of automatically when the application domain tears down.

In the following example, one thread waits until it is signaled by another thread:

class BasicWaitHandle {
  static EventWaitHandle wh = new AutoResetEvent (false);

  static void Main() {
    new Thread (Waiter).Start();
    Thread.Sleep (1000);          // Pause for a second...
    wh.Set();                     // OK - wake it up
  }

  static void Waiter() {
    Console.WriteLine ("Waiting...");
    wh.WaitOne();                 // Wait for notification
    Console.WriteLine ("Notified");
  }
}
Waiting... (pause) Notified.

Creating an EventWaitHandle that spans processes

The EventWaitHandle constructor also allows a "named" EventWaitHandle to be created, capable of operating across multiple processes. The name is simply a string, and it can be any value that doesn't unintentionally conflict with someone else's! If the name is already in use on the computer, you get a reference to the same underlying EventWaitHandle; otherwise, the operating system creates a new one. Here's an example:

EventWaitHandle wh = new EventWaitHandle (false, EventResetMode.AutoReset,
                                          "MyCompany.MyApp.SomeName");
If two applications each ran this code, they would be able to signal each other: the wait handle would work across all threads in both processes.

Task Acknowledgement

Suppose we want to perform tasks in the background without creating a new thread each time we get a task. We can achieve this with a single worker thread that loops: waiting for a task, executing it, then waiting for the next task. This is a common multithreading scenario. As well as cutting the overhead of repeatedly creating threads, it serializes task execution, excluding the potential for unwanted interaction between multiple worker threads and excessive resource consumption.

We must decide what to do, however, if the worker is already busy with a previous task when a new task arrives. Suppose in this situation we choose to block the caller until the previous task is complete. Such a system can be implemented with two AutoResetEvent objects: a "ready" AutoResetEvent that is Set by the worker thread when it's ready, and a "go" AutoResetEvent that is Set by the calling thread when there's a new task. In the example below, a simple string field describes the task (declared with the volatile keyword to ensure both threads always see the same version):

class AcknowledgedWaitHandle {
  static EventWaitHandle ready = new AutoResetEvent (false);
  static EventWaitHandle go = new AutoResetEvent (false);
  static volatile string task;

  static void Main() {
    new Thread (Work).Start();

    // Signal the worker 5 times
    for (int i = 1; i <= 5; i++) {
      ready.WaitOne();               // First wait until the worker is ready
      task = "a".PadRight (i, 'h');  // Assign a task
      go.Set();                      // Tell the worker to go!
    }

    // Tell the worker to end using a null task
    ready.WaitOne(); task = null; go.Set();
  }

  static void Work() {
    while (true) {
      ready.Set();                   // Indicate that we're ready
      go.WaitOne();                  // Wait to be kicked off...
      if (task == null) return;      // Gracefully exit
      Console.WriteLine (task);
    }
  }
}

a
ah
ahh
ahhh
ahhhh

Notice how we assign a null task to signal the worker thread to exit. Calling Interrupt or Abort on the worker would work equally well in this case, provided we first called ready.WaitOne. This is because after ready.WaitOne we know the exact location of the worker: on or just before the go.WaitOne statement, and we thereby avoid the complications of interrupting arbitrary code. Calling Interrupt or Abort would, however, require that we catch an exception in the worker thread.

Producer/Consumer Queue
Another common threading scenario is to have a background worker process tasks from a queue. This is called a producer/consumer queue: the producer enqueues tasks; the consumer dequeues tasks on a worker thread. It's rather like the previous example, except that the caller doesn't get blocked if the worker is already busy with a task.

A producer/consumer queue is scalable, in that multiple consumers can be created, each servicing the same queue but on its own separate thread. This is a good way to take advantage of multiprocessor systems while still restricting the number of worker threads, so as to avoid the pitfalls of unbounded concurrent threads (excessive context switching and resource contention).

In the example below, a single AutoResetEvent is used to signal the worker, which waits only when it runs out of tasks (when the queue is empty). A generic collection class is used for the queue, and its access must be protected by a lock to ensure thread safety. The worker ends when it dequeues a null task:

using System;
using System.Threading;
using System.Collections.Generic;

class ProducerConsumerQueue : IDisposable {
  EventWaitHandle wh = new AutoResetEvent (false);
  Thread worker;
  object locker = new object();
  Queue<string> tasks = new Queue<string>();

  public ProducerConsumerQueue() {
    worker = new Thread (Work);
    worker.Start();
  }

  public void EnqueueTask (string task) {
    lock (locker) tasks.Enqueue (task);
    wh.Set();
  }

  public void Dispose() {
    EnqueueTask (null);     // Signal the consumer to exit
    worker.Join();          // Wait for the consumer's thread to finish
    wh.Close();             // Release any OS resources
  }

  void Work() {
    while (true) {
      string task = null;
      lock (locker)
        if (tasks.Count > 0) {
          task = tasks.Dequeue();
          if (task == null) return;
        }
      if (task != null) {
        Console.WriteLine ("Performing task: " + task);
        Thread.Sleep (1000);  // Simulate work...
      }
      else
        wh.WaitOne();         // No more tasks - wait for a signal
    }
  }
}
Here's a Main method to test the queue:

class Test {
  static void Main() {
    using (ProducerConsumerQueue q = new ProducerConsumerQueue()) {
      q.EnqueueTask ("Hello");
      for (int i = 0; i < 10; i++) q.EnqueueTask ("Say " + i);
      q.EnqueueTask ("Goodbye!");
    }
    // Exiting the using statement calls q's Dispose method, which
    // enqueues a null task and waits until the consumer finishes.
  }
}
Performing task: Hello
Performing task: Say 0
Performing task: Say 1
Performing task: Say 2
...
Performing task: Say 9
Performing task: Goodbye!

Note that in this example we explicitly close the wait handle when the ProducerConsumerQueue is disposed, since we could potentially create and destroy many instances of this class within the life of the application.

ManualResetEvent
A ManualResetEvent is a variation on AutoResetEvent. It differs in that it doesn't automatically reset after a thread is let through on a WaitOne call, and so it functions like a gate: calling Set opens the gate, allowing any number of threads that WaitOne at the gate through; calling Reset closes the gate, causing, potentially, a queue of waiters to accumulate until the gate next opens.

One could simulate this functionality with a boolean "gateOpen" field (declared with the volatile keyword) in combination with "spin-sleeping": repeatedly checking the flag, and then sleeping for a short period of time.
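Here's a minimal sketch of that simulation (the class, field, and method names are my own):

```csharp
using System;
using System.Threading;

class SpinSleepGate {
  // volatile ensures every thread sees the latest value of the flag
  static volatile bool gateOpen;

  // "Spin-sleep": repeatedly test the flag, sleeping briefly in between
  static void WaitForGate() {
    while (!gateOpen) Thread.Sleep (10);
  }

  static void Main() {
    new Thread (() => { WaitForGate(); Console.WriteLine ("Through!"); }).Start();
    new Thread (() => { WaitForGate(); Console.WriteLine ("Through!"); }).Start();

    Thread.Sleep (200);
    gateOpen = true;    // the "Set": both waiters now pass through
  }
}
```

Unlike a real ManualResetEvent, this burns a little CPU while waiting and reacts only as fast as the sleep interval.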

ManualResetEvent is sometimes used to signal that a particular operation is complete, or that a thread has completed initialization and is ready to perform work.
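The gate behavior can be shown in a short sketch (the names are illustrative): three threads wait at a closed gate; a single Set releases them all, and the gate stays open afterwards until Reset is called:

```csharp
using System;
using System.Threading;

class ManualResetDemo {
  static ManualResetEvent gate = new ManualResetEvent (false);   // gate starts closed

  static void Enter (object name) {
    gate.WaitOne();                          // block until the gate opens
    Console.WriteLine (name + " is through");
  }

  static void Main() {
    for (int i = 1; i <= 3; i++)
      new Thread (Enter).Start ("Thread " + i);

    Thread.Sleep (200);
    gate.Set();        // open the gate: all three waiters pass, and the
                       // gate remains open until Reset is called
  }
}
```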

Mutex

Mutex provides the same functionality as C#'s lock statement, which makes Mutex mostly redundant. Its one advantage is that it can work across processes, providing a computer-wide lock rather than an application-wide one.

While Mutex is reasonably fast, lock is hundreds of times faster again: acquiring a Mutex takes a few microseconds; acquiring a lock takes tens of nanoseconds (assuming no blocking).

With the Mutex class, the WaitOne method obtains the exclusive lock, blocking if it's contended. The exclusive lock is then released with the ReleaseMutex method. Just like C#'s lock statement, a Mutex can be released only from the same thread that obtained it.

A common use for a cross-process Mutex is to ensure that only one instance of a program can run at a time. Here's how it's done:

class OneAtATimePlease {
  // Use a name unique to the app (eg, include your company URL)
  static Mutex mutex = new Mutex (false, "oreilly.com OneAtATimeDemo");

  static void Main() {
    // Wait 5 seconds if contended - in case another instance
    // of the program is in the process of shutting down.

    if (!mutex.WaitOne (TimeSpan.FromSeconds (5), false)) {
      Console.WriteLine ("Another instance of the app is running. Bye!");
      return;
    }
    try {
      Console.WriteLine ("Running - press Enter to exit");
      Console.ReadLine();
    }
    finally { mutex.ReleaseMutex(); }
  }
}
A good feature of Mutex is that if the application terminates without ReleaseMutex first being called, the CLR will release the Mutex automatically.

Semaphore
A Semaphore is like a nightclub: it has a certain capacity, enforced by a bouncer. Once full, no more people can enter the nightclub, and a queue builds up outside. Then, for each person who leaves, one person from the head of the queue can enter. The constructor requires a minimum of two arguments: the number of places currently available in the nightclub, and the nightclub's total capacity.

A Semaphore is somewhat similar in effect to a Mutex or lock, except that a Semaphore has no "owner": it's thread-agnostic. Any thread can call Release on a Semaphore, whereas with Mutex and lock, only the thread that obtained the resource can release it.

In the following example, ten threads execute a loop with a Sleep statement in the middle. A Semaphore ensures that no more than three threads can execute that Sleep statement at once:

class SemaphoreTest {
  static Semaphore s = new Semaphore (3, 3);   // Available=3; Capacity=3

  static void Main() {
    for (int i = 0; i < 10; i++) new Thread (Go).Start();
  }

  static void Go() {
    while (true) {
      s.WaitOne();
      Thread.Sleep (100);   // Only 3 threads can get here at once
      s.Release();
    }
  }
}
WaitAny, WaitAll, and SignalAndWait
In addition to the Set and WaitOne methods, there are static methods on the WaitHandle class for crafting more complex synchronization procedures.

WaitAny, WaitAll, and SignalAndWait facilitate waiting across multiple wait handles, potentially of differing types.

SignalAndWait is perhaps the most useful: it calls WaitOne on one WaitHandle while calling Set on another WaitHandle, as an atomic operation. You can use this method on a pair of EventWaitHandles to set up two threads so they "meet" at the same point in time; either AutoResetEvent or ManualResetEvent will do the trick. The first thread does this:

WaitHandle.SignalAndWait (wh1, wh2);
Meanwhile, the second thread does the opposite:

WaitHandle.SignalAndWait (wh2, wh1);
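A minimal sketch of such a rendezvous (the class name and timing are my own): whichever thread arrives first blocks until the other also reaches its SignalAndWait call:

```csharp
using System;
using System.Threading;

class Rendezvous {
  static EventWaitHandle wh1 = new AutoResetEvent (false);
  static EventWaitHandle wh2 = new AutoResetEvent (false);

  static void Main() {
    new Thread (delegate() {
      Console.WriteLine ("First thread is at the meeting point");
      WaitHandle.SignalAndWait (wh1, wh2);   // signal wh1, wait on wh2
      Console.WriteLine ("First thread proceeds");
    }).Start();

    Thread.Sleep (200);                      // arrive late on purpose
    Console.WriteLine ("Second thread is at the meeting point");
    WaitHandle.SignalAndWait (wh2, wh1);     // signal wh2, wait on wh1
    Console.WriteLine ("Second thread proceeds");
  }
}
```

Each call signals its first handle and waits on its second, so neither thread proceeds until both have arrived.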
WaitHandle.WaitAny waits for any one of an array of wait handles to be signaled; WaitHandle.WaitAll waits on all of the given handles. Using the ticketed-turnstile analogy, these methods are like simultaneously queuing at all of the turnstiles: going through at the first one to open (in the case of WaitAny), or waiting until they're all open (in the case of WaitAll).

WaitAll is actually of dubious value, because of a strange connection to apartment threading, a legacy of the COM architecture: WaitAll requires that the caller be in a multithreaded apartment, which happens to be the apartment model least suitable for a Windows Forms application, which needs to perform tasks as mundane as interacting with the clipboard!

Fortunately, the .NET Framework provides a more advanced signaling mechanism for when wait handles are awkward or unsuitable: Monitor.Wait and Monitor.Pulse.

Synchronization Contexts
An alternative to locking manually is to lock declaratively. By deriving from ContextBoundObject and applying the Synchronization attribute, you instruct the CLR to apply locking automatically. Here's an example:

using System;
using System.Threading;
using System.Runtime.Remoting.Contexts;

[Synchronization]
public class AutoLock : ContextBoundObject {
  public void Demo() {
    Console.Write ("Start...");
    Thread.Sleep (1000);          // We can't be preempted here,
    Console.WriteLine ("end");    // thanks to automatic locking!
  }
}

public class Test {
  public static void Main() {
    AutoLock safeInstance = new AutoLock();
    new Thread (safeInstance.Demo).Start();    // Call the Demo
    new Thread (safeInstance.Demo).Start();    // method 3 times
    safeInstance.Demo();                       // concurrently
  }
}

Start... end
Start... end
Start... end

The CLR ensures that only one thread can execute code in safeInstance at a time. It does this by creating a single synchronizing object and locking it around every call to each of safeInstance's methods or properties. The scope of the lock, in this case the safeInstance object, is called a synchronization context.

So, how does this work? A clue is in the Synchronization attribute's namespace: System.Runtime.Remoting.Contexts. A ContextBoundObject can be thought of as a "remote" object, meaning all method calls are intercepted. To make this interception possible, when we instantiate AutoLock, the CLR actually returns a proxy: an object with the same methods and properties as an AutoLock object, which acts as an intermediary. It is via this intermediary that the automatic locking takes place. Overall, the interception adds around a microsecond to each method call.

Automatic synchronization cannot be used to protect static type members, nor classes not derived from ContextBoundObject (for instance, a Windows Form).

The locking is applied internally in the same way, so you might expect that the following example would yield the same result as the last:

[Synchronization]
public class AutoLock : ContextBoundObject {
  public void Demo() {
    Console.Write ("Start...");
    Thread.Sleep (1000);
    Console.WriteLine ("end");
  }

  public void Test() {
    new Thread (Demo).Start();
    new Thread (Demo).Start();
    new Thread (Demo).Start();
    Console.ReadLine();
  }

  public static void Main() {
    new AutoLock().Test();
  }
}
(Notice that we've sneaked in a Console.ReadLine statement.) Because only one thread can execute code at a time in an object of this class, the three new threads will remain blocked at the Demo method until the Test method finishes, which requires the ReadLine to complete. Hence we end up with the same result as before, but only after pressing the Enter key. This is a thread-safety hammer big enough to preclude almost any useful multithreading within a class!

Furthermore, we still haven't solved a problem described earlier: if AutoLock were a collection class, for instance, we'd still require a lock around a statement such as the following, assuming it ran from another class:

if (safeInstance.Count > 0) safeInstance.RemoveAt (0);
unless the class containing this code was itself a synchronized ContextBoundObject!

A synchronization context can extend beyond the scope of a single object. By default, if a synchronized object is instantiated from within the code of another synchronized object, both share the same context (in other words, one big lock!). This behavior can be changed by specifying one of the constants defined by the SynchronizationAttribute class as an argument to the Synchronization attribute's constructor:

Constant            Meaning
NOT_SUPPORTED       Equivalent to not using the Synchronization attribute
SUPPORTED           Joins the existing synchronization context if instantiated from another synchronized object; otherwise remains unsynchronized
REQUIRED (default)  Joins the existing synchronization context if instantiated from another synchronized object; otherwise creates a new context
REQUIRES_NEW        Always creates a new synchronization context
So if an object of class SynchronizedA instantiates an object of class SynchronizedB, the two will be given separate synchronization contexts if SynchronizedB is declared as follows:

[Synchronization (SynchronizationAttribute.REQUIRES_NEW)]
public class SynchronizedB : ContextBoundObject { ...
The bigger the scope of a synchronization context, the easier it is to manage, but the less the opportunity for useful concurrency. At the other end of the scale, separate synchronization contexts invite deadlocks. Here's an example:

[Synchronization]
public class Deadlock : ContextBoundObject {
  public Deadlock Other;
  public void Demo() { Thread.Sleep (1000); Other.Hello(); }
  void Hello() { Console.WriteLine ("hello"); }
}

public class Test {
  static void Main() {
    Deadlock dead1 = new Deadlock();
    Deadlock dead2 = new Deadlock();
    dead1.Other = dead2;
    dead2.Other = dead1;
    new Thread (dead1.Demo).Start();
    dead2.Demo();
  }
}

Because each instance of Deadlock is created within Test, an unsynchronized class, each instance gets its own synchronization context, and hence its own lock. When the two objects call upon each other, it doesn't take long for the deadlock to occur (one second, to be precise!). The problem would be particularly insidious if the Deadlock and Test classes were written by different programming teams: it may be unreasonable to expect those responsible for the Test class even to know about the problem, let alone how to resolve it. This contrasts with explicit locks, where deadlocks are usually more obvious.

Re-entry issues
A thread-safe method is sometimes called reentrant, because it can be preempted partway through its execution and then called again on another thread without ill effect. In a general sense, the terms thread-safe and reentrant are considered either synonymous or closely related.

Reentrancy, however, has a more sinister connotation with automatic locking. If the Synchronization attribute is applied with its reentrant argument true:

[Synchronization (true)]
then the synchronization context's lock will be temporarily released when execution leaves the context. In the previous example, this would prevent the deadlock from occurring, which is obviously desirable. A side effect, however, is that during this interim, any thread is free to call any method on the original object ("re-entering" the synchronization context), unleashing the very complications of multithreading one is trying to avoid in the first place. This is the problem of reentrancy.

Because [Synchronization (true)] is applied at a class level, this attribute turns every out-of-context method call made by the class into an opening for reentrancy.

While reentrancy can be dangerous, there are sometimes few other options. For instance, suppose one were to implement multithreading internally within a synchronized class, by delegating the logic to worker threads running objects in separate contexts. Without reentrancy, these workers could be unreasonably hindered in communicating with each other or with the target object.

This highlights a fundamental weakness of automatic synchronization: the extensive scope over which locking is applied can create difficulties that might never have otherwise arisen. These difficulties, deadlocking, reentrancy, and emasculated concurrency, make manual locking more palatable in anything other than the simplest of scenarios.
