Best Practices for managed threading
Multithreading requires careful programming. For most tasks, you can reduce complexity by queuing requests for execution on thread-pool threads. This topic addresses more difficult situations, such as coordinating the work of multiple threads or handling threads that block.
Deadlock and race conditions
Multithreaded programming solves throughput and responsiveness problems, but in doing so it introduces new ones: deadlocks and race conditions.
Deadlock
A deadlock occurs when each of two threads tries to lock a resource that the other has already locked. Neither thread can make any further progress.
Many methods of the managed threading classes provide time-outs to help you detect deadlocks. For example, the following code attempts to acquire a lock on the current instance. If the lock is not obtained within 300 milliseconds, Monitor.TryEnter returns false.
if (Monitor.TryEnter(this, 300)) {
    try {
        // Place code protected by the Monitor here.
    }
    finally {
        Monitor.Exit(this);
    }
}
else {
    // Code to execute if the attempt times out.
}
Race condition
A race condition is a bug that occurs when the outcome of a program depends on which of two or more threads reaches a particular block of code first. Running the program many times produces different results, and the result of any given run cannot be predicted.
A simple example of a race condition is incrementing a field. Suppose a class has a private static field (Shared in Visual Basic) that is incremented every time an instance of the class is created, using code such as objct++; (C#) or objct += 1 (Visual Basic). This operation requires loading the value from objct into a register, incrementing the value, and storing it back in objct.
In a multithreaded application, a thread that has loaded and incremented the value might be preempted by another thread that performs all three steps; when the first thread resumes execution and stores its value, it overwrites objct without taking into account the fact that the value changed while it was suspended.
This particular race condition is easily avoided by using methods of the Interlocked class, such as Interlocked.Increment. To read about other techniques for synchronizing data among multiple threads, see Synchronizing Data for Multithreading.
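The following sketch (not part of the original article; the class and field names are hypothetical) shows the difference: several threads increment a shared counter, once with the unsynchronized ++ operator and once with Interlocked.Increment.

using System;
using System.Threading;

class CounterDemo
{
    // Hypothetical counters: the unsynchronized one can lose increments under contention.
    private static int unsafeCount = 0;
    private static int safeCount = 0;

    static void Main()
    {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(delegate()
            {
                for (int j = 0; j < 100000; j++)
                {
                    unsafeCount++;                        // load, increment, store: steps can interleave
                    Interlocked.Increment(ref safeCount); // atomic read-modify-write
                }
            });
            threads[i].Start();
        }
        foreach (Thread t in threads)
        {
            t.Join();
        }

        // safeCount is always 400000; unsafeCount is frequently smaller.
        Console.WriteLine("unsafe: {0}, safe: {1}", unsafeCount, safeCount);
    }
}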
Race conditions can also occur when you synchronize the activities of multiple threads. Whenever you write a line of code, you must consider what could happen if a thread were preempted before executing that line (or before any of the individual machine instructions that make up the line) and another thread overtook it.
Number of processors
Multithreading solves different problems on single-processor computers, which mostly run end-user software, and on multiprocessor computers, which are typically used as servers.
Single-processor computers
Multithreaded programming provides better responsiveness to computer users and uses idle time to process background tasks. If you are using multithreaded programming on a single-processor computer, then:
There is only one thread running at any time.
A background thread executes only when the primary user thread is idle. A continuously running foreground thread will cause the background thread to get no processor time.
When you call the Thread.Start method on a thread, that thread does not start executing until the current thread yields or is preempted by the operating system.
Race conditions typically occur because the programmer did not anticipate the fact that a thread can be preempted at an awkward moment, sometimes allowing another thread to reach a code block first.
Multi-processor computers
Multithreading provides greater throughput. Ten processors can do ten times the work of one, but only if the work is divided so that all ten can work on it at once; threads provide an easy way to divide the work and exploit the extra processing power. If you use multithreading on a multiprocessor computer:
The number of threads that can execute concurrently depends on the number of processors.
A background thread executes only if the number of foreground threads being executed is less than the number of processors.
When you call the Thread.Start method on a thread, this thread may or may not execute immediately, depending on the number of processors and the number of threads currently waiting to execute.
Race conditions can occur not only because threads are preempted unexpectedly, but also because two threads executing on different processors might race to reach the same code block at the same time.
Static members and static constructors
A class is not initialized until its class constructor (static constructor in C#, Shared Sub New in Visual Basic) has finished running. To prevent the execution of code on a type that is not initialized, the common language runtime blocks all calls from other threads to static members of the class (Shared members in Visual Basic) until the class constructor has finished running.
For example, if a class constructor starts a new thread, and the thread procedure calls a static member of the class, the new thread blocks until the class constructor completes.
These conditions apply to any type that can have a static constructor.
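A minimal sketch of this behavior (the class name is hypothetical): the class constructor starts a thread whose procedure calls a static member of the same class, and the runtime blocks that call until the constructor has finished.

using System;
using System.Threading;

class Initialized
{
    public static void PrintState()
    {
        // A call to this static member from the new thread is blocked
        // until the class constructor completes.
        Console.WriteLine("Class constructor has finished running.");
    }

    static Initialized()
    {
        Thread t = new Thread(new ThreadStart(PrintState));
        t.Start();
        Thread.Sleep(1000); // simulate initialization work while the new thread waits
        // Do not call t.Join() here: waiting on the blocked thread would deadlock.
    }
}

class Program
{
    static void Main()
    {
        Initialized.PrintState(); // first use of the class triggers the class constructor
    }
}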
General recommendations
Consider the following guidelines when using multi-threading:
Do not use Thread.Abort to terminate other threads. Calling Abort on another thread is akin to throwing an exception on that thread without knowing what point that thread has reached in its processing.
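One common alternative (a sketch not taken from this article; the names are hypothetical) is to let the worker thread poll a shared flag and exit on its own:

using System.Threading;

class Worker
{
    // volatile ensures the worker sees the most recent value without locking.
    private volatile bool shouldStop = false;

    public void DoWork()
    {
        while (!shouldStop)
        {
            // Perform one unit of work here.
        }
        // The thread exits cleanly, running any finally blocks along the way.
    }

    public void RequestStop()
    {
        shouldStop = true; // the worker notices this on its next loop iteration
    }
}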
Do not use Thread.Suspend and Thread.Resume to synchronize the activities of multiple threads. Use Mutex, ManualResetEvent, AutoResetEvent, and Monitor instead.
Do not control the execution of worker threads from your main program (using events, for example). Instead, design the worker threads to be responsible for waiting until work is available, executing it, and notifying other parts of the program when finished. If your worker threads do not block, consider using thread-pool threads. Monitor.PulseAll is useful in situations where worker threads block.
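For example, a worker thread can wait on an AutoResetEvent for work and signal a second event when it finishes. This is only a sketch of that design; the event, queue, and method names are hypothetical.

using System;
using System.Collections.Generic;
using System.Threading;

class TaskWorker
{
    private readonly AutoResetEvent workReady = new AutoResetEvent(false);
    private readonly AutoResetEvent workDone = new AutoResetEvent(false);
    private readonly Queue<string> work = new Queue<string>();
    private readonly object queueLock = new object();

    // Runs on the worker thread: wait for work, drain the queue, signal completion.
    public void WorkerLoop()
    {
        while (true)
        {
            workReady.WaitOne();               // block until a producer signals
            while (true)
            {
                string item;
                lock (queueLock)
                {
                    if (work.Count == 0) break;
                    item = work.Dequeue();
                }
                Console.WriteLine("Processing " + item);
            }
            workDone.Set();                    // notify other parts of the program
        }
    }

    // Called from other threads to hand work to the worker.
    public void Submit(string item)
    {
        lock (queueLock)
        {
            work.Enqueue(item);
        }
        workReady.Set();                       // wake the worker
    }
}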
Do not use types as lock objects. That is, avoid code such as lock(typeof(X)) in C# or SyncLock(GetType(X)) in Visual Basic, or the use of System.Threading.Monitor.Enter with Type objects. For a given type, there is only one instance of System.Type per application domain. If the type you take a lock on is public, code other than your own can take locks on it, leading to deadlocks. For additional issues, see Reliability Best Practices.
Use caution when locking on instances, for example lock(this) in C# or SyncLock(Me) in Visual Basic. If other code in your application, external to the type, takes a lock on the object, deadlocks could occur.
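One common way to avoid both problems (a sketch, not from this article; the names are hypothetical) is to lock on a private object that no code outside the class can see:

class Account
{
    // Only code inside this class can take this lock, so external code
    // cannot participate in a deadlock involving it.
    private readonly object balanceLock = new object();
    private decimal balance;

    public void Deposit(decimal amount)
    {
        lock (balanceLock)   // rather than lock (this) or lock (typeof(Account))
        {
            balance += amount;
        }
    }
}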
Do ensure that a thread that has entered a monitor always leaves that monitor, even if an exception occurs while the thread is in the monitor. The C# lock statement and the Visual Basic SyncLock statement provide this behavior automatically, employing a finally block to ensure that Monitor.Exit is called. If you cannot ensure that Exit is called, consider changing your design to use Mutex. A mutex is released automatically when the thread that currently owns it terminates.
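In C#, lock (obj) { ... } is roughly equivalent to the following pattern (obj is a placeholder for whatever object you lock on), which is what guarantees that Monitor.Exit runs even when an exception is thrown:

System.Threading.Monitor.Enter(obj);
try
{
    // Code protected by the lock.
}
finally
{
    System.Threading.Monitor.Exit(obj);
}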
Do use multiple threads for tasks that require different resources, and avoid assigning multiple threads to a single resource. For example, any task involving I/O benefits from having its own thread, because that thread blocks during I/O operations and thus allows other threads to execute. User input is another resource that benefits from a dedicated thread. On a single-processor computer, a task that involves intensive computation coexists with user input and with tasks that involve I/O, but multiple computation-intensive tasks contend with each other.
Consider using methods of the Interlocked class for simple state changes, instead of the lock statement (SyncLock in Visual Basic). The lock statement is a good general-purpose tool, but the Interlocked class provides better performance for updates that must be atomic; internally, it executes a single lock prefix if there is no contention. In code reviews, watch for code like that shown in the following examples. In the first example, a state variable is incremented:
lock (lockObject) {
    myField++;
}
You can use the Increment method instead of the lock statement to improve performance as follows:
System.Threading.Interlocked.Increment(ref myField);
Note: In the .NET Framework version 2.0, the Add method provides atomic updates in increments larger than 1.
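For example, adding a value other than 1 atomically (assuming myField is an int field shared between threads):

System.Threading.Interlocked.Add(ref myField, 10);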
In the second example, a reference type variable is updated only if it is a null reference (Nothing in Visual Basic):
if (x == null)
{
    lock (lockObject)
    {
        if (x == null)
        {
            x = y;
        }
    }
}
Instead, use the CompareExchange method to improve performance as follows:
System.Threading.Interlocked.CompareExchange(ref x, y, null);
Note: In the .NET Framework version 2.0, the CompareExchange method has a generic overload that can be used for type-safe replacement of any reference type.
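For example, assuming x and y are fields of some reference type SomeClass, the generic overload avoids casting:

System.Threading.Interlocked.CompareExchange<SomeClass>(ref x, y, null);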
Recommendations for class libraries
When designing class libraries for multithreaded programming, consider the following guidelines:
If possible, avoid the need for synchronization. This is especially true for heavily used code. For example, an algorithm might be adjusted to tolerate a race condition rather than eliminate it. Unnecessary synchronization decreases performance and creates the possibility of deadlocks and race conditions.
Make static data (Shared in Visual Basic) thread safe by default.
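A minimal sketch of this guideline (the class and member names are hypothetical): protect static state with a private static lock object so that it is safe no matter which threads call into the library.

using System.Collections.Generic;

public class SettingsCache
{
    private static readonly object cacheLock = new object();
    private static readonly Dictionary<string, string> entries = new Dictionary<string, string>();

    public static void Set(string key, string value)
    {
        lock (cacheLock)   // static data is made thread safe by default
        {
            entries[key] = value;
        }
    }

    public static bool TryGet(string key, out string value)
    {
        lock (cacheLock)
        {
            return entries.TryGetValue(key, out value);
        }
    }
}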
Do not make instance data thread safe by default. Adding locks to create thread-safe code decreases performance, increases lock contention, and creates the possibility of deadlocks. In common application models, only one thread at a time executes user code, which minimizes the need for thread safety. For this reason, the .NET Framework class libraries are not thread safe by default.
Avoid providing static methods that alter static state. In common server scenarios, static state is shared across requests, which means that multiple threads can execute that code at the same time. This opens up the possibility of threading bugs. Consider using a design pattern that encapsulates data in instances that are not shared across requests. Furthermore, if static data is synchronized, calls between static methods that alter state can result in deadlocks or redundant synchronization, adversely affecting performance.
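As a sketch of the instance-based alternative (the names are hypothetical), keep per-request state on an object that each request creates for itself rather than in static members:

// Risky: static state altered by a static method is shared across concurrent requests.
public static class RequestStats
{
    public static int Count;
    public static void Record() { Count++; }   // racy when called from multiple threads
}

// Preferred: each request uses its own instance, so nothing is shared and no lock is needed.
public class RequestContext
{
    private int count;
    public void Record() { count++; }
    public int Count { get { return count; } }
}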