Translated from: http://msdn.microsoft.com/zh-cn/magazine/cc817398.aspx
This article introduces: basic concurrency concepts; common concurrency problems and measures to mitigate them; patterns for achieving safety.
This article uses the following technologies: multithreading, .NET Framework.
Contents
Data races
Forgotten synchronization
Incorrect granularity
Read and write tearing
Lock-free reordering
Reentrancy
Deadlock
Lock convoys
Stampedes
Two-step dance
Priority inversion
Patterns for achieving safety
Immutability
Purity
Isolation
Concurrency is everywhere. Server-side programs have long had to deal with a fundamentally concurrent programming model, and as multicore processors become mainstream, client-side programs will have to do the same. As concurrency becomes pervasive, questions of safety come to the fore: a program must remain just as stable and reliable in the face of large amounts of logical concurrency and varying degrees of physical hardware parallelism.
Correctly engineered concurrent code must follow additional rules beyond those of the corresponding sequential code. Reads and writes of memory and accesses to shared resources must be regulated by synchronization mechanisms so that conflicts cannot arise. In addition, it is often necessary to coordinate threads so that they cooperate to accomplish a task. The direct result of these additional requirements is the fundamental duty of ensuring that threads stay consistent and continue to make forward progress. Synchronization and coordination depend heavily on timing, which introduces nondeterminism and makes behavior hard to predict and to test.
These attributes only feel difficult because the way we think about concurrency has not yet changed. There is no specialized API to learn and no snippet to copy and paste; there really is a set of foundational concepts that you must learn and internalize. Over time, some languages and libraries may hide certain of these concepts, but if you are writing concurrent code today, that is not yet the case. This article describes some of the more common challenges to watch for and offers advice on how to cope with them in your software. First, I will discuss a category of problems that frequently goes wrong in concurrent programs. I call them "safety hazards" because they are easy to introduce and their consequences are usually severe: these hazards can bring your program down through crashes or memory corruption.
Data races: A data race (or race condition) occurs when data is accessed concurrently from multiple threads. In particular, it happens when one or more threads write a piece of data while one or more threads are also reading it. The problem arises because Windows programs, whether C++ or Microsoft .NET Framework, are fundamentally based on shared memory: all threads in a process may access data residing in the same virtual address space. Static variables and heap allocations can be shared. Consider the following canonical example:
static class Counter {
    internal static int s_curr = 0;
    internal static int GetNext() {
        return s_curr++;
    }
}
The goal of Counter is presumably to hand out a new, unique number for each call to GetNext. If two threads in the program call GetNext at the same time, however, both may be given the same number. The reason is that s_curr++ compiles into three separate steps: read the current value from the shared s_curr variable into a processor register; increment the register; and write the register value back to the shared s_curr variable. Two threads executing this sequence interleaved may each read the same value (say, 42) into their local registers, each increment it to 43, and each publish the same resulting value. GetNext then returns the same number to both threads, breaking the algorithm. Although the simple statement s_curr++ appears indivisible, it is not.
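To make the three-step decomposition concrete, here is a small, deterministic simulation (my own sketch, not code from the article) in which the register reads and writes of two hypothetical threads are interleaved by hand, reproducing the lost update described above:

```csharp
static class RaceSimulation {
    static int s_curr = 42;

    // Interleave the read / increment / write steps of two hypothetical
    // threads, A and B, exactly as described in the text.
    internal static (int, int) SimulateLostUpdate() {
        int regA = s_curr;   // A: read 42 into its register
        int regB = s_curr;   // B: read 42 before A writes back
        regA++;              // A: increment register to 43
        regB++;              // B: increment register to 43
        s_curr = regA;       // A: write 43 back
        s_curr = regB;       // B: write 43 back (overwrites, no effect)
        return (regA, regB); // both threads publish 43: one increment lost
    }
}
```

Both simulated threads return 43, and s_curr ends at 43 even though it was "incremented" twice.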
Forgotten synchronization: This is the simplest kind of data race: synchronization is forgotten entirely. Such races are rarely benign; even when they happen to produce correct results, the reasoning underlying that correctness is fragile. The problem is not always obvious. For example, an object may be part of a large, complex object graph that happens to be reachable through a static variable, or it may become shared by passing the object as part of a closure when creating a new thread or queuing work to a thread pool. Be careful when an object (graph) goes from private to shared. This is called publication, and it is discussed further below in the context of isolation. The reverse, where an object (graph) goes from shared back to private, is called privatization. The cure is to add the proper synchronization. In the counter example, I can use a simple interlocked operation:
static class Counter {
    internal static volatile int s_curr = 0;
    internal static int GetNext() {
        return Interlocked.Increment(ref s_curr);
    }
}
This works because the update is confined to a single memory location and because (quite conveniently) there is a hardware instruction (LOCK INC) that performs atomically exactly what I am trying to express in software. Alternatively, I can use a full-fledged lock:
static class Counter {
    internal static int s_curr = 0;
    private static object s_currLock = new object();
    internal static int GetNext() {
        lock (s_currLock) {
            return s_curr++;
        }
    }
}
The lock statement ensures that all threads trying to enter GetNext do so mutually exclusively, and it uses the CLR's System.Threading.Monitor class under the covers. C++ programs would use a CRITICAL_SECTION for the same purpose. Although a lock is not strictly necessary for this particular example, it becomes essential as soon as multiple operations are involved, because they can almost never be combined into a single interlocked operation.
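For reference, the lock statement is essentially compiler shorthand for System.Threading.Monitor calls wrapped in try/finally. The following is my sketch of roughly what the C# compiler emits, reusing the s_curr and s_currLock names from the counter example:

```csharp
using System.Threading;

static class Counter {
    internal static int s_curr = 0;
    private static object s_currLock = new object();

    // Approximately what 'lock (s_currLock) { return s_curr++; }'
    // expands to: the finally block guarantees the monitor is released
    // even if the protected region throws.
    internal static int GetNext() {
        bool taken = false;
        try {
            Monitor.Enter(s_currLock, ref taken);
            return s_curr++;
        }
        finally {
            if (taken) Monitor.Exit(s_currLock);
        }
    }
}
```

The try/finally shape is what makes the lock exception-safe: no code path can leave GetNext while still holding the monitor.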
Incorrect granularity: Even when shared state is accessed with proper synchronization, the resulting behavior may still be incorrect. The granularity must be large enough to encapsulate all operations that must be regarded as atomic within the protected region. This creates tension between correctness and keeping the region small, since a smaller region reduces the time other threads spend waiting to enter. For example, consider the bank account abstraction shown in Figure 1. All is well: the object's two methods, Deposit and Withdraw, appear free of concurrency bugs. Some banking application could use them without worrying that concurrent access would corrupt the balance. Figure 1 Bank Account
class BankAccount {
    private decimal m_balance = 0.0M;
    private object m_balanceLock = new object();
    internal void Deposit(decimal delta) {
        lock (m_balanceLock) { m_balance += delta; }
    }
    internal void Withdraw(decimal delta) {
        lock (m_balanceLock) {
            if (m_balance < delta)
                throw new Exception("Insufficient funds");
            m_balance -= delta;
        }
    }
}
But what happens when you want to add a Transfer method? A naive (and incorrect) line of thought holds that because Deposit and Withdraw are individually safe, they can simply be combined:
class BankAccount {
    internal static void Transfer(
        BankAccount a, BankAccount b, decimal delta) {
        a.Withdraw(delta);
        b.Deposit(delta);
    }
    // As before ...
}
This is incorrect. There is in fact a window of time, between the Withdraw call and the Deposit call, during which the money is missing entirely. The correct approach is to lock both a and b up front and then make the method calls:
class BankAccount {
    internal static void Transfer(
        BankAccount a, BankAccount b, decimal delta) {
        lock (a.m_balanceLock) {
            lock (b.m_balanceLock) {
                a.Withdraw(delta);
                b.Deposit(delta);
            }
        }
    }
    // As before ...
}
As it turns out, this approach solves the granularity problem, but it is prone to deadlock. You will see later how to fix that.
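One standard remedy, sketched here as my own illustration rather than code from the article, is to acquire the two locks in a consistent global order. The AccountId field below is an assumed addition that exists only to give the locks a total order:

```csharp
using System;

class BankAccount {
    // AccountId is an assumed field, added only to order lock acquisition.
    internal int AccountId;
    internal decimal m_balance = 0.0M;
    internal object m_balanceLock = new object();

    internal BankAccount(int id) { AccountId = id; }

    internal void Deposit(decimal delta) {
        lock (m_balanceLock) { m_balance += delta; }
    }

    internal void Withdraw(decimal delta) {
        lock (m_balanceLock) {
            if (m_balance < delta)
                throw new Exception("Insufficient funds");
            m_balance -= delta;
        }
    }

    internal static void Transfer(
        BankAccount a, BankAccount b, decimal delta) {
        // Always lock the lower-numbered account first, so that two
        // opposite transfers cannot each hold one lock while waiting
        // for the other.
        BankAccount first = a.AccountId < b.AccountId ? a : b;
        BankAccount second = (first == a) ? b : a;
        lock (first.m_balanceLock) {
            lock (second.m_balanceLock) {
                a.Withdraw(delta);
                b.Deposit(delta);
            }
        }
    }
}
```

Because every Transfer acquires the locks in the same global order regardless of argument order, the circular-wait condition needed for deadlock can never arise.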
Read and write tearing: As mentioned previously, benign races let you access variables without synchronization. Reads and writes of aligned, naturally sized words, for example pointer-sized values, which are 32 bits (4 bytes) on 32-bit processors and 64 bits (8 bytes) on 64-bit processors, are atomic. If a thread only reads a single variable that another thread writes, and no more complex invariants are involved, you can sometimes skip synchronization thanks to this guarantee. But be careful. If you try this on a misaligned memory location, or on one that is not naturally sized, you can experience read or write tearing. Tearing occurs because reading or writing such a location actually involves multiple physical memory operations. Concurrent updates can happen in between them, potentially producing a value that is some blend of the old and new values. For example, suppose ThreadA sits in a loop writing alternately 0x0L and 0xaaaabbbbccccddddL to the 64-bit variable s_x, while ThreadB reads it in a loop (see Figure 2).
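The essence of the Figure 2 scenario can be modeled deterministically. The sketch below is my own illustration, with assumed helper names: it treats a 64-bit store as two 32-bit half-stores, as a 32-bit processor would perform it, and interleaves a read between them, producing a blend of the old and new contents:

```csharp
static class TearingModel {
    // Model a 64-bit write performed as two 32-bit stores: the reader
    // runs after the low half is written but before the high half is.
    internal static long TornRead(long oldVal, long newVal) {
        ulong lowHalf  = (ulong)newVal & 0x00000000ffffffffUL; // already stored
        ulong highHalf = (ulong)oldVal & 0xffffffff00000000UL; // not yet stored
        return unchecked((long)(highHalf | lowHalf));
    }
}
```

With oldVal = 0x0L and newVal = 0xaaaabbbbccccddddL, a torn read yields 0x00000000ccccddddL, a value that neither thread ever wrote.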