Contents
- Intent
- Motivation
- Applicability
- Structure and participants
- Collaboration
- Consequences
- Implementation and example code
- Variants
- Related patterns
(Author: Douglas C. Schmidt; translated by Thzhang of Huihoo.org; compiled by Allen)
Intent
The Double-Checked Locking pattern reduces contention and locking overhead whenever the code in a critical section needs to execute only once, but must be thread-safe when it does acquire the lock.
Motivation
1. The standard Singleton. Developing correct and efficient concurrent applications is hard. Programmers must learn new techniques (concurrency control and deadlock-avoidance algorithms) and mechanisms (such as multithreading and synchronization APIs). In addition, many familiar design patterns (such as Singleton and Iterator) that work well in sequential programs make implicit assumptions that do not hold in concurrent contexts. To illustrate this, consider the implementation of the standard Singleton pattern in a multithreaded environment. The Singleton pattern ensures that a class has exactly one instance and provides a global access point to that instance. Dynamically allocating the singleton is common practice in C++ programs, because C++ does not define the initialization order of static global objects across translation units, which makes the static-object approach non-portable. Moreover, dynamic allocation avoids the cost of initializing a singleton that is never used.
class Singleton
{
public:
  static Singleton *instance (void)
  {
    if (instance_ == 0)
      // Critical section.
      instance_ = new Singleton;
    return instance_;
  }
  void method (void);
  // Other methods and members omitted.
private:
  static Singleton *instance_;
};
Before using the operations the singleton provides, application code calls the static instance method to obtain a reference to the singleton object, as shown below:
Singleton::instance ()->method ();
2. Problem: race conditions. Unfortunately, the standard Singleton implementation shown above does not work correctly under preemptive multitasking or true parallelism. For example, if multiple threads running on a parallel host call Singleton::instance before the Singleton object has been initialized, the Singleton constructor can be called multiple times, because several threads may execute the new Singleton operation in the critical section shown above. A critical section is a sequence of instructions that must obey the following invariant: while one thread/process is executing in the critical section, no other thread/process may execute in it at the same time. In this example, the initialization of the singleton is a critical section. Violating the critical-section invariant causes, in the best case, a memory leak; in the worst case, if the initialization is not idempotent, it can have serious consequences.
3. Common traps and pitfalls. The usual way to enforce the critical section is to add a static Mutex object to the class. This Mutex ensures that allocation and initialization of the singleton form an atomic operation, as follows:
class Singleton
{
public:
  static Singleton *instance (void)
  {
    // Constructor of guard acquires lock_ automatically.
    Guard guard (lock_);
    // Only one thread in the critical section at a time.
    if (instance_ == 0)
      instance_ = new Singleton;
    return instance_;
    // Destructor of guard releases lock_ automatically.
  }
private:
  static Mutex lock_;
  static Singleton *instance_;
};
The Guard class uses a common C++ idiom ("resource acquisition is initialization"): when an instance of the class is created, its constructor automatically acquires a resource, and when the object goes out of scope, its destructor automatically releases that resource. By using Guard, every call to Singleton::instance automatically acquires and releases lock_.
Even though this critical section is needed only once, every call to the instance method must acquire and release lock_. Although this implementation is now thread-safe, the added locking overhead may be unacceptable. An obvious (though incorrect) optimization is to move the Guard inside the conditional check of instance_:
static Singleton *instance (void)
{
  if (instance_ == 0) {
    Guard guard (lock_);
    // Only come here if instance_
    // hasn't been initialized yet.
    instance_ = new Singleton;
  }
  return instance_;
}
This reduces the locking overhead, but it does not provide thread-safe initialization. A race condition remains in multithreaded applications that can cause instance_ to be initialized multiple times. For example, if two threads both test instance_ == 0 before either has initialized it, both checks succeed; one thread acquires lock_ through the guard and the other blocks. After the first thread initializes the Singleton and releases lock_, the blocked thread acquires lock_ and erroneously initializes the Singleton a second time.
4. Solution: the Double-Checked Locking optimization. A better way to solve this problem is Double-Checked Locking, an optimization pattern for eliminating unnecessary locking. Ironically, its implementation is almost identical to the previous attempt: to avoid unnecessary locking, the call to new is simply wrapped in another conditional check:
class Singleton
{
public:
  static Singleton *instance (void)
  {
    // First check.
    if (instance_ == 0) {
      // Ensure serialization (guard constructor acquires lock_).
      Guard guard (lock_);
      // Double check.
      if (instance_ == 0)
        instance_ = new Singleton;
    }
    return instance_;
    // guard destructor releases lock_.
  }
private:
  static Mutex lock_;
  static Singleton *instance_;
};
The first thread to acquire lock_ constructs the Singleton and assigns the pointer to instance_. Threads that call the instance method later find instance_ != 0 and skip the initialization. The second check prevents a race condition if multiple threads attempt to initialize the Singleton concurrently: in the code above, those threads queue on lock_, and when each queued thread finally acquires lock_, it finds instance_ != 0 and skips the initialization.
The Singleton::instance implementation incurs locking overhead only when the Singleton is first initialized, and only if multiple threads enter the instance method at the same time. On every subsequent call, instance_ != 0, so there is no locking or unlocking overhead at all. By adding a mutex and a second conditional check, the standard Singleton implementation becomes thread-safe without imposing locking overhead on every access.
Applicability
Use the Double-Checked Locking optimization pattern when an application has the following characteristics:
1. The application contains one or more critical sections of code that must execute sequentially.
2. Multiple threads may attempt to execute the critical section concurrently.
3. The critical section needs to be executed only once.
4. Locking every access to the critical section would cause excessive locking overhead.
5. A lightweight, reliable conditional check can be added within the scope of the lock.
Structure and participants
The structure and participants of the Double-Checked Locking pattern are best shown with pseudo-code. Figure 1 illustrates the following participants in the Double-Checked Locking pattern:
1. Just-once critical section. The code contained in the critical section is executed only once. For example, a singleton is initialized only once, so the call to new Singleton executes rarely compared with the many calls to Singleton::instance.
2. Mutex. A lock that serializes access to the code in the critical section.
3. Flag. The flag indicates whether the code in the critical section has already been executed. In the example above, instance_ serves as the flag.
4. Application thread. A thread that attempts to execute the critical-section code.
Collaboration
Figure 2 shows the interactions between the participants in the Double-Checked Locking pattern. As the common optimization case, an application thread first checks whether the flag is set; if it is not, the thread acquires the mutex. While holding the lock, the application thread checks the flag a second time, executes the just-once critical section if the flag is still unset, and then sets the flag to true. Finally, the application thread releases the lock.
Consequences
The Double-Checked Locking pattern provides the following benefits:
1. Minimized locking. By performing two flag checks, the Double-Checked Locking pattern optimizes the common case. Once the flag is set, the first check ensures that subsequent accesses require no locking at all.
2. Race-condition prevention. The second flag check ensures that the critical section is executed only once.
The Double-Checked Locking pattern also has a liability: the potential for subtle portability bugs. These can be fatal if software using the pattern is ported to a hardware platform that lacks atomic pointer or integer assignment semantics. For example, if the instance_ pointer is used as the flag of a Singleton implementation, all of the bits of instance_ must be read and written in single operations. If the result of new is not written to memory atomically, another thread may read a partially updated, invalid pointer, leading to illegal memory accesses.
This is possible on systems where a value may span a memory-alignment boundary, so that two fetches are needed for each access. In such cases, it may be necessary to use a separate, word-aligned integral flag instead of the instance_ pointer itself.
Another issue arises if an overly aggressive compiler optimizes the flag by caching it in a register, or removes the second instance_ == 0 check entirely. The Variants discussion later in this article describes how the volatile keyword can address these problems.
Implementation and example code
ACE uses the Double-Checked Locking pattern in several library components. For example, to reduce code duplication, ACE provides a reusable adapter, ACE_Singleton, that converts an ordinary class into a class with singleton behavior. The following code shows how ACE_Singleton is implemented with the Double-Checked Locking pattern.
// A Singleton Adapter: uses the Adapter
// pattern to turn ordinary classes into
// Singletons optimized with the
// Double-Checked Locking pattern.
template <class TYPE, class LOCK>
class ACE_Singleton
{
public:
  static TYPE *instance (void);
protected:
  static TYPE *instance_;
  static LOCK lock_;
};

template <class TYPE, class LOCK>
TYPE *ACE_Singleton<TYPE, LOCK>::instance ()
{
  // Perform the Double-Checked Locking to
  // ensure proper initialization.
  if (instance_ == 0) {
    ACE_Guard<LOCK> lock (lock_);
    if (instance_ == 0)
      instance_ = new TYPE;
  }
  return instance_;
}
The ACE_Singleton class is parameterized by TYPE and LOCK: a class of a given TYPE is converted into a singleton that uses a mutex of type LOCK.
The Token Manager in ACE is an example of using ACE_Singleton. The Token Manager detects deadlocks on local and remote tokens (recursive locks) in multithreaded applications. To minimize resource usage, the Token Manager is created on demand. Creating a singleton Token Manager requires only the following typedef:
typedef ACE_Singleton<ACE_Token_Manager, ACE_Thread_Mutex> Token_Mgr;
The Token Manager singleton is used for local and remote token deadlock detection. Before a thread blocks waiting for a mutex, it first queries the Token Manager singleton to test whether blocking would cause a deadlock. For each token in the system, the Token Manager singleton maintains a linked list of the threads holding the token and all threads blocked waiting for it. This data is sufficient to detect deadlock situations. The Token Manager singleton is used as follows:
// Acquire the mutex.
int Mutex_Token::acquire (void)
{
  // If the token is already held, we must block.
  if (mutex_in_use ()) {
    // Use the Token_Mgr Singleton to check
    // for a deadlock situation *before* blocking.
    if (Token_Mgr::instance ()->testdeadlock ()) {
      errno = EDEADLK;
      return -1;
    }
    else
      ; // Sleep waiting for the lock...
  }
  // Acquire lock...
}
Variants
A variant implementation of the Double-Checked Locking pattern may be needed if a compiler caches the flag, for example in a register. In that case, cache coherency becomes a problem: if copies of the flag are held in registers of multiple threads, they can become inconsistent, because a change to the flag made by one thread is not reflected in the copies held by the others.
Another issue is that a highly optimizing compiler may remove the second instance_ == 0 check as redundant. For example, an aggressive compiler may skip re-reading the flag and instead assume that instance_ is still 0, because instance_ is not declared volatile:
Singleton *Singleton::instance (void)
{
  if (Singleton::instance_ == 0) {
    // Only lock if instance_ isn't 0.
    Guard guard (lock_);
    // Dead code elimination may remove the next line.
    // Perform the Double-Check.
    if (Singleton::instance_ == 0)
      // ...
One way to solve both problems is to declare the flag as a volatile member of Singleton, as follows:
private:
  static volatile long flag_; // Flag is volatile.
Using volatile ensures that the compiler neither caches the flag in a register nor optimizes away the second read. The volatile keyword indicates that every access to the flag must go through memory rather than through a register.
Related patterns
The Double-Checked Locking pattern is a variant of the First-Time-In idiom, which is often used in programming languages such as C that lack constructors. The following code illustrates this idiom:
static const int STACK_SIZE = 1000;
static T *stack_;
static int top_;

void push (T *item)
{
  // First-time-in flag.
  if (stack_ == 0) {
    stack_ = malloc (STACK_SIZE * sizeof *stack_);
    assert (stack_ != 0);
    top_ = 0;
  }
  stack_[top_++] = *item;
  // ...
}
On the first call to push, stack_ is 0, which triggers the lazy initialization via malloc.