The singleton pattern requires that a class have only one instance, and the core problem is how to guarantee that only one instance is ever created. Lazy initialization of a class's static members raises a similar problem: the member must be initialized exactly once.
In a single-threaded environment, this is easy to do:
Singleton* Singleton::getInstance() {
    if (m_instance == nullptr) {
        m_instance = new Singleton;
    }
    return m_instance;
}
However, in a multithreaded environment the approach above is clearly unsafe: multiple threads may create several different instances at the same time, and every instance except the one that ends up being referenced is abandoned, leaking memory.

Scope-based lock

So how can a singleton object be used safely in multithreaded code? The easiest thing to do is to lock the whole access function. This way, if multiple threads call Singleton::getInstance() at the same time, the first thread to acquire the lock is responsible for creating the instance, and the other threads simply return the instance it created:
Singleton* Singleton::getInstance() {
    // `lock` is a scope-based lock: it is released automatically when the
    // scope ends, playing the same role as the synchronized keyword in Java.
    // Here the scope is the whole function, so the entire function is locked.
    // boost::mutex::scoped_lock (and std::lock_guard in C++11) provide this.
    lock lock;
    if (m_instance == nullptr) {
        m_instance = new Singleton;
    }
    return m_instance;
}
This method is undoubtedly safe, but the lock is really only needed the one time the instance is created; locking does not always hurt performance, but under heavy load it can make responses noticeably slower. For those in pursuit of perfection, this is indeed a bit unsatisfying.

Double-checked locking pattern (DCLP)
To address the problem above of locking on every call when the instance only needs to be initialized once, programmers came up with the double-checked locking pattern (DCLP), which is probably the approach you are thinking of. The code prototype is as follows:
Singleton* Singleton::getInstance() {
    if (m_instance == nullptr) {      // first check, without the lock
        lock lock;                    // lock only when initialization may be needed
        if (m_instance == nullptr) {  // second check, with the lock held
            m_instance = new Singleton;
        }
    }
    return m_instance;
}
In the code above the first check takes no lock, which avoids locking on every getInstance() call. The method seems perfect and logically airtight, but deeper study shows that DCLP is not reliable. For the specific reasons, see the following article, which is very detailed:
C++ and the perils of double-checked locking (DCLP)
After reading that article we can draw a conclusion: the C++ compiler may reorder code while optimizing, so the actual execution order can differ from the source order; in addition, because the CPU has caches, a core's results are not immediately written back to memory. In a multithreaded environment there are therefore visibility problems with shared data between threads, and this is what makes DCLP risky.
Data visibility between threads involves the topic of the C++ memory model, which is genuinely not easy to explain; here is a recommended article that is relatively simple and easy to understand:
A ramble on the C++11 multithreaded memory model

Memory fence/barrier
From the last section we know that the double-checked locking pattern is risky, so is there a way to improve it?
There is: the memory fence technique, also known as the memory barrier.
The role of a memory fence is to guarantee the relative order of memory operations, not their exact timing, and to ensure that data updated by the first thread is visible to other threads. A memory access operation issued before a memory fence must complete before the memory accesses issued after it.
For detailed concepts of the memory fence, see:
Understanding Memory Barriers (memory barrier)
The following pseudocode implements DCLP using the memory fence technique:
Singleton* Singleton::getInstance() {
    Singleton* tmp = m_instance;
    ...                       // insert a memory fence instruction here
    if (tmp == nullptr) {
        lock lock;
        tmp = m_instance;
        if (tmp == nullptr) {
            tmp = new Singleton;  // statement 1
            ...                   // insert a memory fence instruction here, to ensure
                                  // that when statement 2 executes, the object tmp
                                  // points to has finished running its constructor
            m_instance = tmp;     // statement 2
        }
    }
    return tmp;
}
Here we can see that when the m_instance pointer is null, we take a lock, which ensures that the creating thread's writes to m_instance are visible to the other threads. Inside the locked block, m_instance is checked a second time to ensure that only one copy of the object is ever created.

atomic_thread_fence
Different CPUs and different compilers implement memory fences differently, so using them directly is troublesome. C++11, however, abstracts the concept and provides a convenient interface: acquire (acquire/consume) and release memory fences. Wrapping the m_instance pointer in a C++11 atomic type makes every operation on m_instance an atomic operation. The following code shows how to use the memory fences:
std::atomic<Singleton*> Singleton::m_instance;
std::mutex Singleton::m_mutex;

Singleton* Singleton::getInstance() {
    Singleton* tmp = m_instance.load(std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_acquire);
    if (tmp == nullptr) {
        std::lock_guard<std::mutex> lock(m_mutex);
        tmp = m_instance.load(std::memory_order_relaxed);
        if (tmp == nullptr) {
            tmp = new Singleton;
            std::atomic_thread_fence(std::memory_order_release);
            m_instance.store(tmp, std::memory_order_relaxed);
        }
    }
    return tmp;
}
The atomic_thread_fence calls in the code above establish a synchronizes-with relationship between the thread that creates the object and the threads that use it.
The following is an excerpt from cplusplus.com about the atomic_thread_fence function:
Establishes a multi-thread fence: the point of call to this function becomes either an acquire or a release synchronization point (or both).
All visible side effects of the releasing thread that happen before the call to this function are synchronized to also happen before the call to this function in the acquiring thread.
Calling this function has the same effects as a load or store atomic operation, but without involving an atomic value.
Roughly, this means: a multithreaded fence is established, and the point of the call becomes a synchronization point (acquire, release, or both). The data written in the releasing thread before its synchronization point is synchronized so that it is visible before the corresponding synchronization point in the acquiring thread, which makes visibility across threads consistent.

atomic
The code in the previous section uses memory fences plus locking to implement double-checked locking, but it looks a bit cumbersome. In C++11 the cleaner way is to use atomic operations directly:
std::atomic<Singleton*> Singleton::m_instance;
std::mutex Singleton::m_mutex;

Singleton* Singleton::getInstance() {
    Singleton* tmp = m_instance.load(std::memory_order_acquire);
    if (tmp == nullptr) {
        std::lock_guard<std::mutex> lock(m_mutex);
        tmp = m_instance.load(std::memory_order_relaxed);
        if (tmp == nullptr) {
            tmp = new Singleton;
            m_instance.store(tmp, std::memory_order_release);
        }
    }
    return tmp;
}
If the memory_order concepts are still unclear to you, you can fall back on C++'s sequentially consistent atomic operations: every std::atomic operation, when called without an ordering argument, defaults to std::memory_order_seq_cst. Sequentially consistent (SC) atomics guarantee that all SC atomic operations appear to execute in a single global order; this is the most conservative memory-model strategy. The following code implements the double-checked lock with SC atomic operations:
std::atomic<Singleton*> Singleton::m_instance;
std::mutex Singleton::m_mutex;

Singleton* Singleton::getInstance() {
    Singleton* tmp = m_instance.load();
    if (tmp == nullptr) {
        std::lock_guard<std::mutex> lock(m_mutex);
        tmp = m_instance.load();
        if (tmp == nullptr) {
            tmp = new Singleton;
            m_instance.store(tmp);
        }
    }
    return tmp;
}
std::call_once (the simplest implementation)
All of this just to initialize a single instance; it is a bit much. To tell the truth, I spent several days filling in all kinds of background knowledge for the content above, and just as I felt I finally understood it and could stop, a name suddenly flashed through my mind: call_once...
This is a function I had seen before in the C++11 standard header <mutex>, so I hurried to look up the material. The following is the original description of std::call_once, from std::call_once@cplusplus.com:
Calls fn passing args as arguments, unless another thread has already executed (or is currently executing) a call to call_once with the same flag.
If another thread is already actively executing a call to call_once with the same flag, it causes a passive execution: passive executions do not call fn but do not return until the active execution itself has returned, and all visible side effects are synchronized at that point among all concurrent calls to this function with the same flag.
If an active call to call_once ends by throwing an exception (which is propagated to its calling thread) and passive executions exist, one is selected among these passive executions, and called to be the new active call instead.
Note that once an active execution has returned, all current passive executions and future calls to call_once (with the same flag) also return without becoming active executions.
The active execution uses decay copies of the lvalue or rvalue references of fn and args, ignoring the value returned by fn.
See also:
call_once function@microsoft
std::call_once@cppreference.com
The effect is:
call_once guarantees that the function fn is executed only once. If multiple threads call call_once with the same flag at the same time, only one thread (the active call) executes fn; the other threads are in a passive execution state and do not return until the active thread's call to fn has returned. The side effects of fn are visible, consistently, to all threads that called call_once concurrently with the same flag.
If the active thread throws an exception while executing fn, one of the threads in the passive execution state is picked to become the new active thread and executes fn, and so on.
Once the active thread returns successfully, all threads in the passive execution state return as well and will never become active threads.
As explained above, we can be sure that call_once fully satisfies the requirement for data visibility between threads.
So with call_once plus a lambda expression, all the complex code of the preceding sections condenses into just a few lines:
Singleton* Singleton::m_instance;

Singleton* Singleton::getInstance() {
    static std::once_flag oc;  // a local static once_flag for std::call_once
    std::call_once(oc, [] { m_instance = new Singleton(); });
    return m_instance;
}
Summary
All of the methods discussed in this article are safe and usable. Personally, I find call_once the simplest, and it is definitely the one I would choose. That does not mean the earlier sections were written for nothing: working through each method gave me a much deeper understanding of the C++11 memory model, and that is the biggest takeaway.
In writing this article I referred to the following articles, and I want to express my appreciation to their authors:
C++11 call_once in multiple threads
How C++11 fixes the double-checked locking problem