C++ Concurrent Programming: Mutex and Synchronization


The basic mutex and synchronization tools for threads mainly include:
std::mutex class
std::recursive_mutex class
std::timed_mutex class
std::recursive_timed_mutex class
std::lock_guard class template
std::unique_lock class template
std::lock function template
std::once_flag class
std::call_once function template

std::mutex class

A std::mutex must be acquired by calling lock() or try_lock(). Once one thread holds the lock, other threads that try to acquire it either block (lock()) or fail (try_lock()). When a thread has finished with the protected shared data, it must call unlock() to release the lock.
std::mutex is not recursive: if the same thread calls lock() twice, the behavior is undefined.

std::recursive_mutex class

It is used the same way as std::mutex, but std::recursive_mutex allows one thread to acquire the same mutex multiple times without releasing it in between. Within that thread, the number of unlock() calls must eventually equal the number of lock() calls; otherwise no other thread will ever get a chance to acquire the mutex.
The principle is a lock count: when lock() is called by a thread that already holds the mutex, the count increases by 1. When try_lock() is called, it attempts to acquire the lock; failure does not block, and success increases the count by 1. When unlock() is called, the count decreases by 1, and the final unlock() releases the mutex.
Note that try_lock() may fail spuriously: if the calling thread does not already hold the lock, it can fail even when no other thread holds it.

std::timed_mutex class

std::timed_mutex adds lock timeouts on top of std::mutex: call try_lock_for() or try_lock_until() to set a timeout when locking.
The try_lock_for() parameter is a duration to wait; if it is less than or equal to zero, the call returns immediately with the same effect as try_lock().
The try_lock_until() parameter is an absolute time point; if it is not later than the current time, the call returns immediately, again behaving like try_lock(). In fact, try_lock_for() is typically implemented internally in terms of try_lock_until().
tm.try_lock_for(std::chrono::milliseconds(1000)) and tm.try_lock_until(std::chrono::steady_clock::now() + std::chrono::milliseconds(1000)) are equivalent: both wait for up to 1 s.

std::recursive_timed_mutex class

std::recursive_timed_mutex adds lock timeouts on top of std::recursive_mutex.
Its usage is the same as std::timed_mutex, and its timeout behavior follows the lock-counting rules of std::recursive_mutex.

std::lock_guard class template

The std::lock_guard class template is a basic RAII wrapper that owns a lock: the specified mutex is locked in the constructor and unlocked in the destructor.
This provides a simple way to hold a mutex for a block of code: however the block exits (falling off the end, a control-flow statement such as break or return, or a thrown exception), the mutex is unlocked.
std::lock_guard supports neither copy construction, copy assignment, nor move construction.

std::unique_lock class template

The std::unique_lock class template provides a more general ownership wrapper than std::lock_guard.
std::unique_lock can call unlock() to release the lock early and then call lock() again when the shared data must be accessed once more, but each lock() must be matched by exactly one unlock(); calling lock() or unlock() twice in a row on the same object is an error.
std::unique_lock does not support copy construction or copy assignment, but it does support move construction and move assignment.
std::unique_lock also adds several constructors beyond those of std::lock_guard:
unique_lock(_Mutex& mtx, adopt_lock_t) constructs an instance that adopts a lock already held by the calling thread; it calls neither lock() nor try_lock(), but the destructor calls unlock() by default.
unique_lock(_Mutex& mtx, defer_lock_t) constructs an instance that does not hold the lock; it calls neither lock() nor try_lock(), and the destructor does not call unlock() unless the owns-lock flag has since been set by functions such as lock() or std::lock().
unique_lock(_Mutex& mtx, try_to_lock_t) attempts to acquire the lock from the mutex by calling try_lock().
unique_lock(_Mutex& mtx, const chrono::duration<_Rep, _Period>& rel_time) attempts to acquire the lock within a given duration.
unique_lock(_Mutex& mtx, const chrono::time_point<_Clock, _Duration>& abs_time) attempts to acquire the lock before a given time point.
bool owns_lock() const checks whether this instance holds a lock on the mutex.

std::lock function template

The std::lock function template locks multiple mutexes simultaneously without the deadlock that an inconsistent locking order can cause. It is declared as follows:
template<typename Lockable1, typename... LockableN> void lock(Lockable1& m1, LockableN&... mn);

    // protect code with a mutex
    typedef std::lock_guard<std::mutex> MutexLockGuard;
    typedef std::unique_lock<std::mutex> UniqueLockGuard;

    class Func
    {
        int i;
        std::mutex& m;
    public:
        Func(int i_, std::mutex& m_) : i(i_), m(m_) {}
        void operator()()
        {
            // MutexLockGuard lk(m);
            UniqueLockGuard lk(m);
            for (unsigned j = 0; j < 10; ++j)
            {
                std::cout << i << " ";
            }
            std::cout << std::endl;
        }
    };

    std::mutex m;
    std::vector<std::thread> threads;
    for (int i = 1; i < 10; i++)
    {
        Func f(i, m);
        threads.push_back(std::thread(f));
    }
    std::for_each(threads.begin(), threads.end(),
                  std::mem_fn(&std::thread::join));  // call join() for each thread
    // lock multiple mutexes at the same time
    std::mutex m1;
    std::mutex m2;
    // std::unique_lock<std::mutex> lock_a(m1, std::defer_lock);
    // std::unique_lock<std::mutex> lock_b(m2, std::defer_lock);  // std::defer_lock leaves the mutex unlocked
    // std::lock(lock_a, lock_b);  // the mutexes are locked here, and the objects' owns-lock flags are set
    std::lock(m1, m2);  // lock both mutexes
    std::lock_guard<std::mutex> lock_a(m1, std::adopt_lock);  // std::adopt_lock means the mutex is already locked, so the constructor does not call lock()
    std::lock_guard<std::mutex> lock_b(m2, std::adopt_lock);
std::call_once function template

If multiple threads may call an initialization function concurrently, std::call_once guarantees that the function is executed exactly once, in a thread-safe way.

Thread-safe deferred initialization

-- using std::call_once and std::once_flag

Consider the following code: every thread must wait on the mutex just to determine whether the data source has already been initialized, which serializes the threads unnecessarily.

    std::shared_ptr<some_resource> resource_ptr;
    std::mutex resource_mutex;

    void foo()
    {
        std::unique_lock<std::mutex> lk(resource_mutex);  // all threads are serialized here
        if (!resource_ptr)
        {
            resource_ptr.reset(new some_resource);  // only the initialization needs protection
        }
        lk.unlock();
        resource_ptr->do_something();
    }

Double-checked locking tries to optimize the code above: the first read of the pointer does not take the lock, and the lock is acquired only if the pointer is null; once the lock is held, the pointer is checked again (this is the "double check"), in case another thread initialized it between the first check and the current thread acquiring the lock.
There is still a problem: a potential race condition. The unsynchronized read of the pointer outside the lock (1) races with the write performed inside the lock by another thread (3), and the race covers not only the pointer itself but also the object it points to:
even if one thread sees that another thread has finished writing the pointer, it may not see the newly created some_resource instance, so the subsequent call to do_something() (4) may operate on incorrect values. The C++ standard specifies this as undefined behavior.

    void undefined_behaviour_with_double_checked_locking()
    {
        if (!resource_ptr)                             // 1
        {
            std::lock_guard<std::mutex> lk(resource_mutex);
            if (!resource_ptr)                         // 2
            {
                resource_ptr.reset(new some_resource); // 3
            }
        }
        resource_ptr->do_something();                  // 4
    }

The C++ solution uses std::once_flag and std::call_once:

    std::shared_ptr<some_resource> resource_ptr;
    std::once_flag resource_flag;                      // 1

    void init_resource()
    {
        resource_ptr.reset(new some_resource);
    }

    void foo()
    {
        std::call_once(resource_flag, init_resource);  // initialization is run exactly once
        resource_ptr->do_something();
    }

Thread-safe deferred initialization of class members

    class X
    {
    private:
        connection_info connection_details;
        connection_handle connection;
        std::once_flag connection_init_flag;

        void open_connection()
        {
            connection = connection_manager.open(connection_details);
        }
    public:
        X(connection_info const& connection_details_)
            : connection_details(connection_details_)
        {}
        void send_data(data_packet const& data)                           // 1
        {
            std::call_once(connection_init_flag, &X::open_connection, this);  // 2
            connection.send_data(data);
        }
        data_packet receive_data()                                        // 3
        {
            std::call_once(connection_init_flag, &X::open_connection, this);  // 2
            return connection.receive_data();
        }
    };
boost::shared_mutex

The reader-writer lock boost::shared_mutex allows two distinct modes of use: exclusive access by a single "writer" thread, and shared access by any number of concurrent "reader" threads. (It is not part of the C++11 standard; std::shared_mutex was added later, in C++17.)
Its performance depends on the number of processors involved and on the relative load of the reader and writer threads. A typical application:

    #include <map>
    #include <string>
    #include <mutex>
    #include <boost/thread/shared_mutex.hpp>

    class dns_entry;

    class dns_cache
    {
        std::map<std::string, dns_entry> entries;
        mutable boost::shared_mutex entry_mutex;
    public:
        dns_entry find_entry(std::string const& domain) const
        {
            boost::shared_lock<boost::shared_mutex> lk(entry_mutex);  // 1 shared (reader) lock
            std::map<std::string, dns_entry>::const_iterator const it = entries.find(domain);
            return (it == entries.end()) ? dns_entry() : it->second;
        }
        void update_or_add_entry(std::string const& domain,
                                 dns_entry const& dns_details)
        {
            std::lock_guard<boost::shared_mutex> lk(entry_mutex);     // 2 exclusive (writer) lock
            entries[domain] = dns_details;
        }
    };
