Wait for an event or other condition
The first option is for the waiting thread to keep checking a shared data flag (protected by a mutex) until the other thread finishes its work and sets the flag.
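A minimal sketch of this first, busy-waiting option (the flag and mutex names here are illustrative, not taken from the book):

#include <mutex>

// Illustrative busy-wait: the waiting thread repeatedly locks the mutex and
// checks the flag, burning CPU time until another thread sets it.
bool flag = false;
std::mutex m;

void wait_for_flag_busy()
{
    std::unique_lock<std::mutex> lk(m);
    while (!flag) {
        lk.unlock();   // release the mutex so the other thread can set the flag
        lk.lock();     // immediately re-lock and check again
    }
}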
The second option is to have the waiting thread sleep between checks, using std::this_thread::sleep_for() for short periodic intervals:
#include <iostream>
#include <thread>
#include <mutex>
#include <chrono>
#include <cstdlib>

class wait_test {
    bool flag;
    std::mutex m;
public:
    wait_test(bool _flag) : flag(_flag) {}
    void setflag(bool _flag) {
        std::unique_lock<std::mutex> lk(m);
        flag = _flag;
    }
    bool getflag() {
        std::unique_lock<std::mutex> lk(m);
        return flag;
    }
    void wait_for_flag() {
        std::unique_lock<std::mutex> lk(m);
        while (!flag) {
            lk.unlock();                                                   // 1 unlock the mutex
            std::this_thread::sleep_for(std::chrono::milliseconds(100));   // 2 sleep for 100 ms
            lk.lock();                                                     // 3 re-lock the mutex
        }
    }
};

void funa(wait_test &wt, int &i) {
    while (!wt.getflag()) { ++i; }
}

void funb(wait_test &wt, int &i) {
    std::cout << "begin\t" << i << std::endl;
    wt.wait_for_flag();                       // wait for the main thread to set the flag
    std::cout << "end\t" << i << std::endl;
}

int main() {
    wait_test wt{false};
    int i{0};
    std::thread t1{funa, std::ref(wt), std::ref(i)}, t2{funb, std::ref(wt), std::ref(i)};
    t1.detach();
    t2.detach();
    wt.setflag(true);
    system("pause");
    return 0;
}
The third option (and the preferred one) is to use the facilities provided by the C++ standard library to wait for the event itself. The most basic mechanism for one thread to wake another that is waiting for an event (for example, when more work appears on a pipeline) is the condition variable. Conceptually, a condition variable is associated with an event or other condition, and one or more threads can wait for that condition to become true. When a thread determines that the condition is satisfied, it notifies one or more of the threads waiting on the condition variable, waking them so they can continue processing.
Waiting for a condition with condition variables
The C++ standard library provides two implementations of condition variables: std::condition_variable and std::condition_variable_any. Both are declared in the <condition_variable> header. Both need to work with a mutex in order to provide synchronization: the former is limited to working with std::mutex, whereas the latter can work with anything that meets the minimal requirements for being mutex-like, hence the _any suffix. Because std::condition_variable_any is more general, it can incur additional costs in terms of size, performance, and operating-system resources, so std::condition_variable should generally be preferred; std::condition_variable_any is only worth considering when the extra flexibility is genuinely required.
Waiting for data to process using std::condition_variable
#include <iostream>
#include <thread>
#include <mutex>
#include <queue>
#include <condition_variable>
#include <cstdlib>

struct data_chunk { int m; };

struct A {
    std::mutex mut;
    std::queue<data_chunk> data_queue;   // 1
    std::condition_variable data_cond;
    bool more_data_to_prepare() { return data_queue.size() < 10; }
    bool is_last_chunk() { return data_queue.size() == 3; }
};

int i = 0;

data_chunk prepare_data() {
    data_chunk r;
    r.m = ++i;
    return r;
}

void data_preparation_thread(A &a) {
    std::cout << "preparation begin" << std::endl;
    while (a.more_data_to_prepare()) {
        const data_chunk data = prepare_data();
        std::lock_guard<std::mutex> lk(a.mut);
        a.data_queue.push(data);                           // 2
        std::cout << "preparation notify" << std::endl;
        a.data_cond.notify_one();                          // 3
    }
    std::cout << "preparation end" << std::endl;
}

void process(const data_chunk &d) { std::cout << d.m << std::endl; }

void data_processing_thread(A &a) {
    while (true) {
        std::unique_lock<std::mutex> lk(a.mut);            // 4
        a.data_cond.wait(lk, [&a] { return !a.data_queue.empty(); });   // 5
        std::cout << "process wait end" << std::endl;
        data_chunk data = a.data_queue.front();
        a.data_queue.pop();
        lk.unlock();                                       // 6
        process(data);
        if (a.is_last_chunk()) break;
    }
}

int main() {
    A a;
    std::thread t1{data_preparation_thread, std::ref(a)}, t2{data_processing_thread, std::ref(a)};
    t1.join();
    t2.join();
    system("pause");
    return 0;
}
If the waiting thread reacquires the mutex and checks the condition when this is not in direct response to a notification from another thread, it is called a spurious wakeup. Because the number and frequency of spurious wakeups are by definition indeterminate, it is not advisable to use a function with side effects for the condition check; if you do, you must be prepared for those side effects to occur multiple times. The waiting thread uses std::unique_lock rather than std::lock_guard because it must be able to unlock the mutex while it waits and lock it again afterwards, and std::lock_guard does not offer that flexibility. The flexibility to unlock a std::unique_lock is not just used for the call to wait(); it is also used once there is data to process but before processing it (the unlock at 6), so the mutex is not held during the processing itself.
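The predicate overload of wait() used at 5 guards against spurious wakeups by re-checking the condition in a loop. A rough, simplified sketch of the equivalent hand-written loop (an illustration, not the actual library implementation):

#include <condition_variable>
#include <mutex>

// Roughly what data_cond.wait(lk, pred) does: re-test the predicate after
// every wakeup, so a spurious wakeup simply goes back to waiting.
template <typename Predicate>
void wait_equivalent(std::condition_variable &cv,
                     std::unique_lock<std::mutex> &lk,
                     Predicate pred)
{
    while (!pred())
        cv.wait(lk);   // unlocks lk while blocked, re-locks before returning
}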
Building a thread-safe queue using condition variables
A full version of a thread-safe queue built on a condition variable, combined with the previous program:
#include <queue>
#include <memory>
#include <mutex>
#include <condition_variable>

template<typename T>
class threadsafe_queue {
private:
    mutable std::mutex mut;              // 1 the mutex must be mutable
    std::queue<T> data_queue;
    std::condition_variable data_cond;
public:
    threadsafe_queue() {}
    threadsafe_queue(threadsafe_queue const &other) {
        std::lock_guard<std::mutex> lk(other.mut);
        data_queue = other.data_queue;
    }
    void push(T new_value) {
        std::lock_guard<std::mutex> lk(mut);
        data_queue.push(new_value);
        data_cond.notify_one();
    }
    void wait_and_pop(T &value) {
        std::unique_lock<std::mutex> lk(mut);
        data_cond.wait(lk, [this] { return !data_queue.empty(); });
        value = data_queue.front();
        data_queue.pop();
    }
    std::shared_ptr<T> wait_and_pop() {
        std::unique_lock<std::mutex> lk(mut);
        data_cond.wait(lk, [this] { return !data_queue.empty(); });
        std::shared_ptr<T> res(std::make_shared<T>(data_queue.front()));
        data_queue.pop();
        return res;
    }
    bool try_pop(T &value) {
        std::lock_guard<std::mutex> lk(mut);
        if (data_queue.empty()) return false;
        value = data_queue.front();
        data_queue.pop();
        return true;
    }
    std::shared_ptr<T> try_pop() {
        std::lock_guard<std::mutex> lk(mut);
        if (data_queue.empty()) return std::shared_ptr<T>();
        std::shared_ptr<T> res(std::make_shared<T>(data_queue.front()));
        data_queue.pop();
        return res;
    }
    bool empty() const {
        std::lock_guard<std::mutex> lk(mut);
        return data_queue.empty();
    }
};

threadsafe_queue<data_chunk> data_queue;   // 1

void data_preparation_thread() {
    while (more_data_to_prepare()) {
        data_chunk const data = prepare_data();
        data_queue.push(data);             // 2
    }
}

void data_processing_thread() {
    while (true) {
        data_chunk data;
        data_queue.wait_and_pop(data);     // 3
        process(data);
        if (is_last_chunk(data)) break;
    }
}
Waiting for one-off events with futures
The C++ standard library models this kind of one-off event with something it calls a future. Once the event has happened (and the future has become ready), the future cannot be reset.
There are two sorts of futures in the C++ standard library, implemented as two class templates declared in the <future> header: unique futures (std::future<>) and shared futures (std::shared_future<>). With the latter, multiple instances may refer to the same event; they all become ready at the same time, and they may all access any data associated with the event. This associated data is the reason these are templates: just as with std::unique_ptr and std::shared_ptr, the template parameter is the type of the associated data. Where there is no associated data, the std::future<void> and std::shared_future<void> specializations should be used.
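As a small illustration of the difference (not a listing from the book; std::async is introduced just below and is used here only to produce a future): a std::future is move-only and its result can be retrieved once, whereas a std::shared_future obtained via share() can be copied and read from every copy:

#include <future>
#include <cassert>

int main()
{
    std::future<int> f = std::async([] { return 42; });   // unique future: move-only
    std::shared_future<int> sf = f.share();               // f gives up ownership; sf is copyable
    std::shared_future<int> sf2 = sf;                      // copies refer to the same shared state

    assert(sf.get() == 42);    // unlike std::future, get() may be called on every copy
    assert(sf2.get() == 42);
    return 0;
}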
Returning values from background tasks
You can use std::async to start an asynchronous task when you don't need its result right away. Rather than giving you a std::thread object to wait on, std::async returns a std::future object that will eventually hold the return value of the function. When you need the value, you just call get() on the future: the calling thread blocks until the future is ready and the result is returned.
Using std::future to get the return value of an asynchronous task:
#include <future>
#include <iostream>
#include <thread>
#include <chrono>
#include <cstdlib>

int find_the_answer_to_ltuae() {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    return 10;
}

void do_other_stuff() {
    std::this_thread::sleep_for(std::chrono::milliseconds(120));
}

int main() {
    std::future<int> the_answer = std::async(find_the_answer_to_ltuae);
    do_other_stuff();
    std::cout << "The answer is " << the_answer.get() << std::endl;
    system("pause");
    return 0;
}
std::async allows you to pass additional arguments to the function by supplying extra arguments in the call. If the first argument is a pointer to a member function, the second argument provides the object on which to apply that member function (either directly, via a pointer, or wrapped in std::ref), and the remaining arguments are passed to the member function. Otherwise, the second and subsequent arguments are passed as arguments to the function or callable object given as the first argument.
Passing arguments to a function with std::async
#include <string>
#include <future>
#include <iostream>
#include <cstdlib>

struct X {
    int m;
    void foo(int i, std::string const &s) { std::cout << s << "\t" << i << std::endl; }
    std::string bar(std::string const &s) { return "bar(" + s + ")"; }
};

struct Y {
    double operator()(double d) { return d + 1.1; }
};

X baz(X &_x) { ++_x.m; return _x; }

class move_only {
public:
    move_only() = default;
    move_only(move_only &&) = default;
    move_only(move_only const &) = delete;
    move_only &operator=(move_only &&) = default;
    move_only &operator=(move_only const &) = delete;
    void operator()() { std::cout << "move_only()" << std::endl; }
};

void fun() {
    X x;
    auto f1 = std::async(&X::foo, &x, 42, "hello");   // calls p->foo(42, "hello"), where p is a pointer to x
    auto f2 = std::async(&X::bar, x, "goodbye");      // calls tmpx.bar("goodbye"), where tmpx is a copy of x
    std::cout << f2.get() << std::endl;
    Y y;
    auto f3 = std::async(Y(), 3.141);                 // calls tmpy(3.141), where tmpy is move-constructed from Y()
    std::cout << f3.get() << std::endl;
    auto f4 = std::async(std::ref(y), 2.718);         // calls y(2.718)
    std::cout << f4.get() << std::endl;
    x.m = 1;
    auto f5 = std::async(baz, std::ref(x));           // calls baz(x)
    std::cout << f5.get().m << std::endl;
    auto f6 = std::async(move_only());                // calls tmp(), where tmp is constructed from std::move(move_only())
}

int main() {
    fun();
    system("pause");
    return 0;
}
By default, it is up to the implementation whether std::async starts a new thread or whether the task runs synchronously when the future is waited on. In most cases this is what you want, but you can also pass an additional parameter to std::async before the callable. This parameter is of type std::launch: std::launch::deferred indicates that the function call is deferred until wait() or get() is called on the future; std::launch::async indicates that the function must run on its own thread; and std::launch::deferred | std::launch::async indicates that the implementation may choose either. The last option is the default. If the function call is deferred, it may never actually run.
The baz function is changed slightly to make the effect easier to observe:
X baz(X &_x, int i) {
    _x.m = i;
    std::cout << _x.m << " call" << std::endl;
    return _x;
}
Call:
auto f7 = std::async(std::launch::async, Y(), 1.2);                    // run on a new thread
std::cout << "f7\t" << f7.get() << std::endl;
auto f8 = std::async(std::launch::deferred, baz, std::ref(x), 2);      // run when wait() or get() is called
auto f9 = std::async(std::launch::deferred | std::launch::async,
                     baz, std::ref(x), 3);                             // implementation chooses how to run
std::cout << "f9\t" << f9.get().m << std::endl;
auto f10 = std::async(baz, std::ref(x), 4);
f8.wait();                                                             // invoke the deferred function, block until it has a result
std::cout << "f8\t" << f8.get().m << std::endl;
std::cout << "f10\t" << f10.get().m << std::endl;
std::async is not the only way to associate a std::future with a task. You can also wrap a task in an instance of the std::packaged_task<> class template, or write code that explicitly sets the value using the std::promise<> class template. std::packaged_task is a higher-level abstraction than std::promise, so we start with the higher-level one.
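A minimal preview sketch of the two mechanisms just mentioned (an illustration, not a listing from the book): std::packaged_task wraps a callable and hands you its future, while std::promise lets you set the value explicitly:

#include <future>
#include <thread>
#include <iostream>

int main()
{
    // std::packaged_task: wrap a callable; its future becomes ready when the task runs.
    std::packaged_task<int(int)> task([](int x) { return x * 2; });
    std::future<int> f1 = task.get_future();
    std::thread t1(std::move(task), 21);          // run the task on another thread

    // std::promise: set the value explicitly from the producing thread.
    std::promise<int> p;
    std::future<int> f2 = p.get_future();
    std::thread t2([&p] { p.set_value(42); });

    std::cout << f1.get() << " " << f2.get() << std::endl;   // prints "42 42"
    t1.join();
    t2.join();
    return 0;
}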
Tasks and futures