C++11 thread pool and flexible functional + bind + lambda

Source: Internet
Author: User
Tags: mutex

This article uses boost::thread to implement a thread class that maintains a task queue, so that it can host very flexible calls. The thread class also lays the groundwork for the thread pool that follows; a thread pool or dynamic load balancing is nothing more than that. Because MinGW 4.7 does not support the C++11 std::thread library, boost is used here instead; on Linux std::thread works as well, only the namespace differs and the routine is the same. The code first:

#include <cassert>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <atomic>
#include <functional>
#include <list>
#include <vector>
#include <memory>
#include <algorithm>
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>

// This class defines an object that owns one thread and one task queue.
class cpp11_thread
{
public:
    cpp11_thread() : m_b_is_finish(false), m_pthread(nullptr) {}
    ~cpp11_thread()
    {
        if (m_pthread != nullptr)
            delete m_pthread;
        m_list_tasks.clear();
    }
public:
    // wait until this thread is terminated
    void join()
    {
        terminate();
        if (m_pthread != nullptr)
            m_pthread->join();
    }
    // wait until this thread has no tasks pending
    void wait_for_idle()
    {
        while (load())
            boost::this_thread::sleep(boost::posix_time::milliseconds(200));
    }
    // set the mark to terminate
    void terminate()
    {
        m_b_is_finish = true;
        m_cond_incoming_task.notify_one();
    }
    // return the current load of this thread
    size_t load()
    {
        size_t sz = 0;
        m_list_tasks_mutex.lock();
        sz = m_list_tasks.size();
        m_list_tasks_mutex.unlock();
        return sz;
    }
    // append a task to do
    size_t append(std::function<void(void)> func)
    {
        if (m_pthread == nullptr)
            m_pthread = new boost::thread(std::bind(&cpp11_thread::run, this));
        size_t sz = 0;
        m_list_tasks_mutex.lock();
        m_list_tasks.push_back(func);
        sz = m_list_tasks.size();
        // if there were no tasks before, notify the thread to do the next job
        if (sz == 1)
            m_cond_incoming_task.notify_one();
        m_list_tasks_mutex.unlock();
        return sz;
    }
protected:
    std::atomic<bool> m_b_is_finish;                     // atomic flag marking that the next loop terminates the thread
    std::list<std::function<void(void)> > m_list_tasks;  // the task list containing function objects
    boost::mutex m_list_tasks_mutex;                     // the mutex with which we protect the task list
    boost::thread *m_pthread;                            // the thread inside which the task queue is drained
    boost::mutex m_cond_mutex;                           // condition mutex used by m_cond_locker
    boost::condition_variable m_cond_incoming_task;      // condition variable with which we notify the thread of incoming tasks
protected:
    void run()
    {
        // loop and wait
        while (!m_b_is_finish)
        {
            std::function<void(void)> curr_task;
            bool bhastasks = false;
            m_list_tasks_mutex.lock();
            if (m_list_tasks.empty() == false)
            {
                bhastasks = true;
                curr_task = *m_list_tasks.begin();
            }
            m_list_tasks_mutex.unlock();
            // do the task
            if (bhastasks)
            {
                curr_task();
                m_list_tasks_mutex.lock();
                m_list_tasks.pop_front();
                m_list_tasks_mutex.unlock();
            }
            // nothing left: wait (with a 5 s timeout) for the next incoming task
            if (!load())
            {
                boost::unique_lock<boost::mutex> m_cond_locker(m_cond_mutex);
                boost::system_time const timeout = boost::get_system_time() + boost::posix_time::milliseconds(5000);
                if (m_cond_locker.mutex())
                    m_cond_incoming_task.timed_wait(m_cond_locker, timeout);
                // m_cond_incoming_task.wait(m_cond_locker);
            }
        }
    }
};
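The introduction claims that where std::thread is available, the same routine only needs the boost names swapped for the std ones. As a rough illustration of that claim, here is a stripped-down sketch of the same worker loop written with C++11 standard-library primitives. This sketch is not part of the original program; the class name std11_thread and its members are invented for the example.

#include <thread>
#include <mutex>
#include <condition_variable>
#include <functional>
#include <list>
#include <atomic>
#include <chrono>

// Sketch only: the same run loop as cpp11_thread, using std primitives instead of boost.
class std11_thread
{
public:
    std11_thread() : m_finish(false), m_pthread(nullptr) {}
    ~std11_thread() { delete m_pthread; }

    size_t append(std::function<void(void)> func)
    {
        if (m_pthread == nullptr)
            m_pthread = new std::thread(std::bind(&std11_thread::run, this));
        std::lock_guard<std::mutex> lk(m_tasks_mutex);
        m_tasks.push_back(func);
        if (m_tasks.size() == 1)
            m_cond_incoming_task.notify_one();
        return m_tasks.size();
    }

    void join()
    {
        m_finish = true;
        m_cond_incoming_task.notify_one();
        if (m_pthread != nullptr)
            m_pthread->join();
    }

private:
    void run()
    {
        while (!m_finish)
        {
            std::function<void(void)> curr_task;
            bool has_task = false;
            {
                std::lock_guard<std::mutex> lk(m_tasks_mutex);
                if (!m_tasks.empty()) { curr_task = m_tasks.front(); has_task = true; }
            }
            if (has_task)
            {
                curr_task();
                std::lock_guard<std::mutex> lk(m_tasks_mutex);
                m_tasks.pop_front();
            }
            else
            {
                // same idea as the boost timed_wait: wake on notify or after 5 seconds
                std::unique_lock<std::mutex> lk(m_cond_mutex);
                m_cond_incoming_task.wait_for(lk, std::chrono::seconds(5));
            }
        }
    }

    std::atomic<bool> m_finish;
    std::list<std::function<void(void)> > m_tasks;
    std::mutex m_tasks_mutex;
    std::mutex m_cond_mutex;
    std::condition_variable m_cond_incoming_task;
    std::thread *m_pthread;
};

Here std::condition_variable::wait_for plays the role of boost's timed_wait, and std::chrono replaces boost::posix_time, but the structure of the loop is unchanged.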
// the thread pool class
class cpp11_thread_pool
{
public:
    cpp11_thread_pool(int nthreads) : m_n_threads(nthreads)
    {
        assert(nthreads > 0 && nthreads <= 512);
        for (int i = 0; i < nthreads; i++)
            m_vec_threads.push_back(std::shared_ptr<cpp11_thread>(new cpp11_thread()));
    }
    ~cpp11_thread_pool() {}
public:
    // total threads
    size_t count() { return m_vec_threads.size(); }
    // wait until all threads are terminated
    void join()
    {
        std::for_each(m_vec_threads.begin(), m_vec_threads.end(),
            [this](std::shared_ptr<cpp11_thread> &item) { item->terminate(); item->join(); });
    }
    // wait until the pool has no tasks pending
    void wait_for_idle()
    {
        int n_tasks = 0;
        do
        {
            if (n_tasks)
                boost::this_thread::sleep(boost::posix_time::milliseconds(200));
            n_tasks = 0;
            std::for_each(m_vec_threads.begin(), m_vec_threads.end(),
                [this, &n_tasks](std::shared_ptr<cpp11_thread> &item) { n_tasks += item->load(); });
        } while (n_tasks);
    }
    // set the mark to terminate
    void terminate()
    {
        std::for_each(m_vec_threads.begin(), m_vec_threads.end(),
            [this](std::shared_ptr<cpp11_thread> &item) { item->terminate(); });
    }
    // return the current load of thread n
    size_t load(int n)
    {
        return ((size_t)n >= m_vec_threads.size()) ? 0 : m_vec_threads[n]->load();
    }
    // append a task: hand it to the least-loaded thread
    void append(std::function<void(void)> func)
    {
        int nidx = -1;
        unsigned int nminload = -1;
        for (int i = 0; i < m_n_threads; i++)
        {
            if (nminload > m_vec_threads[i]->load())
            {
                nminload = m_vec_threads[i]->load();
                nidx = i;
            }
        }
        assert(nidx >= 0 && nidx < m_n_threads);
        m_vec_threads[nidx]->append(func);
    }
protected:
    int m_n_threads;                                            // number of threads
    std::vector<std::shared_ptr<cpp11_thread> > m_vec_threads;  // vector containing all the threads
};

// a function which will be executed in a sub thread
void hello()
{
    // sleep for a while
    boost::this_thread::sleep(boost::posix_time::milliseconds(rand() % 900 + 100));
    std::cout << "Hello world, I'm a function running in a thread!" << std::endl;
}

// a class whose method will be called in a thread different from the main thread
class A
{
private:
    int m_n;
public:
    A(int n) : m_n(n) {}
    ~A() {}
public:
    void foo(int k)
    {
        // sleep for a while
        boost::this_thread::sleep(boost::posix_time::milliseconds(rand() % 900 + 100));
        std::cout << "n*k = " << k * m_n << std::endl;
        m_n++;
    }
};
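One detail worth spelling out before the test program: std::bind(&A::foo, a, k) stores a copy of the object inside the binder, while std::bind(&A::foo, &a, k) stores a pointer, so only the pointer form mutates the original object; this is why the second group of appends below "causes the objects' member to increase". A minimal stand-alone illustration (the counter type here is made up for the example):

#include <functional>
#include <iostream>

struct counter
{
    int n;
    void bump() { ++n; }
};

int main()
{
    counter c = { 0 };
    auto by_value   = std::bind(&counter::bump, c);   // the binder holds its own copy of c
    auto by_pointer = std::bind(&counter::bump, &c);  // the binder holds the address of c
    by_value();    // increments the copy inside the binder
    by_pointer();  // increments c itself
    std::cout << c.n << std::endl;  // prints 1, not 2
    return 0;
}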
Let's test the threads.

int main()
{
    cpp11_thread_pool thread(2);
    srand((unsigned int)time(0));
    A a(1), b(2), c(3);
    int nsleep = rand() % 900 + 100;
    // append a simple function task
    thread.append(&hello);
    // append a lambda
    thread.append([&nsleep]()
        {
            boost::this_thread::sleep(boost::posix_time::milliseconds(nsleep));
            std::cout << "I'm a lambda running in a thread" << std::endl;
        });
    // append object methods bound by copy (value assignment): the binder stores copies of a, b, c
    thread.append(std::bind(&A::foo, a, 10));
    thread.append(std::bind(&A::foo, b, 11));
    thread.append(std::bind(&A::foo, c, 12));
    thread.append(std::bind(&A::foo, a, 100));
    // append object methods bound by address: this will cause the objects' member to increase
    thread.append(std::bind(&A::foo, &a, 10));
    thread.append(std::bind(&A::foo, &b, 11));
    thread.append(std::bind(&A::foo, &c, 12));
    thread.append(std::bind(&A::foo, &a, 100));
    // wait until all tasks are done
    thread.wait_for_idle();
    // kill
    thread.terminate();
    // wait for the killed threads
    thread.join();
    // test std::function targets
    std::function<void(void)> func1 = &hello;
    std::function<void(void)> func2 = &hello;
    if (func1.target<void(*)(void)>() != func2.target<void(*)(void)>())
        return 0;
    else
        return 1;
}

Program output (thread scheduling makes the ordering vary between runs):

Hello world, I'm a function running in a thread!
I'm a lambda running in a thread
n*k = ...
(eight "n*k = ..." lines in total, one per bound A::foo task)
Process returned 0 (0x0)   execution time : 2.891 s
Press any key to continue.

Now let's walk through the code. The first piece is the thread class cpp11_thread, which implements a threading model with a task queue. Its key component is std::list<std::function<void(void)> > m_list_tasks; this FIFO hosts the tasks that run sequentially on the child thread. Tasks are added by the append method, and m_list_tasks_mutex is the mutex that protects the queue's entry and exit. The boost::thread object m_pthread is created the first time append is called and is bound to this object's run method. The heart of the thread is the run method: it first checks whether there is a pending task and, if so, executes the task at the front of the queue and then pops it. Once the queue is empty, it waits on the condition variable m_cond_incoming_task until a new task arrives; append wakes it with notify_one when it pushes the first task into an empty list. The thread class also provides methods such as load(), which returns the size of the queue, and terminate(), which terminates the thread. Then comes the thread pool, cpp11_thread_pool. It uses the simplest possible strategy: every new task is handed directly to the least-loaded (most idle) thread. With this packaging, the main function becomes much simpler: you can append almost anything to the thread pool, which was hard to do before, when tasks had to implement a virtual-function interface.
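To make that last point concrete, here is a rough before-and-after comparison. The names task_base, print_task and free_func are invented for this example and are not part of the original code: with a virtual interface every kind of work needs its own subclass, while std::function accepts a free function, a lambda, and a bound member in the same queue slot.

#include <functional>
#include <iostream>

// The classic design: a virtual base class plus one subclass per kind of work.
struct task_base
{
    virtual ~task_base() {}
    virtual void run() = 0;
};

struct print_task : public task_base
{
    virtual void run() { std::cout << "hello from a subclass" << std::endl; }
};

// With std::function, any callable fits into the same queue slot.
void free_func() { std::cout << "hello from a free function" << std::endl; }

int main()
{
    print_task p;
    std::function<void(void)> tasks[] = {
        &free_func,                                                  // a plain function
        []() { std::cout << "hello from a lambda" << std::endl; },   // a lambda
        std::bind(&print_task::run, &p)                              // a bound member function
    };
    for (auto &t : tasks)
        t();
    return 0;
}

Each entry still ends up as a plain std::function<void(void)>, which is exactly the type cpp11_thread::append takes.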
