Implementation of a semi-synchronous/semi-asynchronous thread pool (C++11)

Brief introduction

When handling a large number of concurrent tasks, giving each request its own thread means that thread creation and destruction consume excessive system resources and increase the cost of context switching. Thread pooling pre-creates a fixed number of threads (usually equal to the number of CPU cores); when a task arrives, a thread is taken from the pool to process it, and after the task completes the thread is not destroyed but waits to be reused.
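
As a point of reference (not from the original article), the number of threads to pre-create is usually taken from the hardware; the ThreadPool below defaults to thread::hardware_concurrency() for exactly this reason. A minimal sketch, where the fallback value of 4 is an arbitrary assumption:

#include <iostream>
#include <thread>

int main()
{
    // hardware_concurrency() may return 0 when the value cannot be determined,
    // so a fallback (here 4, an arbitrary choice) is common.
    unsigned int n = std::thread::hardware_concurrency();
    if (n == 0)
        n = 4;
    std::cout << "number of worker threads: " << n << std::endl;
}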

Thread pools come in two main flavors: half-sync/half-async and leader-follower; this article implements the former. The half-sync/half-async thread pool consists of three layers. The first layer is the synchronous service layer, which handles task requests from the upper level. The second layer is the synchronization queue layer, into which tasks from the synchronous service layer are added. The third layer is the asynchronous service layer, in which multiple threads process the tasks in the queue concurrently.

The complete code is given first, then each part is explained.

Sample code

Implementation code for the synchronization queue:

// synaqueue.h: the synchronization queue used by the thread pool.
#ifndef _SYNAQUEUE_H_
#define _SYNAQUEUE_H_

#include <list>
#include <mutex>
#include <thread>
#include <condition_variable>
#include <iostream>

using namespace std;

template<typename T>
class SynaQueue
{
public:
    SynaQueue(int maxSize) : m_maxSize(maxSize), m_needStop(false) {}

    void Put(const T& x)
    {
        Add(x);
    }

    void Put(T&& x)
    {
        Add(forward<T>(x));   // perfect forwarding, preserves the value category of the argument
    }

    void Take(list<T>& list)
    {
        unique_lock<mutex> locker(m_mutex);
        // While the condition does not hold, the condition variable releases the mutex and puts
        // the thread into the waiting state until another thread calls notify_one/notify_all.
        // Once the condition holds, the tasks are taken from the queue and a thread waiting to
        // add a task is woken.
        // A woken thread reacquires the mutex and re-checks the condition: if it is satisfied it
        // continues, otherwise it releases the mutex and keeps waiting.
        m_notEmpty.wait(locker, [this]{ return m_needStop || NotEmpty(); });
        if (m_needStop)
            return;
        list = move(m_queue);
        m_notFull.notify_one();
    }

    void Take(T& t)
    {
        unique_lock<mutex> locker(m_mutex);   // lock
        m_notEmpty.wait(locker, [this]{ return m_needStop || NotEmpty(); });
        if (m_needStop)
            return;
        t = m_queue.front();
        m_queue.pop_front();
        m_notFull.notify_one();
    }

    void Stop()
    {
        {
            lock_guard<mutex> locker(m_mutex);
            m_needStop = true;
        }
        // Wake every waiting thread; each woken thread checks m_needStop, finds it true, and exits.
        m_notFull.notify_all();
        m_notEmpty.notify_all();
    }

    bool Empty()
    {
        lock_guard<mutex> locker(m_mutex);
        return m_queue.empty();
    }

    bool Full()
    {
        lock_guard<mutex> locker(m_mutex);
        return m_queue.size() == m_maxSize;
    }

    size_t Size()
    {
        lock_guard<mutex> locker(m_mutex);
        return m_queue.size();
    }

    int Count()
    {
        return m_queue.size();
    }

private:
    bool NotFull() const
    {
        bool full = m_queue.size() >= m_maxSize;
        if (full)
            cout << "The buffer is full, need to wait..." << endl;
        return !full;
    }

    bool NotEmpty() const
    {
        bool empty = m_queue.empty();
        if (empty)
            cout << "The buffer is empty, need to wait... async layer thread: " << this_thread::get_id() << endl;
        return !empty;
    }

    template<typename F>
    void Add(F&& x)
    {
        unique_lock<mutex> locker(m_mutex);   // acquire the lock via m_mutex
        // Wait while the queue is full and no stop has been requested; m_mutex is released while waiting.
        // Execution continues as soon as either condition is true.
        m_notFull.wait(locker, [this]{ return m_needStop || NotFull(); });
        if (m_needStop)
            return;
        m_queue.push_back(forward<F>(x));
        m_notEmpty.notify_one();
    }

private:
    list<T> m_queue;                // buffer
    mutex m_mutex;                  // mutex
    condition_variable m_notEmpty;  // condition variables
    condition_variable m_notFull;
    int m_maxSize;                  // maximum size of the sync queue
    bool m_needStop;                // stop flag
};

#endif

Implementation code for the thread pool:

// threadpool.h: the half-sync/half-async thread pool.
#ifndef _THREADPOOL_H_
#define _THREADPOOL_H_

#include "synaqueue.h"
#include <list>
#include <thread>
#include <functional>
#include <memory>
#include <mutex>
#include <atomic>

const int MaxTaskCount = 100;

class ThreadPool
{
public:
    using Task = function<void()>;

    ThreadPool(int numThreads = thread::hardware_concurrency()) : m_queue(MaxTaskCount)
    {
        Start(numThreads);
    }

    ~ThreadPool()
    {
        Stop();
    }

    void Stop()
    {
        call_once(m_flag, [this]{ StopThreadGroup(); });
    }

    void AddTask(Task&& task)
    {
        m_queue.Put(forward<Task>(task));
    }

    void AddTask(const Task& task)
    {
        m_queue.Put(task);
    }

private:
    void Start(int numThreads)
    {
        m_running = true;
        // Create the thread group
        for (int i = 0; i < numThreads; i++)
        {
            m_threadGroup.push_back(make_shared<thread>(&ThreadPool::RunInThread, this));
        }
    }

    // Each worker thread executes this function
    void RunInThread()
    {
        while (m_running)
        {
            // Take a batch of tasks and execute them one by one
            list<Task> list;
            m_queue.Take(list);
            for (auto& task : list)
            {
                if (!m_running)
                    return;
                task();
            }
        }
    }

    void StopThreadGroup()
    {
        m_queue.Stop();      // stop the threads blocked in the synchronization queue
        m_running = false;   // let the worker threads leave their loops and exit
        for (auto thread : m_threadGroup)
        {
            if (thread)
                thread->join();
        }
        m_threadGroup.clear();
    }

private:
    list<shared_ptr<thread>> m_threadGroup; // thread group that handles tasks; the list stores shared pointers to the threads
    SynaQueue<Task> m_queue;                // synchronization queue
    atomic_bool m_running;                  // whether the pool is running
    once_flag m_flag;
};

#endif

Unit Test Code:

// ThreadPool.cpp: Defines the entry point of the console application.
#include "stdafx.h"
#include "threadpool.h"

void TestThreadPool()
{
    ThreadPool pool(2);

    thread thd1([&pool]{
        for (int i = 0; i < 10; i++)
        {
            auto thrID = this_thread::get_id();
            pool.AddTask([thrID, i]{
                cout << "thread ID of synchronization layer thread 1: " << thrID
                     << ", this is task " << i << endl;
                this_thread::sleep_for(chrono::seconds(2));
            });
        }
    });

    thread thd2([&pool]{
        for (int i = 0; i < 10; i++)
        {
            auto thrID = this_thread::get_id();
            pool.AddTask([thrID, i]{
                cout << "thread ID of synchronization layer thread 2: " << thrID
                     << ", this is task " << i << endl;
                this_thread::sleep_for(chrono::seconds(2));
            });
        }
    });

    this_thread::sleep_for(chrono::seconds(30));   // give the workers time to run before stopping
    pool.Stop();
    thd1.join();
    thd2.join();
}

int main()
{
    TestThreadPool();
}
Synchronization queue

Look at the data members of the synchronization queue first; the member functions are built on top of them:

private:
    list<T> m_queue;                // buffer
    mutex m_mutex;                  // mutex
    condition_variable m_notEmpty;  // condition variables
    condition_variable m_notFull;
    int m_maxSize;                  // maximum size of the sync queue
    bool m_needStop;                // stop flag

The buffer queue list<T> m_queue holds the tasks, and access to it requires the mutex m_mutex and the two condition variables m_notEmpty and m_notFull. This is the classic producer-consumer pattern from operating systems. m_needStop indicates whether the thread pool needs to stop.
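
As a rough illustration of this producer-consumer relationship (not part of the original code), the queue can be exercised on its own. The sketch below assumes the SynaQueue header shown earlier is saved as synaqueue.h:

#include "synaqueue.h"
#include <iostream>
#include <thread>

int main()
{
    SynaQueue<int> queue(5);              // bounded buffer holding at most 5 items

    thread producer([&queue]{
        for (int i = 0; i < 10; i++)
            queue.Put(i);                 // blocks on m_notFull while the buffer is full
    });

    thread consumer([&queue]{
        for (int i = 0; i < 10; i++)
        {
            int value = 0;
            queue.Take(value);            // blocks on m_notEmpty while the buffer is empty
            cout << "consumed " << value << endl;
        }
    });

    producer.join();
    consumer.join();
}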

Next look at adding tasks to the synchronization queue:

    template<typename F>
    void Add(F&& x)
    {
        unique_lock<mutex> locker(m_mutex);   // acquire the lock via m_mutex
        // Wait while the queue is full and no stop has been requested; m_mutex is released while waiting.
        // Execution continues as soon as either condition is true.
        m_notFull.wait(locker, [this]{ return m_needStop || NotFull(); });
        if (m_needStop)
            return;
        m_queue.push_back(forward<F>(x));
        m_notEmpty.notify_one();
    }

unique_lock provides an exclusive (write) lock, while shared_lock provides a shared (read) lock. The wait of the condition variable checks whether the predicate (its second argument) is true: if true, execution continues; if false, the mutex is released and the current thread is put into the waiting state until another thread calls m_notFull.notify_one() to wake it. Next, the stop flag is checked. Finally, the task is added to the queue, and one of the threads that is waiting because the queue is empty is notified to continue.
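
For reference, the predicate overload of wait used here behaves like a loop that re-checks the condition after every wakeup. A minimal, self-contained sketch of that mechanism (the names ready and waiter are illustrative, not from the article):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main()
{
    std::mutex m;
    std::condition_variable cv;
    bool ready = false;                       // the condition the predicate checks

    std::thread waiter([&]{
        std::unique_lock<std::mutex> lock(m);
        // Equivalent to: while (!ready) cv.wait(lock);
        // wait() releases the mutex, blocks, reacquires the mutex on wakeup, then re-checks the predicate.
        cv.wait(lock, [&]{ return ready; });
        std::cout << "woken up, predicate is true" << std::endl;
    });

    {
        std::lock_guard<std::mutex> lock(m);
        ready = true;
    }
    cv.notify_one();                          // wake the waiting thread

    waiter.join();
}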

Next, look at how a task is removed; it mirrors adding a task, so it is not described in detail:

    void Take(T& t)
    {
        unique_lock<mutex> locker(m_mutex);   // lock
        m_notEmpty.wait(locker, [this]{ return m_needStop || NotEmpty(); });
        if (m_needStop)
            return;
        t = m_queue.front();
        m_queue.pop_front();
        m_notFull.notify_one();
    }

Stopping the synchronization queue (invoked when the thread pool shuts down):

    void Stop()
    {
        {
            lock_guard<mutex> locker(m_mutex);
            m_needStop = true;
        }
        // Wake every waiting thread; each woken thread checks m_needStop, finds it true, and exits.
        m_notFull.notify_all();
        m_notEmpty.notify_all();
    }

To modify the shared data member, the lock must be acquired first; lock_guard releases the lock in its destructor, which is why a separate inner scope is used, so that the mutex is already released when the notifications are issued. All threads waiting to add or remove tasks are then woken up and allowed to terminate themselves.
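
A small sketch of this idiom, separate from the article's code (the names stop and RequestStop are illustrative): lock_guard unlocks in its destructor at the end of the inner block, so the mutex is no longer held when notify_all is called:

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool stop = false;

void RequestStop()
{
    {
        std::lock_guard<std::mutex> lock(m);  // mutex held only while setting the flag
        stop = true;
    }                                         // ~lock_guard() releases the mutex here
    cv.notify_all();                          // notify after the mutex has been released
}

int main()
{
    std::thread waiter([]{
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, []{ return stop; });
        std::cout << "waiter exits" << std::endl;
    });
    RequestStop();
    waiter.join();
}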

Thread pool

Data members:

private:
    list<shared_ptr<thread>> m_threadGroup; // thread group that handles tasks; the list stores shared pointers to the threads
    SynaQueue<Task> m_queue;                // synchronization queue
    atomic_bool m_running;                  // whether the pool is running
    once_flag m_flag;

once_flag and call_once are used to guarantee that a function is called only once even when invoked from multiple threads; here they ensure that StopThreadGroup runs exactly once no matter how many times Stop is called.
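
A minimal sketch of that guarantee, independent of the pool's code (the names flag and ShutdownOnce are illustrative): several threads call the same function, but the body protected by the once_flag runs exactly once:

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::once_flag flag;

void ShutdownOnce()
{
    std::call_once(flag, []{
        std::cout << "shutdown executed exactly once" << std::endl;
    });
}

int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; i++)
        threads.emplace_back(ShutdownOnce);   // all four threads race; only one runs the lambda
    for (auto& t : threads)
        t.join();
}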
