C++ Task Queue and Multithreading

Abstract:

C++ is often chosen for the efficiency of its compiled native code and for its strong concurrency support. Parallelism can be achieved with multiple processes or with multiple threads; this chapter discusses only multithreading, and multi-process techniques will be covered in other chapters. When developing a C++ server program, it is very important to design, allocate, and coordinate threads according to the requirements. Beyond performance, a poor thread design can make the program complicated and chaotic and become a breeding ground for bugs. It is therefore worthwhile to design and build good, reusable threading components.

Thread-related APIs are not complex, but on both Linux and Windows they are C-style interfaces, so we only need to wrap them in objects to make them easier to use. The task queue is designed for inter-thread communication and can be used in several modes. The principle is not hard to understand; what we do need to make clear is which mode to choose in which scenario.
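To make the idea of wrapping concrete, here is a minimal sketch of the kind of mutex and condition-variable wrappers the code in this article relies on. The lock_guard_t type and the signal, wait and broadcast calls do appear in the code later on; the class names mutex_t and condition_var_t and everything inside them are assumptions for illustration, not the library's actual implementation.

#include <pthread.h>

// Thin wrappers over the pthread C API (sketch only).
class mutex_t
{
public:
    mutex_t()              { pthread_mutex_init(&m_mutex, NULL); }
    ~mutex_t()             { pthread_mutex_destroy(&m_mutex); }
    void lock()            { pthread_mutex_lock(&m_mutex); }
    void unlock()          { pthread_mutex_unlock(&m_mutex); }
    pthread_mutex_t* get() { return &m_mutex; }
private:
    pthread_mutex_t m_mutex;
};

// Locks on construction, unlocks on destruction.
class lock_guard_t
{
public:
    lock_guard_t(mutex_t& mutex_) : m_mutex(mutex_) { m_mutex.lock(); }
    ~lock_guard_t()                                 { m_mutex.unlock(); }
private:
    mutex_t& m_mutex;
};

// Condition variable bound to a mutex; wait() must be called with the mutex held.
class condition_var_t
{
public:
    condition_var_t(mutex_t& mutex_) : m_mutex(mutex_) { pthread_cond_init(&m_cond, NULL); }
    ~condition_var_t()                                 { pthread_cond_destroy(&m_cond); }
    void wait()      { pthread_cond_wait(&m_cond, m_mutex.get()); }
    void signal()    { pthread_cond_signal(&m_cond); }
    void broadcast() { pthread_cond_broadcast(&m_cond); }
private:
    mutex_t&       m_mutex;
    pthread_cond_t m_cond;
};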

Definition of a task queue:

The task queue abstracts inter-thread communication: only tasks are transferred between threads, and the related data and operations are carried by the tasks themselves. The term "task queue" may have other meanings in other contexts; the task queue discussed here is defined as a thread-safe first-in-first-out queue that passes tasks, each encapsulating data and an operation, among multiple threads. Its relationship with threads is shown below:

[Figure: thread A and thread B exchanging tasks through the task queue, which sits between their data boundaries]

Note: The two dashed boxes indicate the data boundaries accessible to thread A and thread B respectively. The task queue is thus the medium of communication between the two threads.

Task queue implementation: task definition

The producer-consumer model is extremely common in software design and is often used to decouple components or systems; distributed system interaction, network-layer object communication, and application-layer object communication all apply it. In a task queue, the objects being produced and consumed are "tasks". Here, a task is defined as a combination of data and an operation, or, put simply, a structure containing a void(void*) function pointer and a void* data pointer. We define the task as a class task_t. Next we analyze the implementation of task_t.

The code is as follows:

// Function-pointer type used for plain C-style tasks (needed by the code below).
typedef void (*task_func_t)(void*);

class task_impl_i
{
public:
    virtual ~task_impl_i() {}
    virtual void run()          = 0;
    virtual task_impl_i* fork() = 0;
};

class task_impl_t : public task_impl_i
{
public:
    task_impl_t(task_func_t func_, void* arg_) : m_func(func_), m_arg(arg_) {}
    virtual void run()          { m_func(m_arg); }
    virtual task_impl_i* fork() { return new task_impl_t(m_func, m_arg); }
protected:
    task_func_t m_func;
    void*       m_arg;
};

struct task_t
{
    static void dumy(void*) {}

    task_t(task_func_t f_, void* d_) : task_impl(new task_impl_t(f_, d_)) {}
    task_t(task_impl_i* task_imp_)   : task_impl(task_imp_) {}
    task_t(const task_t& src_)       : task_impl(src_.task_impl->fork()) {}
    task_t()                         { task_impl = new task_impl_t(&task_t::dumy, NULL); }
    ~task_t()                        { delete task_impl; }

    task_t& operator=(const task_t& src_)
    {
        if (this == &src_)   // guard against self-assignment
        {
            return *this;
        }
        delete task_impl;
        task_impl = src_.task_impl->fork();
        return *this;
    }

    void run() { task_impl->run(); }

    task_impl_i* task_impl;
};

The most important interface of a task is run; it simply executes the saved operation. The concrete operation is stored in a subclass of the interface task_impl_i. Because a task object is itself a bundle of data plus an operation, different data and operations are bound when constructing a task_impl_i subclass object. Composition is used to separate the interface from its implementation; the advantage is that the application layer only needs to know the concept of a task and need not know anything about task_impl_i. Because different operations and data may require different task_impl_i subclasses, we also need some generic helpers that conveniently convert arbitrary user operations and data into task objects. task_binder_t provides a series of gen functions that convert a user's ordinary functions and data into task_t objects.

struct task_binder_t
{
    //! C function taking a void* argument
    static task_t gen(void (*func_)(void*), void* p_)
    {
        return task_t(func_, p_);
    }

    //! function with no argument
    template<typename RET>
    static task_t gen(RET (*func_)(void))
    {
        struct lambda_t
        {
            static void task_func(void* p_)
            {
                (*(RET(*)(void))p_)();
            }
        };
        return task_t(lambda_t::task_func, (void*)func_);
    }

    //! function with one argument
    template<typename FUNCT, typename ARG1>
    static task_t gen(FUNCT func_, ARG1 arg1_)
    {
        struct lambda_t : public task_impl_i
        {
            FUNCT dest_func;
            ARG1  arg1;
            lambda_t(FUNCT func_, const ARG1& arg1_) : dest_func(func_), arg1(arg1_) {}
            virtual void run()          { (*dest_func)(arg1); }
            virtual task_impl_i* fork() { return new lambda_t(dest_func, arg1); }
        };
        return task_t(new lambda_t(func_, arg1_));
    }

    // ... the excerpt is truncated here; the full source provides further gen
    // overloads (member functions, more arguments) used by the examples below.
};
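The excerpt above stops before the overloads that bind member functions, which the examples later in this article rely on (for instance task_binder_t::gen(&task_queue_t::run, &tq)). One such overload might look like the sketch below; this is an assumption about the omitted code, following the same pattern, and it would sit inside task_binder_t next to the gen functions shown above:

// Sketch (assumed): binds a member function with no arguments to an object pointer.
template<typename RET, typename CLASS_TYPE>
static task_t gen(RET (CLASS_TYPE::*func_)(), CLASS_TYPE* obj_)
{
    struct lambda_t : public task_impl_i
    {
        RET (CLASS_TYPE::*dest_func)();
        CLASS_TYPE* obj;
        lambda_t(RET (CLASS_TYPE::*f_)(), CLASS_TYPE* o_) : dest_func(f_), obj(o_) {}
        virtual void run()          { (obj->*dest_func)(); }
        virtual task_impl_i* fork() { return new lambda_t(dest_func, obj); }
    };
    return task_t(new lambda_t(func_, obj_));
}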
Producing tasks

A function encapsulates the user's operation logic. When one thread wants a target thread to execute a specific operation, it converts the corresponding function into a task_t and delivers it to the target thread's task queue. Although this is described as delivering messages, it is essentially data exchange through shared memory. The main steps are:

• Convert the user's function into a task_t object.

• Lock the task queue of the target thread and append the task_t to the end of the queue. If the queue was empty, the target thread is waiting on the condition variable, so a signal is needed to wake it up.

The key code for implementation is as follows:

void produce(const task_t& task_)
{
    lock_guard_t lock(m_mutex);
    bool need_sig = m_tasklist.empty();
    m_tasklist.push_back(task_);
    if (need_sig)
    {
        m_cond.signal();
    }
}
Consuming tasks

The thread that consumes tasks is driven entirely by them: its only responsibility is to execute whatever is in the task queue. If the queue is empty, the thread blocks on the condition variable; when a new task arrives it is woken up again. The implementation code is as follows:

int consume(task_t& task_)
{
    lock_guard_t lock(m_mutex);
    while (m_tasklist.empty())
    {
        if (false == m_flag)
        {
            return -1;
        }
        m_cond.wait();
    }
    task_ = m_tasklist.front();
    m_tasklist.pop_front();
    return 0;
}

int run()
{
    task_t t;
    while (0 == consume(t))
    {
        t.run();
    }
    return 0;
}
Task queue modes: the single-thread, single-task-queue mode

The task queue already provides the run interface; the thread bound to the task queue just executes this function, which does not return unless the task queue's close interface is called. The close interface stops the work of the task queue. The code is as follows:

void close()
{
    lock_guard_t lock(m_mutex);
    m_flag = false;
    m_cond.broadcast();
}

First the closed flag is set, and then broadcast is called on the condition variable, after which the task queue's run function exits. Looking back at the run code, note that the closed flag (the m_flag variable) is checked only when the task queue is empty; this ensures that run returns only after all queued tasks have been executed.
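Putting the pieces together, the task queue class can be laid out roughly as follows. The method bodies are exactly the produce, consume, run and close functions shown above, and the member names m_flag, m_tasklist, m_mutex and m_cond are taken from that code; the wrapper types and the choice of std::list are assumptions about the library's internals:

#include <list>

// Sketch of the overall task_queue_t layout (the method bodies are shown above).
class task_queue_t
{
public:
    task_queue_t() : m_flag(true), m_cond(m_mutex) {}

    void produce(const task_t& task_);  // append a task, signal if the queue was empty
    int  consume(task_t& task_);        // pop a task or block; returns -1 once closed and drained
    int  run();                         // loop: consume and execute until close() takes effect
    void close();                       // set m_flag to false and broadcast

private:
    bool              m_flag;      // false once close() has been called
    std::list<task_t> m_tasklist;  // pending tasks, in FIFO order
    mutex_t           m_mutex;     // protects m_flag and m_tasklist
    condition_var_t   m_cond;      // signaled when a task arrives or the queue closes
};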

The following is a hello-world example using the task queue:

#include <iostream>
#include <pthread.h>
#include <unistd.h>
// plus the library headers that declare thread_t, task_queue_t and task_binder_t
using namespace std;

class foo_t
{
public:
    void print(int data)
    {
        cout << "helloworld, data: " << data
             << " thread id: " << ::pthread_self() << endl;
    }
    void print_callback(int data, void (*callback_)(int))
    {
        callback_(data);
    }
    static void check(int data)
    {
        cout << "helloworld, data: " << data
             << " thread id: " << ::pthread_self() << endl;
    }
};

//! single-thread, single task queue
void test_1()
{
    thread_t     thread;
    task_queue_t tq;
    thread.create_thread(task_binder_t::gen(&task_queue_t::run, &tq), 1);

    foo_t foo;
    for (int i = 0; i < 100; ++i)
    {
        cout << "helloworld, thread id: " << ::pthread_self() << endl;
        tq.produce(task_binder_t::gen(&foo_t::print, &foo, i));
        sleep(1);
    }
    thread.join();
}

int main(int argc, char* argv[])
{
    test_1();
    return 0;
}

This example uses the single-thread, single-task-queue mode. Because only one thread is bound to the task queue, tasks are executed in strict first-in-first-out order. Its advantage is that it guarantees the ordering of logical operations, so it is the most commonly used mode.

Multi-thread, multi-task-queue mode

If you want to use more threads, simply create more threads while keeping each task queue bound to a single thread; the different task queues then run in parallel.

This mode is applicable to the following situations:

• For example, in online games the database layer usually creates a connection pool: user database operations are performed by the database thread pool and the results are delivered back to the logic layer. The add, delete, update and query operations on each user's data must stay ordered, so each user is bound to a fixed task queue (a dispatch sketch follows the sample code below). Data modifications of different users do not affect each other, so different users can be assigned to different task queues.

• For example, at the network layer the reads and writes of different sockets do not affect each other. You can create two or more threads, each corresponding to a task queue, and assign each socket's operations randomly to one of the task queues. (Note that the assignment is random only once: after a socket has been assigned, all of its operations are handled by that task queue, which guarantees logical ordering.)

Sample Code:

//! multi-thread, multi-task queue
void test_2()
{
    thread_t     thread;
    task_queue_t tq[3];
    for (unsigned int i = 0; i < sizeof(tq) / sizeof(task_queue_t); ++i)
    {
        thread.create_thread(task_binder_t::gen(&task_queue_t::run, &(tq[i])), 1);
    }

    foo_t foo;
    cout << "helloworld, thread id: " << ::pthread_self() << endl;
    for (unsigned int j = 0; j < 100; ++j)
    {
        tq[j % (sizeof(tq) / sizeof(task_queue_t))].produce(
                task_binder_t::gen(&foo_t::print, &foo, j));
        sleep(1);
    }
    thread.join();
}
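To make the per-user binding from the first scenario concrete, a small dispatcher can map each user id to a fixed queue, so that all operations of one user are always executed in order by the same thread. The class below is hypothetical and only illustrates the idea; it is not part of the library:

// Hypothetical per-user dispatcher: the same uid always maps to the same task queue.
class user_dispatcher_t
{
public:
    user_dispatcher_t(task_queue_t* queues_, unsigned int num_)
        : m_queues(queues_), m_num(num_) {}

    // Pick the queue for this user; identical uids always get the same queue.
    task_queue_t& queue_of(unsigned long uid_)
    {
        return m_queues[uid_ % m_num];
    }

    // Deliver a task on behalf of the given user.
    void post(unsigned long uid_, const task_t& task_)
    {
        queue_of(uid_).produce(task_);
    }

private:
    task_queue_t* m_queues;  // array of queues, e.g. the tq[3] from test_2
    unsigned int  m_num;
};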
Multi-thread, single-task-queue mode

Sometimes logical operations do not need to be strictly ordered but should be executed as quickly as possible: as long as there is an idle thread, a task is handed to it for immediate execution. If the order of execution does not affect the results, this mode is more efficient. It may be used in situations such as:

• For example, a social game that fetches friend lists from a platform API needs to communicate over HTTP. Using a synchronous HTTP library such as curl blocks the calling thread, so the multi-thread, single-queue mode fits: once a request is delivered to the task queue, any idle thread executes it immediately. User A's request may reach the queue before user B's, but there is no guarantee that A will get the friend list before B; if A has 2,000 friends and B has only 2, B's request may finish faster.

//! multi-thread, single task queue
void test_3()
{
    thread_t     thread;
    task_queue_t tq;
    thread.create_thread(task_binder_t::gen(&task_queue_t::run, &tq), 3);

    foo_t foo;
    cout << "helloworld, thread id: " << ::pthread_self() << endl;
    for (unsigned int j = 0; j < 100; ++j)
    {
        tq.produce(task_binder_t::gen(&foo_t::print, &foo, j));
        sleep(1);
    }
    thread.join();
}
Advanced usage of the task queue: asynchronous callbacks

In the task queue modes shown so far, all of the examples are one-way: thread A delivers a request to B, but after B finishes, A never looks at the result. In practice the execution result usually needs to be processed, or delivered on to another task queue. Asynchronous callbacks solve this well: when a task is delivered, it also carries a function that handles the task's execution result. Sample code:

//! asynchronous callback
void test_4()
{
    thread_t     thread;
    task_queue_t tq;
    thread.create_thread(task_binder_t::gen(&task_queue_t::run, &tq), 1);

    foo_t foo;
    cout << "helloworld, thread id: " << ::pthread_self() << endl;
    for (unsigned int j = 0; j < 100; ++j)
    {
        tq.produce(task_binder_t::gen(&foo_t::print_callback, &foo, j, &foo_t::check));
        sleep(1);
    }
    thread.join();
}

Asynchronous execution is an important means of performance optimization. It can be used in scenarios such as:

• A server program with high real-time requirements: the logic layer performs almost no I/O itself; I/O operations are executed by the I/O threads behind a task queue, and the results are sent back to the logic layer through callbacks.

• For online game user login, user data must be loaded from the database. The database layer does not need to know how the logic layer will process the data: the caller passes a callback function when invoking the interface, and after loading the data the database layer simply calls the callback with the data as a parameter (a sketch follows this list).
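A sketch of the database scenario above: the class and function names below are hypothetical and only illustrate the callback flow, and the gen call assumes an overload that binds a member function with two arguments, like the one used in test_4:

#include <string>

// Hypothetical user record returned by the database layer.
struct user_data_t
{
    long        uid;
    std::string name;
};

// Hypothetical database service: it loads data on its own task queue and hands the
// result to the caller's callback without knowing how the logic layer will use it.
class db_service_t
{
public:
    db_service_t(task_queue_t& tq_) : m_tq(tq_) {}

    // The logic layer passes a callback; the database thread invokes it with the data.
    void async_load_user(long uid_, void (*callback_)(const user_data_t&))
    {
        m_tq.produce(task_binder_t::gen(&db_service_t::load_user, this, uid_, callback_));
    }

private:
    // Runs on the database task queue's thread.
    void load_user(long uid_, void (*callback_)(const user_data_t&))
    {
        user_data_t data;   // in a real implementation, fetched from the database here
        data.uid = uid_;
        callback_(data);    // deliver the result as a parameter
    }

    task_queue_t& m_tq;
};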

Implicit task queue

Task queues can be used to decouple a multi-threaded design, and they work even better when hidden behind an interface. In the preceding examples the task queue object is used explicitly, which forces the user to know which task queue an interface is bound to. In the multi-thread, multi-task-queue example in particular, it would not be elegant if the user had to know which task queue a socket corresponds to every time a socket interface is called. Instead, the socket itself can store a reference to its task queue; the user just calls the socket interface, and the interface delivers the request to that queue. Sample code:

void socket_impl_t::async_send(const string& msg_)
{
    tq.produce(task_binder_t::gen(&socket_impl_t::send, this, msg_));
}

void socket_impl_t::send(const string& msg_)
{
    // do send code
}
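For completeness, the surrounding class might look like the sketch below; only async_send and send appear in the original code, so the member layout here, in particular how the task queue reference is stored, is an assumption:

// Sketch of the socket class that hides its task queue behind the interface (assumed layout).
class socket_impl_t
{
public:
    // The socket remembers which task queue it was assigned to.
    socket_impl_t(task_queue_t& tq_) : tq(tq_) {}

    // Public interface: delivers the real send to the bound task queue.
    void async_send(const string& msg_);

private:
    // Executed later by the task queue's thread.
    void send(const string& msg_);

    task_queue_t& tq;
};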
Summary:

• When designing a multi-threaded program, it is often necessary to design and use a task queue component. This section described the implementations of several multi-threading modes, all of which are easy to understand.

• Asynchronous callbacks are very common in multi-threaded programs. Asynchrony is often used to improve performance and system throughput, but it inevitably adds complexity, so keep the asynchronous steps as simple as possible.

• It is better to encapsulate the task queue behind an object's interface. The user simply calls the interface as if no task queue existed, and the queue does its work silently out of sight.

• The task queue designed in this section is thread-safe, and tasks that have already been delivered are guaranteed to be executed even when the queue is being closed.

code: http://code.google.com/p/ffown/source/browse/trunk/#trunk%2Ffflib%2Finclude
