C implementation of a thread pool, continued.


Introduction-_-still the same old routine

A long time ago I wrote a thread pool with some ambition -> C implementation thread pool with a pursuit

That post presented one way of thinking about and implementing a pool, and it is perfectly usable as is. Recently the simplec framework has been worked out in more detail, and a formal version is about to be released.

Along the way this thread pool has been optimized. The results of the optimization are as follows:

1). A prettier and more reasonable API

2). pthread API usage optimized

3). More precise targeting of the problem being solved

4). More safety code added

To put it bluntly, a thread pool does not have that many application scenarios in the old languages C and C++: almost anything a thread pool can solve, a message queue can solve as well.

Of course, threads do have one property that queues lack, preemption. But when you are chasing performance, that alone does not take you very far.

At least in modern game framework design, preemptible tasks are not a necessity.

The sections below walk through the idea behind scthreads.h.

#ifndef _H_SIMPLEC_SCTHREADS
#define _H_SIMPLEC_SCTHREADS

#include <schead.h>

//
// This is the thread pool library. Asynchronous cancellation is supported,
// and some thread helper routines are added.
//
typedef struct threads * threads_t;

//
// thread_run - start a detached, self-destroying thread
// run      : the routine to run
// arg      : argument passed to run
// return   : >= Success_Base indicates success
//
extern int thread_run(die_f run, void * arg);

//
// threads_create - create a thread pool object
// return   : the created thread pool object, NULL indicates failure
//
extern threads_t threads_create(void);

//
// threads_delete - asynchronously destroy a thread pool object
// pool     : thread pool object
// return   : void
//
extern void threads_delete(threads_t pool);

//
// threads_add - add a task for the thread pool to process
// pool     : thread pool object
// run      : the routine to execute
// arg      : argument passed to run
// return   : void
//
extern void threads_add(threads_t pool, die_f run, void * arg);

#endif//!_H_SIMPLEC_SCTHREADS
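To get a feel for the API, here is a minimal usage sketch. It is not from the original post: the _echo task, the submitted strings, and the sleep-based wait are assumptions made purely for illustration.

#include <stdio.h>
#include <unistd.h>
#include <scthreads.h>

// hypothetical task body matching die_f: just print the argument
static void _echo(void * arg) {
    printf("task got arg = %s\n", (char *)arg);
}

int main(void) {
    threads_t pool = threads_create();
    if (NULL == pool)
        return -1;

    // hand a few tasks to the pool; each runs on some pooled thread
    threads_add(pool, _echo, "hello");
    threads_add(pool, _echo, "thread");
    threads_add(pool, _echo, "pool");

    sleep(1);               // crude wait so the demo tasks get a chance to run
    threads_delete(pool);   // asynchronous destruction of the pool
    return 0;
}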

As described above for point 1), a prettier and more reasonable API: a macro is used internally to work out a sensible number of threads, so callers do not need to specify the count themselves. Of course, this default leans toward the small side.
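The post does not show where that default comes from. A macro or small helper along the following lines would do the job; the name _threads_default and the fallback value 4 are my assumptions, not necessarily what scthreads.c actually uses.

#include <unistd.h>

// assumed fallback when the processor count cannot be queried
#define _THREADS_FALLBACK   4

// assumed way to pick a default pool size: one thread per online processor
static inline int _threads_default(void) {
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    return n > 0 ? (int)n : _THREADS_FALLBACK;
}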

 

Preface-_-some appetizing snacks

Sometimes using the pthread API directly is a bit tedious. Often we do not even care what happens after the thread has finished;

we just want to execute a routine asynchronously. For that I designed the thread_run helper; a standalone sketch of it, here called async_run, follows.

typedef void (* die_f)(void * node);

extern int async_run(die_f run, void * arg);

The detailed design routine is as follows:

#include <pthread.h>

// the running subject
struct func {
    die_f run;
    void * arg;
};

// the entity that pthread actually executes for async_run
static void * _run(void * arg) {
    struct func * func = arg;
    func->run(func->arg);
    free(arg);
    return NULL;
}

//
// async_run - start a detached, self-destroying thread
// run      : the running subject
// arg      : argument passed to run
// return   : >= Success_Base indicates success
//
int async_run(die_f run, void * arg) {
    pthread_t tid;
    pthread_attr_t attr;
    struct func * func = malloc(sizeof(struct func));
    if (NULL == func)
        RETURN(Error_Alloc, "malloc sizeof(struct func) is error!");
    func->run = run;
    func->arg = arg;

    // build a detached pthread to run it
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    if (pthread_create(&tid, &attr, _run, func) < 0) {
        free(func);
        pthread_attr_destroy(&attr);
        RETURN(Error_Base, "pthread_create error run, arg = %p | %p.", run, arg);
    }
    pthread_attr_destroy(&attr);

    return Success_Base;
}

A few points about this code. The first is my common error enumeration:

//
// flag_e - status codes returned by the basic global operations,
// used to judge a call's return value.
// >= 0 marks a Success state, < 0 marks an Error state
//
typedef enum {
    Success_Exist   = +2,       // what you wanted to set already exists
    Success_Close   = +1,       // a file descriptor has been read to completion
    Success_Base    = +0,       // plain success

    Error_Base      = -1,       // base error type, usable for any unclear error
    Error_Param     = -2,       // bad parameter passed to the called function
    Error_Alloc     = -3,       // memory allocation error
    Error_Fd        = -4,       // file open failure
    Error_TOUT      = -5,       // timeout error
} flag_e;

In practice this convention is very handy: the status codes returned by functions are uniform across the whole project.
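A quick sketch of what that uniformity buys; the _checked_run wrapper below is hypothetical and only illustrates the one rule callers need to remember, using the thread_run declaration from the header above.

// the one rule: >= Success_Base means success, < 0 is a flag_e error code
static int _checked_run(die_f run, void * arg) {
    int ret = thread_run(run, arg);
    if (ret < Success_Base)
        fprintf(stderr, "thread_run failed, ret = %d\n", ret);
    return ret;
}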

Point 2. When we use pthread_attr_init, POSIX threads recommends pairing it with pthread_attr_destroy as soon as the attribute object is no longer needed.

Make sure that whatever you initialize, you release yourself. On common implementations the pthread_*_destroy calls for attribute objects do little more than invalidate their state and do not free any real resources, but pairing the calls keeps the code correct and portable.

Now the third point: the easy-to-use RETURN macro, which is quite slick.

//
// Console error output helpers; fmt must be a "" quoted string literal.
// CERR      -> simple error message print
// CERR_EXIT -> print the error message, then exit the current process
// CERR_IF   -> exit if the expression evaluates to a standard error (< 0)
//
#ifndef _H_CERR
#define _H_CERR

#define CERR(fmt, ...) \
    fprintf(stderr, "[%s:%s:%d][errno %d:%s]" fmt "\n", \
        __FILE__, __func__, __LINE__, errno, strerror(errno), ##__VA_ARGS__)

#define CERR_EXIT(fmt, ...) \
    CERR(fmt, ##__VA_ARGS__), exit(EXIT_FAILURE)

#define CERR_IF(code) \
    if ((code) < 0) \
        CERR_EXIT(#code)

//
// RETURN - print an error message and return the specified result.
// val  : the value to return; when "return void;" is needed, pass NIL
// fmt  : a double-quoted format string
// ...  : arguments for fmt
// return : val
//
#define NIL
#define RETURN(val, fmt, ...) \
    do { \
        CERR(fmt, ##__VA_ARGS__); \
        return val; \
    } while (0)

#endif // !_H_CERR

The ## before __VA_ARGS__ solves the case where the variadic part receives no arguments at all: standard C does not allow the resulting dangling comma, but the GCC extension ##__VA_ARGS__ swallows it.

NIL exists to handle the "return void;" case: RETURN(NIL, ...) expands to a bare "return;", so void functions get the same syntactic sugar.
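To make both behaviors concrete, a small sketch follows; the open_cfg/close_cfg functions are made up for illustration and assume <stdlib.h> plus the macros and flag_e values shown above.

// error path in a value-returning function: prints via CERR, returns a flag_e
int open_cfg(const char * path) {
    if (NULL == path)
        RETURN(Error_Param, "path is NULL.");
    /* ... open and parse the file here ... */
    return Success_Base;
}

// error path in a void function: NIL expands to nothing,
// so RETURN(NIL, ...) becomes a plain "return;"
void close_cfg(void * cfg) {
    if (NULL == cfg)
        RETURN(NIL, "cfg is NULL.");
    free(cfg);
}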

Back to the main topic. The function above already reflects point 2), pthread API optimization, mainly in that I use

pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

to replace

pthread_detach(pthread_self());

The detach setting is moved out of the thread body and into the code that creates the thread, so the thread is born already detached and owned by itself.

This decouples the thread-control code from the thread's business code, and the business code gets to start running sooner.

Slower to set up, faster to run. It may feel odd before you are used to it, but it is actually quite pleasant to live with ~

 

Body-_-Detailed Design

First, look at the core structure, one object per thread:

// thread object; each thread carries its own condition variable,
// which is signalled at just the right moment
struct thread {
    struct thread * next;       // next thread object
    bool wait;                  // true means the current thread is suspended
    pthread_t tid;              // current thread id
    pthread_cond_t cond;        // per-thread condition variable
};

The thread objects are kept in a linked list. The wait flag marks a thread as suspended, which lets us quickly find and wake an idle thread.

// find an idle thread and return its condition variable so it can be signalled
static pthread_cond_t * _threads_getcont(struct threads * pool) {
    struct thread * head = pool->thrs;
    while (head) {
        if (head->wait)
            return &head->cond;
        head = head->next;
    }
    return NULL;
}

struct threads is the scheduling structure for all the thread objects.

// thread pool (thread set) definition
struct threads {
    size_t size;                // thread pool capacity, the maximum number of threads
    size_t curr;                // total number of threads currently in the pool
    size_t idle;                // number of currently idle threads in the pool
    pthread_mutex_t mutx;       // thread mutex
    struct thread * thrs;       // set of thread objects, kept as a linked list
    struct job * head;          // head of the task queue, where jobs are taken for execution
    struct job * tail;          // tail of the task queue, where new jobs are inserted
};

Jobs are kept in a simple queue; the threads on the linked list act as consumers of this producer queue.
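struct job itself is not shown in the excerpt; the sketch below is my guess at the minimal shape such a queue node and its enqueue would take, based only on the head/tail fields above.

// assumed task node: the routine to run plus its argument
struct job {
    struct job * next;          // next task in the queue
    die_f run;                  // task body
    void * arg;                 // task argument
};

// assumed FIFO enqueue: insert at the tail, so workers can pop from the head
static void _jobs_push(struct threads * pool, struct job * job) {
    job->next = NULL;
    if (pool->tail)
        pool->tail->next = job;
    else
        pool->head = job;       // queue was empty, job is now also the head
    pool->tail = job;
}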

The wait design above is what point 3), more precise targeting of the problem, refers to.

 

As for point 4), more safety code added, the practice shows up in how the pthread thread attributes are controlled.

// set the thread's cancellation attributes so that it can be cancelled
pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, NULL);

This enables the thread's cancel state and allows it to be cancelled asynchronously.
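The reason this matters is that threads_delete tears the pool down asynchronously. The body below is not the actual scthreads.c implementation, just a sketch of how cancellation could be applied to every worker under that assumption.

// sketch only: cancel every worker, relying on the asynchronous
// cancellation type enabled above so even running tasks can be interrupted
void threads_delete(threads_t pool) {
    if (NULL == pool)
        return;

    pthread_mutex_lock(&pool->mutx);
    struct thread * head = pool->thrs;
    while (head) {
        pthread_cancel(head->tid);
        head = head->next;
    }
    pthread_mutex_unlock(&pool->mutx);

    // freeing the job queue, the thread nodes and the pool itself is omitted here
}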

// an idle thread exists; grab its condition variable right away
if (pool->idle > 0) {
    pthread_cond_t * cond = _threads_getcont(pool);
    // release the lock first, then signal the waiting thread.
    // This is faster, at the cost of the woken thread possibly
    // losing its execution priority for a moment.
    pthread_mutex_unlock(mutx);
    // signal the idle thread; this condition variable is guaranteed to exist
    pthread_cond_signal(cond);
    return;
}

Signalling one specific idle thread precisely avoids the thundering-herd problem. If a little space can buy you time, do not waste the chance.
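For completeness, the consumer side of this handshake would look roughly like the sketch below. It shows how the per-thread cond and wait flag are meant to interact; it is my reconstruction, not the verbatim scthreads.c worker loop.

// sketch of a worker waiting for work on its own condition variable
static void _thread_wait(struct threads * pool, struct thread * self) {
    pthread_mutex_lock(&pool->mutx);
    while (NULL == pool->head) {            // no job queued yet
        self->wait = true;                  // mark myself idle so threads_add can find me
        ++pool->idle;
        pthread_cond_wait(&self->cond, &pool->mutx);
        // woken by the targeted pthread_cond_signal in threads_add
        self->wait = false;
        --pool->idle;
    }
    // ... dequeue a job from pool->head here ...
    pthread_mutex_unlock(&pool->mutx);
    // ... then run the job outside the lock ...
}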

A word about other thundering-herd cases: with multi-process epoll, for example, one ready listening socket can wake several processes for accept, but only one succeeds and the rest fail.

There are ways to handle this too. The simplest is to just ignore the failed wakeups, at a small performance cost; another is to balance the work by distributing file descriptors among the processes round-robin.
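As a concrete picture of the "just ignore it" option, the fragment below sketches the accept path of one process after epoll_wait reports the listening fd readable; handle_client is a made-up handler and listen_fd is assumed to be non-blocking.

// losers of the wakeup race see EAGAIN on a non-blocking listen_fd and move on
for (;;) {
    int cfd = accept(listen_fd, NULL, NULL);
    if (cfd >= 0) {
        handle_client(cfd);                 // hypothetical per-connection handler
        continue;
    }
    if (errno == EAGAIN || errno == EWOULDBLOCK)
        break;                              // another process already took this connection
    if (errno == EINTR)
        continue;                           // interrupted by a signal, retry
    CERR("accept error on fd %d.", listen_fd);
    break;
}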

For more details on the thread pool, see the source file and test file below.

  scthreads.h

  scthreads.c

  test_scthreads.c

Finally, the main reason for the rewrite is that the previous version was simply too ugly to keep living with. Beauty is good; beauty is a pleasant feeling ~ φ(°-°)/

What wouldn't we do for the sake of beauty ~

 

Postscript-_-keep a few more memories, in case they are forgotten

Problems are unavoidable; all we can do is keep polishing and keep thinking ~

North Story http://music.163.com/#/song?id=37782112

  

  You envy me, And I envy your happy life

  
