Asynchronous I/O, APC, I/O Completion Ports, Thread Pools, and High-Performance Servers (4): Thread Pools


Thread Pool

The following is an excerpt from the MSDN documentation on thread pooling.
Many applications create threads that spend a great deal of time sleeping, waiting for an event to occur. Other threads wake periodically to poll for a change in status information. Thread pooling lets you use threads more efficiently by providing your application with a pool of worker threads that are managed by the system. At least one thread monitors the status of all wait operations queued to the thread pool; when a wait operation completes, a worker thread from the pool executes the corresponding callback function.
You can also queue work items that are not related to any wait operation: call the QueueUserWorkItem function and pass the work item's function to the thread pool through a parameter. Once a work item has been queued, it cannot be canceled.
Timer-queue timers and registered wait operations are also implemented using the thread pool; their callback functions are queued to the pool as well. You can also use the BindIoCompletionCallback function to post an asynchronous I/O operation. When the operation completes on the I/O completion port, the callback is executed by a thread-pool thread.
The thread pool is created automatically the first time QueueUserWorkItem or BindIoCompletionCallback is called, or the first time a timer-queue timer or registered wait operation queues a callback. The number of threads the pool can create is limited only by available memory; each thread uses the default initial stack size and runs at the default priority.
There are two types of worker threads in the pool: I/O threads and non-I/O threads. An I/O thread waits in an alertable state, and work items are queued to it as APCs. If your work item requires execution in a thread in an alertable state, queue it to an I/O thread.
Non-I/O worker threads wait on an I/O completion port. Using non-I/O threads is more efficient than using I/O threads, so prefer non-I/O threads whenever possible. Neither kind of worker thread exits while asynchronous I/O operations issued on it are still pending; for that reason, avoid queuing to a non-I/O thread an asynchronous request that may take a long time to complete.
To use the thread pool correctly, the work item function and all functions it calls must be thread-pool safe. A safe function does not assume that the thread executing it is dedicated or persistent. In general, avoid asynchronous I/O calls that require a persistent thread, such as the RegNotifyChangeKeyValue function. If such a function must run in a persistent thread, pass the WT_EXECUTEINPERSISTENTTHREAD option to QueueUserWorkItem.
Note that the thread pool is not compatible with the COM single-threaded apartment (STA) model.
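As a sketch of the timer-queue mechanism mentioned above (this example is ours, not from the original article; the callback and counter names are made up, but CreateTimerQueueTimer and DeleteTimerQueueTimer are the real Win32 APIs), a periodic timer whose callback runs on a thread-pool thread might look like this:

```c
#include <windows.h>
#include <stdio.h>

static volatile LONG g_fireCount = 0;   /* our own demo counter */

/* Executed by a thread-pool thread each time the timer fires. */
static VOID CALLBACK TimerProc(PVOID lpParameter, BOOLEAN TimerOrWaitFired)
{
    InterlockedIncrement(&g_fireCount);
}

int main(void)
{
    HANDLE hTimer = NULL;

    /* Fire after 100 ms, then every 100 ms, on a thread-pool thread. */
    if (!CreateTimerQueueTimer(&hTimer, NULL, TimerProc, NULL,
                               100, 100, WT_EXECUTEDEFAULT))
        return 1;

    Sleep(550);

    /* INVALID_HANDLE_VALUE blocks until any in-flight callback finishes. */
    DeleteTimerQueueTimer(NULL, hTimer, INVALID_HANDLE_VALUE);

    printf("timer fired %ld times\n", g_fireCount);
    return 0;
}
```

Note that the cleanup call blocks until outstanding callbacks return; deleting the timer from inside its own callback would deadlock with this flag.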
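The thread type is chosen by the flag passed as QueueUserWorkItem's third parameter. A minimal sketch (our own illustration; the work-item function is hypothetical, the flags are the documented Win32 constants):

```c
#include <windows.h>
#include <stdio.h>

static DWORD WINAPI MyWorkItem(LPVOID lpParameter)
{
    printf("work item '%s' running in a pool thread\n",
           (const char *)lpParameter);
    return 0;
}

int main(void)
{
    /* Default: a non-I/O worker thread (waits on an I/O completion port). */
    QueueUserWorkItem(MyWorkItem, "default", WT_EXECUTEDEFAULT);

    /* An I/O worker thread: the item is delivered as an APC and the
       thread waits in an alertable state. */
    QueueUserWorkItem(MyWorkItem, "io-thread", WT_EXECUTEINIOTHREAD);

    /* A persistent thread that never exits: needed for APIs such as
       RegNotifyChangeKeyValue, whose request is dropped if the issuing
       thread terminates. */
    QueueUserWorkItem(MyWorkItem, "persistent", WT_EXECUTEINPERSISTENTTHREAD);

    Sleep(500);   /* crude wait for the demo; real code should synchronize */
    return 0;
}
```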

To better appreciate the advantages of the thread pool implemented by the operating system, let us first implement a simple thread pool model ourselves.

The code is as follows:

/************************************************************************/
/* Test our own thread pool.                                            */
/************************************************************************/

typedef struct _THREAD_POOL
{
    HANDLE QuitEvent;
    HANDLE WorkItemSemaphore;

    LONG WorkItemCount;
    LIST_ENTRY WorkItemHeader;
    CRITICAL_SECTION WorkItemLock;

    LONG ThreadNum;
    HANDLE *ThreadsArray;

} THREAD_POOL, *PTHREAD_POOL;

typedef void (*WORK_ITEM_PROC)(PVOID Param);

typedef struct _WORK_ITEM
{
    LIST_ENTRY List;

    WORK_ITEM_PROC UserProc;
    PVOID UserParam;

} WORK_ITEM, *PWORK_ITEM;

DWORD WINAPI WorkerThread(PVOID pParam)
{
    PTHREAD_POOL pThreadPool = (PTHREAD_POOL)pParam;
    HANDLE Events[2];

    Events[0] = pThreadPool->QuitEvent;
    Events[1] = pThreadPool->WorkItemSemaphore;

    for (;;)
    {
        DWORD dwRet = WaitForMultipleObjects(2, Events, FALSE, INFINITE);

        if (dwRet == WAIT_OBJECT_0)
            break;

        //
        // Execute user's proc.
        //

        else if (dwRet == WAIT_OBJECT_0 + 1)
        {
            PWORK_ITEM pWorkItem;
            PLIST_ENTRY pList;

            EnterCriticalSection(&pThreadPool->WorkItemLock);
            _ASSERT(!IsListEmpty(&pThreadPool->WorkItemHeader));
            pList = RemoveHeadList(&pThreadPool->WorkItemHeader);
            LeaveCriticalSection(&pThreadPool->WorkItemLock);

            pWorkItem = CONTAINING_RECORD(pList, WORK_ITEM, List);
            pWorkItem->UserProc(pWorkItem->UserParam);

            InterlockedDecrement(&pThreadPool->WorkItemCount);
            free(pWorkItem);
        }

        else
        {
            _ASSERT(0);
            break;
        }
    }

    return 0;
}

BOOL InitializeThreadPool(PTHREAD_POOL pThreadPool, LONG ThreadNum)
{
    pThreadPool->QuitEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    pThreadPool->WorkItemSemaphore = CreateSemaphore(NULL, 0, 0x7FFFFFFF, NULL);
    pThreadPool->WorkItemCount = 0;
    InitializeListHead(&pThreadPool->WorkItemHeader);
    InitializeCriticalSection(&pThreadPool->WorkItemLock);
    pThreadPool->ThreadNum = ThreadNum;
    pThreadPool->ThreadsArray = (HANDLE *)malloc(sizeof(HANDLE) * ThreadNum);

    for (int i = 0; i < ThreadNum; i++)
    {
        pThreadPool->ThreadsArray[i] =
            CreateThread(NULL, 0, WorkerThread, pThreadPool, 0, NULL);
    }

    return TRUE;
}

void DestroyThreadPool(PTHREAD_POOL pThreadPool)
{
    SetEvent(pThreadPool->QuitEvent);

    for (int i = 0; i < pThreadPool->ThreadNum; i++)
    {
        WaitForSingleObject(pThreadPool->ThreadsArray[i], INFINITE);
        CloseHandle(pThreadPool->ThreadsArray[i]);
    }

    free(pThreadPool->ThreadsArray);

    CloseHandle(pThreadPool->QuitEvent);
    CloseHandle(pThreadPool->WorkItemSemaphore);
    DeleteCriticalSection(&pThreadPool->WorkItemLock);

    // All worker threads have exited, so the list can be drained
    // without holding the (now deleted) lock.
    while (!IsListEmpty(&pThreadPool->WorkItemHeader))
    {
        PWORK_ITEM pWorkItem;
        PLIST_ENTRY pList;

        pList = RemoveHeadList(&pThreadPool->WorkItemHeader);
        pWorkItem = CONTAINING_RECORD(pList, WORK_ITEM, List);

        free(pWorkItem);
    }
}

BOOL PostWorkItem(PTHREAD_POOL pThreadPool, WORK_ITEM_PROC UserProc, PVOID UserParam)
{
    PWORK_ITEM pWorkItem = (PWORK_ITEM)malloc(sizeof(WORK_ITEM));
    if (pWorkItem == NULL)
        return FALSE;

    pWorkItem->UserProc = UserProc;
    pWorkItem->UserParam = UserParam;

    EnterCriticalSection(&pThreadPool->WorkItemLock);
    InsertTailList(&pThreadPool->WorkItemHeader, &pWorkItem->List);
    LeaveCriticalSection(&pThreadPool->WorkItemLock);

    InterlockedIncrement(&pThreadPool->WorkItemCount);

    ReleaseSemaphore(pThreadPool->WorkItemSemaphore, 1, NULL);

    return TRUE;
}

void UserProc1(PVOID dwParam)
{
    WorkItem(dwParam);
}

// CompleteEvent, BeginTime, ItemCount and WorkItem are defined in the
// next listing (the QueueUserWorkItem test).
void TestSimpleThreadPool(BOOL bWaitMode, LONG ThreadNum)
{
    THREAD_POOL ThreadPool;
    InitializeThreadPool(&ThreadPool, ThreadNum);

    CompleteEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    BeginTime = GetTickCount();
    ItemCount = 20;

    for (int i = 0; i < 20; i++)
    {
        PostWorkItem(&ThreadPool, UserProc1, (PVOID)bWaitMode);
    }

    WaitForSingleObject(CompleteEvent, INFINITE);
    CloseHandle(CompleteEvent);

    DestroyThreadPool(&ThreadPool);
}
We put work items in a queue and use a semaphore to notify the thread pool; any idle thread in the pool takes a work item and executes it. When the work item completes, the thread returns to the pool and waits for the next one.
The number of threads in the pool is fixed: they are created in advance as permanent threads and are destroyed only when the pool is destroyed.
Threads in the pool have an equal, effectively random chance of picking up work items; no mechanism gives any thread special priority in claiming work.
Moreover, there is no limit on how many threads run concurrently at the same time. In fact, in our demo code, when executing compute tasks, all threads run concurrently.
Next, let's look at how the thread pool provided by the system accomplishes the same task.

/************************************************************************/
/* QueueUserWorkItem test.                                              */
/************************************************************************/

DWORD BeginTime;
LONG ItemCount;
HANDLE CompleteEvent;

int Compute()
{
    srand(BeginTime);

    for (int i = 0; i < 20 * 1000 * 1000; i++)
        rand();

    return rand();
}

DWORD WINAPI WorkItem(LPVOID lpParameter)
{
    BOOL bWaitMode = (BOOL)lpParameter;

    if (bWaitMode)
        Sleep(1000);
    else
        Compute();

    if (InterlockedDecrement(&ItemCount) == 0)
    {
        printf("Time total %d ms.\n", GetTickCount() - BeginTime);
        SetEvent(CompleteEvent);
    }

    return 0;
}

void TestWorkItem(BOOL bWaitMode, DWORD Flag)
{
    CompleteEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    BeginTime = GetTickCount();
    ItemCount = 20;

    for (int i = 0; i < 20; i++)
    {
        QueueUserWorkItem(WorkItem, (PVOID)bWaitMode, Flag);
    }

    WaitForSingleObject(CompleteEvent, INFINITE);
    CloseHandle(CompleteEvent);
}
Very simple, right? We only need to focus on our callback function. However, compared with our simple simulation, the thread pool provided by the system has several advantages.
First, the number of threads in the pool is adjusted dynamically. Second, the thread pool exploits the I/O completion port's ability to limit the number of concurrently running threads: by default it caps concurrency at the number of CPUs, which reduces thread switching. The completion port also wakes the most recently run thread first, further avoiding unnecessary context switches.
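The concurrency limiting described above comes from the completion port itself, not from anything the pool adds on top. A minimal sketch of the relevant API (our own illustration of the mechanism, not the system pool's internals):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* The last parameter (NumberOfConcurrentThreads) caps how many
       threads the port will let run at once; 0 means "one per CPU",
       the same default the system thread pool relies on. */
    HANDLE hPort = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
    if (hPort == NULL)
        return 1;

    printf("port created; concurrency defaults to %lu (number of CPUs)\n",
           si.dwNumberOfProcessors);

    /* Threads blocked in GetQueuedCompletionStatus are released in
       LIFO order, so the most recently run thread wakes first
       (warm caches, fewer context switches). */

    CloseHandle(hPort);
    return 0;
}
```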

 
