Using the Thread Pool (Based on the .NET Platform)

Multithreading can improve the efficiency of an application. But is creating and managing threads by hand really the most efficient approach? And doesn't multithreading feel complicated?

From earlier study of threads, we know that creating a thread requires a call such as CreateThread and that the thread must eventually be closed. In addition, multiple threads often need to synchronize access to shared resources, which requires events, semaphores, and mutex objects.

Of course, compared with processes, threads have great advantages in terms of creation speed, access to shared resources, and overhead. However, creating and destroying threads is not free.

To create a thread, the system must allocate and initialize a kernel object and allocate and initialize the thread's stack space. In addition, Windows sends a DLL_THREAD_ATTACH notification to every DLL in the process, which means paging code from disk into memory so it can execute. When the thread ends, a DLL_THREAD_DETACH notification is sent to each DLL, the thread's stack space is freed, and the kernel object is released (once its usage count reaches 0). As a result, much of the overhead of creating and destroying a thread has nothing to do with the work the thread was created to perform.
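To make the contrast concrete, here is a minimal sketch of the manual approach in .NET: creating a thread explicitly, synchronizing with an event object, and waiting for it to finish. The class and method names (ManualThreadDemo, DoWork) are my own illustration, not from the original article.

using System;
using System.Threading;

public class ManualThreadDemo
{
    // An event object used to coordinate with the worker
    // (one of the synchronization primitives mentioned above).
    static AutoResetEvent startSignal = new AutoResetEvent(false);

    public static void Main()
    {
        // Every request pays the full cost of creating and destroying a thread.
        Thread worker = new Thread(new ThreadStart(DoWork));
        worker.Start();

        startSignal.Set();   // tell the worker it may proceed
        worker.Join();       // wait for the thread to finish and be destroyed
        Console.WriteLine("Main thread: worker finished.");
    }

    static void DoWork()
    {
        startSignal.WaitOne();
        Console.WriteLine("Worker thread: doing the task.");
    }
}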

To improve efficiency, Windows provides the thread pool concept.

The thread pool makes it easier to create, manage, and destroy threads.

In my view, the thread pool is responsible for:
* Initializing threads and creating them dynamically as needed
* Pre-allocating memory for the pool
* Queuing work items
* Managing threads and destroying them when they are no longer needed (see the sketch after this list)
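As a rough illustration of the "managing threads" point, the CLR thread pool exposes static methods for inspecting its limits. A minimal sketch (my own addition, not from the original article; the class name PoolInfo is hypothetical):

using System;
using System.Threading;

public class PoolInfo
{
    public static void Main()
    {
        int workers, ioThreads;

        // Upper bound on the number of threads the pool will create.
        ThreadPool.GetMaxThreads(out workers, out ioThreads);
        Console.WriteLine("Max worker threads: " + workers + ", max I/O threads: " + ioThreads);

        // Threads the pool keeps ready without the usual creation delay.
        ThreadPool.GetMinThreads(out workers, out ioThreads);
        Console.WriteLine("Min worker threads: " + workers + ", min I/O threads: " + ioThreads);

        // How many threads are currently free to take new work items.
        ThreadPool.GetAvailableThreads(out workers, out ioThreads);
        Console.WriteLine("Available worker threads: " + workers);
    }
}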

In the "CLR thread pool" article, Jeffer Richter describes the features of threads in CLR:
During CLR initialization, the thread pool does not contain threads. When an application needs to create a thread to execute the task, the application should request the thread pool thread to execute the task. An initial thread will be created after the thread pool is known. The initialization of the new thread is the same as that of other threads. However, after the task is completed, the thread will not be destroyed by itself. Instead, it returns the thread pool in the suspended state. If the application sends a request to the thread pool again, the suspended thread activates and executes the task without creating a new thread. This saves a lot of expenses. As long as the queuing speed of application tasks in the thread pool is lower than the speed at which one thread processes each task, the same thread can be reused repeatedly to save a lot of overhead during the lifetime of the application.

If, on the other hand, the application queues tasks faster than one thread can process them, additional threads are created in the pool. Creating a new thread does incur extra overhead, but over its lifetime the application will probably need only a handful of threads to handle all the tasks assigned to it. So, in general, using the thread pool improves application performance.

You may now wonder what happens if the thread pool contains many threads but the application's workload is decreasing. In that case the pool would hold threads that stay suspended for a long time, wasting operating system resources. Microsoft considered this as well: when a thread pool thread is suspended, it waits for 40 seconds; if it still has nothing to do after 40 seconds, it wakes up and destroys itself, releasing all the operating system resources it uses (stack, kernel object, and so on). Waking up and destroying itself is unlikely to hurt the application's performance, because if the application were busy, the thread would have resumed executing work instead. By the way, although I said the idle thread destroys itself after 40 seconds, this value is not documented and may change.

A wonderful feature of the thread pool is that it is heuristic. If your application needs to execute many tasks, more threads are created in the pool. If your application's workload gradually decreases, the pool threads terminate themselves. The thread pool's algorithm ensures that it contains only as many threads as its current workload requires.
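You can observe this heuristic behavior roughly by queuing a burst of slow work items and watching the available-thread count change as the pool grows. A small sketch under those assumptions (the class name PoolGrowthDemo and the timings are my own choices; the exact numbers printed will vary by machine and runtime):

using System;
using System.Threading;

public class PoolGrowthDemo
{
    public static void Main()
    {
        // Queue more slow tasks than the pool initially has threads for.
        for (int i = 0; i < 20; i++)
        {
            ThreadPool.QueueUserWorkItem(new WaitCallback(SlowTask), i);
        }

        for (int t = 0; t < 5; t++)
        {
            int workers, ioThreads;
            ThreadPool.GetAvailableThreads(out workers, out ioThreads);
            Console.WriteLine("Available worker threads: " + workers);
            Thread.Sleep(1000);   // the pool adds threads over time while the queue stays full
        }
        Console.ReadLine();
    }

    static void SlowTask(object state)
    {
        Thread.Sleep(3000);   // simulate a task that keeps its thread busy
    }
}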


The thread pool provides the following capabilities:
1. Asynchronous method calls: Normally a method call is synchronous, that is, the next statement executes only after the method returns. The thread pool, however, can call our method asynchronously: once the method is queued, the next statement executes immediately, and the main thread does not know when the method will return.
In addition, note that you should never create a thread yourself for this purpose. The CLR thread pool will create a thread automatically if necessary and, where possible, reuse an existing one. Moreover, the thread is not destroyed immediately after processing the callback method; it returns to the thread pool, ready to process other work items in the queue. Using QueueUserWorkItem in System.Threading.ThreadPool makes your application more efficient, because you do not need to create and destroy a thread for every client request.
For example, the following program uses the thread pool to make an asynchronous call:

using System;
using System.Collections;
using System.Threading;

public class MyClass
{
    public static void Main()
    {
        Console.WriteLine("Main thread: Queuing an asynchronous operation");
        // Hand the method to the thread pool; a pool thread will run it.
        ThreadPool.QueueUserWorkItem(new System.Threading.WaitCallback(MyAsyncOperation));

        Console.WriteLine("Main thread: Doing other operations");
        Console.WriteLine("Main thread: Pausing to simulate doing other operations.");
        Console.ReadLine();
    }

    static void MyAsyncOperation(object state)
    {
        Console.WriteLine("ThreadPool thread: Performing asynchronous operation");
        Thread.Sleep(5000);   // simulate work
    }
}

Do you know what the result is? If MyAsyncOperation were called in the traditional, synchronous way, you could safely conclude that the output would be:
Main thread: Queuing an asynchronous operation
ThreadPool thread: Performing asynchronous operation
Main thread: Doing other operations
Main thread: Pausing to simulate doing other operations.
However, when a thread pool thread executes the method, the result is quite different: the two "Main thread" lines are printed immediately after the work item is queued, and "ThreadPool thread: Performing asynchronous operation" appears whenever the pool thread gets around to running the method, typically after the main thread has already moved on.
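Because the main thread does not know when the queued method finishes, a common pattern is to pass a synchronization object as the state argument and have the callback signal it when done. A minimal sketch (my own illustration, not from the original article; the class name WaitForWorkItem is hypothetical):

using System;
using System.Threading;

public class WaitForWorkItem
{
    public static void Main()
    {
        // The event is handed to the work item through the state parameter.
        ManualResetEvent done = new ManualResetEvent(false);
        ThreadPool.QueueUserWorkItem(new WaitCallback(MyAsyncOperation), done);

        Console.WriteLine("Main thread: doing other operations");
        done.WaitOne();   // block until the thread pool thread signals completion
        Console.WriteLine("Main thread: the asynchronous operation has finished.");
    }

    static void MyAsyncOperation(object state)
    {
        Console.WriteLine("ThreadPool thread: performing asynchronous operation");
        Thread.Sleep(5000);
        ((ManualResetEvent)state).Set();   // tell the main thread we are done
    }
}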

2. Calling a method at a fixed interval:
If the application needs to execute a task at a certain time, or to execute a task periodically, the thread pool can do this too.
The System.Threading.Timer class provides exactly this capability. Its constructors have the following prototypes:

public Timer(TimerCallback callback, Object state, Int32 dueTime, Int32 period);
public Timer(TimerCallback callback, Object state, UInt32 dueTime, UInt32 period);
public Timer(TimerCallback callback, Object state, Int64 dueTime, Int64 period);
public Timer(TimerCallback callback, Object state, TimeSpan dueTime, TimeSpan period);

The user-defined callback method must match the TimerCallback delegate: public delegate void TimerCallback(Object state);

Let's write an application that has a thread pool thread call a method immediately and then again every 2000 milliseconds (that is, every two seconds).
Here is the program:

using System;
using System.Collections;
using System.Threading;

public class MyClass
{
    static int Times = 0;

    public static void Main()
    {
        Console.WriteLine("Checking for status updates every 2 seconds.");
        Console.WriteLine("Hit Enter to terminate the sample");
        // Fire immediately (dueTime = 0), then every 2000 ms.
        Timer timer = new Timer(new TimerCallback(CheckStatus), "Timing", 0, 2000);

        Console.ReadLine();
    }

    static void CheckStatus(object state)
    {
        Console.WriteLine("Checking Status: " + Convert.ToString(state) + " " + (Times++).ToString() + "'s");
    }
}

The output shows "Checking Status: Timing 0's" immediately, followed by "Checking Status: Timing 1's", "Checking Status: Timing 2's", and so on every two seconds until Enter is pressed.
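If the interval needs to change later, or the timer must be stopped before the process exits, the same Timer object supports this through its Change and Dispose methods. A short sketch (my own addition; the class name TimerControlDemo and the specific timings are illustrative):

using System;
using System.Threading;

public class TimerControlDemo
{
    public static void Main()
    {
        // Start after 0 ms, then fire every 2000 ms.
        Timer timer = new Timer(new TimerCallback(CheckStatus), "Timing", 0, 2000);

        Thread.Sleep(6000);
        timer.Change(0, 500);          // switch to a 500 ms period
        Console.WriteLine("Main thread: period changed to 500 ms");

        Thread.Sleep(3000);
        timer.Dispose();               // stop the timer and release its resources
        Console.WriteLine("Main thread: timer disposed");
        Console.ReadLine();
    }

    static void CheckStatus(object state)
    {
        Console.WriteLine("Checking Status: " + Convert.ToString(state));
    }
}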

3. Calling a method when a kernel object is signaled
Jeffrey Richter writes:

Microsoft researchers found that many applications create threads just to wait for a single kernel object to be signaled. Once the object is signaled, the thread notifies another thread and then loops back to wait for the object to be signaled again. Some developers even write code with several such threads, each waiting on one object. This is an enormous waste of system resources. So if your application currently has threads waiting for single kernel objects to be signaled, the thread pool is again an excellent way to improve performance.

How should it be used?
To have a thread pool thread call your callback method when a kernel object is signaled, you again use static methods defined in the System.Threading.ThreadPool class; specifically, your code must call one of the overloads of the RegisterWaitForSingleObject method.
Its prototypes are as follows:

public static RegisteredWaitHandle RegisterWaitForSingleObject(
    WaitHandle h, WaitOrTimerCallback callback, Object state,
    UInt32 milliseconds, Boolean executeOnlyOnce);

public static RegisteredWaitHandle RegisterWaitForSingleObject(
    WaitHandle h, WaitOrTimerCallback callback, Object state,
    Int32 milliseconds, Boolean executeOnlyOnce);

public static RegisteredWaitHandle RegisterWaitForSingleObject(
    WaitHandle h, WaitOrTimerCallback callback, Object state,
    TimeSpan milliseconds, Boolean executeOnlyOnce);

public static RegisteredWaitHandle RegisterWaitForSingleObject(
    WaitHandle h, WaitOrTimerCallback callback, Object state,
    Int64 milliseconds, Boolean executeOnlyOnce);

The first parameter, h, is the kernel object you want to wait on. The second parameter, callback, is the method to be called. The third parameter, state, is the argument passed to that method. The fourth parameter, milliseconds, is how long the thread pool should wait for the kernel object to be signaled; normally -1 is passed (just as with the WaitForSingleObject function mentioned earlier), meaning an infinite timeout. If the fifth parameter, executeOnlyOnce, is true, the thread pool thread executes the callback method only once; if it is false, the callback method is executed every time the kernel object is signaled.

The callback method must match the WaitOrTimerCallback delegate: public delegate void WaitOrTimerCallback(Object state, Boolean timedOut);

When the callback method is called, it receives the state data and the Boolean value timedOut. If timedOut is false, the method knows it was called because the kernel object was signaled. If timedOut is true, it knows it was called because the kernel object was not signaled within the specified time. The callback method should perform whatever action is required in either case.

Here is the code:

using System;
using System.Collections;
using System.Threading;

public class MyClass
{
    public static void Main()
    {
        AutoResetEvent are = new AutoResetEvent(false);   // auto-reset event object
        // Ask the thread pool to call EventSignalled whenever "are" is signaled.
        RegisteredWaitHandle rwh = ThreadPool.RegisterWaitForSingleObject(
            are, new WaitOrTimerCallback(EventSignalled), null, -1, false);

        for (Int32 x = 0; x < 5; x++)
        {
            Thread.Sleep(5000);
            are.Set();   // signal the event; a pool thread runs the callback
        }

        rwh.Unregister(null);
        Console.WriteLine("Hit Enter to terminate the sample");
        Console.ReadLine();
    }

    static void EventSignalled(object state, Boolean timedOut)
    {
        if (timedOut)
            Console.WriteLine("Timed-out while waiting for the AutoResetEvent.");
        else
            Console.WriteLine("The AutoResetEvent became signalled.");
    }
}

When run, "The AutoResetEvent became signalled." is printed five times, once every five seconds, after which the wait is unregistered.
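To see the timedOut branch actually taken, you can register with a finite timeout that is shorter than the interval at which the event is signaled. A small variation on the program above (my own sketch; the class name TimeoutDemo and the timings are illustrative):

using System;
using System.Threading;

public class TimeoutDemo
{
    public static void Main()
    {
        AutoResetEvent are = new AutoResetEvent(false);

        // Wait at most 2000 ms per cycle; the event is only set after 5000 ms,
        // so the callback runs a few times with timedOut == true in between.
        RegisteredWaitHandle rwh = ThreadPool.RegisterWaitForSingleObject(
            are, new WaitOrTimerCallback(EventSignalled), null, 2000, false);

        Thread.Sleep(5000);
        are.Set();

        Thread.Sleep(1000);
        rwh.Unregister(null);
        Console.WriteLine("Hit Enter to terminate the sample");
        Console.ReadLine();
    }

    static void EventSignalled(object state, Boolean timedOut)
    {
        if (timedOut)
            Console.WriteLine("Timed-out while waiting for the AutoResetEvent.");
        else
            Console.WriteLine("The AutoResetEvent became signalled.");
    }
}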

Jeffrey Richter's original article: http://blog.chinaunix.net/article.php?articleId=43400&blogId=5958
