[C# Advanced Series] 26. Compute-Bound Asynchronous Operations

What is a compute-bound asynchronous operation? When a thread is busy doing computation on a CPU, the work is said to be compute-bound.

The counterpart is I/O-bound work, where the operation is handed off to an I/O device (keyboard/mouse, network, file, and so on).

The previous chapter, on threads, described how to perform compute-bound work with dedicated threads. However, creating dedicated threads is expensive, and too many threads also waste memory, so this chapter discusses a better approach: the thread pool.

CLR Thread Pool

The CLR contains code to manage its own thread pool. A thread pool is a set of threads available for use by your application. There is one thread pool per CLR, and it is shared by all the AppDomains in that CLR.

No threads exist in the thread pool during CLR initialization.

The thread pool maintains a queue of operation requests. When an application wants to perform an asynchronous operation, it calls a method that appends an entry to this queue.

The thread pool's code extracts entries from the queue and dispatches each one to a thread pool thread. If the pool contains no threads, a new thread is created.

When a thread pool thread completes its task, the thread is not destroyed. Instead, it returns to the pool, enters an idle state, and waits to serve another request.

Because the thread is not destroyed and a new thread does not have to be created for the next asynchronous operation, very little additional performance cost is incurred.

If the application queues many entries, the thread pool initially tries to service all of them with a single thread. If entries are being queued faster than that thread can process them, additional threads are created.

If requests stop coming in, the pool may end up holding many threads that do nothing. After a period of time, these idle thread pool threads wake up, terminate themselves, and release their resources.

Put simply: just as the garbage collector automatically manages memory for us, the thread pool automatically manages threads for us.

Using the Thread Pool to Perform a Simple Compute-Bound Operation

The code needs little explanation. Compare it with the previous chapter's version that used a dedicated thread for the same compute-bound operation:

static void Main(string[] args)
{
    ThreadPool.QueueUserWorkItem(ThreadCallback, "hello");
    Console.WriteLine("The work item has been queued to the thread pool");
    Console.Read();
}

private static void ThreadCallback(Object state)
{
    Thread.Sleep(10000);
    if (state.GetType() == typeof(string))
        Console.WriteLine("this is a string");
    else
        Console.WriteLine("not recognized");
}

Execution Context

Each thread is associated with an execution context data structure.

The execution context includes security settings (the compressed stack, the Thread's Principal property, and the Windows identity), host settings (see System.Threading.HostExecutionContextManager), and logical call context data (see the LogicalSetData and LogicalGetData methods of System.Runtime.Remoting.Messaging.CallContext).

As a thread executes code, some operations are affected by the values in the thread's execution context.

When a thread uses another (helper) thread to perform a task, the former's execution context is copied to the latter. This ensures that any operations on the helper thread use the same security and host settings, and that any data stored in the initiating thread's logical call context is also available to the helper thread.

By default, the CLR automatically copies the initiating thread's execution context to any helper thread.

This has a performance cost, because it takes time to gather the context information and copy it to the helper thread; if the helper thread in turn spawns more helper threads, the cost grows further.

The System.Threading namespace has an ExecutionContext class that lets you control whether a thread's execution context is copied to another thread.

Three members are commonly used: SuppressFlow (stop copying the execution context), RestoreFlow (resume copying it), and IsFlowSuppressed (check whether copying is currently suppressed).

The code makes this easier to see:

static void Main(string[] args)
{
    CallContext.LogicalSetData("operation", "a key/value pair placed in the execution context");

    ThreadPool.QueueUserWorkItem(state =>
        Console.WriteLine("First: " + CallContext.LogicalGetData("operation")));

    ExecutionContext.SuppressFlow();   // stop copying the execution context to helper threads
    ThreadPool.QueueUserWorkItem(state =>
        Console.WriteLine("Second: " + CallContext.LogicalGetData("operation")));

    ExecutionContext.RestoreFlow();    // resume copying the execution context to helper threads
    ThreadPool.QueueUserWorkItem(state =>
        Console.WriteLine("Third: " + CallContext.LogicalGetData("operation")));

    Console.Read();
}

The code execution result is as follows:

Because these are asynchronous operations, the order of execution varies from run to run, but what matters is the result: for the second work item, the execution context was not copied to the thread pool thread, so it cannot read the logical call context data.

This applies not only to thread pool threads but also to dedicated threads.

Cooperative Cancellation and Timeout

.NET provides a standard cooperative cancellation pattern, which means that an operation must explicitly support being cancelled.

In other words, both the code that performs the operation and the code that attempts to cancel it must use the types described in this section.

To make an operation cancellable, you must first create a CancellationTokenSource object.

This object contains all of the state related to managing cancellation. From its Token property you obtain one or more CancellationToken instances and pass them to the operations that should be cancellable.

CancellationToken is a lightweight value type containing a single private field: a reference to its CancellationTokenSource object.

Inside a compute-bound loop, you periodically check the CancellationToken's IsCancellationRequested property to decide whether the loop should terminate early.

The following is the Demo code:

static void Main(string[] args)
{
    var cts = new CancellationTokenSource();
    ThreadPool.QueueUserWorkItem(state => Farm(cts.Token, 850));   // start farming gold

    Console.WriteLine("Press Enter to cancel farming");
    Console.ReadLine();
    cts.Cancel();   // cancel the Farm operation
    Console.Read();
}

// Farm until the specified amount of gold has been earned
private static void Farm(CancellationToken token, int money)
{
    var currentMoney = 0;
    while (currentMoney < money)
    {
        if (token.IsCancellationRequested)
        {
            Console.WriteLine("Farming cancelled");
            break;
        }
        currentMoney += 50;
        Console.WriteLine("Troy has farmed " + currentMoney + " gold");
        Thread.Sleep(1000);   // earn 50 gold per second
    }
}

A few more points:

If you never want the Farm operation to be cancellable, you can pass CancellationToken.None instead.

You can call CancellationToken's Register method to register one or more callbacks that are invoked when the token is cancelled.

You can use CancellationTokenSource's CreateLinkedTokenSource method to link several tokens into a new CancellationTokenSource; if any of the linked tokens is cancelled, the new source is cancelled as well.

Passing a delay to the CancellationTokenSource constructor makes the source cancel itself automatically after the specified period of time, which is how timeouts are implemented.
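Here is a minimal sketch of these three features together; it is not from the original article, and the 5-second timeout and callback messages are invented for illustration:

using System;
using System.Threading;

static class CancellationExtrasDemo
{
    static void Main()
    {
        // A source constructed with a delay cancels itself after 5000 ms.
        var timeoutCts = new CancellationTokenSource(5000);

        var manualCts = new CancellationTokenSource();

        // A linked source is cancelled as soon as ANY of the linked tokens is cancelled.
        var linkedCts = CancellationTokenSource.CreateLinkedTokenSource(timeoutCts.Token, manualCts.Token);

        // Register callbacks that run when the linked token is cancelled.
        linkedCts.Token.Register(() => Console.WriteLine("Callback 1: linked token cancelled"));
        linkedCts.Token.Register(() => Console.WriteLine("Callback 2: linked token cancelled"));

        manualCts.Cancel();   // cancelling either source also cancels the linked source
        Console.ReadLine();
    }
}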

Task

ThreadPool's QueueUserWorkItem method starts an asynchronous compute-bound operation, but it gives us no mechanism to know when the operation completes and no way to obtain a return value from it.

To overcome these limitations (and solve other problems), Microsoft introduced the concept of a task. (The types live in the System.Threading.Tasks namespace.)

The following code compares the thread pool and task approaches.

ThreadPool.QueueUserWorkItem(ThreadCallback, "hello");   // thread pool way

new Task(ThreadCallback, "hello").Start();                // task way 1
Task.Run(() => ThreadCallback("hello"));                  // task way 2

When constructing a Task object, you can also pass a CancellationToken used to cancel the task, or TaskCreationOptions flags to control how the task executes.
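For example, a minimal sketch (the lambda body is just a placeholder, and the usual using directives for System.Threading and System.Threading.Tasks are assumed):

var cts = new CancellationTokenSource();

// Pass the cancellation token and a creation flag to the Task constructor.
var longTask = new Task(() => Console.WriteLine("long-running work"),
                        cts.Token,
                        TaskCreationOptions.LongRunning);
longTask.Start();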

Next, let's write some code to see how to wait for a task to complete and obtain its result.

static void Main(string[] args)
{
    Task<Tuple<Boolean, String>> myTask = new Task<Tuple<bool, string>>(BountyTask, 100);
    myTask.Start();

    Thread.Sleep(10000);
    Console.WriteLine("The task is in progress");

    myTask.Wait();   // block until the task finishes
    Console.WriteLine("Task result: " + myTask.Result.Item2);
    Console.ReadLine();
}

private static Tuple<Boolean, String> BountyTask(object state)
{
    Console.WriteLine("Troy took the bounty task and earned {0} gold", state.ToString());
    return new Tuple<bool, string>(true, "successful");
}

Tuple<Boolean, String> is the result type returned by the task; the Task's generic type argument must match the return type of the function it invokes.

The result is as follows:

From this we can see that the task is indeed executed asynchronously and that the correct result is returned through myTask.Result.

Calling Wait() blocks the current thread until the task finishes. (If you call Wait without calling Start first, the task still runs, but the calling thread is not blocked waiting on another thread; it executes the task itself and Wait returns as soon as the task is done.)

In addition to waiting for a single task, Task provides two static methods, WaitAny and WaitAll, that block the calling thread on an array of tasks: WaitAny returns when any one of them completes, and WaitAll returns when all of them have completed.
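A minimal sketch of the two methods (the sleep durations are arbitrary, and the usual using directives are assumed):

Task[] tasks =
{
    Task.Run(() => Thread.Sleep(1000)),
    Task.Run(() => Thread.Sleep(2000)),
    Task.Run(() => Thread.Sleep(3000))
};

// WaitAny blocks until any one task finishes and returns its index in the array.
int firstFinished = Task.WaitAny(tasks);
Console.WriteLine("Task {0} finished first", firstFinished);

// WaitAll blocks until every task in the array has finished.
Task.WaitAll(tasks);
Console.WriteLine("All tasks finished");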

Cancelling a Task

You can also use CancellationToken to cancel a task.

Most of the code stays the same; however, when checking for cancellation inside the task, you should call the CancellationToken's ThrowIfCancellationRequested() method instead of querying IsCancellationRequested.

The reason is that, unlike work items queued with QueueUserWorkItem, tasks have a notion of completion and can return a value, so there must be a way to distinguish a task that ran to completion from one that was cancelled or faulted.

If the task throws the cancellation exception, you know it did not run to completion.

Code:

static void Main(string[] args)
{
    var cts = new CancellationTokenSource();
    Task<Tuple<Boolean, String>> myTask = Task.Run(() => BountyTask(cts.Token, 100), cts.Token);

    Thread.Sleep(5000);
    cts.Cancel();   // cancel the bounty task

    try
    {
        Console.WriteLine("Task result: " + myTask.Result.Item2);
    }
    catch (AggregateException ex)
    {
        // Treat any OperationCanceledException as handled.
        // Any other exception causes a new AggregateException to be thrown
        // containing only the unhandled exceptions.
        ex.Handle(e => e is OperationCanceledException);
        Console.WriteLine("The task was cancelled");
    }
    catch
    {
        Console.WriteLine("Unknown exception");
    }
    Console.Read();
}

private static Tuple<Boolean, String> BountyTask(CancellationToken ct, object state)
{
    for (int i = 0; i < 100; i++)
    {
        ct.ThrowIfCancellationRequested();
        Console.WriteLine("Troy took the bounty task and earned {0} gold", state.ToString());
        Thread.Sleep(1000);
    }
    return new Tuple<Boolean, String>(true, "successful");
}

Result:

Starting a New Task Automatically When Another Task Completes

Software that scales well should not block threads.

If you call Wait, or query a task's Result property, before the task has completed, the calling thread blocks and the thread pool will most likely create a new thread.

The following approach lets you know when a task finishes without blocking any thread.

// Create and start the task
Task<Tuple<Boolean, String>> myTask = Task.Run(() => BountyTask(cts.Token, 100), cts.Token);

// ContinueWith registers a second task that runs when myTask finishes, without blocking any thread
Task myTask1 = myTask.ContinueWith(task => Console.WriteLine("Task result: " + task.Result.Item2));

You can also pass TaskContinuationOptions bit flags to control when the continuation runs. By default, if no flags are specified, the second task runs regardless of whether the first task ran to completion, was cancelled, or faulted.

A task created within the delegate of another task is called a child task; how the parent task and its children relate to each other is controlled by TaskCreationOptions or TaskContinuationOptions flags.

In fact, a Task object internally holds a collection of ContinueWith tasks; that is, you can call ContinueWith on the same task multiple times, and when the task completes, all of its continuations are started. A sketch follows.
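A minimal sketch of multiple continuations controlled by TaskContinuationOptions (the messages and the value 42 are illustrative; the usual using directives are assumed):

var parent = Task.Run(() => 42);

// Several continuations can be registered on the same task;
// the flags decide which ones run, depending on how the parent ended.
parent.ContinueWith(t => Console.WriteLine("Result: " + t.Result),
                    TaskContinuationOptions.OnlyOnRanToCompletion);
parent.ContinueWith(t => Console.WriteLine("Parent was cancelled"),
                    TaskContinuationOptions.OnlyOnCanceled);
parent.ContinueWith(t => Console.WriteLine("Parent faulted: " + t.Exception.InnerException.Message),
                    TaskContinuationOptions.OnlyOnFaulted);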

Task Internals

Each Task object has a set of fields that constitute the Task status.

Tasks are useful, but they come at a cost: memory must be allocated for all this state. If you don't need the extra features of tasks (knowing when the operation ends, getting a return value, and so on), ThreadPool.QueueUserWorkItem gives better resource utilization.

A Task object's read-only Status property returns a TaskStatus enumeration value that indicates the task's state.

When a task is constructed, its status is Created. After it is started, its status is WaitingToRun. While it is actually running on a thread, its status is Running. When it has stopped running and is waiting for any child tasks, its status is WaitingForChildrenToComplete.

When a task is finished, it ends up in one of these states: RanToCompletion, Canceled, or Faulted.

If a task faults, you can query its Exception property to obtain the unhandled exception the task threw; it always returns an AggregateException object whose InnerExceptions collection contains all of the unhandled exceptions.

Task objects created by calling methods such as ContinueWith, ContinueWhenAll, ContinueWhenAny, or FromAsync are in the WaitingForActivation state, which indicates that the task was created implicitly and will be started automatically.
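A minimal sketch of observing these states on a deliberately faulting task (the exception message is made up, and exact intermediate states depend on timing):

var task = new Task(() => { throw new InvalidOperationException("boom"); });
Console.WriteLine(task.Status);     // Created

task.Start();
Console.WriteLine(task.Status);     // WaitingToRun (or Running, depending on timing)

try { task.Wait(); }
catch (AggregateException) { /* swallowed for the demo */ }

Console.WriteLine(task.Status);                                  // Faulted
Console.WriteLine(task.Exception.InnerExceptions[0].Message);    // "boom"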

Task Factory

Sometimes you need to create a group of Task objects that share the same configuration. To avoid mechanically passing the same parameters to every Task constructor, you can create a task factory that encapsulates the common configuration.

The TaskFactory type is for this purpose.

When constructing the factory, you pass in the defaults that all tasks created from it will use: a CancellationToken, a TaskScheduler, TaskCreationOptions, and TaskContinuationOptions.

Here is a simple demonstration:

var tf = new TaskFactory<Int32>(cts.Token,
                                TaskCreationOptions.AttachedToParent,
                                TaskContinuationOptions.ExecuteSynchronously,
                                TaskScheduler.Default);

// Create three tasks with the factory
var childTasks = new[]
{
    tf.StartNew(() => { Console.WriteLine("Task 1"); return 1; }),
    tf.StartNew(() => { Console.WriteLine("Task 2"); return 2; }),
    tf.StartNew(() => { Console.WriteLine("Task 3"); return 3; })
};

tf.ContinueWhenAll(childTasks,
        completedTasks => completedTasks.Where(t => !t.IsFaulted && !t.IsCanceled).Max(t => t.Result),
        CancellationToken.None)
  .ContinueWith(t => Console.WriteLine("The final result of the tasks is " + t.Result),
        TaskContinuationOptions.ExecuteSynchronously);

Console.Read();

This is just the basic usage. To support cancellation, you only need to pass the token when creating the factory; once it is cancelled, every task in the array is cancelled.

Task Scheduler

The task infrastructure is very flexible, and the TaskScheduler object plays a central part in it.

A TaskScheduler is responsible for scheduling tasks for execution. The FCL provides two types derived from TaskScheduler: the thread pool task scheduler and the synchronization context task scheduler. By default, all applications use the thread pool task scheduler.

The synchronization context task scheduler is suitable for applications with a graphical user interface. It schedules all tasks onto the application's GUI thread so that task code can safely update UI components; it does not use the thread pool.

You obtain a reference to the synchronization context task scheduler by calling TaskScheduler's static FromCurrentSynchronizationContext() method.

I don't have much use for this scheduler myself, since I'm a web developer; the examples are easy enough to follow, so I won't reproduce them here.

Parallel's Static For, ForEach, and Invoke Methods

The Parallel class, as its name suggests, is about parallelism: it is mainly used to turn ordinary for and foreach loops into task-based, multithreaded loops to improve performance.

The System.Threading.Tasks.Parallel class encapsulates these scenarios, as the following code shows:

// Sequential for loop doing some work
for (int i = 0; i < 1000; i++)
    DoSomething(i);

// Parallel alternative: the thread pool processes the work in parallel
Parallel.For(0, 1000, i => DoSomething(i));

// Sequential foreach loop doing some work
foreach (var item in collection)
    DoSomething(item);

// Parallel alternative
Parallel.ForEach(collection, item => DoSomething(item));
// If you can use For instead of ForEach, prefer For, because it is faster

// Execute three methods sequentially
Method1();
Method2();
Method3();

// Parallel alternative
Parallel.Invoke(() => Method1(), () => Method2(), () => Method3());

If the calling thread finishes its share of the work before the thread pool threads complete theirs, the calling thread suspends itself and waits until all the work is done.

Note, however, that the Parallel methods only make sense when the items of work can actually run in parallel; if the work must execute sequentially, an ordinary loop is the better choice.

Parallel pays off when there are many work items (many loop iterations) or when each iteration does a lot of work; otherwise the overhead may outweigh the benefit.

All of the Parallel methods accept a ParallelOptions object that configures how the work is performed.

The loop body can also receive a ParallelLoopState object that lets it influence how the loop executes.

Its Stop method stops the loop as soon as possible, while Break stops the loop from processing any items beyond the current one. A sketch follows.
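A minimal sketch of both options (MaxDegreeOfParallelism = 2 and the break index of 100 are arbitrary choices; the usual using directives are assumed):

var options = new ParallelOptions { MaxDegreeOfParallelism = 2 };

Parallel.For(0, 1000, options, (i, loopState) =>
{
    if (i == 100)
        loopState.Break();   // no iterations with an index beyond 100 will be started
    // loopState.Stop();     // Stop would instead halt the loop as soon as possible
    Console.WriteLine(i);
});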

Parallel Language Integrated Query (PLINQ)

LINQ provides a simple syntax for querying data collections, but an ordinary LINQ query is processed by a single thread that works through the items sequentially; this is a sequential query.

To improve performance, you can use PLINQ (Parallel LINQ), which converts a sequential query into a parallel query.

All of PLINQ's functionality is implemented in the static System.Linq.ParallelEnumerable class (in System.Core.dll), so you must import the System.Linq namespace with a C# using directive.

The parallel versions of Where, Select, and the other operators are extension methods of the System.Linq.ParallelQuery<T> type.

The following is a simple example:

List<string> nameList = new List<string>() { "Troy", "Small Three", "Small Four" };

var query = from name in nameList.AsParallel()   // AsParallel enables parallel querying; the list becomes a ParallelQuery<string>
            let myName = "my name is " + name
            where name == "Troy"
            select myName;

Parallel.ForEach(query, l => Console.WriteLine(l));
// query.ForAll(l => Console.WriteLine(l));   // ParallelQuery also has a ForAll method that runs an action for each result

Console.Read();

The example above only demonstrates the mechanics; it is not tuned for efficiency.

As the example shows, PLINQ looks just like LINQ; all you do is call AsParallel on the collection.

If you want to convert a parallel query back into a sequential query, you can call AsSequential().

In the example above, a sequential query would actually be much faster. Also, the Console class synchronizes threads so that only one thread at a time can access the console window, so concurrent output can hurt performance.

Because PLINQ processes data in parallel, the results come back unordered. To preserve the order, call the AsOrdered method; the items are then processed in groups and the groups are merged back in order, which, as you can imagine, also costs some performance.

The following operators also produce unordered output: Distinct, Except, Intersect, Union, Join, GroupBy, GroupJoin, and ToLookup. If ordered results are required after these operations, the AsOrdered method must be called.
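A minimal sketch of AsOrdered (squaring a range of numbers is just a placeholder workload; the usual using directives, including System.Linq, are assumed):

var numbers = Enumerable.Range(0, 10);

// Without AsOrdered the squares may be produced in any order;
// AsOrdered preserves the source order at some extra cost.
var squares = numbers.AsParallel()
                     .AsOrdered()
                     .Select(n => n * n);

foreach (var s in squares)
    Console.WriteLine(s);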

PLINQ also provides some additional methods:

WithCancellation (allows the query to be cancelled via a CancellationToken),

WithDegreeOfParallelism (specifies the maximum number of threads to use),

WithExecutionMode (accepts a ParallelExecutionMode flag),

WithMergeOptions (PLINQ merges the results produced by multiple threads, so you can pass a ParallelMergeOptions value to control how the results are buffered and merged: buffering tends to be faster, while not buffering saves memory). A sketch of these options follows.
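A minimal sketch chaining these options (the token, thread count, and even-number filter are all arbitrary choices; the usual using directives, including System.Linq, are assumed):

var cts = new CancellationTokenSource();

var evens = Enumerable.Range(0, 1000000)
                      .AsParallel()
                      .WithCancellation(cts.Token)                        // honour cancellation requests
                      .WithDegreeOfParallelism(4)                         // use at most 4 threads
                      .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                      .WithMergeOptions(ParallelMergeOptions.NotBuffered) // save memory instead of buffering
                      .Where(n => n % 2 == 0);

Console.WriteLine(evens.Count());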

Performing Compute-Bound Operations Periodically

The System.Threading namespace has a Timer class that can be used to perform an operation periodically.

Internally, the thread pool uses a single thread for all Timer objects. That thread knows when the next Timer is due; when it fires, the thread wakes up and internally calls ThreadPool's QueueUserWorkItem to add the work item to the thread pool queue.

This is a common pitfall: as the garbage collection chapter explained, if the code no longer appears to use the Timer object, the Timer can be collected, so a variable must keep the Timer object alive. A sketch follows.
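A minimal sketch using System.Threading.Timer, with a static field keeping the timer alive (the 2-second period is arbitrary):

using System;
using System.Threading;

static class TimerDemo
{
    // Held in a field so the garbage collector cannot collect the Timer while it is still needed.
    private static Timer s_timer;

    static void Main()
    {
        // Fire the callback immediately (due time 0), then every 2000 ms.
        s_timer = new Timer(state => Console.WriteLine("Time is {0}", DateTime.Now),
                            null, 0, 2000);
        Console.ReadLine();
    }
}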

Another way to execute an operation periodically is to use Task's static Delay method together with C#'s async and await keywords. (These are covered in the next chapter; here is just a simple example.)

static void Main(string[] args)
{
    DoSomethingAsync();
    Console.Read();
}

private static async void DoSomethingAsync()
{
    while (true)
    {
        Console.WriteLine("Time is {0}", DateTime.Now);
        // await Task.Delay waits two seconds without blocking the thread;
        // the thread returns to the caller, and after 2 seconds a thread
        // pool thread resumes the loop after the await.
        await Task.Delay(2000);
    }
}

How the Thread Pool Manages Its Threads

The CLR lets developers set the maximum number of threads the thread pool can create. (However, setting this limit can cause starvation and deadlock.)

The maximum number of threads by default is about 1000.

The ThreadPool class has static methods such as GetMaxThreads and SetMinThreads that can be used to query or limit the number of threads in the pool; however, the author recommends against changing these limits. A sketch of the query methods follows.
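For reference, a minimal sketch of querying the current limits (the setters are deliberately not called, in line with the advice above):

int workerThreads, ioThreads;

ThreadPool.GetMaxThreads(out workerThreads, out ioThreads);
Console.WriteLine("Max worker threads: {0}, max I/O threads: {1}", workerThreads, ioThreads);

ThreadPool.GetMinThreads(out workerThreads, out ioThreads);
Console.WriteLine("Min worker threads: {0}, min I/O threads: {1}", workerThreads, ioThreads);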

 

ThreadPool.QueueUserWorkItem and the Timer class always place work items in the thread pool's single global queue (processed first-in, first-out). Because multiple worker threads may take items from this queue at the same time, they all contend for one thread synchronization lock, which guarantees that two workers never grab the same item.

Tasks behave differently: when a non-worker thread schedules a task (using the default task scheduler), the task is added to the global queue.

Each worker thread, however, has its own local queue, and when a worker thread schedules a task, the task goes into that thread's local queue. When a worker thread is ready for more work, it checks its local queue first; because the worker thread is the only thread allowed to access its own local queue, no synchronization lock is needed there, so adding and removing tasks is very fast. (The local queue is processed in last-in, first-out order.)

If a worker thread's local queue is empty, it tries to take work from another worker thread's local queue, which does require acquiring a thread synchronization lock.

If the local queues of all worker threads are empty, the worker thread checks the global queue.

If the global queue is also empty, the worker thread puts itself to sleep and waits for work to show up.

If a worker thread stays asleep for too long, it wakes up and destroys itself, releasing its resources.

 
