[Translated] Concurrent Programming in .NET Core

Source: Internet
Author: User
Tags: dotnet

Original address: http://www.dotnetcurry.com/dotnet/1360/concurrent-programming-dotnet-core

Every computer we buy today has a multi-core CPU, allowing it to execute multiple instructions in parallel. Operating systems take advantage of this architecture by scheduling processes onto different cores.
However, asynchronous I/O operations and parallel processing can also help us improve the performance of individual applications.
In .NET Core, tasks are the main abstraction for concurrent programming, but there are other supporting classes that can make our work easier.

Concurrent programming: asynchronous vs. multithreaded code

Parallel programming is a broad term, and we should begin by examining the difference between asynchronous method calls and actual multithreading.
Although .NET Core uses tasks to express both concepts, there is an important difference in how they are handled internally.
Asynchronous methods run in the background while the calling thread does other work. This makes them well suited to I/O-bound work, i.e., operations that spend most of their time waiting on input and output, such as file or network access.
Whenever possible, it makes sense to use asynchronous I/O methods instead of their synchronous counterparts. In the meantime, the calling thread can handle user interaction in a desktop application or process other requests in a server application, instead of just waiting for the operation to complete.
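
For example, here is a minimal sketch of an I/O-bound asynchronous method (it assumes .NET Core 2.0 or later, where File.ReadAllTextAsync is available, and a hypothetical log file path):

// using System.IO; using System.Threading.Tasks;
public async Task<int> CountLogCharactersAsync(string path)
{
    // the calling thread is released while the file is being read
    var text = await File.ReadAllTextAsync(path);
    return text.Length;
}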

You can read more about calling async methods with async and await in my article "Asynchronous Programming in C# using Async Await - Best Practices" from the DNC Magazine (September issue).

Computationally intensive methods, in contrast, require CPU cycles to do their work and can only run in the background on their own dedicated thread. The number of CPU cores limits how many threads can actually run in parallel; the operating system is responsible for switching between the remaining threads, giving each of them a chance to execute its code.
These methods still execute concurrently, but not necessarily in parallel. Although this means they do not necessarily execute at the same instant, they can execute while other methods are paused.


Figure: parallel vs. concurrent execution


The rest of this article focuses on this second kind of concurrency: multithreaded programming in .NET Core.

Task Parallel Library

.NET Framework 4 introduced the Task Parallel Library (TPL) as the preferred API for writing concurrent code, and .NET Core uses the same programming model.
To run a piece of code in the background, you need to wrap it in a task:

var backgroundTask = Task.Run(() => DoComplexCalculation(42));
// do other work
var result = backgroundTask.Result;

The Task.Run method accepts either a function (Func) when the task needs to return a result, or an action (Action) when it does not. In either case, a lambda expression can be used, as in the example above, where I call a long-running method with a parameter.
A thread from the thread pool will process the task. The .NET Core runtime includes a default task scheduler that uses the thread pool to queue and execute tasks. You can implement your own scheduling algorithm by deriving from the TaskScheduler class instead of using the default one, but that is beyond the scope of this article.
As you can see above, I used the Result property to join the background task back to the calling thread and retrieve its result. For tasks that do not return a result, I could call Wait() instead. Both methods block until the background task completes.
To avoid blocking the calling thread (for example, in an ASP.NET Core application), you can use the await keyword instead:

var backgroundTask = Task.Run(() => DoComplexCalculation(42));
// do other work
var result = await backgroundTask;

This frees the calling thread so it can handle other incoming requests. Once the background task completes, an available worker thread will resume processing the request. Of course, the controller action method must be asynchronous as well:

public async Task<IActionResult> Index()
{
    // method body
}

Handling Exceptions

When you join a background task back to the calling thread, any exceptions thrown inside the task are propagated to the calling thread:

    • If you use Result or Wait(), the exceptions are wrapped in an AggregateException; the exception actually thrown inside the task is stored in its InnerException property.
    • If you use await, the original exception is rethrown unwrapped.

In both cases, the call stack information is preserved.
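
As a quick illustrative sketch of the difference (reusing the DoComplexCalculation call from the earlier examples; assume it throws):

var failingTask = Task.Run(() => DoComplexCalculation(42));

// Result (or Wait()) wraps the exception in an AggregateException
try
{
    var result = failingTask.Result;
}
catch (AggregateException e)
{
    var actual = e.InnerException; // the exception thrown inside the task
}

// await rethrows the original exception unwrapped
try
{
    var result = await failingTask;
}
catch (Exception e)
{
    // e is the original exception, not an AggregateException
}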

Cancel a task

Because tasks can be long-running, you might want the option to cancel one prematurely. To make that possible, pass a cancellation token when the task is created and then use the same token to trigger the cancellation later:

var tokenSource = new CancellationTokenSource();
var cancellableTask = Task.Run(() =>
{
    for (int i = 0; i < 100; i++)
    {
        if (tokenSource.Token.IsCancellationRequested)
        {
            // clean up before exiting
            tokenSource.Token.ThrowIfCancellationRequested();
        }
        // do long-running processing
    }
    return 42;
}, tokenSource.Token);

// cancel the task
tokenSource.Cancel();

try
{
    await cancellableTask;
}
catch (OperationCanceledException e)
{
    // handle the exception
}

To actually cancel the task prematurely, the code inside the task needs to check the cancellation token and react when cancellation is requested: after performing any necessary cleanup, it calls ThrowIfCancellationRequested() to exit the task. This method throws an OperationCanceledException, which the calling thread can then handle appropriately.

Coordinating multiple tasks

If you need to run multiple background tasks, the following methods can help you coordinate them.
To run several tasks concurrently, simply start them one after another and collect references to them, for example in an array:

var backgroundTasks = new[]
{
    Task.Run(() => DoComplexCalculation(1)),
    Task.Run(() => DoComplexCalculation(2)),
    Task.Run(() => DoComplexCalculation(3))
};

Now you can use the static methods of the Task class to wait for them to complete, either synchronously or asynchronously:

// wait synchronously
Task.WaitAny(backgroundTasks);
Task.WaitAll(backgroundTasks);
// wait asynchronously
await Task.WhenAny(backgroundTasks);
await Task.WhenAll(backgroundTasks);

The asynchronous versions of both methods return a task themselves, which can be manipulated like any other task. To get the results of the individual tasks, you inspect each task's Result property.
Handling exceptions from multiple tasks is a bit trickier. The WaitAll and WhenAll methods throw an exception as soon as any of the tasks fails. With WaitAll, however, all the exceptions are collected in the AggregateException's InnerExceptions property, while with WhenAll only the first exception is rethrown. To determine which task threw which exception, you have to check the Status and Exception properties of each task individually.
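
For example, a minimal sketch of inspecting the individual tasks after an awaited WhenAll fails (using the backgroundTasks array from above):

try
{
    await Task.WhenAll(backgroundTasks);
}
catch
{
    // only the first exception reaches this catch block,
    // so inspect every task to see how it ended
    foreach (var task in backgroundTasks)
    {
        if (task.Status == TaskStatus.Faulted)
        {
            // task.Exception is an AggregateException with the full details
            var details = task.Exception.InnerExceptions;
        }
    }
}
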
Extra care is required with WaitAny and WhenAny. They wait until the first task completes (whether it succeeds or fails) and do not throw an exception even when a task fails; they only return the index of the completed task (WaitAny) or the completed task itself (WhenAny). You have to await the completed task or access its Result property to have the exception thrown, for example:

var completedTask = await Task.WhenAny(backgroundTasks);
try
{
    var result = await completedTask;
}
catch (Exception e)
{
    // handle exception
}

If you want to run multiple tasks one after another instead of concurrently, you can use continuations:

var compositeTask = Task.Run(() => DoComplexCalculation(42))
    .ContinueWith(previous => DoAnotherComplexCalculation(previous.Result),
        TaskContinuationOptions.OnlyOnRanToCompletion);

The ContinueWith() method allows you to execute multiple tasks one after another. The continuation receives a reference to the previous task, so it can use its result or inspect its status. You can also add a condition that determines when the continuation runs, for example only if the previous task completed successfully, or only if it threw an exception. This gives you more flexibility than simply awaiting several tasks in sequence.
Of course, continuations can be combined with everything discussed so far: exception handling, cancellation, and running tasks in parallel. That leaves plenty of room for combining the features in different ways:

var multipleTasks = new[]
{
    Task.Run(() => DoComplexCalculation(1)),
    Task.Run(() => DoComplexCalculation(2)),
    Task.Run(() => DoComplexCalculation(3))
};
var combinedTask = Task.WhenAll(multipleTasks);

var successfulContinuation = combinedTask.ContinueWith(task =>
    CombineResults(task.Result),
    TaskContinuationOptions.OnlyOnRanToCompletion);
var failedContinuation = combinedTask.ContinueWith(task =>
    HandleError(task.Exception),
    TaskContinuationOptions.NotOnRanToCompletion);

await Task.WhenAny(successfulContinuation, failedContinuation);

Task synchronization

If the tasks are completely independent, the coordination methods we have just seen are enough. However, as soon as they need to share data, additional synchronization is required to prevent data corruption.
When two or more threads update a data structure at the same time, its data can quickly become inconsistent. Take this example code:

var counters = new Dictionary<int, int>();

if (counters.ContainsKey(key))
{
    counters[key]++;
}
else
{
    counters[key] = 1;
}

When multiple threads execute the above code simultaneously, a particular ordering of instructions across the threads can produce incorrect results, for example:

    • Both threads check whether the same key already exists in the collection, and it does not.
    • As a result, both take the else branch and set the value for this key to 1.
    • The final value is 1 instead of 2, which would have been the result if the code had executed sequentially.

The part of the code above that only one thread may enter at a time is called a critical section. In C#, you protect it with the lock statement:

var counters = new Dictionary<int, int>();

lock (syncObject)
{
    if (counters.ContainsKey(key))
    {
        counters[key]++;
    }
    else
    {
        counters[key] = 1;
    }
}

With this approach, all threads must share the same syncObject. As a best practice, syncObject should be a dedicated Object instance used exclusively for protecting access to a single critical section, and it should not be accessible from outside the class.
The lock statement allows only one thread at a time inside the enclosed block of code. It blocks the next thread that tries to enter until the previous one exits. This ensures that a thread executes the critical section completely, without being interrupted by another thread. Of course, this reduces parallelism and slows down the overall execution of your code, so you should keep critical sections to a minimum and make them as short as possible.

The lock statement is actually just convenient shorthand for using the Monitor class:

var lockWasTaken = false;
var temp = syncObject;
try
{
    Monitor.Enter(temp, ref lockWasTaken);
    // lock statement body
}
finally
{
    if (lockWasTaken)
    {
        Monitor.Exit(temp);
    }
}

Although you will want to use the lock statement most of the time, the Monitor class can give you additional control when you need it. For example, you can use TryEnter() instead of Enter() and specify a timeout, so that you do not wait indefinitely for the lock to be released.
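
A minimal sketch of such a timeout (assuming the same syncObject as in the examples above; the one-second limit is arbitrary):

if (Monitor.TryEnter(syncObject, TimeSpan.FromSeconds(1)))
{
    try
    {
        // critical section body
    }
    finally
    {
        Monitor.Exit(syncObject);
    }
}
else
{
    // the lock could not be acquired within one second; react accordingly
}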

Other synchronization primitives

Monitor is only one of many synchronization primitives in .NET Core. Depending on the situation, others may be more appropriate.

Mutex is a more heavyweight version of Monitor that relies on the underlying operating system. This allows it to synchronize access to a resource across multiple processes. For synchronization within a single process, Monitor is the recommended alternative.
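
As an illustration only, a rough sketch of cross-process synchronization with a named mutex (the mutex name is made up for this example):

using (var mutex = new Mutex(false, "Global\\MyAppDataFile"))
{
    if (mutex.WaitOne(TimeSpan.FromSeconds(5)))
    {
        try
        {
            // only one process at a time gets past WaitOne()
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}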

SemaphoreSlim and Semaphore limit the number of threads that can access a resource concurrently to a configurable maximum, instead of just one as Monitor does. SemaphoreSlim is more lightweight than Semaphore but is limited to a single process. Whenever possible, you should prefer SemaphoreSlim over Semaphore.
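
A short sketch with SemaphoreSlim, limiting access to at most three concurrent callers (the limit is arbitrary):

var semaphore = new SemaphoreSlim(3);

await semaphore.WaitAsync();
try
{
    // at most three threads execute this block at the same time
}
finally
{
    semaphore.Release();
}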

ReaderWriterLockSlim distinguishes between two ways of accessing a resource. It allows an unlimited number of readers to access the resource simultaneously, but only a single writer to hold the lock at a time. This is a good fit for resources that are thread-safe to read but require exclusive access for modifying data.
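
A minimal sketch protecting a shared dictionary with ReaderWriterLockSlim (the cache and key are placeholders):

var cacheLock = new ReaderWriterLockSlim();
var cache = new Dictionary<int, int>();

// any number of threads may read concurrently
cacheLock.EnterReadLock();
try
{
    cache.TryGetValue(key, out var value);
}
finally
{
    cacheLock.ExitReadLock();
}

// only one thread at a time may write
cacheLock.EnterWriteLock();
try
{
    cache[key] = 42;
}
finally
{
    cacheLock.ExitWriteLock();
}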

AutoResetEvent, ManualResetEvent, and ManualResetEventSlim block incoming threads until they receive a signal (a call to Set()), after which the waiting threads continue executing. AutoResetEvent lets only one thread through and then blocks again until the next call to Set(). ManualResetEvent and ManualResetEventSlim keep letting threads through until Reset() is called. ManualResetEventSlim is more lightweight and is the recommended choice of the three.
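
A short sketch with ManualResetEventSlim, where a background task waits for an initialization step to complete (the work inside is a placeholder):

var initialized = new ManualResetEventSlim(false);

var worker = Task.Run(() =>
{
    initialized.Wait(); // blocks until Set() is called
    // continue with work that depends on the initialization
});

// ... perform the initialization ...
initialized.Set();      // releases all waiting threads until Reset() is called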

Interlocked offers a selection of atomic operations, which are a better alternative to locks and the other synchronization primitives whenever they are applicable:

// non-atomic operation with a lock
lock (syncObject)
{
    counter++;
}

// equivalent atomic operation that doesn't require a lock
Interlocked.Increment(ref counter);

Concurrent Collections

When a critical section is only needed to ensure atomic access to a data structure, a specialized data structure designed for concurrent access can be a better and more efficient alternative. For example, using ConcurrentDictionary instead of Dictionary can simplify the lock statement example:

var counters = new ConcurrentDictionary<int, int>();

counters.TryAdd(key, 0);
lock (syncObject)
{
    counters[key]++;
}

Of course, we might be tempted to simply do the following instead:

counters.AddOrUpdate(key, 1, (oldKey, oldValue) => oldValue + 1);

However, the update delegate of this method executes outside the critical section. Therefore, a second thread could read the same old value before the first thread has written its update, effectively overwriting the first thread's update with its own value and losing one increment. Even concurrent collections are not immune to multithreading issues when used incorrectly.
Another alternative to concurrent collections are immutable collections.
Like concurrent collections, they are thread-safe, but their underlying implementation is different. Any operation that changes the data structure does not modify the original instance. Instead, it returns a changed copy and leaves the original instance unchanged:

var original = new Dictionary<int, int>().ToImmutableDictionary();
var modified = original.Add(key, value);

As a result, any changes made to a collection in one thread are not visible to the other threads, since they still reference the original, unmodified collection. That is why immutable collections are inherently thread-safe.
Of course, this makes them effective for a different set of problems. They work best when multiple threads independently modify data derived from the same input collection, with a final step that merges the changes from all threads. With regular collections, you would need to create a copy of the collection for each thread in advance.
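
A rough sketch of that pattern, assuming a hypothetical ProcessPart method that returns an ImmutableList<int> with each task's results (requires the System.Collections.Immutable package):

var input = Enumerable.Range(0, 100).ToImmutableList();

// both tasks work on the same input instance without any locking
var partialResults = await Task.WhenAll(
    Task.Run(() => ProcessPart(input, 0, 50)),
    Task.Run(() => ProcessPart(input, 50, 100)));

// merge the per-task results in a final, single-threaded step
var merged = partialResults.Aggregate(
    ImmutableList<int>.Empty,
    (acc, part) => acc.AddRange(part));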

Parallel LINQ (PLINQ)

Parallel LINQ (PLINQ) is an alternative to the Task Parallel Library. As the name implies, it relies heavily on LINQ (Language Integrated Query) functionality. It is especially useful in scenarios where the same expensive operation must be performed on every item of a large collection. Unlike ordinary LINQ to Objects, where all operations are performed sequentially, PLINQ can execute these operations in parallel on multiple CPU cores.
The code changes required to take advantage of this are minimal:

// sequential execution
var sequential = Enumerable.Range(0, 100)
    .Select(n => ExpensiveOperation(n))
    .ToArray();

// parallel execution
var parallel = Enumerable.Range(0, 100)
    .AsParallel()
    .Select(n => ExpensiveOperation(n))
    .ToArray();

As you can see, the only difference between the two code fragments is the call to AsParallel(). It converts the IEnumerable into a ParallelQuery, which causes the rest of the query to run in parallel. To switch back to sequential execution, you can call AsSequential(), which returns an IEnumerable again.
By default, PLINQ does not preserve the order of the items in the collection, so that the processing can be more efficient. When the order matters, you can call AsOrdered():

var parallel = Enumerable.Range(0, 100)
    .AsParallel()
    .AsOrdered()
    .Select(n => ExpensiveOperation(n))
    .ToArray();

Similarly, you can switch back to unordered processing by calling AsUnordered().

Concurrent programming in the full .NET Framework

Since .NET Core is a streamlined implementation of the full .NET Framework, all of the approaches to parallel programming described here are available in the full .NET Framework as well. The only exception are the immutable collections, which are not part of the full .NET Framework itself. They are distributed as a separate NuGet package (System.Collections.Immutable) that you need to install into your project in order to use them.

Conclusion

Whenever your application contains CPU-intensive code that can run in parallel, it makes sense to take advantage of concurrent programming to improve performance and hardware utilization.
The APIs in .NET Core abstract away many of the details, making it much easier to write concurrent code. However, there are potential issues to be aware of, most of them related to accessing shared data from multiple threads.
If you can, you should avoid sharing data between threads altogether; if you cannot, be sure to choose the most appropriate synchronization approach or data structure.

