.NET Series: Concurrent Programming, Part 1: "A Preliminary Theory of Concurrent Programming"

Source: Internet
Author: User

I. A Few Misunderstandings about Concurrent Programming

1) Concurrency is just multithreading

In fact, multithreading is only one form of concurrent programming. C# offers many other concurrency techniques, including asynchronous programming, parallel programming, TPL Dataflow, reactive programming, and so on.

2) Only large server applications need to consider concurrency

Large server-side programs must respond to data requests from huge numbers of clients, so concurrency is a natural concern there. However, desktop programs and mobile applications for phones and tablets also need concurrent programming, because they face end users, and users today are increasingly demanding about the experience. A program must be able to respond to user actions at all times, even while it works in the background (reading and writing data, communicating with a server, and so on); keeping the program responsive is one of the main purposes of concurrent programming.

3) Concurrent programming is complex and requires mastering many low-level techniques

C# and .NET provide many libraries that make concurrent programming much simpler. In particular, .NET 4.5 introduced the new async and await keywords, which reduce concurrent code to a minimum. Parallel and asynchronous development are no longer the exclusive domain of experts; every developer can write concurrent programs that are responsive, efficient, and reliable.

II. Concurrency Terminology

    • Concurrency: performing multiple tasks at the same time.
    • Multithreading: a form of concurrency that uses multiple threads to perform work.
    • Parallel processing (parallel programming): dividing a large amount of work into small pieces and assigning them to multiple threads running simultaneously; a form of multithreading.
    • Asynchronous programming: a form of concurrency that uses futures or callbacks to avoid blocking.
    • Reactive programming: a declarative style of programming in which the program reacts to events.

III. Introduction to Asynchronous Programming

Asynchronous programming has two major benefits. The first benefit applies to GUI programs for end users: asynchronous programming improves responsiveness. We have all encountered programs whose interface temporarily locks up while they are running; an asynchronous program can still respond to user input while performing its tasks. The second benefit applies to server-side applications: asynchronous programming enables scalability. A server application can scale somewhat by using the thread pool, but an asynchronous server application can usually scale by an order of magnitude more.

Modern asynchronous .NET programs use two keywords: async and await. The async keyword is added to a method declaration, and its main purpose is to make the await keyword inside the method take effect (the two keywords were introduced as a pair to preserve backward compatibility). If an async method has a return value, it should return Task<T>; if it has no return value, it should return Task. These task types are equivalent to futures and are used to notify the calling code when the asynchronous method completes.

Here is an example:

async Task DoSomethingAsync()
{
    int val = 13;
    // wait 1 second, asynchronously
    await Task.Delay(TimeSpan.FromSeconds(1));
    val *= 2;

    // wait 1 second, asynchronously
    await Task.Delay(TimeSpan.FromSeconds(1));
    Trace.WriteLine(val);
}

An async method begins executing synchronously. Inside the async method, the await keyword performs an asynchronous wait on its argument. It first checks whether the operation has already completed; if so, execution simply continues (synchronously). Otherwise, it pauses the async method and returns, leaving an incomplete task. Some time later, when the operation completes, the async method resumes running.

An async method is made up of several blocks that execute synchronously, separated by await statements. The first synchronous block runs on the thread that calls the method, but where do the other blocks run? The situation is more complicated. In the most common scenario, an await statement waits for a task to complete; when the method pauses at the await, the context is captured. If the current SynchronizationContext is not null, that context is the current SynchronizationContext; if it is null, the context is the current TaskScheduler. The method then resumes running in that context. In general, this is the UI context when running on a UI thread, the ASP.NET request context when handling an ASP.NET request, and the thread pool context in many other cases.
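To make context capture concrete, here is a small sketch (the method name and the console setting are my own illustration, not from the article). A console program has no SynchronizationContext, so both awaits below resume on thread-pool threads; in a UI program, the first await would resume on the UI thread:

```csharp
using System;
using System.Threading.Tasks;

class ContextDemo
{
    static async Task ShowResumeContextAsync()
    {
        Console.WriteLine($"before awaits: thread {Environment.CurrentManagedThreadId}");

        // This await captures the current context (SynchronizationContext or
        // TaskScheduler) and resumes there. In a console app there is no
        // SynchronizationContext, so it resumes on a thread-pool thread.
        await Task.Delay(100);
        Console.WriteLine($"after plain await: thread {Environment.CurrentManagedThreadId}");

        // ConfigureAwait(false) tells the await NOT to resume in the captured
        // context; it always resumes on a thread-pool thread.
        await Task.Delay(100).ConfigureAwait(false);
        Console.WriteLine($"after ConfigureAwait(false): thread {Environment.CurrentManagedThreadId}");
    }

    static async Task Main()
    {
        await ShowResumeContextAsync();
    }
}
```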

There are two basic ways to create a task instance. Some tasks represent instructions that the CPU must actually execute; to create this kind of computational task, use Task.Run (or TaskFactory.StartNew if it needs to run on a specific scheduler). Other tasks represent a notification; to create this kind of event-based task, use TaskCompletionSource<T>. Most I/O tasks use TaskCompletionSource<T>.
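As a sketch of both approaches (the names and values are illustrative): Task.Run queues a CPU-bound computation to the thread pool, while TaskCompletionSource<T> creates a task that some other piece of code completes manually when an event occurs:

```csharp
using System;
using System.Threading.Tasks;

class TaskCreationDemo
{
    static async Task Main()
    {
        // Computational task: queue CPU-bound work to the thread pool.
        int sum = await Task.Run(() =>
        {
            int s = 0;
            for (int i = 1; i <= 100; i++) s += i;
            return s;
        });
        Console.WriteLine(sum); // 5050

        // Event-based (notification) task: TaskCompletionSource<T> lets you
        // complete a task manually when some external event fires.
        var tcs = new TaskCompletionSource<int>();
        Task<int> notification = tcs.Task;
        tcs.SetResult(42); // e.g. called from an event handler or I/O callback
        Console.WriteLine(await notification); // 42
    }
}
```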

With async and await, error handling is natural. In the code below, PossibleExceptionAsync throws a NotSupportedException, and the TrySomethingAsync method catches the exception cleanly. The caught exception preserves the full stack trace and is not artificially wrapped in a TargetInvocationException or AggregateException:

async Task TrySomethingAsync()
{
    try
    {
        await PossibleExceptionAsync();
    }
    catch (NotSupportedException ex)
    {
        LogException(ex);
        throw;
    }
}

When an async method throws (or propagates) an exception, the exception is placed on the returned Task object, and the Task completes (in a faulted state). When that Task object is awaited, the await retrieves and (re)throws the exception, preserving the original stack trace. Therefore, if PossibleExceptionAsync is an async method, the following code works as expected:

async Task TrySomethingAsync()
{
    // When an exception occurs, the task completes; the exception is not thrown directly.
    Task task = PossibleExceptionAsync();
    try
    {
        // The exception stored in the Task object is raised at this await statement
        await task;
    }
    catch (NotSupportedException ex)
    {
        LogException(ex);
        throw;
    }
}

There is another important guideline for async methods: once you start using async in your code, it is best to use it all the way through. When you call an async method, you should (eventually) await the task object it returns. Avoid the Task.Wait and Task<T>.Result members, because they can cause deadlocks. Consider the following method:

async Task WaitAsync()
{
    // Here, await will capture the current context ...
    await Task.Delay(TimeSpan.FromSeconds(1));
    // ... and will attempt to resume execution in that captured context
}

void Deadlock()
{
    // Start the delay
    Task task = WaitAsync();
    // Synchronously block, waiting for the async method to complete
    task.Wait();
}

If this code is called from a UI or ASP.NET context, a deadlock occurs, because both of those contexts allow only one thread to run at a time. The Deadlock method calls WaitAsync, which starts the delay. Then the Deadlock method (synchronously) waits for WaitAsync to complete, blocking the context thread. When the delay finishes, the await attempts to resume WaitAsync in the captured context, but it cannot: there is already a thread blocked in that context, and the context allows only one thread at a time. There are two ways to avoid the deadlock: use ConfigureAwait(false) in WaitAsync (so that the await ignores the method's context), or call WaitAsync with an await statement (making Deadlock an async method itself).
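Here is a sketch of both fixes applied to the WaitAsync example above (the class name is mine). Note that this console program would not deadlock even without ConfigureAwait(false), since console apps have no SynchronizationContext; the deadlock arises in UI and ASP.NET contexts:

```csharp
using System;
using System.Threading.Tasks;

class DeadlockFix
{
    // Fix 1: ConfigureAwait(false) tells the await NOT to resume in the
    // captured context, so a blocked context thread can no longer deadlock it.
    static async Task WaitAsync()
    {
        await Task.Delay(TimeSpan.FromSeconds(1)).ConfigureAwait(false);
        // Continues on a thread-pool thread, not the original context.
    }

    static async Task Main()
    {
        // Fix 2 (preferred): await the task instead of blocking with Wait().
        await WaitAsync();
        Console.WriteLine("completed without deadlock");
    }
}
```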

IV. Introduction to Parallel Programming

If your program has a large number of computational tasks that can be split into independent chunks, you should use parallel programming. Parallel programming temporarily raises CPU usage to improve throughput; this is desirable on client systems, where the CPU is often idle, but it is usually not appropriate for server systems. Most servers already have built-in parallelism; for example, ASP.NET handles multiple requests in parallel. Writing parallel code on a server can still be useful in some cases (if you know the number of concurrent users will always be small), but in general, parallel programming on a server works against its built-in parallel processing without providing real benefits.

There are two forms of parallelism: data parallelism and task parallelism. Data parallelism means there is a large amount of data to process, and the processing of each piece of data is essentially independent of the others. Task parallelism means there is a large number of tasks to perform, and the execution of each task is essentially independent of the others. Task parallelism can be dynamic: if executing one task produces additional tasks, the new tasks can be added to the pool of work.

There are several different ways to implement data parallelism. One is to use the Parallel.ForEach method, which is similar to a foreach loop and should be used whenever possible.

The Parallel class also provides a Parallel.For method, which is similar to a for loop and can be used when the data processing depends on an index. Here is an example using Parallel.ForEach:

void RotateMatrices(IEnumerable<Matrix> matrices, float degrees)
{
    Parallel.ForEach(matrices, matrix => matrix.Rotate(degrees));
}

Another approach is PLINQ (Parallel LINQ), which provides an AsParallel extension method for LINQ queries. Parallel is more resource-friendly than PLINQ: Parallel cooperates better with other processes in the system, whereas PLINQ will try to use all the CPUs. The disadvantage of Parallel is that it is more explicit; in many cases, PLINQ code is more elegant.

IEnumerable<bool> PrimalityTest(IEnumerable<int> values)
{
    return values.AsParallel().Select(val => IsPrime(val));
}

Whichever method you choose, one criterion is paramount in parallel processing: as long as the chunks of work are independent of one another, parallelism is maximized. As soon as state is shared among multiple threads, access to that state must be synchronized, and the program becomes less parallel.

There are several ways to handle the output of parallel processing: the results can be placed in some kind of concurrent collection, or they can be aggregated into a summary. Aggregation is common in parallel processing, and overloads of the Parallel class methods support this kind of map/reduce operation.
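As an illustration of aggregation, here is a parallel-sum sketch using the localInit/localFinally overload of Parallel.ForEach (the method name is mine): each worker thread accumulates a private partial sum, and the lock is taken only once per thread at the end to combine the partial results:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

class ParallelAggregation
{
    static int ParallelSum(IEnumerable<int> values)
    {
        object mutex = new object();
        int result = 0;
        Parallel.ForEach(
            values,
            () => 0,                                        // per-thread initial value
            (item, state, localValue) => localValue + item, // per-item body (no sharing)
            localValue =>                                   // per-thread finalizer
            {
                lock (mutex) { result += localValue; }
            });
        return result;
    }

    static void Main()
    {
        Console.WriteLine(ParallelSum(Enumerable.Range(1, 100))); // 5050
    }
}
```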

Now let's look at task parallelism. Data parallelism focuses on processing data; task parallelism focuses on performing tasks. The Parallel.Invoke method of the Parallel class performs a kind of fork/join task parallelism. The delegates to run in parallel are passed in as parameters when the method is called:

void ProcessArray(double[] array)
{
    Parallel.Invoke(
        () => ProcessPartialArray(array, 0, array.Length / 2),
        () => ProcessPartialArray(array, array.Length / 2, array.Length)
    );
}

void ProcessPartialArray(double[] array, int begin, int end)
{
    // CPU-intensive operations ...
}

Both data parallelism and task parallelism use a dynamically adjusting partitioner, which splits the work and distributes it among the worker threads. The thread pool increases the number of threads as needed, and thread pool threads use work-stealing queues. Microsoft has done a great deal of optimization to make each part as efficient as possible. There are many parameters you can adjust to tune performance, but as long as your tasks are not extremely short, the default settings work well.

If tasks are too short, the overhead of splitting the data into tasks and scheduling them on the thread pool becomes significant. If tasks are too long, the thread pool cannot adjust its workload dynamically to achieve balance. It is hard to say exactly what counts as "too short" or "too long"; it depends on the kind of problem the program is solving and on the hardware. As a general guideline, I make tasks as short as possible without running into performance problems (if tasks become too short, performance suddenly degrades). An even better approach is to use the Parallel types or PLINQ instead of using tasks directly. These higher-level forms of parallelism have partitioning built in: they automatically assign tasks to threads (and adjust automatically at run time).

V. Introduction to Multithreaded Programming

A thread is an independent unit of execution. Each process contains multiple threads, each of which can execute instructions concurrently. Each thread has its own independent stack but shares memory with the other threads in the process. In some programs one thread is special; for example, user interface programs have a UI thread, and console programs have a main thread.

Every .NET program has a thread pool, which maintains a number of worker threads waiting to execute whatever work is assigned to them. The thread pool adjusts the number of threads as needed. There are dozens of parameters for configuring the thread pool, but the default settings are recommended: they have been carefully tuned to cover most real-world scenarios.
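As a minimal sketch of handing work to the pool (the class name and the event-based wait are my own illustration): a work item is queued, runs on a reused pool thread, and signals the main thread when done:

```csharp
using System;
using System.Threading;

class ThreadPoolDemo
{
    static void Main()
    {
        using var done = new ManualResetEventSlim(false);

        // Queue a small piece of work to a pool thread; the pool reuses
        // existing threads instead of creating a new one per work item.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            Console.WriteLine($"work ran on pool thread {Environment.CurrentManagedThreadId}");
            done.Set();
        });

        done.Wait(); // wait for the queued work to finish
        Console.WriteLine("main thread done");
    }
}
```

In modern code, Task.Run is usually preferred over queueing work items directly, since it returns a Task you can await.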
