5 days of playing with C# parallel and multithreaded programming -- Day 5: a summary of multithreaded programming


"5 days of playing with C# parallel and multithreaded programming" series directory

5 days of playing with C# parallel and multithreaded programming -- Day 1: Meet Parallel

5 days of playing with C# parallel and multithreaded programming -- Day 2: Parallel collections and PLINQ

5 days of playing with C# parallel and multithreaded programming -- Day 3: Getting to know and using Task

5 days of playing with C# parallel and multithreaded programming -- Day 4: Task advanced

I. Problems caused by multithreading

1. The deadlock problem

Earlier we learned how to use Task, and its wait mechanism is one of the things that made us fall in love with it. But what happens if we call Task.WaitAll to wait for all tasks and one of them never returns? If we do nothing about it, the program will wait forever: the deadlocked task keeps the wait from ever completing, and the whole program just hangs there. Let's write some code to see this situation:

var t1 = Task.Factory.StartNew(() =>
{
    Console.WriteLine("Task 1 Start running ...");
    while (true)
    {
        // Sleep interval; the exact value is garbled in the source, 100 ms is assumed here.
        System.Threading.Thread.Sleep(100);
    }
    Console.WriteLine("Task 1 finished!");
});
var t2 = Task.Factory.StartNew(() =>
{
    Console.WriteLine("Task 2 Start running ...");
    // Sleep interval; the exact value is garbled in the source, 1000 ms is assumed here.
    System.Threading.Thread.Sleep(1000);
    Console.WriteLine("Task 2 finished!");
});
Task.WaitAll(t1, t2);

Here we create two tasks, t1 and t2. t1 contains a while loop whose condition is always true, so it can never exit. Running the program gives the following result:

We can see that Task 2 completes, but the program keeps waiting for Task 1. At this point pressing Enter has no effect; the only way out is to close the window. Running into this situation in a project is very frustrating, because we have no idea what happened: the program just stops there, neither raising an error nor continuing to execute.

So how do we deal with this? We can set a maximum wait time: once that time is up, we stop waiting. Let's modify the code to set a maximum wait of 5 seconds (in a real project, set it according to the actual situation). If more than 5 seconds pass, we output which task went wrong. The code is as follows:

Task[] tasks = new Task[2];
tasks[0] = Task.Factory.StartNew(() =>
{
    Console.WriteLine("Task 1 Start running ...");
    while (true)
    {
        // Sleep interval; the exact value is garbled in the source, 100 ms is assumed here.
        System.Threading.Thread.Sleep(100);
    }
    Console.WriteLine("Task 1 finished!");
});
tasks[1] = Task.Factory.StartNew(() =>
{
    Console.WriteLine("Task 2 Start running ...");
    // Sleep interval; the exact value is garbled in the source, 1000 ms is assumed here.
    System.Threading.Thread.Sleep(1000);
    Console.WriteLine("Task 2 finished!");
});
Task.WaitAll(tasks, 5000);
for (int i = 0; i < tasks.Length; i++)
{
    if (tasks[i].Status != TaskStatus.RanToCompletion)
    {
        Console.WriteLine("Task {0} error!", i + 1);
    }
}
Console.Read();

Here we put all the tasks into an array so they can be managed together, and call an overload of Task.WaitAll whose first parameter is the Task[] array and whose second parameter is the maximum wait time in milliseconds. We set it to 5000, so after waiting 5 seconds execution continues downward. We then walk through the task array, use the Status property to determine which tasks did not complete, and output an error message for them.
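Incidentally, this Task.WaitAll(Task[], Int32) overload also returns a bool telling us whether all tasks finished within the timeout, so the per-task check can be skipped when everything completed in time. A minimal sketch, reusing the tasks array from the example above:

bool allFinished = Task.WaitAll(tasks, 5000);
if (!allFinished)
{
    // At least one task did not finish within 5 seconds; inspect each one.
    for (int i = 0; i < tasks.Length; i++)
    {
        if (tasks[i].Status != TaskStatus.RanToCompletion)
        {
            Console.WriteLine("Task {0} did not complete within 5 seconds.", i + 1);
        }
    }
}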

2. SpinLock (spin lock)

When we first come across multithreading or multitasking, the synchronization tools that come to mind are lock and Monitor. Since .NET 4.0, however, Microsoft has given us another tool, SpinLock, which has a smaller performance overhead than the heavyweight Monitor. Its usage is very similar to Monitor.

Let's write an example to see it in action. The code is as follows (the use of lock and Monitor is not covered again here; there is plenty of material about them online):

SpinLock slock = new SpinLock(false);
long sum1 = 0;
long sum2 = 0;
Parallel.For(0, 100000, i =>
{
    sum1 += i;
});
Parallel.For(0, 100000, i =>
{
    bool lockTaken = false;
    try
    {
        slock.Enter(ref lockTaken);
        sum2 += i;
    }
    finally
    {
        if (lockTaken) slock.Exit(false);
    }
});
Console.WriteLine("The value of sum1 is: {0}", sum1);
Console.WriteLine("The value of sum2 is: {0}", sum2);
Console.Read();

Output: sum2, which is protected by the SpinLock, equals the expected value 4,999,950,000 (the sum of 0 through 99,999), while sum1 is usually smaller, because the unsynchronized += races between threads and loses updates.

Here we use Parallel.For for the demonstration. Parallel.For is convenient, but in real development it is best used sparingly, because it gives us relatively little control and can feel a bit blunt, which may bring some unnecessary "trouble". It is usually better to use Task, because a Task can be controlled more finely, as the sketch below illustrates.
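As one illustration of that extra control (a sketch under my own assumptions, not an example from the original article), the same kind of work can be wrapped in an explicit Task with a CancellationToken, so the caller decides how long to wait and when to cancel. The names cts and sumTask are purely illustrative:

// A minimal sketch: the work runs in a Task, and the caller controls
// waiting and cancellation. In real code the canceled task should also be observed.
var cts = new System.Threading.CancellationTokenSource();
Task<long> sumTask = Task.Factory.StartNew(() =>
{
    long sum = 0;
    for (int i = 0; i < 100000; i++)
    {
        cts.Token.ThrowIfCancellationRequested();  // cooperative cancellation point
        sum += i;
    }
    return sum;
}, cts.Token);

if (sumTask.Wait(5000))                  // wait with a timeout, like Task.WaitAll above
    Console.WriteLine("Sum = {0}", sumTask.Result);
else
    cts.Cancel();                        // give up and cancel the remaining work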

The slock.Enter method takes a ref bool lockTaken argument: the variable must be false before the call, and it is set to true only if the lock was actually acquired, even if an exception is thrown inside Enter. That is why the example checks lockTaken in the finally block before calling Exit.

3. Data synchronization between multiple threads

For synchronization between threads, when working with Thread we usually reach for lock and Monitor, and above we just introduced the new lock added in .NET 4.0, SpinLock (spin lock). We can also simply split a job into several chunks, execute them on multiple threads, and merge the results at the end. For example, to sum the numbers 1 to 100 we could split the work across 10 threads that compute the sums of 1~10, 11~20, ..., 91~100 respectively, and then merge the 10 partial results (a sketch of this partition-and-merge approach follows below). Thread-safe collections can also be used; see the Day 2 article on parallel collections. In fact, the synchronization mechanisms Task provides are already very good; if you have a special business need that raises a thread synchronization problem, feel free to discuss it with me.
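A minimal sketch of that partition-and-merge idea, using one Task per chunk (the chunk boundaries and variable names are illustrative, not from the original article):

// Sum 1..100 by splitting the range into 10 chunks, one Task per chunk,
// then merging the partial results.
Task<long>[] parts = new Task<long>[10];
for (int t = 0; t < 10; t++)
{
    int start = t * 10 + 1;          // 1, 11, 21, ..., 91
    int end = start + 9;             // 10, 20, 30, ..., 100
    parts[t] = Task.Factory.StartNew(() =>
    {
        long partial = 0;
        for (int n = start; n <= end; n++)
            partial += n;
        return partial;
    });
}
Task.WaitAll(parts);
long total = 0;
foreach (var p in parts)
    total += p.Result;               // merge the 10 partial sums
Console.WriteLine("1 + 2 + ... + 100 = {0}", total);   // prints 5050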

II. The choice between Task and the thread pool

We have covered just about everything there is to say about Task, so now let's look at the relationship between the thread pool and Task at a more theoretical level; we should know not only what to do, but also why. Whether we use threads or tasks, we inevitably end up talking about the thread pool. After .NET 4.0, the thread pool engine was redesigned with future scalability in mind and takes full advantage of multi-core processor architectures, so whenever possible we should use Task rather than the thread pool directly.

Here is a brief analysis of the CLR thread pool. The thread pool has a concept called the "global queue": every call to ThreadPool.QueueUserWorkItem produces a "work item", which is placed in the global queue, and the worker threads in the pool then take items out of it in FIFO (first in, first out) order. It is worth mentioning that since .NET 4.0 the global queue uses a lock-free algorithm, which greatly improves on the performance bottleneck caused by locking the global queue in earlier versions. The scheduler behind Task has not only the global queue but also a "local queue" for each worker thread. Our first reaction is probably: what good is a local queue? Let's set that aside for a moment and first look at how tasks are distributed in the thread pool.
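For reference, this is what putting a work item into the thread pool's global queue looks like (a minimal sketch; the callback body is just illustrative):

// Each call places one work item in the thread pool's global queue;
// an idle worker thread later dequeues and runs it.
System.Threading.ThreadPool.QueueUserWorkItem(state =>
{
    Console.WriteLine("Work item running on pooled thread {0}",
        System.Threading.Thread.CurrentThread.ManagedThreadId);
});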

The thread pool works roughly like this: suppose its minimum number of threads is 6; those threads execute tasks one after another; when a new task arrives, a thread is requested from the pool and an idle thread is assigned to it; and when there are not enough idle threads, the pool creates new threads to run the work, until it reaches its maximum number of threads. In general, each task gets its own thread to execute it, and when work items enter and leave the queue very frequently, this can cause significant thread-management overhead.
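If you want to see the actual minimum and maximum thread counts of the pool on your own machine (they depend on the runtime and CPU count, so the 6 above is only an example value), a quick sketch:

int minWorker, minIo, maxWorker, maxIo;
System.Threading.ThreadPool.GetMinThreads(out minWorker, out minIo);
System.Threading.ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
Console.WriteLine("Min worker threads: {0}, max worker threads: {1}", minWorker, maxWorker);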

Now let's see what Task does differently. When we new up a Task, its work item also goes into the global queue. If our tasks execute very quickly, the global queue is enqueued and dequeued very frequently, so how can this be relieved? This is where the local queue takes effect, in nested scenarios. Suppose we create 3 more tasks inside task one: those 3 tasks are placed in the local queue of the worker thread running task one, which is the so-called "local queue". When the thread running task three finishes its own work, it will "steal" tasks from task one's local queue in FIFO order, which reduces the thread-management overhead. It is as if there are two people: one has finished all the work assigned to him, while the other still has a lot left; the free person takes over part of the busy person's work so that together they finish quickly. The sketch below shows this nested-task scenario.
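A minimal sketch of the nested-task scenario described above (the number of child tasks and the names are illustrative):

// The outer task creates three inner tasks; by default those inner tasks
// go to the local queue of the worker thread running the outer task,
// where idle worker threads can steal them.
var outer = Task.Factory.StartNew(() =>
{
    Task[] inner = new Task[3];
    for (int i = 0; i < 3; i++)
    {
        int id = i + 1;
        inner[i] = Task.Factory.StartNew(() =>
            Console.WriteLine("Inner task {0} running on thread {1}",
                id, System.Threading.Thread.CurrentThread.ManagedThreadId));
    }
    Task.WaitAll(inner);
});
outer.Wait();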

From the above we can see that this kind of load distribution is not available to plain ThreadPool.QueueUserWorkItem, so after .NET 4.0 we should use the TPL as much as possible and leave ThreadPool behind.

This is the last article of the "5 days of playing with C# parallel and multithreaded programming" series. Of course, many things were left unsaid; if you really want to master multithreading, there is still a lot to learn. If you run into any problems in the course of learning, feel free to discuss them with me.

If you feel that my blog is helpful to you, please click "recommend" to support it and give me the motivation to keep writing.

Cloud drizzling

Blog Address: http://www.cnblogs.com/yunfeifei/

Disclaimer: the original articles on this blog only represent the views or conclusions I have reached in my work during a certain period, and have no direct connection to the interests of my employer. They are non-commercial. Unauthorized reposts should keep the article unchanged, retain this statement, and place a clear link to the original article in an obvious position on the reposting page.
