.NET Interview Question Analysis (07): Multi-thread Programming and Thread Synchronization

Series index: .NET Interview Question Analysis (00) – Starting with Interviews & Series Article Index

Threading covers many topics: multi-threaded programming, thread context, asynchronous programming, thread synchronization constructs, cross-thread GUI access, and more. This article is only a brief introduction to thread-related knowledge from the perspective of common interview questions (which also come up constantly in day-to-day development). If you want to learn multithreading systematically, there is no shortcut and no room for laziness: read the professional books.

Common interview questions:

1. What is the difference between a thread and a process?

2. Why does the GUI not support cross-thread access control? How can this problem be solved?

3. What is the difference between the background thread and the foreground thread?

4. What are the commonly used locks, and what kind of lock is lock?

5. Why does lock require a parameter? Can a value type be locked? What requirements must the parameter meet?

6. What are the relationships and differences between multithreading and asynchrony?

7. What are the advantages of the thread pool? What are the shortcomings?

8. What is the difference between Mutex and lock? Which makes the better lock?

9. Will calling DeadLockTest(20) in the following code cause a deadlock? Explain why.

public void DeadLockTest(int i)
{
    lock (this) // or lock a static object variable
    {
        if (i > 10)
        {
            Console.WriteLine(i--);
            DeadLockTest(i);
        }
    }
}

10. Implement the Singleton pattern with a double-checked lock.

11. What is the output of the following code? Why? How can it be improved?

int a = 0;
System.Threading.Tasks.Parallel.For(0, 100000, (i) =>
{
    a++;
});
Console.Write(a);
A thread is the basic unit of operating system scheduling: threads are scheduled and executed by the operating system, moving through basic states such as ready, running, and blocked.

In addition, a CLR thread corresponds directly to a Windows thread.

Recall that the core computing resources of a computer are the CPU cores and CPU registers — the main battlefield on which threads run. An operating system hosts a great many threads (often thousands, most of them asleep), but a single CPU core can execute only one thread at a time. So how are multiple threads shared out? Windows uses preemptive time-slice scheduling: CPU computing resources are allocated to threads in time slices of roughly 30 ms.

Since the computing resources (a CPU core and its registers) can run only one thread at a time, scheduling proceeds as follows:

  • Save the CPU register contents into the current thread's context, making room for the next thread;
  • Thread scheduling: pick the next thread to run from the set of ready threads;
  • Load the new thread's context data into the CPU registers;
  • The new thread runs for its own CPU time slice (about 30 ms); when the slice expires, the cycle returns to the first step and repeats.

The scheduling process above is a thread context switch. It involves migrating data such as the thread context, and its performance overhead is considerable. Threads therefore must not be abused — creating and destroying them is also expensive — which is the main reason the thread pool is recommended whenever possible.

Using Thread directly is simple enough that we will not repeat it here. To summarize the main performance costs of threads:

  • Thread creation and destruction are expensive;
  • Thread context switching carries a large performance overhead (though if the thread scheduled next is the same as the current one, no context switch is needed and execution is much faster);
  • When the GC performs a collection, it must first (safely) suspend all threads, walk every thread's stack (the roots), update each thread's root addresses after the collection, and then resume the threads. The more threads there are, the more work the GC must do;

Of course, as hardware evolves and CPU core counts climb, multithreading can greatly improve application efficiency. But to make good use of it you must understand the fundamentals above, and then weigh actual needs against the relevant resource environment — disk I/O, network conditions, and so on.


Single-threaded usage is skipped here — it is too simple. Given the thread shortcomings summarized above, Microsoft provides several technologies for multi-threaded programming: the thread pool, Task, Parallel, and so on.

ThreadPool.QueueUserWorkItem(t => Console.WriteLine("Hello thread pool"));

Each CLR has one thread pool, shared by all AppDomains in that CLR. The thread pool is a set of threads managed inside the CLR; initially it contains no threads, and threads are created only when needed. The basic workflow is as follows:

  • The thread pool maintains a request queue that buffers the work items submitted by user code, i.e. the requests queued via ThreadPool.QueueUserWorkItem;
  • When a new task arrives, the thread pool uses an idle thread, or creates a new one, to execute the queued request;
  • After a task finishes, the thread is not destroyed; it returns to the pool for reuse;
  • The thread pool manages thread creation and destruction itself: when it holds a large number of idle threads, it automatically retires the surplus to release resources;
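The queue-and-reuse behavior above is easy to observe: queue a batch of work items and count how many distinct pool threads actually served them. This is only a minimal sketch (class and variable names are mine), and the exact degree of reuse is up to the pool:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

class ThreadPoolReuseDemo
{
    static void Main()
    {
        var ids = new ConcurrentBag<int>();
        using (var done = new CountdownEvent(20))
        {
            for (int i = 0; i < 20; i++)
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    // Record which pooled thread ran this work item.
                    ids.Add(Thread.CurrentThread.ManagedThreadId);
                    Thread.Sleep(10); // a tiny bit of "work"
                    done.Signal();
                });
            }
            done.Wait(); // block until all 20 items have run
        }
        var distinct = new HashSet<int>(ids);
        Console.WriteLine($"work items: 20, distinct threads: {distinct.Count}");
    }
}
```

Typically the distinct-thread count comes out well below 20, showing that finished threads go back to the pool rather than being destroyed.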

The thread pool has a capacity: being a pool, it lets you set the maximum number of active threads via ThreadPool.SetMaxThreads. In practice, however, changing these parameters is discouraged. Microsoft has invested heavily in optimizing the thread pool, which decides intelligently when to create or retire threads; in most cases the defaults are sufficient.

The thread pool lets threads be used fully and efficiently: task start-up latency drops, and there is no need to create large numbers of threads, avoiding the heavy performance cost of mass thread creation and destruction.

Having seen the thread pool's principles and advantages, an attentive developer will quickly spot some open questions: once a task is queued to the thread pool, how do you cancel or suspend it? How do you know when it has finished? The shortcomings of the thread pool, summarized:

  • Threads in the pool do not support operations such as suspension or cancellation. If you want to cancel work running on a pool thread, .NET offers cooperative cancellation, which is convenient to use but does not cover every scenario;
  • Queued work items have no return value, and there is no notification of when they complete;
  • Thread priority cannot be set, nor are other needs requiring finer thread control supported;
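The cooperative cancellation mentioned above can be sketched with CancellationTokenSource: the caller only requests cancellation, and the worker polls the token and stops at a point of its own choosing (the names in this sketch are mine):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CancellationDemo
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        var task = Task.Run(() =>
        {
            int n = 0;
            // Cooperative: the worker checks the token between iterations.
            while (!cts.Token.IsCancellationRequested)
            {
                n++;
                Thread.Sleep(10);
            }
            return n;
        });

        Thread.Sleep(100);
        cts.Cancel(); // a request, not a kill: the worker decides when to stop
        Console.WriteLine("iterations before cancel: " + task.Result);
    }
}
```

This is why it is called "cooperative": a loop that never checks the token is never cancelled.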

Therefore Microsoft provides something else, called Task, to make up for these thread-pool shortcomings.

// Create a Task<int>
Task<int> t1 = new Task<int>(n =>
{
    System.Threading.Thread.Sleep(1000);
    return (int)n;
}, 1000);

// Attach a continuation task
t1.ContinueWith(task =>
{
    Console.WriteLine("end " + t1.Result);
}, TaskContinuationOptions.AttachedToParent);

t1.Start();

// Use Task.Factory to create and start a task
var t2 = System.Threading.Tasks.Task.Factory.StartNew(() =>
{
    Console.WriteLine("t1: " + t1.Status);
});

Task.WaitAll(t1, t2);
Console.WriteLine(t1.Result);

Parallel actually uses Task internally (the TPL creates System.Threading.Tasks.Task instances under the hood), and Parallel calls return only after all of the parallel work completes. For a small number of short-lived tasks, Parallel is not recommended: it has its own overhead, including scheduling the parallel tasks and creating delegates for the method calls.

This is a problem many developers hit when building C/S client applications: GUI controls cannot be accessed across threads. If a control is touched from another thread, an exception is thrown at run time — no doubt a familiar sight! The culprit is the "GUI thread-processing model".

Windows is message-based: every keyboard and mouse action on the UI is delivered to applications as a message. The GUI thread owns a message queue, and it loops continuously, processing these messages and updating the UI accordingly. If the GUI thread is made to perform a time-consuming operation (say, a file download that takes 10 seconds), it cannot service the message queue, and the UI freezes.

① Use the methods provided by the UI controls:

// 1. WinForms: Invoke / BeginInvoke
this.label.Invoke(method, null);

// 2. WPF: Dispatcher.Invoke
this.label.Dispatcher.Invoke(method, null);

② Use the BackgroundWorker provided by .NET to perform the time-consuming work, and update the UI control in the RunWorkerCompleted event.

using (BackgroundWorker bw = new BackgroundWorker())
{
    bw.DoWork += (obj, arg) =>
    {
        // the time-consuming work runs on a pool thread here
    };
    bw.RunWorkerCompleted += (obj, arg) =>
    {
        // raised on the GUI thread, so touching the control is safe
        this.label.Text = "anding";
    };
    bw.RunWorkerAsync();
}

③ The fancy-looking approach: use the GUI thread's synchronization context to post UI updates, so the posting code does not have to call into the UI control elements directly.

.NET provides a synchronization context class, SynchronizationContext, which links an application model to its thread-processing model. In essence it still ends up doing the same thing as the first approach.

The implementation takes three steps. Step 1: define a static helper class that encapsulates UI-element access on the GUI thread:

public static class GUIThreadHelper
{
    public static System.Threading.SynchronizationContext GUISyncContext
    {
        get { return _GUISyncContext; }
        set { _GUISyncContext = value; }
    }
    private static System.Threading.SynchronizationContext _GUISyncContext =
        System.Threading.SynchronizationContext.Current;

    /// <summary>
    /// Posts a callback to the GUI thread.
    /// </summary>
    public static void SyncContextCallback(Action callback)
    {
        if (callback == null) return;
        if (GUISyncContext == null) { callback(); return; }
        GUISyncContext.Post(result => callback(), null);
    }

    /// <summary>
    /// Wraps an APM (asynchronous programming model) callback so it runs on the GUI thread.
    /// </summary>
    public static AsyncCallback SyncContextCallback(AsyncCallback callback)
    {
        if (callback == null) return callback;
        if (GUISyncContext == null) return callback;
        return asyncResult =>
            GUISyncContext.Post(result => callback(result as IAsyncResult), asyncResult);
    }
}

Step 2: register the current SynchronizationContext in the main window:

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
        CLRTest.ConsoleTest.GUIThreadHelper.GUISyncContext =
            System.Threading.SynchronizationContext.Current;
    }
}

Step 3: use it — anywhere you like:

GUIThreadHelper.SyncContextCallback(() =>
{
    this.txtMessage.Text = res.ToString();
    this.btnTest.Content = "DoTest";
    this.btnTest.IsEnabled = true;
});


Thread Synchronization Constructs

Thread synchronization is a common and important issue in multi-threaded programming. Mastering it is vital to using critical resources correctly and to thread performance! The basic idea is simple — locking: put a lock at the door of the critical resource to control how multiple threads access it. In actual development, however, there are several locking approaches and control mechanisms: primitive user-mode constructs and primitive kernel-mode constructs. .NET provides both kinds of thread synchronization constructs, and you need to understand their basic principles and usage.

Primitive thread synchronization constructs fall into two camps: primitive user-mode constructs and primitive kernel-mode constructs. Each mode has its own advantages and disadvantages, while hybrid constructs (such as lock) combine the strengths of both.

  • Thread 1 requests the critical resource and takes a user-mode lock at the resource's door;
  • Thread 2 then requests the same resource, finds it locked, and so waits at the door, repeatedly asking whether the resource has become available;
  • If thread 1 holds the resource for a long time, thread 2 keeps running and consuming CPU time. Doing what? Polling the lock's state over and over until the resource frees up — this is spinning (busy waiting);
  • Spot the drawback? Thread 2 burns CPU time the whole while (assuming only these two threads are running). That not only wastes CPU time but also adds frequent thread context switches, which hurts performance badly.

    Of course, its advantage is efficiency: user-mode constructs suit synchronization where the resource is held only briefly. .NET supplies two kinds of atomic operations, which can be used to build simple user-mode locks (such as a spin lock).

    System.Threading.Interlocked: the interlocked construct, which performs an atomic read and write on a variable containing a simple data type.

    Thread.VolatileRead and Thread.VolatileWrite: the volatile construct, which performs an atomic read or write on a variable containing a simple data type.

    The precise semantics of these two kinds of atomic operations will not be described in detail here (if you are interested, the reference books and materials at the end of the article cover them). Back to question 11 — look at the question's code again:

    int a = 0;
    System.Threading.Tasks.Parallel.For(0, 100000, (i) =>
    {
        a++;
    });
    Console.Write(a);

    The code above updates the shared variable a in parallel (from multiple threads), so the result is less than or equal to 100000, and the exact value varies from run to run. Solution: use the familiar lock, or — more efficiently — use the atomic operations provided by System.Threading.Interlocked, which make every update of a atomic:

    System.Threading.Interlocked.Add(ref a, 1); // correct
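Putting it together, here is a minimal runnable sketch of question 11's fix (this is not the author's benchmark, just an illustration; the class name is mine):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class InterlockedDemo
{
    static void Main()
    {
        int a = 0;
        Parallel.For(0, 100000, i =>
        {
            // Atomic read-modify-write: no increment is lost under contention.
            Interlocked.Add(ref a, 1);
        });
        Console.WriteLine(a); // always 100000
    }
}
```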

    A simple performance test compared Interlocked, no lock, and lock. The no-lock run produced 95, which is certainly not the answer you want; the other two results were correct, but their performance differed greatly.

    System.Threading.Tasks.Parallel.For(0, 100, (i) =>
    {
        lock (_obj)
        {
            a++;
            Thread.Sleep(20); // simulate holding the resource
        }
    });

  • Thread 1 requests the critical resource and takes a kernel-mode lock at the resource's door;

  • When thread 2 requests the critical resource and finds it locked, the system puts it to sleep (blocks it); thread 2 does not execute, wasting neither CPU time nor context switches while it waits;
  • When thread 1 finishes with the resource and unlocks it, a notification is sent and the operating system wakes thread 2. If several threads are waiting at the door, one of them is chosen and woken;
  • Looks great! It completely fixes the drawbacks of the user-mode construct — but kernel mode has its own cost: moving a thread from user mode into kernel mode (and back) incurs a large performance hit. The calling thread transitions from managed code to kernel code and back, burning CPU time and bringing thread context switches with it. So avoid bouncing threads between user mode and kernel mode when you can.

    Its advantage is that waiting threads block without wasting CPU time, so it suits synchronization where the resource is held for a long time.

    Kernel-mode constructs come in two main flavors, with common locks built on each:

    • Event-based: such as AutoResetEvent and ManualResetEvent
    • Semaphore-based: such as Semaphore
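As a small illustration of the event-based kernel constructs (class and variable names are mine): the main thread blocks on an AutoResetEvent — sleeping in the kernel rather than spinning — until a worker signals it:

```csharp
using System;
using System.Threading;

class EventDemo
{
    static void Main()
    {
        using (var signal = new AutoResetEvent(false)) // starts unsignaled
        {
            var worker = new Thread(() =>
            {
                Thread.Sleep(50); // simulate work
                Console.WriteLine("worker: done, signaling");
                signal.Set();     // wakes exactly one waiter
            });
            worker.Start();

            signal.WaitOne();     // blocks in the kernel, no CPU spinning
            Console.WriteLine("main: got signal");
            worker.Join();
        }
    }
}
```

Unlike a ManualResetEvent, an AutoResetEvent resets itself after releasing a single waiter.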
    Since both kernel mode and user mode have pros and cons, hybrid constructs combine the two, taking the best of both while minimizing the performance cost. The idea is easy to grasp: when there is no contention, or threads hold the resource only briefly, synchronize in user mode; otherwise escalate to a kernel-mode construct. The most typical example is lock.

    There are plenty of other hybrid locks: SemaphoreSlim, ManualResetEventSlim, Monitor, ReaderWriterLockSlim, and so on. Each has its own characteristics and use cases. Here we take a closer look at the one used most — lock.

    The essence of lock is Monitor; lock is merely a simplified syntax. What it actually expands to is:

    bool lockTaken = false;
    try
    {
        Monitor.Enter(obj, ref lockTaken);
        // ...
    }
    finally
    {
        if (lockTaken) Monitor.Exit(obj);
    }
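One thing the lock syntax cannot express, but Monitor can, is a bounded wait via Monitor.TryEnter. A minimal sketch (names are mine; the lock is uncontended here, so acquisition succeeds immediately):

```csharp
using System;
using System.Threading;

class TryEnterDemo
{
    static readonly object _gate = new object();

    static void Main()
    {
        // Wait at most 50 ms for the lock instead of blocking forever.
        if (Monitor.TryEnter(_gate, TimeSpan.FromMilliseconds(50)))
        {
            try { Console.WriteLine("acquired"); }
            finally { Monitor.Exit(_gate); } // always release what you took
        }
        else
        {
            Console.WriteLine("timed out");
        }
    }
}
```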

    What does lock (or Monitor) actually lock on? That object is the key. To understand it, recall the sync block index (SyncBlockIndex) of a reference-type object — one of the two standard parts of every reference-type object mentioned in the previous article (the other being the type object pointer, TypeHandle). This is where it earns its keep.

    The sync block index is something the .NET CLR allocates for every object on the heap (that is, every reference-type instance). It is effectively an index/pointer whose initial value is -1, pointing at nothing.

    • Create a lock object obj: its sync block index is -1 and points at nothing;
    • On Monitor.Enter(obj), the CLR creates (or reuses an idle) sync block — say, sync block 1. This is the real synchronization object: internally it is a hybrid-lock structure holding the owning thread ID, a recursion count, waiting-thread statistics, a kernel object, and so on, much like a hand-rolled hybrid lock (AnotherHybridLock). obj's sync block index is set to point at sync block 1;
    • On Exit, the index is reset to -1, and sync block 1 can be reused;

    First of all, try to avoid thread synchronization altogether: whichever method you use, the performance cost is not small. When you do need it, lock is the usual choice — it is well-rounded and fits most scenarios. When performance requirements are higher, choose a more suitable lock for the specific use case.

    When using lock, the crucial point is the lock object. Keep the following in mind:

    • The object must be a reference type. Why is a value type unusable — couldn't it just be boxed? So you might think, but do not lock value types: each boxing produces a different object, so the lock silently fails;
    • Do not lock this; prefer a dedicated, meaningless object whose only job is to be locked;
    • Do not lock a type object, because type objects are global;
    • Do not lock a string, because strings may be interned — different string variables can point to the same string object;
    • Do not use [System.Runtime.CompilerServices.MethodImpl(MethodImplOptions.Synchronized)]. It guarantees the method is called by only one thread at a time, but under the hood it uses lock: an instance method locks this, and a static method locks the type object;
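The boxing pitfall in the first bullet is easy to demonstrate: boxing the same value twice yields two distinct heap objects, so two lock statements over them would not be guarding the same thing (a sketch; names are mine):

```csharp
using System;

class BoxingLockPitfall
{
    static void Main()
    {
        int n = 42;
        object box1 = n; // each conversion to object allocates a new box
        object box2 = n;
        // Locking box1 and box2 would lock two different objects:
        Console.WriteLine(ReferenceEquals(box1, box2)); // False
    }
}
```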


    Answers:

    1. What is the difference between a thread and a process?
    • An application instance is a process; a process contains one or more threads, and a thread is part of a process;
    • Processes are independent of one another, each with its own private memory space and resources; the threads within a process share all of that process's resources;
    2. Why does the GUI not support cross-thread access control? How can this problem be solved?

    GUI applications use a dedicated thread-processing model: to keep UI controls thread-safe, the model forbids other threads from accessing UI elements across threads. There are several solutions, such as:

    • Use the methods provided by the UI framework: in WinForms, the control's Invoke method; in WPF, the control's Dispatcher.Invoke method;
    • Use BackgroundWorker;
    • Use the synchronous context SynchronizationContext of the GUI thread processing model to submit the UI update operation.

    Each of these was described in detail above.

    3. What is the difference between the background thread and the foreground thread?

    An application can exit only after all of its foreground threads have finished (or been actively ended). Background threads are irrelevant to this: whether or not they are still running, the application can exit, and all background threads are terminated automatically when it does.

    Setting Thread.IsBackground to true marks a thread as a background thread. The main thread is a foreground thread.
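A minimal sketch of the IsBackground flag (names are mine): the process exits even though the background thread never finishes, precisely because background threads do not keep the process alive:

```csharp
using System;
using System.Threading;

class BackgroundThreadDemo
{
    static void Main()
    {
        var t = new Thread(() => Thread.Sleep(Timeout.Infinite))
        {
            IsBackground = true // must be set before Start
        };
        t.Start();
        Console.WriteLine("t.IsBackground = " + t.IsBackground);
        // Main returns now; the process still exits, because only
        // foreground threads keep a process running.
    }
}
```

With IsBackground left at its default of false, this program would never terminate.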

    4. What are the commonly used locks, and what kind of lock is lock?

    Common hybrid locks include SemaphoreSlim, ManualResetEventSlim, Monitor, and ReaderWriterLockSlim. lock itself is a hybrid lock whose essence is Monitor.

    5. Why does lock require a parameter? Can a value type be locked? What requirements must the parameter meet?

    The lock object must be a reference type. A value type can only be locked after boxing it to an object, but each boxing produces a different object, which makes the locking ineffective.

    For lock, the lock-object parameter is the key: that object's sync block index will point to a real lock (a sync block), and that sync block is reusable.

    6. What are the relationships and differences between multithreading and asynchrony?

    Multithreading is one of the main means of achieving asynchrony, but asynchrony is not equivalent to multithreading: it can also be achieved in other ways, such as using hardware features, separate processes, or fibers. .NET has extensive support for asynchronous programming — for example, the Begin***/End*** method pairs found throughout the framework are one form of it; internally, some of them use multithreading and some use hardware capabilities.

    7. What are the advantages of the thread pool? What are the shortcomings?

    Advantages: it reduces the overhead of creating and destroying threads by reusing them; it reduces the performance cost of thread context switches; and fewer threads during a GC collection means the GC works more efficiently.

    Disadvantages: the thread pool gives no fine-grained control over an individual thread — you cannot observe its running state or set its priority; work items (methods) queued to the pool have no return value; and long-running tasks are not a good fit for the pool.

    8. What is the difference between Mutex and lock? Which makes the better lock?

    Mutex is a kernel-mode mutual-exclusion lock that supports recursive acquisition, while lock is a hybrid lock. In general, lock is recommended for better performance.

    9. Will calling DeadLockTest(20) in the following code cause a deadlock? Explain why.

    public void DeadLockTest(int i)
    {
        lock (this) // or lock a static object variable
        {
            if (i > 10)
            {
                Console.WriteLine(i--);
                DeadLockTest(i);
            }
        }
    }

    No deadlock occurs, because lock is a hybrid lock that supports recursive acquisition by the owning thread. If a ManualResetEvent or AutoResetEvent were used instead, a deadlock could occur.
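The reentrancy claim can be checked directly with a runnable variant of question 9's code that uses a static lock object (a sketch; names are mine):

```csharp
using System;

class ReentrancyDemo
{
    static readonly object _gate = new object();

    static void Recurse(int i)
    {
        lock (_gate) // Monitor is reentrant for the thread that owns it
        {
            if (i > 10)
            {
                Console.WriteLine(i);
                Recurse(i - 1); // re-acquiring the same lock does not block
            }
        }
    }

    static void Main()
    {
        Recurse(12);
        Console.WriteLine("no deadlock");
    }
}
```

The sync block behind _gate keeps a recursion count: each nested lock increments it, each exit decrements it, and the lock is released only when it reaches zero.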

    10. Implement the Singleton pattern with a double-checked lock.
    public static class Singleton<T> where T : class, new()
    {
        private static T _Instance;
        private static object _lockObj = new object();

        /// <summary>
        /// Gets the singleton instance.
        /// </summary>
        public static T GetInstance()
        {
            if (_Instance != null) return _Instance;
            lock (_lockObj)
            {
                if (_Instance == null)
                {
                    var temp = Activator.CreateInstance<T>();
                    System.Threading.Interlocked.Exchange(ref _Instance, temp);
                }
            }
            return _Instance;
        }
    }
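As a side note beyond the original answer: modern .NET can get the same lazy, thread-safe initialization from Lazy&lt;T&gt;, whose default mode (ExecutionAndPublication) guarantees the factory runs at most once (a sketch; names are mine):

```csharp
using System;

public sealed class LazySingleton
{
    // The default LazyThreadSafetyMode.ExecutionAndPublication means the
    // factory runs at most once; every thread observes the same instance.
    private static readonly Lazy<LazySingleton> _instance =
        new Lazy<LazySingleton>(() => new LazySingleton());

    public static LazySingleton Instance => _instance.Value;

    private LazySingleton() { }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(ReferenceEquals(LazySingleton.Instance,
                                          LazySingleton.Instance)); // True
    }
}
```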
    11. What is the output of the following code? Why? How can it be improved?
    int a = 0;
    System.Threading.Tasks.Parallel.For(0, 100000, (i) =>
    {
        a++;
    });
    Console.Write(a);

    The output is unstable: some value less than or equal to 100000. Multiple threads update the variable with no locking mechanism, so updates can be lost. The detailed causes and the fix were covered above.


    Copyright notice. Article source: http://www.cnblogs.com/anding

    My personal abilities are limited. The content of this article is only for study and discussion. You are welcome to correct and exchange ideas.

    .NET Interview Question Analysis (00) – Starting with Interviews & Series Article Index


    Books: CLR via C #

    Books: You must know. NET

    . Net basics (5) multithreading development Basics

    Summary: several methods of C # Thread Synchronization

    C # parallel programming-related concepts

    Multi-thread journey 7-GUI thread model, message delivery (post) and processing (before IOS development)

    C # Learn from the story: thread (1)
