[Multithreading] Windows Thread Synchronization Summary


Thread synchronization is a very broad topic. Broadly, Windows thread synchronization falls into two categories: user-mode synchronization and kernel-object synchronization. User-mode methods include atomic (interlocked) access and critical sections; they are fast and suit scenarios with strict requirements on thread speed.
Kernel-object synchronization relies on kernel objects such as events, waitable timers, semaphores, and mutexes. Because this mechanism uses kernel objects, a thread must switch from user mode to kernel mode, a transition that costs on the order of a thousand CPU cycles, so synchronization is slower; however, it is far more widely applicable than user-mode synchronization.

The commonly used thread-control mechanisms, in brief:

  • Critical section: serializes multi-threaded access to a shared resource or code segment; it is fast and well suited to guarding data access.
  • Mutex: coordinates mutually exclusive access to a shared resource.
  • Semaphore: controls access to a resource with a limited number of slots.
  • Event: notifies a thread that some event has occurred so it can start its follow-up work.

Critical Section

A critical section is a convenient way to ensure that only one thread can access a piece of data at any point in time. If multiple threads attempt to enter the critical section at the same time, one thread enters and all the others are suspended; they remain suspended until the owning thread leaves. Once the critical section is released, the remaining threads compete for it again, so the shared resource is accessed atomically.
The critical section contains two operation primitives:

  • EnterCriticalSection () enters the critical section
  • LeaveCriticalSection () leaves the critical section

After EnterCriticalSection() executes, no matter what happens inside the critical section, you must make sure the matching LeaveCriticalSection() runs; otherwise the shared resource protected by the critical section will never be released. Although critical-section synchronization is very fast, it can only synchronize threads within the current process, not threads across multiple processes.
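A minimal sketch of the pattern above, assuming a hypothetical shared counter and worker function (the names `g_counter` and `Worker` are illustrative, not from the article). Note that a CRITICAL_SECTION must also be initialized and deleted, which the primitive list above omits:

```c
// Critical-section sketch (Win32): two threads increment a shared counter.
#include <windows.h>

static CRITICAL_SECTION g_cs;   // must be initialized before first use
static long g_counter = 0;      // shared resource protected by g_cs

DWORD WINAPI Worker(LPVOID param)
{
    for (int i = 0; i < 100000; ++i) {
        EnterCriticalSection(&g_cs);   // enter the critical section
        ++g_counter;                   // only one thread runs this at a time
        LeaveCriticalSection(&g_cs);   // always pair Enter with Leave
    }
    return 0;
}

int main(void)
{
    InitializeCriticalSection(&g_cs);  // initialize before any Enter/Leave

    HANDLE threads[2];
    for (int i = 0; i < 2; ++i)
        threads[i] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForMultipleObjects(2, threads, TRUE, INFINITE);

    for (int i = 0; i < 2; ++i)
        CloseHandle(threads[i]);
    DeleteCriticalSection(&g_cs);      // release the object's resources
    return 0;
}
```

Without the Enter/Leave pair, the two threads would lose updates to `g_counter`; with it, the final value is deterministic.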

Mutex

A mutex is similar to a critical section: only the thread that owns the mutex object has permission to access the resource. Because there is only one mutex object, the shared resource is guaranteed never to be accessed by multiple threads at the same time. The thread occupying the resource should hand over the mutex once its task is done, so that other threads can acquire it and access the resource. The mutex is more complex than the critical section because it can safely share a resource not only among threads of the same application, but also among threads of different applications.
The mutex contains several operation primitives:

  • CreateMutex () creates a mutex
  • OpenMutex () opens a mutex
  • ReleaseMutex () releases mutex
  • WaitForSingleObject () waits for the mutex object
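A sketch of these primitives in use, assuming a named mutex so it could be shared across processes (the name "MyAppMutex" is an illustrative assumption):

```c
// Named-mutex sketch (Win32): acquire, use the resource, release.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Create the mutex (or open the existing one if another process made it).
    HANDLE hMutex = CreateMutexA(NULL, FALSE, "MyAppMutex");
    if (hMutex == NULL)
        return 1;

    // Block until this thread owns the mutex; only one owner at a time.
    if (WaitForSingleObject(hMutex, INFINITE) == WAIT_OBJECT_0) {
        // ... access the shared resource here ...
        printf("inside the protected region\n");
        ReleaseMutex(hMutex);          // hand the mutex back
    }

    CloseHandle(hMutex);
    return 0;
}
```

A second process could reach the same mutex with OpenMutex and the same name, which is exactly the cross-process sharing described above.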

Semaphore

A semaphore synchronizes threads differently from the previous methods. A semaphore allows multiple threads to use a shared resource at the same time, matching the P/V operations of operating-system theory: it specifies the maximum number of threads that may access the shared resource simultaneously. When creating a semaphore with CreateSemaphore(), you must specify both the maximum resource count and the current available count; usually the current count is initialized to the maximum. Each time a thread is admitted to the shared resource, the current count decreases by 1; as long as the current count is greater than 0, the semaphore remains signaled. When the current count drops to 0, the number of threads occupying the resource has reached the allowed maximum, no further threads may enter, and the semaphore becomes non-signaled. When a thread finishes with the shared resource, it should call ReleaseSemaphore() on the way out to increase the current count by 1. The current count can never exceed the maximum count.
Both the P/V operations and the semaphore were proposed by the Dutch scientist E. W. Dijkstra. A semaphore S is an integer. When S is greater than or equal to zero, it represents the number of resource instances available to concurrent processes; when S is less than zero, its absolute value is the number of processes waiting to use the shared resource.
P operation (acquire a resource):
(1) Decrement S by 1;
(2) if S is still greater than or equal to zero after the decrement, the process continues to run;
(3) if S is less than zero after the decrement, the process blocks, joins the queue associated with the semaphore, and control transfers to the process scheduler.

V operation (release a resource):
(1) Increment S by 1;
(2) if the result is greater than zero, the process continues to execute;
(3) if the result is less than or equal to zero, one waiting process is woken from the semaphore's waiting queue, and the original process then resumes execution or control transfers to the process scheduler.

The semaphore contains several operation primitives:

  • CreateSemaphore () creates a semaphore
  • OpenSemaphore () opens a semaphore
  • ReleaseSemaphore () releases the semaphore
  • WaitForSingleObject () waits for the semaphore
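A sketch of the counting behavior described above with the Win32 primitives: four hypothetical workers contend for a semaphore created with a maximum (and initial) count of 2, so at most two are inside the guarded region at once:

```c
// Win32 semaphore sketch: at most 2 of the 4 threads run the guarded
// region concurrently.
#include <windows.h>

static HANDLE g_sem;

DWORD WINAPI Worker(LPVOID param)
{
    WaitForSingleObject(g_sem, INFINITE);  // current count - 1; blocks at 0
    // ... at most 2 threads execute here at the same time ...
    Sleep(10);
    ReleaseSemaphore(g_sem, 1, NULL);      // current count + 1 on the way out
    return 0;
}

int main(void)
{
    // initial available count 2, maximum count 2
    g_sem = CreateSemaphoreA(NULL, 2, 2, NULL);

    HANDLE t[4];
    for (int i = 0; i < 4; ++i)
        t[i] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForMultipleObjects(4, t, TRUE, INFINITE);

    for (int i = 0; i < 4; ++i)
        CloseHandle(t[i]);
    CloseHandle(g_sem);
    return 0;
}
```

WaitForSingleObject plays the role of the P operation here and ReleaseSemaphore the role of V.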

Event

Event objects can also synchronize threads by means of notification, and they too can synchronize threads across different processes.
The event contains several operation primitives:

  • CreateEvent () creates an event
  • OpenEvent () opens an event
  • SetEvent () sets the event to signaled (ResetEvent () returns it to non-signaled)
  • WaitForSingleObject () waits for an event
  • WaitForMultipleObjects () waits for multiple events
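A sketch of the notification pattern: a hypothetical worker blocks on an event until the main thread signals that "something has occurred", then starts its follow-up work:

```c
// Win32 event sketch: worker waits until main signals the event.
#include <windows.h>
#include <stdio.h>

static HANDLE g_event;

DWORD WINAPI Worker(LPVOID param)
{
    WaitForSingleObject(g_event, INFINITE); // blocks until SetEvent is called
    printf("event received, starting follow-up work\n");
    return 0;
}

int main(void)
{
    // auto-reset event (manual-reset = FALSE), initially non-signaled
    g_event = CreateEventA(NULL, FALSE, FALSE, NULL);
    HANDLE t = CreateThread(NULL, 0, Worker, NULL, 0, NULL);

    Sleep(100);           // simulate some preparatory work
    SetEvent(g_event);    // notify the worker that the event has occurred

    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);
    CloseHandle(g_event);
    return 0;
}
```

With an auto-reset event, releasing one waiter automatically returns the event to non-signaled; a manual-reset event would release all waiters until ResetEvent is called.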
Summary:

1. The mutex is functionally very similar to the critical section, but a mutex can be named and therefore used across processes, so creating one consumes more resources. If you only need synchronization within a single process, using the critical section gives a speed advantage and reduces resource usage. Because the mutex works across processes, once created it can be opened by name.
2. Mutex, semaphore, and event objects can all be used by processes to synchronize data; the other kernel objects have nothing to do with data synchronization. For processes and threads themselves, the object is non-signaled while the process or thread is running and becomes signaled after it exits, so you can use WaitForSingleObject to wait for a process or thread to exit.

On the Windows platform, the mechanisms used to protect synchronization among multiple threads (and processes) are basically the following:
1) Critical Section object; 2) Event object; 3) Mutex object; 4) Semaphore object.

The behavioral features discussed below describe the synchronization and protection mechanisms between concurrent threads in general, using Windows as the typical example. Based on these features we classify the four synchronization objects mentioned above; for the rest of the discussion, the four synchronization objects are collectively called "locks".

First, protection and synchronization.

It should be emphasized that protection and synchronization are two different concepts, though the two are often conflated. Protection refers to guarding a shared resource in a multi-threaded environment; in most cases the shared resource is a block of memory that many threads read and modify. Synchronization focuses more on collaboration between threads, and collaboration needs synchronization support.
By this distinction, the Critical Section object emphasizes protection, while the Event, Mutex, and Semaphore objects emphasize synchronization. However, this is only a conceptual difference and does not affect the program itself.
Second, lock wait timeout

When developing concurrent multi-process/multi-thread programs, the concept of a "wait timeout" is introduced to avoid problems such as deadlock: when a thread needs to acquire a lock before executing some code, it can set a timeout value on the wait. If it cannot obtain the lock within the specified time (the timeout value), it can give up executing that code segment, which avoids deadlock to a certain extent. That is the basic meaning of a lock wait timeout. By this behavioral feature, the four synchronization objects divide as follows: the critical section object cannot set a wait timeout, while the other three objects can. For this reason, the critical section object is relatively prone to deadlock, because no timeout can be set while waiting to enter the protected code segment.
Third, thread lock and process lock

Here, a thread lock means a lock visible only to the threads of a single process, while a process lock means a lock that different processes can access, usable for synchronization and mutual exclusion between processes. Of course, a process lock can still synchronize threads within one process; the process-lock concept is broader than the thread-lock concept. By this feature, the critical section object is a thread lock, and the other three objects are process locks. In essence, the critical section object is a user-mode synchronization mechanism, while the other three are kernel objects, and the kernel-object mechanism is far more widely applicable than the user-mode mechanism. Its only drawback is speed: calling into a kernel object requires a switch from user mode to kernel mode, an expensive and time-consuming transition. On the x86 platform the round trip costs about 1000 CPU cycles (not counting the kernel-mode code itself). Note, though, that using a critical section object does not mean a thread never executes in kernel mode: when a thread tries to enter a critical section owned by another thread, it enters the wait state, which requires the switch from user mode to kernel mode. (To improve performance here, Microsoft added a spin-lock option to the critical section object, so a thread can spin briefly instead of entering a kernel-mode wait; see MSDN for details.)
Fourth, recursive characteristics of locks

A recursive lock means that when a thread already holds a synchronization lock and tries to acquire the same lock again, the operation does not block the thread. Recursive locks arise mainly under the "protection" concept; the locks in that category are the Critical Section object and the Mutex object, and on Windows both are recursive. Note that the calling thread must release a recursive lock as many times as it acquired it.
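A sketch of recursive acquisition with a critical section (the function names `outer` and `inner` are illustrative): the owning thread re-enters without blocking, but must balance every Enter with a Leave before the lock is actually released:

```c
// Recursive acquisition sketch (Win32): the owning thread may re-enter
// a CRITICAL_SECTION; one Leave is required per Enter.
#include <windows.h>

static CRITICAL_SECTION g_cs;

static void inner(void)
{
    EnterCriticalSection(&g_cs);   // second acquisition by the same thread:
                                   // does not block (recursive lock)
    /* ... work on the shared resource ... */
    LeaveCriticalSection(&g_cs);   // first of the two required releases
}

static void outer(void)
{
    EnterCriticalSection(&g_cs);   // first acquisition
    inner();                       // recursive re-entry is allowed
    LeaveCriticalSection(&g_cs);   // only now is the lock truly released
}

int main(void)
{
    InitializeCriticalSection(&g_cs);
    outer();
    DeleteCriticalSection(&g_cs);
    return 0;
}
```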
Fifth, read/write locks

A read/write lock allows efficient concurrent access to a shared resource in a multi-threaded environment: multiple threads may hold the read lock at the same time and read the resource in shared mode, while only one thread at a time may hold the write lock to modify it. That is the concept of the read/write lock. Unfortunately, Windows offered no such read/write lock object among the four discussed here, so you had to implement one yourself (newer Windows versions do provide the slim reader/writer lock, SRWLOCK).
Summary

Personally, I think ACE is excellent teaching material if you want to study multi-threaded synchronization mechanisms in depth; there you will see what a Scoped Lock is and how a read/write Lock can be implemented.
