(Repost) Critical section, mutex, semaphore, and event: the differences (thread synchronization)

Four methods of synchronization and mutual-exclusion control for processes or threads:
1. Critical section: serializes multithreaded access to a public resource or a piece of code. It is fast and well suited to controlling access to data.
2. Mutex: designed to coordinate exclusive access to a shared resource.
3. Semaphore: designed to control access to a resource by a limited number of users.
4. Event: used to notify a thread that some event has occurred, so that a successor task can start.

Critical section: a critical section guarantees that only one thread can access the data at a time; only one thread is allowed to access the shared resource at any moment. If several threads try to enter the critical section at the same time, all threads that attempt access after one thread has entered are suspended, and they remain suspended until the thread inside the critical section leaves. When the critical section is released, the other threads can compete to enter it, and in this way atomic manipulation of the shared resource is achieved. The critical section has two operation primitives:
EnterCriticalSection()  enter the critical section
LeaveCriticalSection()  leave the critical section
Once the code following EnterCriticalSection() has entered the critical section, you must guarantee that the matching LeaveCriticalSection() is executed no matter what happens; otherwise the shared resource protected by the critical section will never be released. Critical-section synchronization is fast, but it can only synchronize threads within the current process, not threads in multiple processes.
MFC provides many fully functional classes, and I use MFC to implement the critical section. MFC provides the CCriticalSection class for critical sections, and using this class for thread synchronization is very simple: just use the CCriticalSection member functions Lock() and Unlock() in the thread function to delimit the protected code fragment. The resources used after Lock() are considered protected inside the critical section, and other threads can access them only after Unlock().
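Before the MFC wrapper is shown later, here is a minimal sketch of the raw Win32 critical-section primitives named above; the shared counter and worker routine are hypothetical and only for illustration.

#include <windows.h>

// Hypothetical shared data protected by the critical section.
CRITICAL_SECTION g_cs;
int g_sharedCounter = 0;

DWORD WINAPI WorkerThread(LPVOID /*param*/)
{
    EnterCriticalSection(&g_cs);      // enter the critical section
    ++g_sharedCounter;                // only one thread at a time runs this
    LeaveCriticalSection(&g_cs);      // always matched with EnterCriticalSection
    return 0;
}

int main()
{
    InitializeCriticalSection(&g_cs); // must be initialized before first use
    HANDLE h[2];
    h[0] = CreateThread(NULL, 0, WorkerThread, NULL, 0, NULL);
    h[1] = CreateThread(NULL, 0, WorkerThread, NULL, 0, NULL);
    WaitForMultipleObjects(2, h, TRUE, INFINITE);
    CloseHandle(h[0]);
    CloseHandle(h[1]);
    DeleteCriticalSection(&g_cs);     // release when no longer needed
    return 0;
}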
Mutex: a mutex is similar to a critical section; only the thread that owns the mutex object has permission to access the resource. Because there is only one mutex object, the shared resource will not be accessed by several threads at the same time under any circumstances. The thread that currently occupies the resource should hand over the mutex it owns after its task is done, so that other threads can acquire it and access the resource. The mutex is more complex than the critical section, because a mutex can share resources safely not only among threads of the same application but also among threads of different applications. The mutex has several operation primitives:
CreateMutex()  create a mutex
OpenMutex()  open a mutex
ReleaseMutex()  release a mutex
WaitForMultipleObjects()  wait for mutex objects
Similarly, MFC provides the CMutex class for mutexes. Using the CMutex class to implement mutex operations is straightforward, but pay particular attention to the call to CMutex's constructor, CMutex(BOOL bInitiallyOwn = FALSE, LPCTSTR lpszName = NULL, LPSECURITY_ATTRIBUTES lpsaAttribute = NULL): the parameters must not be filled in carelessly, or some unexpected results may appear.

Semaphore: a semaphore object synchronizes threads differently from the previous methods. Like the PV operation in operating-system theory, a semaphore allows multiple threads to use the shared resource at the same time. It specifies the maximum number of threads that may access the shared resource concurrently: multiple threads are allowed to access the same resource simultaneously, but the number of threads accessing it at the same time is limited to a maximum. When you create a semaphore with CreateSemaphore(), you specify both the maximum resource count and the currently available resource count. In general, the currently available count is set to the maximum count. Each time an additional thread accesses the shared resource, the currently available count is decremented by 1; the semaphore remains signaled as long as the currently available count is greater than 0. When the currently available count drops to 0, the number of threads occupying the resource has reached the allowed maximum, the semaphore is no longer signaled, and no further threads are allowed in. After a thread finishes with the shared resource, it should call ReleaseSemaphore() on leaving, which increments the currently available count by 1. At any time the currently available count is never greater than the maximum count.
The concepts of the PV operation and the semaphore were proposed by the Dutch scientist E. W. Dijkstra. The semaphore S is an integer: S greater than or equal to zero represents the number of resource entities available to concurrent processes, while S less than zero indicates the number of processes waiting to use the shared resource.
P operation (request a resource): (1) decrement S by 1; (2) if the result is still greater than or equal to zero, the process continues to execute; (3) if the result is less than zero, the process is blocked and placed in the queue associated with the semaphore, and control is transferred to the process scheduler.
V operation (release a resource): (1) increment S by 1; (2) if the result is greater than zero, the process continues to execute; (3) if the result is less than or equal to zero, a waiting process is awakened from the semaphore's waiting queue, and then the original process continues or control is transferred to the process scheduler.
The semaphore has several operation primitives:
CreateSemaphore()  create a semaphore
OpenSemaphore()  open a semaphore
ReleaseSemaphore()  release a semaphore
WaitForSingleObject()  wait for a semaphore
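To make the counting behaviour above concrete, here is a minimal Win32 sketch; the worker routine and the limit of three concurrent threads are assumptions for illustration. WaitForSingleObject() plays the role of the P operation and ReleaseSemaphore() the role of the V operation.

#include <windows.h>

// Hypothetical example: at most 3 threads may use the resource at once.
HANDLE g_sem;

DWORD WINAPI Worker(LPVOID /*param*/)
{
    WaitForSingleObject(g_sem, INFINITE);  // P operation: available count - 1
    Sleep(100);                            // pretend to use the shared resource
    ReleaseSemaphore(g_sem, 1, NULL);      // V operation: available count + 1
    return 0;
}

int main()
{
    // initial count = 3, maximum count = 3
    g_sem = CreateSemaphore(NULL, 3, 3, NULL);
    HANDLE h[5];
    for (int i = 0; i < 5; ++i)
        h[i] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForMultipleObjects(5, h, TRUE, INFINITE);
    for (int i = 0; i < 5; ++i)
        CloseHandle(h[i]);
    CloseHandle(g_sem);
    return 0;
}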
Event: the event object can also keep threads synchronized through notification, and it can synchronize threads in different processes. The event has several operation primitives:
CreateEvent()  create an event
OpenEvent()  open an event
SetEvent()  set the event to the signaled state
WaitForSingleObject()  wait for one event
WaitForMultipleObjects()  wait for multiple events
The prototype of WaitForMultipleObjects is:
WaitForMultipleObjects(
    IN DWORD nCount,             // number of handles to wait for
    IN CONST HANDLE *lpHandles,  // pointer to the handle array
    IN BOOL bWaitAll,            // whether to wait for all of them
    IN DWORD dwMilliseconds      // wait time
);
The parameter nCount specifies the number of kernel objects to wait for, and the array holding these kernel objects is pointed to by lpHandles. bWaitAll selects one of two wait modes for the nCount kernel objects: TRUE means the function returns only when all of the objects have been signaled, FALSE means it returns as soon as any one of them is signaled. dwMilliseconds plays exactly the same role here as in WaitForSingleObject(); if the wait times out, the function returns WAIT_TIMEOUT.
Summary:
1. The mutex is very similar to the critical section, but the mutex can be named, which means it can be used across processes. Creating a mutex therefore requires more resources, so if synchronization is needed only within one process, using a critical section brings a speed advantage and reduces resource consumption. Because a mutex is a cross-process object, once created it can be opened by name.
2. Mutexes, semaphores and events can all be used across processes to synchronize data operations, while the other kernel objects are unrelated to data synchronization; but for processes and threads, a running process or thread is in the non-signaled state and becomes signaled after it exits, so you can use WaitForSingleObject to wait for a process or thread to exit.
3. The mutex can be used to make a resource exclusive, but there are cases it cannot handle. For example, suppose a user buys a database system licensed for three concurrent accesses: the number of threads/processes that may perform database operations at the same time is determined by the number of access licenses purchased. Mutual exclusion alone cannot satisfy this requirement, whereas the semaphore can, because the semaphore object is essentially a resource counter.
Some further notes on each of the four mechanisms:
1. Event: using events to synchronize threads is the most flexible method. An event has two states: the signaled state and the non-signaled state, also called the fired and non-fired state. There are two kinds of events: manual-reset events and auto-reset events. When a manual-reset event is set to the signaled state, all waiting threads are awakened, and the event stays signaled until the program resets it to the non-signaled state. When an auto-reset event is set to the signaled state, one waiting thread is awakened and the event then automatically reverts to the non-signaled state; an auto-reset event is therefore ideal for synchronizing two threads. The corresponding MFC class is CEvent; the CEvent constructor creates an auto-reset event by default, initially non-signaled. Three functions change the state of an event: SetEvent, ResetEvent and PulseEvent. Synchronizing threads with events is convenient, but note that using SetEvent and PulseEvent with auto-reset events can cause deadlocks, so they must be used with care.
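As a small illustration of the auto-reset events and WaitForMultipleObjects() described above, here is a minimal sketch (the two worker routines are hypothetical): the main thread blocks until both workers have signaled their events.

#include <windows.h>

HANDLE g_events[2];   // two auto-reset events, initially non-signaled

DWORD WINAPI WorkerA(LPVOID)
{
    // ... do work A ...
    SetEvent(g_events[0]);   // signal: work A is finished
    return 0;
}

DWORD WINAPI WorkerB(LPVOID)
{
    // ... do work B ...
    SetEvent(g_events[1]);   // signal: work B is finished
    return 0;
}

int main()
{
    for (int i = 0; i < 2; ++i)
        g_events[i] = CreateEvent(NULL, FALSE, FALSE, NULL); // auto-reset, non-signaled
    HANDLE t[2];
    t[0] = CreateThread(NULL, 0, WorkerA, NULL, 0, NULL);
    t[1] = CreateThread(NULL, 0, WorkerB, NULL, 0, NULL);
    // bWaitAll = TRUE: return only when both events have been signaled
    WaitForMultipleObjects(2, g_events, TRUE, INFINITE);
    WaitForMultipleObjects(2, t, TRUE, INFINITE);
    for (int i = 0; i < 2; ++i) { CloseHandle(g_events[i]); CloseHandle(t[i]); }
    return 0;
}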
2. Critical section: the first piece of advice for using a critical section is not to lock a resource for a long time. How long "a long time" is depends on the program: for some control software it may be a few milliseconds, while for other programs it may be as long as a few minutes. In any case, after entering the critical section the resource must be released as soon as possible. What happens if it is not released? If it is the main (GUI) thread that enters a critical section that is never released, the program simply hangs! A disadvantage of the critical section is that it is not a kernel object: the system does not know whether the thread that entered the critical section is alive or dead, and if that thread dies without releasing the critical resource, the system cannot be informed and has no way to release it. This drawback is remedied by the mutex. The MFC class corresponding to the critical section is CCriticalSection: CCriticalSection::Lock() enters the critical section and CCriticalSection::Unlock() leaves it.
3. Mutex: the function of the mutex is very similar to that of the critical section. The difference is that the mutex costs more time than the critical section, but the mutex is a kernel object (as are Event and Semaphore), it can be used across processes, and a wait on a locked mutex can specify a timeout, so it will not, like the critical section, wait forever knowing nothing about the state of the critical region. The corresponding MFC class is CMutex. The Win32 functions are: CreateMutex() to create a mutex, OpenMutex() to open a mutex, and ReleaseMutex() to release a mutex. The ownership of a mutex does not belong to the thread that created it, but to the thread that last waited on the mutex (WaitForSingleObject and so on) and has not yet called ReleaseMutex(). A thread owning the mutex is like a thread inside a critical section: only one thread at a time can own the mutex. If the thread that owns a mutex ends without calling ReleaseMutex(), the mutex is abandoned; however, when another thread waits on it (WaitForSingleObject, etc.), the wait can still return, with the return value WAIT_ABANDONED_0. Being able to tell that a mutex was abandoned is unique to the mutex.
4. Semaphore: the semaphore is the synchronization mechanism with the longest history, and it is the key to solving the producer/consumer problem. The corresponding MFC class is CSemaphore. The Win32 function CreateSemaphore() creates the semaphore and ReleaseSemaphore() releases a lock. The current value of the semaphore is the number of currently available resources: if the current value is 1, one lock operation can succeed; if it is 5, five lock operations can succeed. When a Wait... function is called to lock, if the semaphore's current value is not 0 the wait returns immediately and the resource count is decremented by 1; when ReleaseSemaphore() is called the resource count is incremented by 1, never exceeding the total number of resources set initially.

Four methods of process/thread synchronization and mutual-exclusion control. I would like to tidy up my understanding of process/thread synchronization and mutual exclusion here. It happened that around Children's Day I had dinner with some classmates who had just come back to school. During the meal two of them argued until they were red in the face over one question: one thought the process/thread control model under .NET is more reasonable; the other thought the thread-pool strategy under Java is better than .NET's.
The conversation turned to the problem of process/thread synchronization and mutual-exclusion control. Back home, I thought it over and wrote this up. The popular process/thread synchronization and mutual-exclusion mechanisms of today are, in fact, built on the four most primitive and basic methods; combining and optimizing these four methods yields the flexible, easy-to-program thread/process control facilities of .NET and Java. The four methods are defined as follows in the "Operating System Tutorial" (ISBN 7-5053-6193-7), where a more detailed explanation can be found:
1. Critical section: serializes multithreaded access to a public resource or a piece of code. It is fast and well suited to controlling access to data.
2. Mutex: designed to coordinate exclusive access to a shared resource.
3. Semaphore: designed to control access to a resource by a limited number of users.
4. Event: used to notify a thread that some event has occurred, so that a successor task can start.
Critical section: a critical section guarantees that only one thread can access the data at a time; only one thread is allowed to access the shared resource at any moment. If several threads try to enter the critical section at the same time, all threads that attempt access after one thread has entered are suspended, and they remain suspended until the thread inside the critical section leaves. When the critical section is released, the other threads can compete to enter it, and in this way atomic manipulation of the shared resource is achieved. The critical section has two operation primitives: EnterCriticalSection() enters the critical section and LeaveCriticalSection() leaves it. Once the code following EnterCriticalSection() has entered the critical section, you must ensure that the matching LeaveCriticalSection() is executed no matter what happens; otherwise the shared resource protected by the critical section will never be released. Critical-section synchronization is fast, but it can only synchronize threads within the current process, not threads in multiple processes.
MFC provides many fully functional classes, and I use MFC to implement the critical section. MFC provides the CCriticalSection class for critical sections, and using this class for thread synchronization is very simple: just use the CCriticalSection member functions Lock() and Unlock() in the thread function to delimit the protected code fragment. The resources used after Lock() are considered protected inside the critical section, and other threads can access them only after Unlock().
// Critical section
CCriticalSection global_criticalsection;
// Shared resource
char global_array[256];
// Initialize the shared resource
void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}
// Write thread
UINT Global_ThreadWrite(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Enter the critical section
    global_criticalsection.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'W';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Leave the critical section
    global_criticalsection.Unlock();
    return 0;
}
// Delete thread
UINT Global_ThreadDelete(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Enter the critical section
    global_criticalsection.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'D';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Leave the critical section
    global_criticalsection.Unlock();
    return 0;
}
// Create and start the threads
void CCriticalSectionsDlg::OnBnClickedButtonLock()
{
    // Start the first thread
    CWinThread *ptrWrite = AfxBeginThread(Global_ThreadWrite, &m_write,
        THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
    ptrWrite->ResumeThread();
    // Start the second thread
    CWinThread *ptrDelete = AfxBeginThread(Global_ThreadDelete, &m_delete,
        THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
    ptrDelete->ResumeThread();
}
In the test program, two buttons, Lock and Unlock, show the execution state of the shared resource with and without the protection of the critical section. Program run result.
Mutex: a mutex is similar to a critical section; only the thread that owns the mutex object has permission to access the resource. Because there is only one mutex object, the shared resource will not be accessed by several threads at the same time under any circumstances. The thread that currently occupies the resource should hand over the mutex it owns after its task is done, so that other threads can acquire it and access the resource. The mutex is more complex than the critical section, because a mutex can share resources safely not only among threads of the same application but also among threads of different applications. The mutex has several operation primitives: CreateMutex() creates a mutex, OpenMutex() opens a mutex, ReleaseMutex() releases a mutex, and WaitForMultipleObjects() waits for mutex objects. Similarly, MFC provides the CMutex class for mutexes. Using the CMutex class to implement mutex operations is straightforward, but pay particular attention to the call to CMutex's constructor, CMutex(BOOL bInitiallyOwn = FALSE, LPCTSTR lpszName = NULL, LPSECURITY_ATTRIBUTES lpsaAttribute = NULL): the parameters must not be filled in carelessly, or some unexpected results may appear.
// Create the mutex
CMutex global_mutex(0, 0, 0);
// Shared resource
char global_array[256];
void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}
UINT Global_ThreadWrite(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    global_mutex.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'W';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    global_mutex.Unlock();
    return 0;
}
UINT Global_ThreadDelete(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    global_mutex.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'D';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    global_mutex.Unlock();
    return 0;
}
Again in the test program, the Lock and Unlock buttons show the execution state of the shared resource with and without the protection of the mutex. Program run result.
Semaphore: a semaphore object synchronizes threads differently from the previous methods. Like the PV operation in operating-system theory, a semaphore allows multiple threads to use the shared resource at the same time. It specifies the maximum number of threads that may access the shared resource concurrently: multiple threads are allowed to access the same resource simultaneously, but the number of threads accessing it at the same time is limited to a maximum. When you create a semaphore with CreateSemaphore(), you specify both the maximum resource count and the currently available resource count. In general, the currently available count is set to the maximum count. Each time an additional thread accesses the shared resource, the currently available count is decremented by 1; the semaphore remains signaled as long as the currently available count is greater than 0. When the currently available count drops to 0, the number of threads occupying the resource has reached the allowed maximum, the semaphore is no longer signaled, and no further threads are allowed in. After a thread finishes with the shared resource, it should call ReleaseSemaphore() on leaving, which increments the currently available count by 1. At any time the currently available count is never greater than the maximum count.
The concepts of the PV operation and the semaphore were proposed by the Dutch scientist E. W. Dijkstra. The semaphore S is an integer: S greater than or equal to zero represents the number of resource entities available to concurrent processes, while S less than zero indicates the number of processes waiting to use the shared resource.
P operation (request a resource): (1) decrement S by 1; (2) if the result is still greater than or equal to zero, the process continues to execute; (3) if the result is less than zero, the process is blocked and placed in the queue associated with the semaphore, and control is transferred to the process scheduler.
V operation (release a resource): (1) increment S by 1; (2) if the result is greater than zero, the process continues to execute; (3) if the result is less than or equal to zero, a waiting process is awakened from the semaphore's waiting queue, and then the original process resumes execution or control is transferred to the process scheduler.
The semaphore has several operation primitives: CreateSemaphore() creates a semaphore, OpenSemaphore() opens a semaphore, ReleaseSemaphore() releases a semaphore, and WaitForSingleObject() waits for a semaphore.
// Semaphore handle
HANDLE global_semephore;
// Shared resource
char global_array[256];
void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}
// Thread 1
UINT Global_ThreadOne(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Wait for the shared resource; the request is equivalent to the P operation
    WaitForSingleObject(global_semephore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'O';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Release the shared resource; equivalent to the V operation
    ReleaseSemaphore(global_semephore, 1, NULL);
    return 0;
}
UINT Global_ThreadTwo(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    WaitForSingleObject(global_semephore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'T';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    ReleaseSemaphore(global_semephore, 1, NULL);
    return 0;
}
UINT Global_ThreadThree(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    WaitForSingleObject(global_semephore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'H';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    ReleaseSemaphore(global_semephore, 1, NULL);
    return 0;
}
void CSemaphoreDlg::OnBnClickedButtonOne()
{
    // Semaphore with 1 resource: only one thread can access at a time
    global_semephore = CreateSemaphore(NULL, 1, 1, NULL);
    this->StartThread();
    // TODO: Add your control notification handler code here
}
void CSemaphoreDlg::OnBnClickedButtonTwo()
{
    // Semaphore with 2 resources: only two threads can access at a time
    global_semephore = CreateSemaphore(NULL, 2, 2, NULL);
    this->StartThread();
    // TODO: Add your control notification handler code here
}
void CSemaphoreDlg::OnBnClickedButtonThree()
{
    // Semaphore with 3 resources: only three threads can access at a time
    global_semephore = CreateSemaphore(NULL, 3, 3, NULL);
    this->StartThread();
    // TODO: Add your control notification handler code here
}
The way semaphores are used makes them well suited to synchronizing the threads of a socket program. For example, an HTTP server that restricts the number of users accessing the same page at the same time can create a thread for each page request made to the server, the page being the shared resource to be protected. By synchronizing the threads with a semaphore, you can guarantee that at any moment no more than the configured maximum number of threads access the page, while the other attempts are suspended and can enter only after one of the users leaves the page. Program run result.
Event: the event object can also keep threads synchronized through notification, and it can synchronize threads in different processes.
The event has several operation primitives: CreateEvent() creates an event, OpenEvent() opens an event, SetEvent() sets the event to the signaled state, WaitForSingleObject() waits for one event, and WaitForMultipleObjects() waits for multiple events.
The prototype of WaitForMultipleObjects is:
WaitForMultipleObjects(
    IN DWORD nCount,             // number of handles to wait for
    IN CONST HANDLE *lpHandles,  // pointer to the handle array
    IN BOOL bWaitAll,            // whether to wait for all of them
    IN DWORD dwMilliseconds      // wait time
);
The parameter nCount specifies the number of kernel objects to wait for, and the array holding these kernel objects is pointed to by lpHandles. bWaitAll selects one of two wait modes for the nCount kernel objects: TRUE means the function returns only when all of the objects have been signaled, FALSE means it returns as soon as any one of them is signaled. dwMilliseconds plays exactly the same role here as in WaitForSingleObject(); if the wait times out, the function returns WAIT_TIMEOUT.
// Event array
HANDLE global_events[2];
// Shared resource
char global_array[256];
void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}
UINT Global_ThreadOne(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'O';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Set the event to signaled
    SetEvent(global_events[0]);
    return 0;
}
UINT Global_ThreadTwo(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'T';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Set the event to signaled
    SetEvent(global_events[1]);
    return 0;
}
UINT Global_ThreadThree(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Wait for both events to be signaled
    WaitForMultipleObjects(2, global_events, TRUE, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'H';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    return 0;
}
void CEventDlg::OnBnClickedButtonStart()
{
    for (int i = 0; i < 2; i++)
    {
        // Create the events
        global_events[i] = CreateEvent(NULL, FALSE, FALSE, NULL);
    }
    // Start the first thread
    CWinThread *ptrOne = AfxBeginThread(Global_ThreadOne, &m_one,
        THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
    ptrOne->ResumeThread();
    // Start the second thread
    CWinThread *ptrTwo = AfxBeginThread(Global_ThreadTwo, &m_two,
        THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
    ptrTwo->ResumeThread();
    // Start the third thread
    CWinThread *ptrThree = AfxBeginThread(Global_ThreadThree, &m_three,
        THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
    ptrThree->ResumeThread();
    // TODO: Add your control notification handler code here
}
Events can synchronize threads in different processes, and they also make it easy to implement prioritized waits on multiple threads; for example, writing several WaitForSingleObject calls instead of one WaitForMultipleObjects call makes the programming more flexible. Program run result.
Summary:
1. The mutex is very similar to the critical section, but the mutex can be named, which means it can be used across processes. Creating a mutex therefore requires more resources, so if synchronization is needed only within one process, using a critical section brings a speed advantage and reduces resource consumption. Because a mutex is a cross-process object, once created it can be opened by name.
2. Mutexes, semaphores and events can all be used across processes to synchronize data operations, while the other kernel objects are unrelated to data synchronization; but for processes and threads, a running process or thread is in the non-signaled state and becomes signaled after it exits.
So you can use WaitForSingleObject to wait for a process or thread to exit.
3. The mutex can be used to make a resource exclusive, but there are cases it cannot handle. For example, suppose a user buys a database system licensed for three concurrent accesses: the number of threads/processes that may perform database operations at the same time is determined by the number of access licenses purchased. Mutual exclusion alone cannot satisfy this requirement, whereas the semaphore can, because the semaphore object is essentially a resource counter.
Question: there are two kinds of semaphores on Linux. The first is the SVR4 (System V Release 4) semaphore, defined by the semget/semop/semctl API. The second is the POSIX interface, defined by sem_init/sem_wait/sem_post. They provide the same functionality, but the interfaces differ. In the 2.4.x kernel the semaphore data structure is defined in include/asm/semaphore.h. Linux, however, has no specific notion of a mutex; the mutex is only a special case of the semaphore: when the maximum resource count of the semaphore is 1, so that only one thread can access the shared resource, it is a mutex. The definition of the critical section is likewise vague, and I found no information on using event objects for thread/process synchronization and mutual exclusion. In standard C/C++ code compiled with gcc/g++ on Linux, semaphore operations are almost the same as in Windows VC7 programming and migrate smoothly without changes, but the migration of the mutex, event and critical-section code to Linux did not succeed. All the examples in this article were compiled under Windows XP SP2 + VC7.
The best way to synchronize all threads within a process is the critical section, which is not system-level but process-level; that is, it presumably uses some flags inside the process to guarantee thread synchronization within the process. According to Richter, it is a counting loop. The critical section can only be used within the same process, and it can only wait indefinitely, although Windows 2000 adds the TryEnterCriticalSection function to achieve a zero-time wait. Mutual exclusion guarantees synchronization of threads across processes; it uses system kernel objects to ensure synchronization. Because a system kernel object can have a name, such name-based kernel objects can be used between multiple processes to keep system resources thread-safe. The mutex is a Win32 kernel object managed by the operating system, and with a mutex WaitForSingleObject can implement an infinite wait, a zero-time wait, or a wait of any length.
1. Critical section: the critical section is the most direct thread-synchronization method. A critical section is a piece of code that only one thread can execute at a time. If the code that initializes the array is placed in a critical section, another thread will not execute it until the first thread has finished. Before a critical section can be used it must be initialized with InitializeCriticalSection(). After the first thread calls EnterCriticalSection(), no other thread can enter that code block; the next thread waits until the first thread calls LeaveCriticalSection() before it wakes up.
2. Mutual exclusion: very similar to a critical section, except for two key differences. First, mutexes can be used to synchronize threads across processes.
Second, a mutex can be given a string name, and an additional handle to an existing mutex can be created by referring to that name. Tip: the biggest difference between a critical section and kernel objects such as the mutex and the event is performance. When there is no thread contention, a critical section costs about 10 to 15 time slices, while an event object costs 400 to 600 time slices because the system kernel is involved. When a mutex is not owned by any thread it is signaled. The first thread to call the WaitForSingleObject() function becomes the owner of the mutex, and the mutex is set to the non-signaled state. When a thread calls the ReleaseMutex() function and passes the handle of the mutex as the parameter, the ownership is relinquished and the mutex returns to the signaled state. You can call CreateMutex() to create a mutex; when the mutex object is no longer needed, call CloseHandle() to close it.
3. Semaphore: another technique for synchronizing threads is the semaphore object. It is built on top of mutual exclusion, but the semaphore adds resource counting, so that a predetermined number of threads are allowed into the code to be synchronized at the same time. You can create a semaphore object with CreateSemaphore(); because only one thread is allowed into the code to be synchronized in that case, the maximum semaphore count (lMaximumCount) is set to 1. The ReleaseSemaphore() function adds 1 to the semaphore object's count. Finally, remember to call the CloseHandle() function to release the handle of the semaphore object created by CreateSemaphore().
Return values of the WaitForSingleObject function:
WAIT_ABANDONED: the specified object is a mutex, and the thread that owned it terminated before releasing it; the mutex is said to be abandoned. In this case ownership of the mutex is granted to the calling thread and the mutex is set to the non-signaled state.
WAIT_OBJECT_0: the specified object is signaled.
WAIT_TIMEOUT: the wait time has elapsed and the object is still non-signaled.
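A minimal sketch of how these return values might be checked when waiting on a mutex; the handle name, timeout value and helper function are hypothetical.

#include <windows.h>
#include <cstdio>

// Hypothetical: wait up to 5 seconds for a mutex created elsewhere with CreateMutex().
void UseResource(HANDLE hMutex)
{
    DWORD r = WaitForSingleObject(hMutex, 5000);
    switch (r)
    {
    case WAIT_OBJECT_0:      // the mutex was signaled; this thread now owns it
        // ... access the shared resource ...
        ReleaseMutex(hMutex);
        break;
    case WAIT_ABANDONED:     // previous owner died without calling ReleaseMutex();
                             // this thread still gets ownership, but the data may be inconsistent
        // ... validate or repair the shared resource ...
        ReleaseMutex(hMutex);
        break;
    case WAIT_TIMEOUT:       // the wait timed out; this thread does not own the mutex
        printf("timed out waiting for the mutex\n");
        break;
    }
}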