Differences between critical sections, events, mutexes, and semaphores in Windows


Critical Section

A critical section is a convenient way to ensure that only one thread can access a piece of data at any given time: only one thread at a time is allowed to access the shared resource. If several threads attempt to enter the critical section simultaneously, all but one are suspended, and they remain suspended until the thread inside the critical section leaves it. Once the critical section is released, the other threads can compete to enter it, so the shared resource is used in an atomic fashion.

The critical section involves two operation primitives: EnterCriticalSection(), which enters the critical section, and LeaveCriticalSection(), which leaves it.

No matter what happens in the code after the EnterCriticalSection() statement enters the critical section, you must make sure the matching LeaveCriticalSection() is executed; otherwise the shared resource protected by the critical section will never be released. A critical section should generally not be held for a long time: as long as the thread that entered it has not left, every other thread attempting to enter is suspended and placed in a waiting state, which affects program performance to some extent. In particular, do not include operations that wait for user input or other external intervention inside a critical section; entering the critical section without releasing it promptly also forces other threads to wait for a long time. In other words, whatever happens after EnterCriticalSection(), ensure that the matching LeaveCriticalSection() runs. You can wrap the protected code in structured exception handling to guarantee that the LeaveCriticalSection() statement executes. Although synchronization with a critical section is fast, it can only synchronize threads within the current process; it cannot synchronize threads across multiple processes.
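As a rough illustration of this pattern, here is a minimal sketch (the names g_cs, g_sharedCounter, and WorkerThread are illustrative, not from the original text), using structured exception handling so that LeaveCriticalSection() always runs:

#include <windows.h>

CRITICAL_SECTION g_cs;      // initialize once with InitializeCriticalSection(&g_cs)
long g_sharedCounter = 0;   // shared resource protected by g_cs

DWORD WINAPI WorkerThread(LPVOID)
{
    EnterCriticalSection(&g_cs);
    __try
    {
        // keep the protected region short; no user input or long waits here
        ++g_sharedCounter;
    }
    __finally
    {
        // structured exception handling guarantees the matching leave call runs
        LeaveCriticalSection(&g_cs);
    }
    return 0;
}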

MFC provides many functions and complete classes; here I use MFC to work with the critical section. MFC wraps the critical section in the CCriticalSection class, which makes thread synchronization very easy: simply bracket the protected code in the thread function with the CCriticalSection member functions Lock() and Unlock(). Resources used by the code after Lock() are automatically treated as protected inside the critical section, and after Unlock() other threads can access these resources again.
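A minimal MFC sketch of that usage (the names g_critSect, g_sharedValue, and the thread function are illustrative):

#include <afxmt.h>          // MFC synchronization classes

CCriticalSection g_critSect;
int g_sharedValue = 0;      // resource protected between Lock() and Unlock()

UINT WorkerThread(LPVOID)
{
    g_critSect.Lock();      // code after Lock() is treated as inside the critical section
    ++g_sharedValue;
    g_critSect.Unlock();    // other threads may now access the resource
    return 0;
}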

Mutex

A mutex is a widely used kernel object that lets multiple threads access the same shared resource in a mutually exclusive way. As with the critical section, only the thread that owns the mutex object has permission to access the resource; because there is only one mutex object, the shared resource is guaranteed never to be accessed by several threads at the same time under any circumstances. The thread occupying the resource should hand over the mutex object once its work is done, so that other threads can acquire it and access the resource in turn. Unlike other kernel objects, mutex objects have special code in the operating system and are managed by it; the operating system even allows them to perform unconventional operations that other kernel objects are not permitted to perform. A mutex is more complex than a critical section, because it can achieve safe resource sharing not only between threads of the same application but also between threads of different applications.

The functions used for thread synchronization with a mutex kernel object include CreateMutex(), OpenMutex(), ReleaseMutex(), WaitForSingleObject(), and WaitForMultipleObjects(). Before using a mutex object, you must first create or open one through CreateMutex() or OpenMutex(). The prototype of CreateMutex() is:

HANDLE CreateMutex(
    LPSECURITY_ATTRIBUTES lpMutexAttributes, // security attributes pointer
    BOOL bInitialOwner,                      // initial owner
    LPCTSTR lpName                           // mutex object name
);

The bInitialOwner parameter controls the initial state of the mutex object. It is usually set to FALSE, indicating that the mutex is not owned by any thread at creation time. If a name is given when the mutex object is created, its handle can be obtained elsewhere in the same process, or from other processes, through the OpenMutex() function. The prototype of OpenMutex() is:

HANDLE OpenMutex(
    DWORD dwDesiredAccess, // access flag
    BOOL bInheritHandle,   // inheritance flag
    LPCTSTR lpName         // mutex object name
);
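Putting the two prototypes together, a sketch of how one process might create a named mutex and another process open it (the name "MySharedMutex" is an illustrative assumption):

#include <windows.h>

// In the creating process:
HANDLE hMutex = CreateMutex(NULL,                    // default security attributes
                            FALSE,                   // not owned by any thread at creation
                            TEXT("MySharedMutex"));  // name visible to other processes

// In another process:
HANDLE hSameMutex = OpenMutex(MUTEX_ALL_ACCESS,      // access flag
                              FALSE,                 // handle is not inheritable
                              TEXT("MySharedMutex"));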

When a thread that currently has access to the resource no longer needs it and is about to leave, it must release its mutex object through the ReleaseMutex() function, whose prototype is:

BOOL ReleaseMutex(HANDLE hMutex);

Its only parameter, hMutex, is the handle of the mutex to be released. As for WaitForSingleObject() and WaitForMultipleObjects(), their role in thread synchronization with mutex objects is basically the same as with other kernel objects: they wait for the mutex kernel object to be signaled. It should be pointed out, however, that the return value of the wait function is not always the normal WAIT_OBJECT_0 (for WaitForSingleObject()) or a value between WAIT_OBJECT_0 and WAIT_OBJECT_0 + nCount - 1 (for WaitForMultipleObjects()); it may instead be WAIT_ABANDONED_0 (for WaitForSingleObject()) or a value between WAIT_ABANDONED_0 and WAIT_ABANDONED_0 + nCount - 1 (for WaitForMultipleObjects()). This indicates that the mutex the thread was waiting for was owned by another thread that terminated before it finished using the shared resource. In addition, mutex objects differ from other kernel objects in how waiting threads are scheduled: a thread waiting on a kernel object that is never signaled stays suspended and out of scheduling, whereas a thread waiting on an abandoned mutex is still woken up and granted ownership. This is one of the unconventional operations that only mutex objects can perform.

In programming, mutex objects are mostly used to protect memory blocks accessed by multiple threads; the mutex ensures that any thread has reliable, exclusive access to the memory block while it is working on it.
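A minimal sketch of that usage, including a check for the abandoned-mutex return value discussed above (the names hMutex, g_buffer, and Worker are illustrative):

#include <windows.h>

HANDLE hMutex;              // created elsewhere with CreateMutex()
int    g_buffer[64];        // shared memory block protected by the mutex

DWORD WINAPI Worker(LPVOID)
{
    DWORD wait = WaitForSingleObject(hMutex, INFINITE);
    if (wait == WAIT_OBJECT_0 || wait == WAIT_ABANDONED)
    {
        // WAIT_ABANDONED means the previous owner terminated without releasing;
        // this thread now owns the mutex, but the protected data may be inconsistent
        ++g_buffer[0];
        ReleaseMutex(hMutex);   // hand the mutex back so other threads can enter
    }
    return 0;
}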

In MFC, the mutex object is represented by the CMutex class. The CMutex class is very simple to use: when constructing a CMutex object you can specify the name of the mutex object to be created or opened, and once the constructor returns you can access this mutex. The CMutex class also has only one member function of its own, the constructor; when you have finished accessing the mutex object, call the Unlock() function inherited from the parent class CSyncObject to release it. The prototype of the CMutex constructor is:

CMutex(BOOL bInitiallyOwn = FALSE, LPCTSTR lpszName = NULL, LPSECURITY_ATTRIBUTES lpsaAttribute = NULL);

The applicability and implementation principle of this class are similar to those of the mutex kernel object created through the API, but its use is more concise.
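A minimal CMutex sketch under those assumptions (the mutex name and function name are illustrative):

#include <afxmt.h>

CMutex g_mutex(FALSE, _T("MySharedMutex"));   // named, so other processes can open it

void TouchSharedResource()
{
    g_mutex.Lock();       // wait for and acquire the mutex
    // ... access the shared resource here ...
    g_mutex.Unlock();     // Unlock() comes from the CSyncObject hierarchy
}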

Semaphore

A semaphore object synchronizes threads in a way that differs from the previous methods: it allows multiple threads to use the shared resource at the same time, just like the PV operations in operating-system theory. It specifies the maximum number of threads that may access the shared resource simultaneously; multiple threads are allowed to access the same resource at once, but the number doing so at any time is capped. When creating a semaphore with CreateSemaphore(), you must specify both the maximum allowed resource count and the currently available resource count; the currently available count is usually set equal to the maximum. Each time a thread is admitted to the shared resource, the currently available count is decremented by 1; as long as this count remains greater than 0, the semaphore can be signaled. When the available count drops to 0, the number of threads occupying the resource has reached the allowed maximum and no further threads can enter; at that point the semaphore cannot be signaled. After it finishes with the shared resource, a thread should call ReleaseSemaphore() on its way out to increment the currently available count by 1. The currently available resource count can never exceed the maximum resource count. A semaphore controls access to a resource by counting; in fact, semaphores are also called Dijkstra counters.

PV operations and semaphores were both proposed by the Dutch scientist E. W. Dijkstra. The semaphore S is an integer. When S is greater than or equal to zero, it represents the number of resource entities available to concurrent processes; when S is less than zero, its absolute value indicates the number of processes waiting to use the shared resource.

The P operation requests a resource:
(1) Decrement S by 1;
(2) If the result is still greater than or equal to zero, the process continues to run;
(3) If the result is less than zero, the process is blocked and placed in the queue associated with the semaphore, and control transfers to process scheduling.

The V operation releases a resource:
(1) Increment S by 1;
(2) If the result is greater than zero, the process continues to execute;
(3) If the result is less than or equal to zero, one waiting process is awakened from the semaphore's waiting queue, and then the original process either continues or control transfers to process scheduling.
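The following is a conceptual sketch of just the counting rule described above; it is not Windows code, and the blocking and wake-up steps are indicated only in comments:

struct Semaphore { int s; };   // S as defined above

void P(Semaphore& sem)         // request a resource
{
    sem.s -= 1;
    if (sem.s < 0)
    {
        // more requesters than resources: the caller would block here
        // and be placed on the semaphore's waiting queue
    }
}

void V(Semaphore& sem)         // release a resource
{
    sem.s += 1;
    if (sem.s <= 0)
    {
        // at least one process is waiting: one of them would be woken up here
    }
}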

Thread synchronization with a semaphore kernel object uses functions such as CreateSemaphore(), OpenSemaphore(), ReleaseSemaphore(), WaitForSingleObject(), and WaitForMultipleObjects(). CreateSemaphore() creates a semaphore kernel object; its prototype is:

HANDLE CreateSemaphore(
    LPSECURITY_ATTRIBUTES lpSemaphoreAttributes, // security attributes pointer
    LONG lInitialCount,                          // initial count
    LONG lMaximumCount,                          // maximum count
    LPCTSTR lpName                               // object name pointer
);

The lMaximumCount parameter is a signed 32-bit value that defines the maximum allowed resource count, so it cannot exceed 2,147,483,647. The lpName parameter can give the created semaphore a name; because a kernel object is being created, this semaphore can be obtained by that name from other processes. The OpenSemaphore() function opens a semaphore created by another process, given its name. Its prototype is as follows:

HANDLE OpenSemaphore(
    DWORD dwDesiredAccess, // access flag
    BOOL bInheritHandle,   // inheritance flag
    LPCTSTR lpName         // semaphore name
);

When a thread leaves the shared resource, it must call ReleaseSemaphore() to increase the currently available resource count. Otherwise, even though the number of threads actually using the shared resource has not reached the intended limit, other threads still cannot enter because the currently available count has stayed at 0. The prototype of ReleaseSemaphore() is:

BOOL ReleaseSemaphore(
    HANDLE hSemaphore,       // semaphore handle
    LONG lReleaseCount,      // amount by which to increase the count
    LPLONG lpPreviousCount   // previous count
);

This function adds the value of lReleaseCount to the semaphore's current resource count. lReleaseCount is normally set to 1, but other values can be used if needed. WaitForSingleObject() and WaitForMultipleObjects() are mainly used at the entry point of a thread function trying to enter the shared resource; they determine whether the semaphore's currently available resource count allows the thread to enter. Only when the currently available count is greater than 0 will the monitored semaphore kernel object be signaled.

The way semaphores are used makes them well suited to synchronizing threads in socket programs. For example, if an HTTP server needs to limit the number of users accessing the same page at the same time, a thread can be created on the server for each user request, with the page being the shared resource to protect. By using a semaphore to synchronize these threads, no matter how many users request the page at any moment, only up to the allowed maximum number of threads can access it, while other access attempts are suspended; a suspended thread can access the page only after some user has left it.
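A minimal sketch of that idea with the Win32 semaphore functions (the limit of 3 concurrent threads and all names are illustrative):

#include <windows.h>

HANDLE g_hSemaphore;   // created once at startup

void InitPageLimit()
{
    // currently available count equals the maximum count, as recommended above
    g_hSemaphore = CreateSemaphore(NULL, 3, 3, NULL);
}

DWORD WINAPI HandleRequest(LPVOID)
{
    WaitForSingleObject(g_hSemaphore, INFINITE);   // decrements the available count
    // ... serve the page; at most 3 threads are in this region at once ...
    ReleaseSemaphore(g_hSemaphore, 1, NULL);       // gives the count back on the way out
    return 0;
}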

In MFC, semaphores are represented by the CSemaphore class. This class has only one constructor, which constructs a semaphore object and initializes its initial resource count, maximum resource count, object name, and security attributes. Its prototype is as follows:

CSemaphore(LONG lInitialCount = 1, LONG lMaxCount = 1, LPCTSTR pstrName = NULL, LPSECURITY_ATTRIBUTES lpsaAttributes = NULL);

After a CSemaphore object has been constructed, any thread that accesses the protected shared resource must call Lock() and Unlock(), which CSemaphore inherits from its parent class CSyncObject, to gain access to or release the CSemaphore object. As with the MFC classes described earlier for maintaining thread synchronization, the preceding thread-synchronization code can also be rewritten with the CSemaphore class; the two semaphore-based approaches are completely consistent in both implementation principle and result.
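A minimal CSemaphore sketch of the same pattern (the count of 3 is an illustrative choice):

#include <afxmt.h>

CSemaphore g_sem(3, 3);      // initial count = maximum count = 3

void AccessSharedResource()
{
    g_sem.Lock();            // Lock() and Unlock() are inherited from CSyncObject
    // ... at most 3 threads execute this region at the same time ...
    g_sem.Unlock();
}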

Event

Event objects can also synchronize threads by means of notification operations, and they can additionally synchronize threads in different processes.

The event object involves several operation primitives:
CreateEvent() creates an event
OpenEvent() opens an event
SetEvent() sets (signals) an event
WaitForSingleObject() waits for one event
WaitForMultipleObjects() waits for multiple events
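A minimal sketch of signaling between two threads with these primitives (the handle and thread-function names are illustrative):

#include <windows.h>

HANDLE g_hEvent;   // e.g. created with CreateEvent(NULL, FALSE, FALSE, NULL): auto-reset, initially non-signaled

DWORD WINAPI Waiter(LPVOID)
{
    WaitForSingleObject(g_hEvent, INFINITE);   // blocks until the event is set
    // ... continue now that the event has occurred ...
    return 0;
}

DWORD WINAPI Signaler(LPVOID)
{
    // ... produce whatever the waiter needs, then notify it ...
    SetEvent(g_hEvent);
    return 0;
}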

When a critical section is used, only threads within the same process can be synchronized. With an event kernel object, threads outside the process can also be synchronized, provided they obtain access to the event object, which can be done through the OpenEvent() function. Its prototype is:

HANDLE OpenEvent(
    DWORD dwDesiredAccess, // access flag
    BOOL bInheritHandle,   // inheritance flag
    LPCTSTR lpName         // pointer to the event object name
);

If the event object already exists (an event name must have been specified when it was created), the function returns a handle to that event. For event kernel objects created without a name, access to the specific event object can be obtained through kernel-object handle inheritance or by calling the DuplicateHandle() function after CreateEvent(). Once access has been obtained, the synchronization operations are the same as within a single process.
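A brief sketch of the cross-process case (the event name "MyNamedEvent" is an illustrative assumption):

#include <windows.h>

// In a second process, open the event that another process created by name:
HANDLE hEvent = OpenEvent(EVENT_ALL_ACCESS,       // access flag
                          FALSE,                  // handle is not inheritable
                          TEXT("MyNamedEvent"));  // name given at CreateEvent() time

// From here on, synchronization works exactly as within one process, e.g.:
// WaitForSingleObject(hEvent, INFINITE);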

If a thread needs to wait on multiple events, use WaitForMultipleObjects(). WaitForMultipleObjects() is similar to WaitForSingleObject(), but it monitors all the handles in a handle array. The monitored object handles have equal priority; no handle has higher priority than another. The prototype of WaitForMultipleObjects() is:

DWORD WaitForMultipleObjects(
    DWORD nCount,            // number of handles to wait on
    CONST HANDLE* lpHandles, // address of the handle array
    BOOL fWaitAll,           // wait flag
    DWORD dwMilliseconds     // time-out interval
);

The nCount parameter specifies the number of kernel objects to wait on; the array of these kernel object handles is pointed to by lpHandles. fWaitAll selects one of two wait modes for the nCount kernel objects: if it is TRUE, the function returns only when all of the objects have been signaled; if it is FALSE, the function returns as soon as any one of them is signaled. dwMilliseconds has exactly the same purpose as in WaitForSingleObject(): if the wait times out, the function returns WAIT_TIMEOUT. If a value between WAIT_OBJECT_0 and WAIT_OBJECT_0 + nCount - 1 is returned, it means either that all of the specified objects have been signaled (when fWaitAll is TRUE) or, after subtracting WAIT_OBJECT_0, it gives the index of the object that was signaled (when fWaitAll is FALSE). If the return value lies between WAIT_ABANDONED_0 and WAIT_ABANDONED_0 + nCount - 1, it means either that all of the specified objects have been signaled and at least one of them is an abandoned mutex object (when fWaitAll is TRUE) or, after subtracting WAIT_ABANDONED_0, it gives the index of an abandoned mutex object whose owning thread terminated without releasing it (when fWaitAll is FALSE).
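A short sketch of decoding the return value when waiting on two illustrative event handles with fWaitAll set to FALSE:

#include <windows.h>

void WaitOnTwoEvents(HANDLE hEvent1, HANDLE hEvent2)
{
    HANDLE handles[2] = { hEvent1, hEvent2 };

    // fWaitAll = FALSE: return as soon as any one object is signaled
    DWORD result = WaitForMultipleObjects(2, handles, FALSE, 5000);

    if (result == WAIT_TIMEOUT)
    {
        // neither object was signaled within 5 seconds
    }
    else if (result < WAIT_OBJECT_0 + 2)
    {
        DWORD index = result - WAIT_OBJECT_0;   // index of the signaled handle
        // ... handle the object at handles[index] ...
    }
}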

MFC also provides the CEvent class for event handling. Besides the constructor, it contains four member functions: PulseEvent(), ResetEvent(), SetEvent(), and Unlock(). They are equivalent to the Win32 API functions PulseEvent(), ResetEvent(), SetEvent(), and CloseHandle(), respectively. The constructor takes over the role of the original CreateEvent() function in creating the event object; its prototype is:

CEvent(BOOL bInitiallyOwn = FALSE, BOOL bManualReset = FALSE, LPCTSTR lpszName = NULL, LPSECURITY_ATTRIBUTES lpsaAttribute = NULL);

With these default settings, an unnamed, auto-reset event object is created in the non-signaled initial state. The encapsulated CEvent class is more convenient to use.
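A minimal CEvent sketch (the names are illustrative); another thread would call g_event.SetEvent() to release the waiter:

#include <afxmt.h>

CEvent g_event;                 // auto-reset, initially non-signaled, per the defaults above

UINT WaiterThread(LPVOID)
{
    ::WaitForSingleObject(g_event.m_hObject, INFINITE);   // wait for the event to be set
    // ... continue once another thread has called g_event.SetEvent() ...
    return 0;
}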

Events can be used to synchronize threads in different processes, and they also make it convenient to wait on multiple objects with different priorities: for example, several WaitForSingleObject() calls can be written in place of one WaitForMultipleObjects() call to make the program more flexible.

Summary:

1. A mutex functions very much like a critical section, but a mutex can be named, which means it can be used across processes. Creating a mutex therefore consumes more resources, so if the object is used only within one process, using a critical section brings a speed advantage and reduces resource usage. The mutex works across processes because, once created, it can be opened by name.

2. Mutexes, semaphores, and events can all be used for synchronization across processes; the other kernel objects have nothing to do with data synchronization. As for processes and threads themselves: while a process or thread is running, its handle is in the non-signaled state, and after it exits the handle becomes signaled, so WaitForSingleObject() can be used to wait for a process or thread to exit.

3. A mutex lets you specify that a resource is accessed exclusively, but it cannot handle the following kind of requirement. For example, if a user buys a database system licensed for three concurrent connections, the number of threads/processes allowed to perform database operations at the same time must be decided by the number of licenses purchased. A mutex has no way to express this requirement, whereas a semaphore can; a semaphore object can be regarded as a resource counter.
