Thread Synchronization Summary

Source: Internet
Author: User
Tags: function prototype, semaphore

Original link: http://bbs.chinaunix.net/thread-4093341-1-1.html

Synchronization: critical section, mutex, event, semaphore
Thread synchronization objects can be divided into kernel objects and non-kernel objects. The biggest difference is that kernel objects can be used across processes, while non-kernel objects cannot: they can only synchronize threads within a single process.
Kernel objects (the only non-kernel object discussed here is the critical section):
1. Processes
2. Threads
3. Files
4. Console input
5. File change notifications
6. Mutexes
7. Semaphores
8. Events
9. Waitable timers
10. Jobs
Each of these objects can be in one of two states: signaled and nonsignaled. Available means signaled; occupied means nonsignaled. For example, when a process or thread terminates, its kernel object becomes signaled; while it is being created and while it is running, its kernel object is nonsignaled.
Applications of kernel-object synchronization:
1. Once a thread obtains a handle to a process's kernel object, it can change the process's priority, get the process's exit code, synchronize itself with the end of that process, and so on.
2. Once a handle to a thread's kernel object is obtained, the caller can change the running state of that thread, synchronize with the end of that thread, and so on.
3. Once a file handle is obtained, a thread can synchronize itself with an asynchronous file I/O operation, and so on.
4. The console input object can be used to wake a thread when input arrives so that it can perform related work.
5. The other kernel objects (file change notifications, mutexes, semaphores, events, waitable timers, etc.) are used only for synchronization.

Here is a more detailed description of the common synchronization objects:

Critical section:
A critical section is the easiest way to ensure that only one thread accesses a piece of data at a time. Only one thread is allowed to access the shared resource at any moment; if several threads try to enter the critical section at the same time, all threads that arrive after the first one are suspended until the thread inside leaves. When the critical section is released, the waiting threads compete to enter it. In this way, operations on the shared resource are made atomic.
Of all the synchronization objects, the critical section is the easiest to use, but it can only synchronize threads within a single process. It is not a kernel object: it is not managed by the lower layers of the operating system and cannot be manipulated through a handle. Precisely because it is not a kernel object, it is a lightweight mechanism and synchronizes faster than the others.
Steps to use:
1. Create the critical section by allocating a CRITICAL_SECTION data structure in the process. The structure must be visible to all threads that will use it (typically a global variable). For an in-depth analysis of the CRITICAL_SECTION structure, see the article "Break Free of Code Deadlocks in Critical Sections Under Windows".
2. Before using the critical section to synchronize threads, call InitializeCriticalSection to initialize it. It only needs to be initialized once before it is released.
3. VOID EnterCriticalSection(...): a blocking call. If the calling thread cannot obtain ownership of the specified critical section, it goes to sleep and the system gives it no CPU time until it is awakened. Alternatively, try to enter the critical section with TryEnterCriticalSection: on success the calling thread obtains access to the critical section; otherwise the call fails and returns immediately.
4. Perform the work inside the critical section.
5. BOOL LeaveCriticalSection(...): a non-blocking call. It decrements the calling thread's reference count on the specified critical section by 1; when the count reaches zero, one of the threads waiting on the critical section is awakened.
6. When the critical section is no longer needed, release its resources with DeleteCriticalSection. After this call, EnterCriticalSection and LeaveCriticalSection may no longer be used unless the critical section is initialized again with InitializeCriticalSection.
Precautions:
1. The critical section allows only one thread in at a time, and each thread must call EnterCriticalSection on the critical-section flag (that is, the CRITICAL_SECTION global variable) before touching the protected data. Other threads that want access are put to sleep, and the system stops allocating CPU time slices to them until they are awakened. In other words, a critical section can be owned by only one thread at a time; of course, when no thread has called EnterCriticalSection or TryEnterCriticalSection, the critical section belongs to no thread.
2. When the thread owning the critical section calls LeaveCriticalSection to give up ownership, the system wakes only one thread from the waiting queue and gives it ownership; the other threads keep waiting.
3. Note that the thread that owns the critical section succeeds every time it calls EnterCriticalSection on that critical section (a repeated call returns immediately; that is, nested calls are supported), and each call increments the reference count in the CRITICAL_SECTION variable by 1. Before another thread can own the critical section, the owning thread must call LeaveCriticalSection enough times to bring the reference count back to zero. In other words, in a thread that uses a critical section normally, EnterCriticalSection and LeaveCriticalSection should appear in matching pairs.
4. BOOL TryEnterCriticalSection(LPCRITICAL_SECTION lpCriticalSection): as the declarations show, EnterCriticalSection returns VOID while this function returns BOOL, so the caller must check TryEnterCriticalSection's return value. If the specified critical section is not owned by any thread, the call gives access to the calling thread and returns TRUE; if the critical section is already owned by another thread, it returns FALSE immediately. The biggest difference between TryEnterCriticalSection and EnterCriticalSection is that TryEnterCriticalSection never suspends the calling thread.

Mutex: (a mutex object contains a usage count, a thread ID, and a recursion counter)
When two or more threads need to access a shared resource at the same time, the system needs a synchronization mechanism to ensure that only one thread uses the resource at a time. A mutex grants exclusive access to the shared resource to a single thread. If one thread acquires the mutex, a second thread that wants it is suspended until the first thread releases the mutex.
The mutex object differs from all other kernel objects in that it is owned by a thread. Every other synchronization object is simply signaled or nonsignaled, and that is all; the mutex object, besides recording its current signal state, also remembers which thread owns it. If a thread terminates after acquiring a mutex (which is then in the nonsignaled state), the mutex is said to be abandoned. In that case the mutex would stay nonsignaled forever, because no other thread could release it by calling ReleaseMutex. When the system detects this situation, it automatically sets the mutex back to the signaled state (thread ID set to zero, recursion count set to 0). A thread waiting on the mutex is then awakened, but the wait function returns WAIT_ABANDONED instead of the normal WAIT_OBJECT_0; by checking the return value, the waiter can tell whether the mutex was released normally.
Mutexes behave much like CRITICAL_SECTIONs. The thread that owns the mutex returns immediately from each additional call to WaitForSingleObject, but the mutex's usage count increases; likewise, ReleaseMutex must be called the same number of times to bring the count back to 0 before other threads can use the mutex.
Q: When the thread owning some other kind of kernel object terminates without releasing it, does the system reset that object's state?
A: It resets it, but without any special mark; it looks like a normal release. That is, other objects do not have the mutex's WAIT_ABANDONED property.
Note: a thread owning a kernel object and a thread having usage rights to a kernel object are different things. Ownership means that if the owning thread terminates, the kernel object is abandoned, because its signal state cannot be reset by anyone else. Having usage rights to a kernel object merely means that the thread may call certain functions to access the kernel object or perform certain operations on it.
The functions used to synchronize threads with a mutex kernel object are mainly CreateMutex(), OpenMutex(), ReleaseMutex(), WaitForSingleObject(), and WaitForMultipleObjects(). Before using a mutex object, first create or open one through CreateMutex() or OpenMutex(). The CreateMutex() prototype is:

HANDLE CreateMutex(
  LPSECURITY_ATTRIBUTES lpMutexAttributes, // security attributes pointer
  BOOL bInitialOwner,                      // initial owner
  LPCTSTR lpName                           // mutex object name
);

The parameter bInitialOwner controls the initial state of the mutex object. It is usually set to FALSE, indicating that the mutex is not owned by any thread when it is created. If you specify an object name when creating the mutex, other parts of the process, or other processes, can obtain a handle to the mutex through the OpenMutex() function, whose prototype is:

HANDLE OpenMutex(
  DWORD dwDesiredAccess, // access flags
  BOOL bInheritHandle,   // inheritance flag
  LPCTSTR lpName         // mutex object name
);

When the thread that currently has access to the resource no longer needs it and is leaving, it must release the mutex it owns through ReleaseMutex(), whose prototype is BOOL ReleaseMutex(HANDLE hMutex); its only argument, hMutex, is the handle of the mutex to be released.
Note, however, that when an abandoned mutex satisfies a wait, the return value of the wait function is no longer the usual WAIT_OBJECT_0 (for WaitForSingleObject()) or a value between WAIT_OBJECT_0 and WAIT_OBJECT_0+nCount-1 (for WaitForMultipleObjects()); instead it is WAIT_ABANDONED (for WaitForSingleObject()) or a value between WAIT_ABANDONED_0 and WAIT_ABANDONED_0+nCount-1 (for WaitForMultipleObjects()). This indicates that the mutex the thread was waiting for was owned by another thread, and that thread terminated before it finished using the shared resource.

Event: (divided into auto-reset events and manual-reset events)
The event object can also keep threads synchronized through notification. The main functions are CreateEvent(), OpenEvent(), SetEvent(), ResetEvent(), WaitForSingleObject(), and WaitForMultipleObjects(). A critical section can only synchronize threads within the same process, while an event kernel object can synchronize threads across processes, provided access to the event object is obtained. Access can be obtained through the OpenEvent() function, whose prototype is:

HANDLE OpenEvent(
  DWORD dwDesiredAccess, // access flags
  BOOL bInheritHandle,   // inheritance flag
  LPCTSTR lpName         // pointer to the event object name
);

If the event object has been created, the function returns a handle to the specified event. For event kernel objects created without a name, you can gain access to the specified event object through kernel-object handle inheritance or by calling the DuplicateHandle() function. The synchronization that takes place after access is gained is the same as thread synchronization within a single process.
If one thread needs to wait for several events, wait with WaitForMultipleObjects(). WaitForMultipleObjects() is similar to WaitForSingleObject(), but it monitors all the handles in a handle array at once. All the monitored handles have equal precedence; no handle has higher precedence than any other. The prototype of WaitForMultipleObjects() is:

DWORD WaitForMultipleObjects(
  DWORD nCount,            // number of handles to wait on
  CONST HANDLE *lpHandles, // first address of the handle array
  BOOL fWaitAll,           // wait mode flag
  DWORD dwMilliseconds     // wait timeout
);

The parameter nCount specifies the number of kernel objects to wait on; the handles of these kernel objects are pointed to by lpHandles.
The parameter fWaitAll selects one of two wait modes for the nCount kernel objects: TRUE means the function returns when all of the objects are signaled, FALSE when any one of them is signaled. The role of dwMilliseconds is exactly the same as in WaitForSingleObject(). If the wait times out, the function returns WAIT_TIMEOUT. If the return value is between WAIT_OBJECT_0 and WAIT_OBJECT_0+nCount-1, then either all the specified objects are signaled (when fWaitAll is TRUE), or subtracting WAIT_OBJECT_0 from the return value gives the index of the object that became signaled (when fWaitAll is FALSE). If the return value is between WAIT_ABANDONED_0 and WAIT_ABANDONED_0+nCount-1, then either all the specified objects are signaled and at least one of them is an abandoned mutex (when fWaitAll is TRUE), or subtracting WAIT_ABANDONED_0 from the return value gives the index of an abandoned mutex that ended the wait (when fWaitAll is FALSE).

Semaphores: (allow multiple threads to access a resource at the same time)
Semaphore objects synchronize threads differently from the previous methods: a semaphore allows multiple threads to use the shared resource at the same time, while limiting the maximum number of threads that may access it simultaneously. When creating a semaphore with CreateSemaphore(), you specify both the maximum resource count and the current available resource count. In general, the current available count is set equal to the maximum count. Each time another thread accesses the shared resource, the current available count is decreased by 1; the semaphore remains signaled as long as the current available count is greater than 0. When the current available count drops to 0, the number of threads occupying the resource has reached the allowed maximum, the semaphore becomes nonsignaled, and no further threads may enter. After a thread finishes with the shared resource, it should call ReleaseSemaphore() as it leaves, which adds 1 to the current available count. At no time can the current available count exceed the maximum count.
The semaphore controls access to the resource by counting; indeed, the semaphore is sometimes called a Dijkstra counter. Semaphore-based thread synchronization mainly uses the functions CreateSemaphore(), OpenSemaphore(), ReleaseSemaphore(), WaitForSingleObject(), and WaitForMultipleObjects().
CreateSemaphore() creates a semaphore kernel object; its prototype is:

HANDLE CreateSemaphore(
  LPSECURITY_ATTRIBUTES lpSemaphoreAttributes, // security attributes pointer
  LONG lInitialCount,  // initial count
  LONG lMaximumCount,  // maximum count
  LPCTSTR lpName       // object name pointer
);

The parameter lMaximumCount is a signed 32-bit value that defines the maximum resource count, so the maximum cannot exceed the largest signed 32-bit value. The lpName parameter gives the created semaphore a name; because this creates a kernel object, other processes can obtain it by that name.
OpenSemaphore() opens a semaphore created in another process, given the semaphore's name; the prototype is:

HANDLE OpenSemaphore(
  DWORD dwDesiredAccess, // access flags
  BOOL bInheritHandle,   // inheritance flag
  LPCTSTR lpName         // semaphore name
);

When a thread finishes working with the shared resource, it must increase the current available count with ReleaseSemaphore(). Otherwise, the number of threads actually working on the shared resource never reaches the limit, while other threads still cannot get in because the current available count is 0. The prototype of ReleaseSemaphore() is:

BOOL ReleaseSemaphore(
  HANDLE hSemaphore,     // semaphore handle
  LONG lReleaseCount,    // count increment
  LPLONG lpPreviousCount // previous count
);

This function adds the value of lReleaseCount to the semaphore's current count. lReleaseCount is typically 1, but other values may be passed when needed.
Semaphores are well suited to synchronizing the threads of a socket program. For example, an HTTP server on a network that limits the number of users accessing the same page at the same time can start a thread for each user's page request; the page is the shared resource to be protected. By synchronizing the threads with a semaphore, you can ensure that no matter how many users request a page, no more than the configured maximum number of threads can access it at one time, while the other requests are suspended and can only proceed after some user leaves the page.
Summary:
The mutex is very similar to the critical section, but a mutex can be named, which means it can be used across processes. Creating a mutex therefore requires more resources, so when synchronization is needed only within one process, using a critical section brings a speed advantage and reduces resource usage. Because a mutex is a cross-process object, once created it can be opened by name.
Mutexes, semaphores, and events can all be used across processes to synchronize data operations. The other kernel objects are not related to data synchronization; but for processes and threads, the kernel object is nonsignaled while the process or thread is running and becomes signaled when it exits, so you can use WaitForSingleObject to wait for a process or thread to exit.
WaitForSingleObject waits, for a specified time (dwMilliseconds), for a kernel object to become signaled. If the kernel object is not yet signaled, the calling thread sleeps during that interval; if the object is already signaled, the thread continues at once. After the timeout elapses, the thread resumes regardless. The return value may be WAIT_OBJECT_0, WAIT_TIMEOUT, WAIT_ABANDONED (only if the kernel object is a mutex), or WAIT_FAILED.
WaitForMultipleObjects is similar to WaitForSingleObject, except that it waits either for all of the objects in a given list (lpHandles, with the number given by nCount) to become signaled, or for any one object in the list to become signaled; which of the two is determined by bWaitAll.
The WaitForSingleObject and WaitForMultipleObjects functions have important side effects on certain kernel objects: depending on the kind of kernel object, they decide whether to change its signal state and carry out that change, and they determine whether one, or all, of the processes or threads waiting on the object are awakened.
(1) For process and thread kernel objects, the two functions have no side effects.
Once a process or thread kernel object becomes signaled, it stays signaled; the functions make no attempt to change its signal state. Thus all threads waiting on such a kernel object are awakened.
(2) For mutex, auto-reset event, and auto-reset waitable timer objects, the two functions change the state back to nonsignaled.
In other words, once one of these objects becomes signaled and one waiting thread is awakened, the object is reset to the nonsignaled state. So only one waiting thread wakes up, and the other waiting threads keep sleeping.
(3) WaitForMultipleObjects also has a very important property: when it is called with bWaitAll set to TRUE, none of the waited-on kernel objects whose state can be changed is reset to nonsignaled until all of the waited-on objects have become signaled. In other words, with bWaitAll TRUE, WaitForMultipleObjects does not take ownership of any single object unless it can take ownership of all the specified objects (given by lpHandles); if it cannot take ownership, it naturally does not change that object's signal state. This is to prevent deadlocks. Put another way, when bWaitAll is TRUE, WaitForMultipleObjects does not change the signal state of any modifiable kernel object until it owns all the objects being waited on, and other threads waiting in the same all-objects mode are not awakened; but threads waiting in other modes can still be awakened.
