Thread Synchronization
When a program uses multiple threads, few of those threads can perform completely independent operations throughout their lifetimes. Far more often, some threads do a piece of processing whose results other threads must then consume, and those results should normally be read only after the producing threads have finished their work.
Without proper precautions, other threads tend to access the results before the producing thread has finished, and are therefore likely to read them incorrectly. Consider multiple threads accessing the same global variable: if all accesses are reads, no problem arises; but if one thread is writing a new value while other threads are reading it at the same time, there is no guarantee that the data read reflects the write thread's modification.
To ensure that a reading thread sees the modified value, all other threads must be barred from accessing the variable while data is being written to it, and the restriction lifted only after the assignment completes. Arranging such protection, so that one thread can reliably observe the results of another thread's work, is what thread synchronization means.
Thread synchronization is a very large topic covering many aspects. Broadly, it can be divided into two major categories: user-mode thread synchronization and thread synchronization with kernel objects. User-mode synchronization mainly includes atomic access and the critical section method. Its distinguishing feature is speed, which makes it suitable wherever thread performance requirements are strict; because the user-mode mechanisms are so efficient, they should be the first option considered when a synchronization problem arises. User-mode synchronization has its limits, however: it can do nothing for thread synchronization across multiple processes, and in that case only kernel mode can be considered.
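As a minimal sketch of the atomic-access approach, the Win32 Interlocked family performs lock-free read-modify-write operations on a shared variable; the counter name g_lCount, the thread function ThreadProcCount, and the loop bound below are illustrative only:
// Shared counter incremented atomically by several threads (hypothetical example)
LONG volatile g_lCount = 0;
UINT ThreadProcCount(LPVOID pParam)
{
    for (int i = 0; i < 1000; i++)
        InterlockedIncrement(&g_lCount); // atomic increment; no lock, no kernel-mode transition
    return 0;
}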
Kernel-object thread synchronization relies chiefly on kernel objects such as events, waitable timers, semaphores, and mutexes. Because this mechanism uses kernel objects, each use requires switching the thread from user mode to kernel mode, a transition that typically costs on the order of a thousand CPU cycles, so synchronization is slower; its applicability, however, is far broader than that of the user-mode methods.
I. Critical Sections
A critical section is a piece of code that accesses shared resources exclusively, allowing only one thread to touch the shared resource at any moment. If several threads attempt to enter the critical section at the same time, all threads arriving after one has entered are suspended, and they remain suspended until the thread inside the critical section leaves. Once the critical section is released, the other threads can again compete to enter it, and in this way atomic manipulation of the shared resource is achieved.
In use, a critical section protects shared resources through a CRITICAL_SECTION structure object, and the EnterCriticalSection() and LeaveCriticalSection() functions mark the entry to and exit from the critical section. The CRITICAL_SECTION object must be initialized with InitializeCriticalSection() before it can be used. You must also ensure that every piece of code, in every thread, that accesses the shared resource does so under the protection of this same critical section; otherwise the critical section cannot play its rightful role and the shared resource may still be corrupted.
Keeping threads synchronized using critical sections
The following code shows how a critical section protects a shared resource against multi-threaded access. The global array g_cArray[10] is written by two threads, and the critical section object g_cs, initialized before the threads are started, maintains their synchronization. To make the experimental effect more visible and highlight the role of the critical section, each thread function calls Sleep() to delay 1 millisecond after writing each element of g_cArray[10], which increases the likelihood that the other thread will preempt the CPU. Without the protection of the critical section the shared data is corrupted (see the result in Figure 1(a)), whereas with the critical section keeping the threads synchronized the correct result is produced (see Figure 1(b)). The code listing follows:
// Critical section object
CRITICAL_SECTION g_cs;
// Shared resource
char g_cArray[10];
UINT ThreadProc10(LPVOID pParam)
{
    // Enter the critical section
    EnterCriticalSection(&g_cs);
    // Write to the shared resource
    for (int i = 0; i < 10; i++)
    {
        g_cArray[i] = 'a';
        Sleep(1);
    }
    // Leave the critical section
    LeaveCriticalSection(&g_cs);
    return 0;
}
UINT ThreadProc11(LPVOID pParam)
{
    // Enter the critical section
    EnterCriticalSection(&g_cs);
    // Write to the shared resource
    for (int i = 0; i < 10; i++)
    {
        g_cArray[10 - i - 1] = 'B';
        Sleep(1);
    }
    // Leave the critical section
    LeaveCriticalSection(&g_cs);
    return 0;
}
......
void CSample08View::OnCriticalSection()
{
    // Initialize the critical section
    InitializeCriticalSection(&g_cs);
    // Start the threads
    AfxBeginThread(ThreadProc10, NULL);
    AfxBeginThread(ThreadProc11, NULL);
    // Wait for the computation to complete
    Sleep(300);
    // Report the result (g_cArray is not null-terminated, so pass its length)
    CString sResult(g_cArray, 10);
    AfxMessageBox(sResult);
    // Release the critical section object
    DeleteCriticalSection(&g_cs);
}
When using a critical section, the protected region generally should not run for long: as long as the thread inside the critical section has not left, every other thread attempting to enter it is suspended in a wait state, which affects program performance to some degree. In particular, never include operations that wait for user input or some other external intervention inside a critical section. Likewise, entering a critical section and never releasing it leaves other threads waiting indefinitely. In other words, no matter what happens after the EnterCriticalSection() statement executes, you must ensure that the matching LeaveCriticalSection() also executes; structured exception handling code can guarantee this, as in the sketch below. Finally, although critical section synchronization is fast, it can only synchronize threads within the same process, not threads in multiple processes.
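A minimal sketch of that guarantee, assuming g_cs has already been initialized as above: the __finally block runs whether the protected code completes normally or raises a structured exception, so the critical section is always released.
EnterCriticalSection(&g_cs);
__try
{
    // Work on the shared resource here
}
__finally
{
    // Always executed, even if the protected code raises an exception
    LeaveCriticalSection(&g_cs);
}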
II. Event Kernel Objects
Besides serving as a means of communication between threads, as discussed in the section on thread communication, event kernel objects can also maintain thread synchronization through their signaling operations. The preceding example, which used a critical section to keep the threads synchronized, is rewritten below to use an event object instead:
// Event handle
HANDLE hEvent = NULL;
// Shared resource
char g_cArray[10];
......
UINT ThreadProc12(LPVOID pParam)
{
    // Wait for the event to be signaled
    WaitForSingleObject(hEvent, INFINITE);
    // Write to the shared resource
    for (int i = 0; i < 10; i++)
    {
        g_cArray[i] = 'a';
        Sleep(1);
    }
    // Signal the event when processing is complete
    SetEvent(hEvent);
    return 0;
}
UINT ThreadProc13(LPVOID pParam)
{
    // Wait for the event to be signaled
    WaitForSingleObject(hEvent, INFINITE);
    // Write to the shared resource
    for (int i = 0; i < 10; i++)
    {
        g_cArray[10 - i - 1] = 'B';
        Sleep(1);
    }
    // Signal the event when processing is complete
    SetEvent(hEvent);
    return 0;
}
......
void CSample08View::OnEvent()
{
    // Create an auto-reset event
    hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    // Signal the event
    SetEvent(hEvent);
    // Start the threads
    AfxBeginThread(ThreadProc12, NULL);
    AfxBeginThread(ThreadProc13, NULL);
    // Wait for the computation to complete
    Sleep(300);
    // Report the result
    CString sResult(g_cArray, 10);
    AfxMessageBox(sResult);
    // Close the event handle
    CloseHandle(hEvent);
}
Before the threads are created, an auto-reset event kernel object hEvent is created, and each thread function waits indefinitely on hEvent with WaitForSingleObject(). The protected code executes only after WaitForSingleObject() returns, which happens only when the event is signaled. Because the event was created as auto-reset, it reverts to the nonsignaled state as soon as one WaitForSingleObject() call is satisfied, and stays that way until the protected code in ThreadProc12() has finished executing. Even if ThreadProc13() preempts the CPU during that time, it cannot proceed, because its WaitForSingleObject() has not obtained hEvent, so it has no chance to corrupt the protected shared resource. When ThreadProc12() finishes its processing, its SetEvent() call signals hEvent again, which releases ThreadProc13() to handle the shared resource g_cArray. The role of SetEvent() here can be understood as announcing that a particular task has been completed.
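For reference, a small sketch of the bManualReset parameter of CreateEvent() that distinguishes the two kinds of event object (the handle names are illustrative):
// Auto-reset: releases one waiting thread, then resets to nonsignaled by itself
HANDLE hAuto = CreateEvent(NULL, FALSE, FALSE, NULL);
// Manual-reset: stays signaled, releasing all waiting threads, until ResetEvent() is called
HANDLE hManual = CreateEvent(NULL, TRUE, FALSE, NULL);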
A critical section can only synchronize threads within the same process, whereas an event kernel object can also synchronize threads outside the process, provided access to the event object is obtained. Access can be obtained through the OpenEvent() function, whose prototype is:
HANDLE OpenEvent(
    DWORD dwDesiredAccess, // access flag
    BOOL bInheritHandle,   // inheritance flag
    LPCTSTR lpName         // pointer to the event object name
);
If the event object was created with a name (the name must be specified at creation time), the function returns a handle to the specified event. For event kernel objects created without a name, access to the specified event object can instead be gained through kernel object handle inheritance or by calling the DuplicateHandle() function. The synchronization that takes place once access is obtained is the same as thread synchronization within a single process.
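As a hedged sketch of the named-event path, with the event name "MyTaskDone" chosen purely for illustration:
// In the process that creates the event
HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, "MyTaskDone");
......
// In another process, open the same event by name
HANDLE hOther = OpenEvent(EVENT_ALL_ACCESS, FALSE, "MyTaskDone");
if (hOther != NULL)
    WaitForSingleObject(hOther, INFINITE); // synchronize across the process boundary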
If a single thread needs to wait on multiple events, WaitForMultipleObjects() is used instead. WaitForMultipleObjects() is similar to WaitForSingleObject(), but it monitors all of the handles in a handle array simultaneously. The monitored handles have equal precedence; no handle has higher precedence than any other. The prototype of WaitForMultipleObjects() is:
DWORD WaitForMultipleObjects(
    DWORD nCount,            // number of handles to wait on
    CONST HANDLE *lpHandles, // address of the handle array
    BOOL fWaitAll,           // wait-all flag
    DWORD dwMilliseconds     // time-out interval
);
The nCount parameter specifies the number of kernel objects to wait on, and lpHandles points to the array holding their handles. fWaitAll selects between two wait modes for the nCount kernel objects: TRUE means the function returns only after all of the objects have been signaled, FALSE means it returns as soon as any one of them is. dwMilliseconds plays exactly the same role as in WaitForSingleObject(). If the wait times out, the function returns WAIT_TIMEOUT. If the return value lies between WAIT_OBJECT_0 and WAIT_OBJECT_0 + nCount - 1, then either all of the specified objects were signaled (when fWaitAll is TRUE), or the return value minus WAIT_OBJECT_0 gives the array index of the object that was signaled (when fWaitAll is FALSE). If the return value lies between WAIT_ABANDONED_0 and WAIT_ABANDONED_0 + nCount - 1, then either all of the specified objects were signaled and at least one of them is an abandoned mutex (when fWaitAll is TRUE), or the return value minus WAIT_ABANDONED_0 gives the index of an abandoned mutex that satisfied the wait (when fWaitAll is FALSE). The code below mainly demonstrates the use of WaitForMultipleObjects(), controlling the execution and termination of a thread task by waiting on two event kernel objects:
// Array that holds the event handles
HANDLE hEvents[2];
UINT ThreadProc14(LPVOID pParam)
{
    // Wait for the start event
    DWORD dwRet1 = WaitForMultipleObjects(2, hEvents, FALSE, INFINITE);
    // Begin the task if the start event has arrived
    if (dwRet1 == WAIT_OBJECT_0)
    {
        AfxMessageBox("Thread starts working!");
        while (true)
        {
            for (int i = 0; i < 10000; i++);
            // Poll for the end event while processing the task
            DWORD dwRet2 = WaitForMultipleObjects(2, hEvents, FALSE, 0);
            // Stop the task immediately if the end event is signaled
            if (dwRet2 == WAIT_OBJECT_0 + 1)
                break;
        }
    }
    AfxMessageBox("Thread exits!");
    return 0;
}
......
void CSample08View::OnStartEvent()
{
    // Create the events
    for (int i = 0; i < 2; i++)
        hEvents[i] = CreateEvent(NULL, FALSE, FALSE, NULL);
    // Start the thread
    AfxBeginThread(ThreadProc14, NULL);
    // Signal event 0 (the start event)
    SetEvent(hEvents[0]);
}
void CSample08View::OnEndEvent()
{
    // Signal event 1 (the end event)
    SetEvent(hEvents[1]);
}
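To make the return-value arithmetic described above concrete, here is a small sketch of decoding a result such as dwRet1 in ThreadProc14() when fWaitAll is FALSE (the variable name dwRet is illustrative):
DWORD dwRet = WaitForMultipleObjects(2, hEvents, FALSE, 0);
if (dwRet == WAIT_OBJECT_0)
{
    // hEvents[0] (the start event) satisfied the wait: dwRet - WAIT_OBJECT_0 == 0
}
else if (dwRet == WAIT_OBJECT_0 + 1)
{
    // hEvents[1] (the end event) satisfied the wait: dwRet - WAIT_OBJECT_0 == 1
}
else if (dwRet == WAIT_TIMEOUT)
{
    // Neither event was signaled within the time-out interval
}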
III. Semaphore Kernel Objects
Semaphore kernel objects synchronize threads differently from the preceding methods: they allow multiple threads to access the same resource at the same time, but place a limit on the maximum number of threads that may do so simultaneously. When creating a semaphore with CreateSemaphore(), you specify both the maximum allowable resource count and the currently available resource count. Generally the available count is set equal to the maximum count. Each time a thread gains access to the shared resource, the available count is decremented by 1; the semaphore remains signaled as long as the available count is greater than 0. When the available count drops to 0, the number of threads occupying the resource has reached the allowed maximum, the semaphore becomes nonsignaled, and no further threads may enter. After a thread finishes with the shared resource, it should increment the available count by 1 via the ReleaseSemaphore() function as it leaves. The available resource count is never greater than the maximum resource count at any time.
The following figure illustrates a semaphore object's control over a resource. In Figure 3, the arrows and white arrows indicate, respectively, the maximum resource count allowed for the shared resource and the currently available resource count. In the initial state (a), both the maximum and the available count are 4; thereafter, each thread granted access to the resource (shown as a black arrow) decrements the available count by 1, and figure (b) shows the state with 3 threads accessing the shared resource. When the number of entering threads reaches 4, as in (c), the maximum resource count has been reached, the available count has dropped to 0, and no other thread can access the shared resource. After threads occupying the resource exit, capacity is freed: in figure (d), two threads have released the resource, the available count is 2, and 2 more threads can be admitted to work on the resource. Clearly, the semaphore controls thread access to the resource by counting, and indeed a semaphore is also known as a Dijkstra counter.
Thread synchronization with semaphore kernel objects mainly involves the functions CreateSemaphore(), OpenSemaphore(), ReleaseSemaphore(), WaitForSingleObject(), and WaitForMultipleObjects(). CreateSemaphore() creates a semaphore kernel object; its prototype is:
HANDLE CreateSemaphore(
    LPSECURITY_ATTRIBUTES lpSemaphoreAttributes, // pointer to security attributes
    LONG lInitialCount, // initial count
    LONG lMaximumCount, // maximum count
    LPCTSTR lpName      // pointer to the object name
);
The lMaximumCount parameter is a signed 32-bit value that defines the maximum allowable resource count; it must be greater than zero. The lpName parameter gives the created semaphore a name; because the semaphore is a kernel object, other processes can obtain it by this name. The OpenSemaphore() function opens a semaphore created in another process by its name; the prototype is as follows:
HANDLE OpenSemaphore(
    DWORD dwDesiredAccess, // access flag
    BOOL bInheritHandle,   // inheritance flag
    LPCTSTR lpName         // semaphore name
);
When a thread finishes with the shared resource, it must increment the available resource count with ReleaseSemaphore() as it leaves. Otherwise the semaphore's count no longer reflects the actual number of threads working on the resource, and other threads remain shut out because the available count stays at 0. The prototype of ReleaseSemaphore() is:
BOOL ReleaseSemaphore(
    HANDLE hSemaphore,     // semaphore handle
    LONG lReleaseCount,    // amount to add to the count
    LPLONG lpPreviousCount // receives the previous count
);
This function adds the value of lReleaseCount to the semaphore's current resource count. lReleaseCount is usually set to 1, but other values may be used when needed. WaitForSingleObject() and WaitForMultipleObjects() are used chiefly at the entry point of a thread function attempting to reach the shared resource, to determine whether the semaphore's current available resource count permits this thread to enter. The monitored semaphore kernel object is signaled only while the available resource count is greater than 0.
These properties make semaphores particularly well suited to synchronizing the threads of a socket program. For example, an HTTP server that limits how many users may access the same page simultaneously can create a thread for each user's page request, with the page itself as the shared resource to be protected. Synchronizing the threads with a semaphore guarantees that, no matter how many users request the page at any moment, no more than the configured maximum number of threads can access it, while the remaining attempts are suspended and admitted only after some user finishes accessing the page. The sample code below shows a process of this kind:
// Semaphore object handle
HANDLE hSemaphore;
UINT ThreadProc15(LPVOID pParam)
{
    // Try to pass the semaphore gate
    WaitForSingleObject(hSemaphore, INFINITE);
    // Thread task processing
    AfxMessageBox("Thread one is executing!");
    // Release the semaphore count
    ReleaseSemaphore(hSemaphore, 1, NULL);
    return 0;
}
UINT ThreadProc16(LPVOID pParam)
{
    // Try to pass the semaphore gate
    WaitForSingleObject(hSemaphore, INFINITE);
    // Thread task processing
    AfxMessageBox("Thread two is executing!");
    // Release the semaphore count
    ReleaseSemaphore(hSemaphore, 1, NULL);
    return 0;
}
UINT ThreadProc17(LPVOID pParam)
{
    // Try to pass the semaphore gate
    WaitForSingleObject(hSemaphore, INFINITE);
    // Thread task processing
    AfxMessageBox("Thread three is executing!");
    // Release the semaphore count
    ReleaseSemaphore(hSemaphore, 1, NULL);
    return 0;
}
......
void CSample08View::OnSemaphore()
{
    // Create the semaphore object with 2 of a maximum 2 resources available
    hSemaphore = CreateSemaphore(NULL, 2, 2, NULL);
    // Start the threads
    AfxBeginThread(ThreadProc15, NULL);
    AfxBeginThread(ThreadProc16, NULL);
    AfxBeginThread(ThreadProc17, NULL);
}
IV. Mutex Kernel Objects
The mutex is a very versatile kernel object that guarantees mutually exclusive access to a shared resource by multiple threads. Much as with a critical section, only the thread that currently owns the mutex object has access to the resource, and because there is only one mutex object, the shared resource can never be accessed by more than one thread at a time under any circumstances. The thread occupying the resource should hand the mutex back once its task is done, so that other threads can acquire it and access the resource in turn. Unlike the other kernel objects, the mutex has special code in the operating system and is managed by it; the operating system even allows the mutex to perform unconventional operations that other kernel objects cannot.
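As a minimal sketch, assuming the shared array g_cArray from the earlier examples, the critical-section listing can be rewritten with a mutex; because a mutex is a kernel object, the same pattern would also work across processes if the mutex were given a name. The thread function name ThreadProc18 is hypothetical:
// Mutex object handle
HANDLE hMutex = NULL;
UINT ThreadProc18(LPVOID pParam)
{
    // Request ownership of the mutex
    WaitForSingleObject(hMutex, INFINITE);
    // Write to the shared resource
    for (int i = 0; i < 10; i++)
    {
        g_cArray[i] = 'a';
        Sleep(1);
    }
    // Hand ownership back so other threads may enter
    ReleaseMutex(hMutex);
    return 0;
}
......
// In the starting code: create an unowned, unnamed mutex
hMutex = CreateMutex(NULL, FALSE, NULL);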
Summary
Using threads makes program processing more flexible, but that flexibility also brings uncertainty, especially when multiple threads access the same public variable. Even if code without thread synchronization happens to show no logical problem, thread synchronization measures must still be taken to ensure that the program runs correctly and reliably.