Windows Core Programming - Thread Synchronization

Thread Synchronization
Because all threads of a process share the process's virtual address space, and a thread can be preempted between any two machine instructions, two threads may access the same object at the same time (including global variables, shared resources, API functions, and MFC objects), which can lead to program errors. Threads of different processes may also access the same memory area or shared resource simultaneously. Therefore, multi-threaded applications often need to take measures to synchronize thread execution.
Synchronization is required in the following situations:

An error may occur when multiple threads access the same object at the same time. For example, if one thread is reading from a critical shared buffer while another thread writes data into it, the program's results may be wrong. The program should avoid having multiple threads access the same buffer or system resource at the same time.

In Windows 95, you also need to consider re-entrancy when writing multi-threaded applications. Windows NT is a true 32-bit operating system and has solved the system re-entrancy problem. Windows 95 has not, because it inherits part of the 16-bit code of Windows 3.x. This means that two Windows 95 threads cannot execute the same system function at the same time; doing so may cause program errors or even system crashes. Applications should avoid having two or more threads call the same Windows API function simultaneously.

For reasons of size and performance, MFC objects are thread-safe only at the class level, not at the object level. That is, two threads can safely use two different CString objects, but using the same CString object from both threads at the same time may cause problems. If the same object must be shared, appropriate synchronization measures should be taken.

Multiple threads must be coordinated. For example, if a second thread must wait until the first thread completes a certain step, the second thread should be temporarily suspended to reduce its CPU usage and improve overall efficiency. After the first thread completes the step, it should send a signal to wake the second thread.

Critical Sections and Interlocked Variable Access

A critical section is similar to a mutex, but it can only be used by threads of the same process. A critical section prevents a shared resource from being accessed by more than one thread at a time.

The process allocates the memory for a critical section itself. A critical section is simply a CRITICAL_SECTION variable, which can be owned by only one thread at a time. Before a thread uses a critical section, it must call the InitializeCriticalSection function to initialize it. If a critical piece of code in a thread must not be interleaved with another thread, the thread calls the EnterCriticalSection function to request ownership of the critical section; after running the critical code, it calls LeaveCriticalSection to release ownership. If the critical section is already owned by another thread when EnterCriticalSection is called, the function blocks indefinitely until ownership is granted.
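
A minimal sketch of this pattern follows; the shared counter g_count, the Init/Cleanup helpers, and the worker thread function are invented here purely for illustration:

 CRITICAL_SECTION g_cs;     // allocated by the process itself
 long g_count = 0;          // shared data protected by g_cs

 void Init(void)    { InitializeCriticalSection(&g_cs); }   // before first use
 void Cleanup(void) { DeleteCriticalSection(&g_cs); }       // when no longer needed

 DWORD WINAPI WorkerThread(PVOID pvParam)
 {
    EnterCriticalSection(&g_cs);   // blocks if another thread owns the section
    g_count++;                     // only one thread executes this at a time
    LeaveCriticalSection(&g_cs);   // release ownership as soon as possible
    return 0;
 }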

Interlocked variables provide a simple and effective synchronization mechanism. The InterlockedIncrement and InterlockedDecrement functions increment or decrement a 32-bit variable shared by multiple threads and let you check whether the result is 0. Threads do not have to worry about being interrupted by other threads and causing errors. If the variable is in shared memory, threads of different processes can also use this mechanism.

Atomic access

So-called atomic access means that when a thread accesses a resource, it is guaranteed that no other thread accesses the same resource at the same time. The interlocked family of functions provides this:

 LONG InterlockedExchangeAdd(
   PLONG plAddend,
   LONG Increment);

This is the simplest of these functions. You simply call it, passing the address of a long variable and the value to add. The function guarantees that the addition is performed as an atomic operation.
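
As a small illustration (the shared variable g_x is invented here), compare a plain increment with the interlocked call:

 LONG g_x = 0;                       // shared by several threads

 g_x++;                              // not safe: the read-modify-write can be interleaved
 InterlockedExchangeAdd(&g_x, 1);    // safe: the addition happens atomically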

 LONG InterlockedExchange(
   PLONG plTarget,
   LONG lValue);

 PVOID InterlockedExchangePointer(
   PVOID* ppvTarget,
   PVOID pvValue);

InterlockedExchange and InterlockedExchangePointer atomically replace the current value at the address passed in the first parameter with the value passed in the second parameter.
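
One common use of InterlockedExchange is a simple spinlock. The sketch below follows that pattern; the flag g_fResourceInUse and the two helper functions are hypothetical names used for illustration:

 LONG g_fResourceInUse = FALSE;   // shared flag: TRUE while some thread owns the resource

 void EnterSpinLock(void)
 {
    // Atomically set the flag to TRUE and examine its previous value.
    while (InterlockedExchange(&g_fResourceInUse, TRUE) == TRUE)
       Sleep(0);   // another thread owns the resource; give up the rest of the time slice
 }

 void LeaveSpinLock(void)
 {
    InterlockedExchange(&g_fResourceInUse, FALSE);   // mark the resource as free
 }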

---------------------------- The sections above cover user-mode synchronization; the sections below cover kernel-mode synchronization ----------------------------

Wait Function

A wait function causes a thread to voluntarily enter a wait state until a specific kernel object becomes signaled.

 DWORD WaitForSingleObject(HANDLE hObject,
   DWORD dwMilliseconds);

The WaitForMultipleObjects function is very similar to WaitForSingleObject; the difference is that it allows the calling thread to check the signaled state of several kernel objects at the same time:

 

 DWORD WaitForMultipleObjects(
   DWORD dwCount,
   CONST HANDLE* phObjects,
   BOOL fWaitAll,
   DWORD dwMilliseconds);
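
For example, a thread might wait for any one of several processes to terminate. The following sketch assumes that h[0] through h[2] already hold process handles obtained elsewhere (for instance from CreateProcess):

 HANDLE h[3];
 // ... fill h[0], h[1], h[2] with process handles ...
 DWORD dw = WaitForMultipleObjects(3, h, FALSE, 5000);   // wait at most 5 seconds for any one
 switch (dw) {
    case WAIT_FAILED:        /* bad call, e.g. an invalid handle */          break;
    case WAIT_TIMEOUT:       /* none of the processes ended within 5 s */    break;
    case WAIT_OBJECT_0 + 0:  /* the process identified by h[0] terminated */ break;
    case WAIT_OBJECT_0 + 1:  /* the process identified by h[1] terminated */ break;
    case WAIT_OBJECT_0 + 2:  /* the process identified by h[2] terminated */ break;
 }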

Synchronization object

A synchronization object is used to coordinate the execution of multiple threads. It can be shared by multiple threads; the thread wait functions take a synchronization object handle as a parameter, so the object must be accessible to every thread that needs it. A synchronization object is either signaled or nonsignaled. There are three main types of synchronization objects: events, mutexes, and semaphores.

An event is the simplest synchronization object; it has two states, signaled and nonsignaled. An event object is most appropriate when a thread must wait for something to happen before accessing a resource. For example, a monitoring thread should be activated only when the communication port buffer receives data.

An event object is created with the CreateEvent function, which specifies the type of event and its initial state. A manual-reset event remains signaled until it is reset to nonsignaled with the ResetEvent function. An auto-reset event automatically becomes nonsignaled after a single waiting thread has been released. SetEvent sets an event object to the signaled state. When creating an event you can give it a name; threads in other processes can then open a handle to the same event with the OpenEvent function by specifying that name.

A mutex object is signaled when it is not owned by any thread and nonsignaled when it is owned. Mutex objects are suitable for coordinating mutually exclusive access to a shared resource by multiple threads.

A thread creates a mutex object with the CreateMutex function. When a mutex is created it can be given a name, so that threads in other processes can open a handle to it with the OpenMutex function. After accessing the shared resource, the thread calls ReleaseMutex to release the mutex so that other threads can access the resource. If a thread terminates without releasing the mutex, the mutex is considered abandoned.

A semaphore object maintains a count starting from 0. The object is signaled when the count is greater than 0 and nonsignaled when the count is 0. A semaphore can be used to limit the number of threads accessing a shared resource. A thread creates a semaphore with the CreateSemaphore function, specifying the object's initial count and maximum count. A semaphore can also be named when it is created, so that threads in other processes can open a handle to it with the OpenSemaphore function.

Usually the initial count of a semaphore is set to the maximum value. Each time the semaphore is signaled and a wait function returns, the count is decremented by 1; calling ReleaseSemaphore increments the count. The smaller the count, the more threads are currently accessing the shared resource.

Objects that can be synchronized

 

• Change notification: created by the FindFirstChangeNotification function. The object becomes signaled when a change of the specified type occurs in the specified directory.

• Console input: created when a console is created; used through the handle returned by calling CreateFile for CONIN$ or by calling GetStdHandle. The object is signaled when data exists in the console input buffer and nonsignaled when the buffer is empty.

• Process: created by CreateProcess when a process is created. The object is nonsignaled while the process is running and signaled when the process terminates.

• Thread: created by CreateProcess, CreateThread, or CreateRemoteThread when a new thread is created. The object is nonsignaled while the thread is running and signaled when the thread terminates.

In addition, files or communication devices can be used as synchronization objects.

Event Kernel Object

Let's look at a simple example to illustrate how to use the event kernel object to synchronize threads. The following code is used:

 

// Create a global handle to a manual-reset, nonsignaled event.
HANDLE g_hEvent;

int WINAPI WinMain(...)
{
   // Create the manual-reset, nonsignaled event.
   g_hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

   // Spawn 3 new threads.
   HANDLE hThread[3];
   DWORD dwThreadID;
   hThread[0] = _beginthreadex(NULL, 0, WordCount, NULL, 0, &dwThreadID);
   hThread[1] = _beginthreadex(NULL, 0, SpellCheck, NULL, 0, &dwThreadID);
   hThread[2] = _beginthreadex(NULL, 0, GrammarCheck, NULL, 0, &dwThreadID);

   OpenFileAndReadContentsIntoMemory(...);

   // Allow all 3 threads to access the memory.
   SetEvent(g_hEvent);
   ...
}

DWORD WINAPI WordCount(PVOID pvParam)
{
   // Wait until the file's data is in memory.
   WaitForSingleObject(g_hEvent, INFINITE);

   // Access the memory block.
   ...
   return(0);
}

DWORD WINAPI SpellCheck(PVOID pvParam)
{
   // Wait until the file's data is in memory.
   WaitForSingleObject(g_hEvent, INFINITE);

   // Access the memory block.
   ...
   return(0);
}

DWORD WINAPI GrammarCheck(PVOID pvParam)
{
   // Wait until the file's data is in memory.
   WaitForSingleObject(g_hEvent, INFINITE);

   // Access the memory block.
   ...
   return(0);
}

When the process starts, it creates a manual-reset, nonsignaled event and stores the handle in a global variable. This makes it easy for other threads in the process to access the same event object. Three threads are then spawned. These threads wait until the file's contents have been read into memory, and then each accesses the data: one thread counts words, another runs a spelling check, and the third runs a grammar check. The beginning of each of these three thread functions is identical: each calls WaitForSingleObject, which suspends the thread until the main thread has read the file's contents into memory.

Once the main thread has the data ready, it calls SetEvent to signal the event. At that point the system makes all three secondary threads schedulable; they get CPU time and can access the memory block. Note that all three threads access the memory in read-only mode; this is the only reason all three can run at the same time. Note also that on a machine with multiple CPUs all three threads can genuinely run simultaneously, so a great deal of work can be done in a short time.

If you use an auto-reset event instead of a manual-reset event, the application behaves very differently. After the main thread calls SetEvent, the system allows only one of the secondary threads to become schedulable. Again, there is no guarantee which thread the system will choose. The remaining two secondary threads continue to wait.

The thread that became schedulable has exclusive access to the memory block. Let's rewrite the thread functions so that each calls SetEvent just before returning (just as WinMain does). The thread functions now look like this:

 

DWORD WINAPI WordCount(PVOID pvParam)
{
   // Wait until the file's data is in memory.
   WaitForSingleObject(g_hEvent, INFINITE);

   // Access the memory block.
   ...
   SetEvent(g_hEvent);
   return(0);
}

DWORD WINAPI SpellCheck(PVOID pvParam)
{
   // Wait until the file's data is in memory.
   WaitForSingleObject(g_hEvent, INFINITE);

   // Access the memory block.
   ...
   SetEvent(g_hEvent);
   return(0);
}

DWORD WINAPI GrammarCheck(PVOID pvParam)
{
   // Wait until the file's data is in memory.
   WaitForSingleObject(g_hEvent, INFINITE);

   // Access the memory block.
   ...
   SetEvent(g_hEvent);
   return(0);
}

When a thread has finished its exclusive pass over the data, it calls SetEvent, which allows the system to make one of the two waiting threads schedulable. Again, we don't know which thread the system will choose, but that thread gets its own exclusive pass over the memory block. When it finishes, it too calls SetEvent, and the third and last thread then gets its pass over the memory block. Note that with an auto-reset event there is no problem even if each secondary thread accesses the memory block in read/write mode; the threads are no longer required to treat the data as read-only.

Waitable Timer Kernel Objects

A waitable timer is a kernel object that signals itself at a specified time or at regular intervals. Waitable timers are usually used to perform an operation at a certain time.

To create a waitable timer, you simply call the CreateWaitableTimer function:

 

 HANDLE CreateWaitableTimer(
   PSECURITY_ATTRIBUTES psa,
   BOOL fManualReset,
   PCTSTR pszName);

A process can obtain its own process-relative handle to an existing waitable timer by calling the OpenWaitableTimer function:

 

 HANDLE OpenWaitableTimer(
   DWORD dwDesiredAccess,
   BOOL bInheritHandle,
   PCTSTR pszName);

When a manual-reset timer is signaled, all threads waiting on the timer become schedulable. When an auto-reset timer is signaled, only one waiting thread becomes schedulable.

Waitable timer objects are always created in the nonsignaled state. You must call the SetWaitableTimer function to tell the timer when it should become signaled:

 

 BOOL SetWaitableTimer(
   HANDLE hTimer,
   const LARGE_INTEGER *pDueTime,
   LONG lPeriod,
   PTIMERAPCROUTINE pfnCompletionRoutine,
   PVOID pvArgToCompletionRoutine,
   BOOL fResume);
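
As a brief sketch of how these parameters fit together, the following creates an auto-reset timer that first fires 5 seconds from now and then every 6 hours. A negative due time is interpreted as a relative time in 100-nanosecond units, and lPeriod is given in milliseconds:

 HANDLE hTimer = CreateWaitableTimer(NULL, FALSE, NULL);   // auto-reset, unnamed
 LARGE_INTEGER liDueTime;
 liDueTime.QuadPart = -5LL * 10000000LL;          // 5 seconds from now (relative time)
 SetWaitableTimer(hTimer, &liDueTime,
                  6 * 60 * 60 * 1000,             // then every 6 hours (milliseconds)
                  NULL, NULL, FALSE);
 WaitForSingleObject(hTimer, INFINITE);           // returns when the timer goes off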

In addition to SetWaitableTimer, there is a CancelWaitableTimer function:

 

BOOL CancelWaitableTimer(HANDLE hTimer);

This simple function takes the handle of a timer and cancels it. Unless you call SetWaitableTimer to reset the timer, it will never go off.

Semaphore Kernel Objects

Semaphore kernel objects are used to count resources. Like all kernel objects they contain a usage count, but they also contain two additional signed 32-bit values: a maximum resource count and a current resource count. The maximum resource count identifies the maximum number of resources the semaphore can control, and the current resource count identifies the number of resources currently available.

The rules for a semaphore are as follows:

• If the current resource count is greater than 0, the semaphore is signaled.

• If the current resource count is 0, the semaphore is nonsignaled.

• The system never allows the current resource count to become negative.

• The current resource count can never exceed the maximum resource count.

The following function creates a semaphore kernel object:

 HANDLE CreateSemaphore(
   PSECURITY_ATTRIBUTES psa,
   LONG lInitialCount,
   LONG lMaximumCount,
   PCTSTR pszName);

By calling the OpenSemaphore function, another process can obtain its own process-relative handle to an existing semaphore:

 

 HANDLE OpenSemaphore(
   DWORD fdwAccess,
   BOOL bInheritHandle,
   PCTSTR pszName);

By calling the ReleaseSemaphore function, a thread can increase the semaphore's current resource count:

 

 BOOL ReleaseSemaphore(
   HANDLE hsem,
   LONG lReleaseCount,
   PLONG plPreviousCount);
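
To show how the pieces fit together, here is a minimal sketch that allows at most three threads into a region of code at once; g_hSem and ClientThread are invented names used only for illustration:

 HANDLE g_hSem;   // created once at startup with CreateSemaphore(NULL, 3, 3, NULL)

 DWORD WINAPI ClientThread(PVOID pvParam)
 {
    WaitForSingleObject(g_hSem, INFINITE);   // decrements the count, or waits while it is 0
    // ... at most three threads execute this region at the same time ...
    ReleaseSemaphore(g_hSem, 1, NULL);       // puts one count back
    return 0;
 }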

Mutex Kernel Objects

A mutex kernel object ensures that a thread has mutually exclusive access to a single resource.
Mutex objects have many uses and are among the most commonly used kernel objects. Typically they are used to protect a block of memory accessed by multiple threads. If multiple threads were to access the memory block at the same time, the data in it could be corrupted. A mutex guarantees that any thread accessing the memory block has exclusive access to it, which preserves the data's integrity.

The rules for using mutex objects are as follows:

• If the mutex's thread ID is 0 (an invalid thread ID), the mutex is not owned by any thread and is signaled.

• If the thread ID is nonzero, a thread owns the mutex and the mutex is nonsignaled.

• Unlike all other kernel objects, mutexes have special code in the operating system that allows them to violate the normal rules (this exception is described later).

To use a mutex, a process must first call CreateMutex to create the mutex object:

 HANDLE CreateMutex(
   PSECURITY_ATTRIBUTES psa,
   BOOL fInitialOwner,
   PCTSTR pszName);

By calling OpenMutex, another process can obtain its own process-relative handle to an existing mutex object:

 

 HANDLE OpenMutex(
   DWORD fdwAccess,
   BOOL bInheritHandle,
   PCTSTR pszName);

Once a thread's wait on a mutex succeeds, the thread knows that it has exclusive access to the protected resource. Any other thread that tries to gain access to the resource (by waiting on the same mutex) is placed in a wait state. When the thread that has access no longer needs it, it must release the mutex by calling the ReleaseMutex function:

 

BOOL ReleaseMutex(HANDLE hMutex);

This function decrements the object's recursion counter by 1.
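
The typical usage pattern looks like the following sketch; g_hMutex and WriterThread are invented names, and the protected data access is elided:

 HANDLE g_hMutex;   // created once with CreateMutex(NULL, FALSE, NULL): initially unowned

 DWORD WINAPI WriterThread(PVOID pvParam)
 {
    WaitForSingleObject(g_hMutex, INFINITE);   // gain exclusive ownership
    // ... modify the shared data here ...
    ReleaseMutex(g_hMutex);                    // let one waiting thread proceed
    return 0;
 }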

Comparison between mutex objects and critical sections

In terms of how waiting threads are scheduled, mutex objects and critical sections behave identically. They differ, however, in their other attributes. Table 9-1 compares them.

 

Table 9-1 Comparison between mutex objects and critical sections

Feature | Mutex object | Critical section
Running speed | Slow | Fast
Can be used across process boundaries | Yes | No
Declaration | HANDLE hmtx; | CRITICAL_SECTION cs;
Initialization | hmtx = CreateMutex(NULL, FALSE, NULL); | InitializeCriticalSection(&cs);
Cleanup | CloseHandle(hmtx); | DeleteCriticalSection(&cs);
Infinite wait | WaitForSingleObject(hmtx, INFINITE); | EnterCriticalSection(&cs);
Zero-timeout wait | WaitForSingleObject(hmtx, 0); | TryEnterCriticalSection(&cs);
Arbitrary timeout wait | WaitForSingleObject(hmtx, dwMilliseconds); | Not possible
Release | ReleaseMutex(hmtx); | LeaveCriticalSection(&cs);
Can be waited on together with other kernel objects | Yes (WaitForMultipleObjects or similar) | No
Thread synchronization object quick reference

Relationship between kernel objects and thread synchronization

Object | Nonsignaled when | Signaled when | Side effect of successful wait
Process | The process is still active | The process terminates (ExitProcess, TerminateProcess) | None
Thread | The thread is still active | The thread terminates (ExitThread, TerminateThread) | None
Job | The job's time has not expired | The job's time has expired | None
File | An I/O request is being processed | The I/O request completes | None
Console input | No input exists | Input is available | None
File change notification | No files have changed | The file system detects a change | Resets the notification
Auto-reset event | ResetEvent, PulseEvent, or a successful wait | SetEvent/PulseEvent is called | Resets the event
Manual-reset event | ResetEvent or PulseEvent | SetEvent/PulseEvent is called | None
Auto-reset waitable timer | CancelWaitableTimer or a successful wait | The due time arrives (SetWaitableTimer) | Resets the timer
Manual-reset waitable timer | CancelWaitableTimer | The due time arrives (SetWaitableTimer) | None
Semaphore | A successful wait | The count is greater than 0 (ReleaseSemaphore) | Decrements the count by 1
Mutex | A successful wait | Not owned by a thread (ReleaseMutex) | Gives ownership to the thread
Critical section (user mode) | A successful wait ((Try)EnterCriticalSection) | Not owned by a thread (LeaveCriticalSection) | Gives ownership to the thread

Other thread synchronization Functions

1. Asynchronous device I/O allows a thread to start a read or write operation without having to wait for it to complete. For example, if a thread needs to load a large file into memory, it can tell the system to load the file and then, while the system does so, keep busy with other tasks such as creating windows and initializing internal data structures. When the initialization is finished, the thread can suspend itself and wait for the system to notify it that the file has been read.
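
A minimal sketch of this idea using overlapped (asynchronous) file I/O follows; the file name and buffer size are invented for illustration:

 HANDLE hFile = CreateFile(TEXT("bigfile.dat"), GENERIC_READ, 0, NULL,
                           OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
 OVERLAPPED ov = {0};
 ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   // manual-reset event for completion
 BYTE buffer[64 * 1024];
 DWORD bytesRead;

 // Start the read; it typically returns FALSE with ERROR_IO_PENDING while the device works.
 if (!ReadFile(hFile, buffer, sizeof(buffer), &bytesRead, &ov) &&
     GetLastError() == ERROR_IO_PENDING) {
    // ... do other useful work here (create windows, initialize data structures) ...
    GetOverlappedResult(hFile, &ov, &bytesRead, TRUE);   // now wait for the read to finish
 }
 CloseHandle(ov.hEvent);
 CloseHandle(hFile);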

2. A thread can also call WaitForInputIdle to suspend itself:

 

 DWORD WaitForInputIdle(
   HANDLE hProcess,
   DWORD dwMilliseconds);

This function waits until the thread that created the first window of the application in the process identified by hProcess has no pending input. It is useful to a parent process that spawns a child process to perform some operation.
3. A thread can call the MsgWaitForMultipleObjects or MsgWaitForMultipleObjectsEx function to wait for its own messages:

 

 DWORD MsgWaitForMultipleObjects(
   DWORD dwCount,
   PHANDLE phObjects,
   BOOL fWaitAll,
   DWORD dwMilliseconds,
   DWORD dwWakeMask);

 DWORD MsgWaitForMultipleObjectsEx(
   DWORD dwCount,
   PHANDLE phObjects,
   DWORD dwMilliseconds,
   DWORD dwWakeMask,
   DWORD dwFlags);

These functions are very similar to WaitForMultipleObjects. The difference is that they allow a thread to be scheduled either when a kernel object becomes signaled or when a window message needs to be dispatched to a window created by the calling thread.
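
A common use is to wait for a kernel object in a UI thread while still pumping window messages. The sketch below assumes hThread is a handle to a running worker thread:

 BOOL fDone = FALSE;
 while (!fDone) {
    DWORD dw = MsgWaitForMultipleObjects(1, &hThread, FALSE, INFINITE, QS_ALLINPUT);
    if (dw == WAIT_OBJECT_0) {
       fDone = TRUE;                 // the thread handle became signaled (worker finished)
    } else if (dw == WAIT_OBJECT_0 + 1) {
       MSG msg;                      // a message arrived; dispatch everything queued
       while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
          TranslateMessage(&msg);
          DispatchMessage(&msg);
       }
    }
 }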

4. Windows provides excellent debugging support built into the operating system. When a debugger starts running, it attaches itself to the program being debugged. The debugger then simply sits idle, waiting for the operating system to notify it of debugging events related to that program. The debugger waits for these events by calling the WaitForDebugEvent function:

 

 BOOL WaitForDebugEvent(
   PDEBUG_EVENT pde,
   DWORD dwMilliseconds);

When a debugger calls this function, the debugger's thread stops running; the system notifies the debugger that a debugging event has occurred by allowing the call to WaitForDebugEvent to return.

5. The SignalObjectAndWait function signals one kernel object and waits on another kernel object in a single atomic operation:

 

 DWORD SignalObjectAndWait(
   HANDLE hObjectToSignal,
   HANDLE hObjectToWaitOn,
   DWORD  dwMilliseconds,
   BOOL   fAlertable);
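
For instance, a thread can atomically release a mutex and begin waiting on an event, instead of calling ReleaseMutex followed by a separate wait. The handles in this sketch are assumed to have been created elsewhere:

 DWORD dw = SignalObjectAndWait(hMutex,     // object to signal (releases the mutex)
                                hEvent,     // object to wait on
                                INFINITE,   // no timeout
                                FALSE);     // not alertable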

 
