Multi-thread synchronization


Thread synchronization falls into two categories: user-mode synchronization and synchronization through kernel objects.
The critical section is a user-mode mechanism; it can only synchronize threads within a single process.

Kernel-object synchronization includes:

Event kernel objects (usable across processes): while a critical section can only synchronize threads within the same process, threads in other processes can be synchronized with an event kernel object, provided they obtain access to that event object. A named event created in one process can be opened from another with the OpenEvent() function, whose prototype is:

HANDLE OpenEvent(
  DWORD dwDesiredAccess, // access flag
  BOOL bInheritHandle, // inheritance flag
  LPCTSTR lpName // event object name
);

Semaphore kernel object (usable across processes): the counting behavior of semaphores makes them well suited to synchronizing threads in socket programs. For example, an HTTP server may need to limit the number of users simultaneously accessing the same page. The server starts one thread per page request, and the page is the shared resource to be protected. Synchronizing those threads with a semaphore guarantees that at any moment no more than the maximum allowed number of user threads can access the page; further access attempts are suspended, and another user can enter only after one of the current users leaves the page.

Mutex kernel object: the mutex is a widely used kernel object that ensures mutually exclusive access by multiple threads to a shared resource. As with the critical section, only the thread that owns the mutex object has permission to access the resource; because there is only one mutex object, the shared resource can never be accessed by more than one thread at a time. The thread occupying the resource should hand over the mutex once its task is done, so that another thread can acquire it and access the resource. Unlike other kernel objects, the mutex carries special bookkeeping that is managed by the operating system; this thread-ownership tracking even allows the OS to perform operations on a mutex (such as handling one abandoned by a terminated thread) that are not possible with other kernel objects.

 

Critical section:

CRITICAL_SECTION g_cs;
// Shared resource
char g_cArray[10];
UINT ThreadProc10(LPVOID pParam)
{
    // Enter the critical section
    EnterCriticalSection(&g_cs);
    // Write to the shared resource
    for (int i = 0; i < 10; i++)
    {
        g_cArray[i] = 'a';
        Sleep(1);
    }
    // Leave the critical section
    LeaveCriticalSection(&g_cs);
    return 0;
}
UINT ThreadProc11(LPVOID pParam)
{
    // Enter the critical section
    EnterCriticalSection(&g_cs);
    // Write to the shared resource
    for (int i = 0; i < 10; i++)
    {
        g_cArray[10 - i - 1] = 'b';
        Sleep(1);
    }
    // Leave the critical section
    LeaveCriticalSection(&g_cs);
    return 0;
}
......
void CSample08View::OnCriticalSection()
{
    // Initialize the critical section
    InitializeCriticalSection(&g_cs);
    // Start the threads
    AfxBeginThread(ThreadProc10, NULL);
    AfxBeginThread(ThreadProc11, NULL);
    // Wait for the computation to finish
    Sleep(300);
    // Report the result
    CString sResult = CString(g_cArray, 10);
    AfxMessageBox(sResult);
}
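For comparison, the same enter/leave pattern can be sketched in portable standard C++, where std::lock_guard plays the role of the EnterCriticalSection()/LeaveCriticalSection() pair. This is an illustrative sketch, not Win32 code; the writer function names are hypothetical:

```cpp
#include <array>
#include <mutex>

std::mutex g_mtx;                // plays the role of the CRITICAL_SECTION
std::array<char, 10> g_buf{};    // shared resource

// Each writer holds the lock for its entire pass over the array,
// so the two passes can never interleave.
void WriterA() {
    std::lock_guard<std::mutex> lock(g_mtx); // ~ EnterCriticalSection
    for (int i = 0; i < 10; i++)
        g_buf[i] = 'a';
}                                            // ~ LeaveCriticalSection on scope exit

void WriterB() {
    std::lock_guard<std::mutex> lock(g_mtx);
    for (int i = 0; i < 10; i++)
        g_buf[10 - i - 1] = 'b';
}
```

In a real program each writer would run on its own thread (via std::thread, or AfxBeginThread as above); because the lock is held for the whole pass, the array ends up all 'a' or all 'b', never an interleaved mix.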

Event

// Event handle
HANDLE hEvent = NULL;
// Shared resource
char g_cArray[10];
......
UINT ThreadProc12(LPVOID pParam)
{
    // Wait for the event to be signaled
    WaitForSingleObject(hEvent, INFINITE);
    // Write to the shared resource
    for (int i = 0; i < 10; i++)
    {
        g_cArray[i] = 'a';
        Sleep(1);
    }
    // Signal the event again once processing is done
    SetEvent(hEvent);
    return 0;
}
UINT ThreadProc13(LPVOID pParam)
{
    // Wait for the event to be signaled
    WaitForSingleObject(hEvent, INFINITE);
    // Write to the shared resource
    for (int i = 0; i < 10; i++)
    {
        g_cArray[10 - i - 1] = 'b';
        Sleep(1);
    }
    // Signal the event again once processing is done
    SetEvent(hEvent);
    return 0;
}
......
void CSample08View::OnEvent()
{
    // Create an auto-reset event, initially non-signaled
    hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    // Signal the event
    SetEvent(hEvent);
    // Start the threads
    AfxBeginThread(ThreadProc12, NULL);
    AfxBeginThread(ThreadProc13, NULL);
    // Wait for the computation to finish
    Sleep(300);
    // Report the result
    CString sResult = CString(g_cArray, 10);
    AfxMessageBox(sResult);
}
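Conceptually, the auto-reset event used above can be modeled in portable C++ with a mutex and a condition variable. The AutoResetEvent class below is an illustrative analogue, not a Win32 API: Set() signals the event, and each Wait() consumes the signal, returning the object to the non-signaled state.

```cpp
#include <condition_variable>
#include <mutex>

// Portable sketch of a Win32-style auto-reset event.
class AutoResetEvent {
public:
    // Signal the event; wakes one waiter if any is blocked.
    void Set() {
        {
            std::lock_guard<std::mutex> lock(m_);
            signaled_ = true;
        }
        cv_.notify_one();
    }
    // Block until signaled, then consume the signal (auto-reset).
    void Wait() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return signaled_; });
        signaled_ = false;
    }
    // Non-blocking check of the current state (for demonstration).
    bool IsSignaled() {
        std::lock_guard<std::mutex> lock(m_);
        return signaled_;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool signaled_ = false;
};
```

This mirrors CreateEvent(NULL, FALSE, ...) semantics: exactly one WaitForSingleObject() call succeeds per SetEvent() call.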

Semaphore Kernel Object

The semaphore kernel object synchronizes threads differently from the preceding methods: it allows multiple threads to access the same resource at the same time, but limits the maximum number of threads that may do so concurrently. When creating a semaphore with CreateSemaphore(), both the maximum resource count and the current available resource count must be specified; generally the current available count is set equal to the maximum. Each time a thread gains access to the shared resource, the current available count is decremented by 1. As long as it remains greater than 0, the semaphore can be signaled; once it drops to 0, the number of threads occupying the resource has reached the allowed maximum and no further threads may enter, so the semaphore is no longer signaled. When a thread finishes with the shared resource, it should call ReleaseSemaphore() to increment the current available count by 1. At no time may the current available count exceed the maximum count.


Figure 3: using a semaphore object to control a resource

 
Figure 3 illustrates how a semaphore object controls a resource. The black and white arrows indicate the maximum resource count and the current available resource count, respectively. In the initial state (a), both counts are 4. Each thread that gains access to the resource (a black arrow) decrements the current available count by 1; (b) shows the state while three threads hold the shared resource. When the number of entering threads reaches 4, as in (c), the maximum has been reached and the current available count has dropped to 0, so no further thread can access the resource. After a thread finishes processing and exits, its slot is freed: in (d) two threads have released the resource, the current available count is 2, and two more threads may enter. As can be seen, a semaphore controls thread access to a resource by counting; in fact, semaphores are also known as Dijkstra counters.
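The count transitions shown in Figure 3 can be sketched in portable C++. The CountingSemaphore class below is an illustrative analogue of the Win32 semaphore, not a real API, and assumes only the standard library:

```cpp
#include <condition_variable>
#include <mutex>

// Acquire() decrements the current available count, blocking while it is 0;
// Release() increments it; the count can never exceed the maximum.
class CountingSemaphore {
public:
    CountingSemaphore(long initial, long maximum)
        : count_(initial), max_(maximum) {}

    void Acquire() {                       // ~ WaitForSingleObject(h, INFINITE)
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return count_ > 0; });
        --count_;
    }
    bool TryAcquire() {                    // ~ WaitForSingleObject(h, 0)
        std::lock_guard<std::mutex> lock(m_);
        if (count_ == 0) return false;
        --count_;
        return true;
    }
    bool Release(long n = 1) {             // ~ ReleaseSemaphore(h, n, NULL)
        {
            std::lock_guard<std::mutex> lock(m_);
            if (count_ + n > max_) return false; // must not exceed the maximum
            count_ += n;
        }
        cv_.notify_all();
        return true;
    }
    long Count() {
        std::lock_guard<std::mutex> lock(m_);
        return count_;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    long count_, max_;
};
```

Walking a CountingSemaphore(4, 4) through four acquisitions and two releases reproduces states (a) through (d) of the figure.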

Synchronizing threads with a semaphore kernel object mainly involves the CreateSemaphore(), OpenSemaphore(), ReleaseSemaphore(), WaitForSingleObject() and WaitForMultipleObjects() functions. CreateSemaphore() creates a semaphore kernel object; its prototype is:

HANDLE CreateSemaphore(
  LPSECURITY_ATTRIBUTES lpSemaphoreAttributes, // security attributes pointer
  LONG lInitialCount, // initial count
  LONG lMaximumCount, // maximum count
  LPCTSTR lpName // object name pointer
);

 
The lMaximumCount parameter is a signed 32-bit value that defines the maximum allowed resource count; it must be greater than zero. The lpName parameter can give the created semaphore a name; because a kernel object is created, a named semaphore can then be obtained from other processes. The OpenSemaphore() function opens a semaphore created in another process by that name. Its prototype is as follows:

HANDLE OpenSemaphore(
  DWORD dwDesiredAccess, // access flag
  BOOL bInheritHandle, // inheritance flag
  LPCTSTR lpName // semaphore name
);

 
When a thread finishes with the shared resource, it must call ReleaseSemaphore() to increment the current available resource count. Otherwise, even though the number of threads actually using the resource has not reached the limit, other threads still cannot enter, because the current available count remains 0. The prototype of ReleaseSemaphore() is:

BOOL ReleaseSemaphore(
  HANDLE hSemaphore, // semaphore handle
  LONG lReleaseCount, // amount to add to the count
  LPLONG lpPreviousCount // receives the previous count
);

 
This function adds the value of lReleaseCount to the semaphore's current available resource count. lReleaseCount is usually set to 1, but other values can be used if needed. WaitForSingleObject() and WaitForMultipleObjects() are used mainly at the entry point of a thread function that tries to access the shared resource, to determine whether the semaphore's current available count permits the thread to enter. Only while the current available count is greater than 0 is the waited-on semaphore kernel object signaled.
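Because a forgotten ReleaseSemaphore() call permanently lowers the available count in exactly this way, it can help to tie the release to scope exit with a guard object. A minimal sketch, assuming a hypothetical semaphore type that exposes Acquire() and Release() member functions (these names are illustrative, not Win32 APIs):

```cpp
// Scope guard: acquires in the constructor, releases in the destructor,
// so the count is restored even on early return or exception.
template <typename Sem>
class SemaphoreGuard {
public:
    explicit SemaphoreGuard(Sem& s) : s_(s) { s_.Acquire(); }
    ~SemaphoreGuard() { s_.Release(); }
    SemaphoreGuard(const SemaphoreGuard&) = delete;
    SemaphoreGuard& operator=(const SemaphoreGuard&) = delete;
private:
    Sem& s_;
};

// A trivial counter standing in for a semaphore, for demonstration only.
struct FakeSemaphore {
    int available = 2;
    void Acquire() { --available; }
    void Release() { ++available; }
};
```

The same idea applies to the Win32 handles used in this article if Acquire()/Release() are implemented with WaitForSingleObject() and ReleaseSemaphore().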

As noted earlier, the counting behavior of semaphores makes them well suited to socket programs: for example, an HTTP server that must limit the number of users simultaneously accessing the same page. The server starts one thread per page request, with the page as the shared resource to be protected. Synchronizing the threads with a semaphore ensures that at any moment no more than the maximum allowed number of user threads can access the page, while additional access attempts are suspended until a user leaves the page. The following sample code shows a similar process:

// Semaphore object handle
HANDLE hSemaphore;
UINT ThreadProc15(LPVOID pParam)
{
    // Try to pass the semaphore gate
    WaitForSingleObject(hSemaphore, INFINITE);
    // Thread task processing
    AfxMessageBox("Thread 1 is executing!");
    // Release the semaphore count
    ReleaseSemaphore(hSemaphore, 1, NULL);
    return 0;
}
UINT ThreadProc16(LPVOID pParam)
{
    // Try to pass the semaphore gate
    WaitForSingleObject(hSemaphore, INFINITE);
    // Thread task processing
    AfxMessageBox("Thread 2 is executing!");
    // Release the semaphore count
    ReleaseSemaphore(hSemaphore, 1, NULL);
    return 0;
}
UINT ThreadProc17(LPVOID pParam)
{
    // Try to pass the semaphore gate
    WaitForSingleObject(hSemaphore, INFINITE);
    // Thread task processing
    AfxMessageBox("Thread 3 is executing!");
    // Release the semaphore count
    ReleaseSemaphore(hSemaphore, 1, NULL);
    return 0;
}
......
void CSample08View::OnSemaphore()
{
    // Create a semaphore object allowing at most two concurrent threads
    hSemaphore = CreateSemaphore(NULL, 2, 2, NULL);
    // Start the threads
    AfxBeginThread(ThreadProc15, NULL);
    AfxBeginThread(ThreadProc16, NULL);
    AfxBeginThread(ThreadProc17, NULL);
}

 

Mutex

// Mutex object handle
HANDLE hMutex = NULL;
// Shared resource
char g_cArray[10];
UINT ThreadProc18(LPVOID pParam)
{
    // Wait until the mutex object is signaled
    WaitForSingleObject(hMutex, INFINITE);
    // Write to the shared resource
    for (int i = 0; i < 10; i++)
    {
        g_cArray[i] = 'a';
        Sleep(1);
    }
    // Release the mutex object
    ReleaseMutex(hMutex);
    return 0;
}
UINT ThreadProc19(LPVOID pParam)
{
    // Wait until the mutex object is signaled
    WaitForSingleObject(hMutex, INFINITE);
    // Write to the shared resource
    for (int i = 0; i < 10; i++)
    {
        g_cArray[10 - i - 1] = 'b';
        Sleep(1);
    }
    // Release the mutex object
    ReleaseMutex(hMutex);
    return 0;
}
......
void CSample08View::OnMutex()
{
    // Create the mutex object, initially unowned
    hMutex = CreateMutex(NULL, FALSE, NULL);
    // Start the threads
    AfxBeginThread(ThreadProc18, NULL);
    AfxBeginThread(ThreadProc19, NULL);
    // Wait for the computation to finish
    Sleep(300);
    // Report the result
    CString sResult = CString(g_cArray, 10);
    AfxMessageBox(sResult);
}

 
