Multithreading Synchronization and Mutual Exclusion (3)

Source: Internet
Author: User
In multithreaded programming, two problems are unavoidable: mutual exclusion and synchronization between threads.
Thread synchronization is a constraint relationship between threads: the execution of one thread depends on a message from another thread, and it waits until that message arrives.
Thread mutual exclusion refers to the exclusive access that individual threads have to shared system resources. When a shared resource is used by several threads, only one thread is allowed to use it at any time; other threads that want the resource must wait until the thread holding it releases it. Thread mutual exclusion can be considered a special case of thread synchronization (hereinafter referred to simply as synchronization).

Synchronization methods between threads fall into two categories: user mode and kernel mode. As the names suggest, kernel-mode methods synchronize through kernel objects and therefore require switching between kernel mode and user mode, while user-mode methods complete the operation entirely in user mode without that switch.
The user-mode methods are: atomic operations (e.g. on a single global variable) and critical sections. The kernel-mode methods are: events, semaphores, and mutexes.
Let's take a look at these methods separately:

Atomic operations (global variables):

#include "stdafx.h"
#include "Windows.h"
#include "stdio.h"

volatile int threadData = 1;

DWORD WINAPI ThreadProcess(LPVOID lpParam)
{
    for (int i = 0; i < 6; i++)
    {
        Sleep(1000);
        printf("Sub thread tick %5d!\n", (i + 1) * 1000);
    }
    threadData = 0;                     // signal the main thread that we are done
    printf("Exit sub thread!\n");
    return 0;
}

int _tmain(int argc, _TCHAR* argv[])
{
    HANDLE hThread;
    DWORD threadID;
    hThread = CreateThread(NULL,           // default security attributes
                           0,              // default stack size
                           ThreadProcess,  // thread entry point
                           NULL,           // no argument for the thread
                           0,              // run immediately after creation
                           &threadID);

    while (threadData)
    {
        printf("Main thread is waiting for sub thread!\n");
        Sleep(600);
    }

    CloseHandle(hThread);
    printf("Main thread finished!\n");
    system("pause");
    return 0;
}

In the program above, I use the global variable threadData to synchronize the threads: the child thread sets it to 0 when it finishes, while the parent thread loops checking its value. Once the child thread ends, the parent thread continues with the subsequent operations.
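As an aside, the volatile flag above relies on MSVC-specific behavior; in portable modern C++ the same handshake is normally expressed with std::atomic. A minimal sketch of the same pattern (the names thread_data, thread_process, and run_demo are mine, not from the original program):

```cpp
#include <atomic>
#include <thread>

std::atomic<int> thread_data{1};   // 1 = sub-thread still running

void thread_process() {
    // ... the sub-thread's work would go here ...
    thread_data.store(0);          // signal the parent that we are done
}

int run_demo() {
    std::thread t(thread_process);
    while (thread_data.load()) {   // parent polls the flag, as in the Win32 version
        std::this_thread::yield();
    }
    t.join();
    return thread_data.load();     // 0 once the sub-thread has finished
}
```

The atomic load/store pair gives the visibility guarantee that plain volatile does not promise in standard C++.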

Critical section (Critical Section)

A lightweight way to ensure that only one thread can access the data at any point in time: only one thread is allowed to access the shared resource at any given moment. If multiple threads attempt to enter the critical section at the same time, then after one thread has entered, all the others are suspended; they may proceed only after the thread inside the critical section leaves. Once the critical section is released, the other threads can compete to enter it, thereby achieving atomic access to the shared resource.

The critical section contains two operational primitives:
EnterCriticalSection() Enter the critical section
LeaveCriticalSection() Leave the critical section

After EnterCriticalSection() executes, the code enters the critical section; no matter what happens afterwards, you must ensure that the matching LeaveCriticalSection() is executed, otherwise the shared resource protected by the critical section will never be released. Although critical sections synchronize quickly, they can only synchronize threads within the same process, not threads across multiple processes.
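The enter/leave discipline described above is Windows-specific (CRITICAL_SECTION), but the same pattern can be sketched portably with std::mutex, which likewise only synchronizes threads within one process. The names section, counter, and run_counter_demo below are illustrative, not from the original article:

```cpp
#include <mutex>
#include <thread>
#include <vector>

std::mutex section;        // plays the role of the CRITICAL_SECTION
long counter = 0;          // shared resource protected by the section

void add_many(int n) {
    for (int i = 0; i < n; ++i) {
        section.lock();    // EnterCriticalSection analogue
        ++counter;         // only one thread at a time executes this
        section.unlock();  // LeaveCriticalSection analogue -- must always run
    }
}

long run_counter_demo() {
    counter = 0;
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(add_many, 10000);
    for (auto& t : pool) t.join();
    return counter;        // 40000 only if the section really excluded races
}
```

Without the lock/unlock pair the four threads would race on counter and the final total would usually fall short of 40000.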


Events (Event)

Event objects can also maintain thread synchronization through notifications, and they can synchronize threads across different processes.

Several operational primitives contained in the event:
CreateEvent() Create an event
OpenEvent() Open an event
SetEvent() Set the event to the signaled state
WaitForSingleObject() Wait for a single event
WaitForMultipleObjects() Wait for multiple events

WaitForMultipleObjects function prototype:
DWORD WaitForMultipleObjects(
    IN DWORD nCount,              // number of handles to wait for
    IN CONST HANDLE *lpHandles,   // pointer to the handle array
    IN BOOL bWaitAll,             // whether to wait for all of them
    IN DWORD dwMilliseconds       // wait timeout
);


The parameter nCount specifies the number of kernel objects to wait for, and lpHandles points to the array that holds them. bWaitAll selects between two ways of waiting for the nCount kernel objects: if TRUE, the function returns when all of the objects are signaled; if FALSE, it returns as soon as any one of them is signaled. dwMilliseconds plays exactly the same role as in WaitForSingleObject(). If the wait times out, the function returns WAIT_TIMEOUT.

Events can synchronize threads across different processes, and they make it easy to implement prioritized waits on multiple threads, for example by writing several WaitForSingleObject calls in place of one WaitForMultipleObjects, which makes the programming more flexible.
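For readers without Windows at hand, the signal-and-wait behavior of a (manual-reset) event can be approximated in portable C++ with a condition variable plus a flag. This is an analogue of SetEvent/WaitForSingleObject, not the Win32 API itself, and all names below are my own:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex evt_mtx;
std::condition_variable evt_cv;
bool evt_signaled = false;          // false = non-signaled event

void set_event() {                  // analogue of SetEvent()
    std::lock_guard<std::mutex> lk(evt_mtx);
    evt_signaled = true;            // manual-reset: stays signaled
    evt_cv.notify_all();
}

void wait_for_event() {             // analogue of WaitForSingleObject(evt, INFINITE)
    std::unique_lock<std::mutex> lk(evt_mtx);
    evt_cv.wait(lk, [] { return evt_signaled; });
}

int run_event_demo() {
    int result = 0;
    std::thread waiter([&] { wait_for_event(); result = 42; });
    std::thread setter(set_event);
    setter.join();
    waiter.join();
    return result;                  // waiter only writes after the event fires
}
```

The predicate passed to cv.wait guards against spurious wakeups, which a bare wait would not.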

Mutex (mutex)

A mutex is very similar to a critical section: only the thread that owns the mutex has access to the resource, and because there is only one mutex, the shared resource cannot be accessed by multiple threads at the same time. The thread currently occupying the resource should hand over its mutex after it finishes its task, so that other threads can acquire it and access the resource in turn. The mutex is more complex than the critical section, because a mutex can safely share a resource not only among threads of the same application but also among threads of different applications.

Several operations primitives contained in the mutex:
CreateMutex () Create a mutex
OpenMutex () Open a mutex
ReleaseMutex () Free Mutex
WaitForSingleObject() Wait for the mutex object
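The ownership semantics described above can be sketched portably with std::mutex: while one thread owns the lock, another thread's attempt to take it fails until ownership is handed back. This is an analogue, not the Win32 named-mutex API, and the names below are illustrative:

```cpp
#include <mutex>
#include <thread>

std::mutex mtx_print;   // stands in for the Win32 mutex handle

// Returns what a second thread observes while the first still owns the mutex.
bool contended_try_lock() {
    bool acquired = true;
    mtx_print.lock();                       // this thread takes ownership
    std::thread other([&] {
        acquired = mtx_print.try_lock();    // fails: the mutex is already owned
        if (acquired) mtx_print.unlock();
    });
    other.join();
    mtx_print.unlock();                     // ReleaseMutex analogue: hand it back
    return acquired;                        // false, proving mutual exclusion held
}
```

Note that std::mutex only works within one process; sharing between different applications, as Win32 named mutexes allow, needs OS-level objects.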

Semaphore (Semaphore)

Unlike the previous methods, a semaphore object synchronizes threads in a way that allows multiple threads to use the shared resource at the same time, like the PV operations in operating systems. It specifies the maximum number of threads that may access the shared resource simultaneously: multiple threads may access the same resource at once, but the number doing so at any moment is capped. When you create a semaphore with CreateSemaphore(), you specify both the maximum resource count and the current available resource count. Typically the current available count is set equal to the maximum; each time an additional thread accesses the shared resource, the available count is decremented by 1, and the semaphore remains signaled as long as the available count is greater than 0. When the available count drops to 0, the number of threads using the resource has reached the allowed maximum and no further threads may enter, and the semaphore is no longer signaled. When a thread finishes with the shared resource, it should call ReleaseSemaphore() on leaving to increment the available count by 1. The current available resource count can never exceed the maximum resource count.

PV operations and the semaphore concept were proposed by the Dutch scientist E. W. Dijkstra. A semaphore S is an integer: S >= 0 represents the number of resource entities available to concurrent processes, while S < 0 indicates (by its absolute value) the number of processes waiting to use the shared resource.

P operation (request a resource):
(1) Decrement S by 1;
(2) If the result is still greater than or equal to zero, the process continues to execute;
(3) If the result is less than zero, the process is blocked and placed in the queue associated with the semaphore, and control transfers to the process scheduler.

V operation (release a resource):
(1) Increment S by 1;
(2) If the result is greater than zero, the process continues to execute;
(3) If the result is less than or equal to zero, a waiting process is awakened from the semaphore's waiting queue, and then control returns to the original process or transfers to the process scheduler.
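The P and V operations above can be sketched in portable C++ with a mutex and a condition variable. One simplification, flagged here: instead of letting S go negative to count the waiters, this sketch blocks P while S is zero, which gives the caller the same observable behavior. The class and method names are mine:

```cpp
#include <condition_variable>
#include <mutex>

// A counting semaphore implementing the P/V operations described above.
class Semaphore {
    std::mutex m;
    std::condition_variable cv;
    int s;                                      // current available resource count
public:
    explicit Semaphore(int initial) : s(initial) {}

    void P() {                                  // request a resource
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return s > 0; });  // block while none are available
        --s;
    }

    void V() {                                  // release a resource
        std::lock_guard<std::mutex> lk(m);
        ++s;
        cv.notify_one();                        // wake one waiting thread, if any
    }

    int count() {
        std::lock_guard<std::mutex> lk(m);
        return s;
    }
};
```

Constructing Semaphore with the maximum count and bracketing each use of the shared resource with P()/V() reproduces the CreateSemaphore/ReleaseSemaphore pattern described above.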

Several operational primitives contained in the semaphore:
CreateSemaphore() Create a semaphore
OpenSemaphore() Open a semaphore
ReleaseSemaphore() Release the semaphore
WaitForSingleObject() Wait for the semaphore

Semaphores are well suited to synchronizing the threads of socket programs. For example, if an HTTP server on a network limits the number of users that may access the same page at the same time, the server can start one thread for each user's page request, with the page as the shared resource to protect. Synchronizing the threads with a semaphore ensures that at any moment no more than the configured maximum number of users can access the page, while other access attempts are suspended and can proceed only after some user has left the page.

Because they are very similar in their use, I'll combine them in a simple example:

#include "stdafx.h"
#include "Windows.h"
#include "stdio.h"

CRITICAL_SECTION csPrint;   // critical section
HANDLE evtPrint;            // event: signaled when printing may proceed
HANDLE mtxPrint;            // mutex: owned by whichever thread is inside the protected code
HANDLE smphPrint;           // semaphore: limits how many threads may enter at once

void Print(char *str)
{
    EnterCriticalSection(&csPrint);           // enter the critical section
    WaitForSingleObject(evtPrint, INFINITE);  // wait for the event to be signaled
    WaitForSingleObject(mtxPrint, INFINITE);  // wait to own the mutex
    WaitForSingleObject(smphPrint, INFINITE); // P operation: take one semaphore count

    for (; *str != '\0'; str++)               // print the string one character at a time
        printf("%c", *str);
    printf("\n");

    ReleaseSemaphore(smphPrint, 1, NULL);     // V operation: give the count back
    ReleaseMutex(mtxPrint);                   // hand the mutex back
    SetEvent(evtPrint);                       // signal the event again
    LeaveCriticalSection(&csPrint);           // leave the critical section
}

The Print function acquires all four synchronization objects before touching the shared output and releases them in the reverse order when it is done.
