Four Ways to Synchronize C++ Threads (Windows)


Why thread synchronization?

When a program uses multiple threads, the threads rarely run completely independently for their whole lifetimes. More often, some threads produce results that other threads need to consume, and those results should only be read after the producing threads have finished computing them.
Without proper precautions, a consumer thread can easily read a result before the producer has finished writing it, and so observe a corrupt or stale value. For example, if multiple threads access the same global variable and all of them only read it, no problem arises. But if one thread modifies the variable while other threads read it concurrently, there is no guarantee that the readers see the value the writer stored.
To guarantee that a reader sees the modified value, all other access to the variable must be blocked while it is being written, and the restriction lifted only after the assignment completes. Protection of this kind is what thread synchronization provides: it lets one thread safely observe the results of another thread's work.

Code example:
Two threads increment the same global variable concurrently, demonstrating a data race on a shared resource.

#include "stdafx.h"
#include <windows.h>
#include <iostream>
using namespace std;

int number = 1;   // shared global variable

unsigned long __stdcall ThreadProc1(void* lp)
{
    while (number < 100)        // loop bound assumed; the literal was lost in the original
    {
        cout << "Thread 1: " << number << endl;
        ++number;
        _sleep(100);
    }
    return 0;
}

unsigned long __stdcall ThreadProc2(void* lp)
{
    while (number < 100)
    {
        cout << "Thread 2: " << number << endl;
        ++number;
        _sleep(100);
    }
    return 0;
}

int main()
{
    CreateThread(NULL, 0, ThreadProc1, NULL, 0, NULL);
    CreateThread(NULL, 0, ThreadProc2, NULL, 0, NULL);
    Sleep(10 * 1000);
    system("pause");
    return 0;
}

Operation Result:

You can see that the two threads sometimes print the same value: the unsynchronized increments race, which is not the result we want.

About thread synchronization

The two basic issues in inter-thread communication are mutual exclusion and synchronization.

    • Thread synchronization is an ordering constraint between threads: one thread's execution depends on a message from another thread, and it waits until that message arrives.
    • Thread mutual exclusion concerns shared operating-system resources (resources in the broad sense, such as a global variable, not a Windows .res file): access to such a resource must be exclusive. When several threads want to use a shared resource, at most one is allowed to use it at a time; the others must wait until the current holder releases the resource.

Thread mutual exclusion is a special case of thread synchronization. In fact, mutual exclusion and synchronization correspond to the two situations in which inter-thread communication occurs:

    • when multiple threads must access a shared resource without corrupting it;
    • when one thread needs to notify one or more other threads that a task has completed.

Broadly, thread synchronization falls into two categories: synchronization in user mode and synchronization via kernel objects.

    • User-mode synchronization mainly comprises atomic access and the critical-section mechanism. It is very fast, which suits situations with strict requirements on thread performance.
    • Kernel-object synchronization is built on kernel objects such as events, waitable timers, semaphores, and mutexes. Because this mechanism uses kernel objects, each operation must switch the thread from user mode to kernel mode, which typically costs on the order of a thousand CPU cycles, so it is slower; in exchange, it is far more broadly applicable than user-mode synchronization.

In Win32, the main synchronization mechanisms are:
(1) events;
(2) semaphores;
(3) mutexes;
(4) critical sections.

Critical Section

A critical section is a piece of code that accesses a shared resource and may be executed by only one thread at any time. If several threads try to enter the critical section simultaneously, every thread that arrives after one has entered is suspended, and it resumes only when the thread inside leaves. Once the section is released, the waiting threads compete to enter in turn; in this way, operations on the shared resource are made effectively atomic.

A critical section protects a shared resource through a CRITICAL_SECTION structure; EnterCriticalSection() and LeaveCriticalSection() mark the entry to and exit from the protected region. The CRITICAL_SECTION object must be initialized with InitializeCriticalSection() before use, and every piece of code in every thread that touches the shared resource must be guarded by the same critical section. Otherwise the critical section cannot do its job, and the shared resource can still be corrupted.

Code example:

#include "stdafx.h"
#include <windows.h>
#include <iostream>
using namespace std;

int number = 1;              // shared global variable
CRITICAL_SECTION critical;   // critical-section object

unsigned long __stdcall ThreadProc1(void* lp)
{
    while (number < 100)     // loop bound assumed; the literal was lost in the original
    {
        EnterCriticalSection(&critical);
        cout << "Thread 1: " << number << endl;
        ++number;
        _sleep(100);
        LeaveCriticalSection(&critical);
    }
    return 0;
}

unsigned long __stdcall ThreadProc2(void* lp)
{
    while (number < 100)
    {
        EnterCriticalSection(&critical);
        cout << "Thread 2: " << number << endl;
        ++number;
        _sleep(100);
        LeaveCriticalSection(&critical);
    }
    return 0;
}

int main()
{
    InitializeCriticalSection(&critical);   // initialize before any thread uses it
    CreateThread(NULL, 0, ThreadProc1, NULL, 0, NULL);
    CreateThread(NULL, 0, ThreadProc2, NULL, 0, NULL);
    Sleep(10 * 1000);
    system("pause");
    return 0;
}

Operation Result:

The values are now printed strictly in order: thread synchronization has been achieved.

Event

An event is the most flexible inter-thread synchronization object Win32 provides. An event is either in the signaled state (true) or the nonsignaled state (false). By how their state changes, events fall into two categories:
(1) Manual-reset: the program must change the state itself, calling SetEvent and ResetEvent to signal and clear the event as needed.
(2) Auto-reset: once a waiting thread has been released, the event automatically returns to the nonsignaled state; no explicit reset is needed.

When using the event mechanism, note the following:
(1) if the event is accessed across processes, it must be named, and the name must not collide with other named objects in the system namespace;
(2) whether the event should reset automatically;
(3) the event's initial state.

Because an event is a kernel object, process B can call OpenEvent with the object's name to obtain a handle to an event created in process A, then pass that handle to functions such as ResetEvent, SetEvent, and WaitForMultipleObjects. In this way a thread in one process can control the execution of threads in another process, for example:

HANDLE hEvent = OpenEvent(EVENT_ALL_ACCESS, TRUE, "MyEvent");
ResetEvent(hEvent);

Code example:

#include "stdafx.h"
#include <windows.h>
#include <iostream>
using namespace std;

int number = 1;    // shared global variable
HANDLE hEvent;     // event handle

unsigned long __stdcall ThreadProc1(void* lp)
{
    while (number < 100)     // loop bound assumed; the literal was lost in the original
    {
        WaitForSingleObject(hEvent, INFINITE);   // wait for the event to become signaled
        cout << "Thread 1: " << number << endl;
        ++number;
        _sleep(100);
        SetEvent(hEvent);                        // hand the turn to the other thread
    }
    return 0;
}

unsigned long __stdcall ThreadProc2(void* lp)
{
    while (number < 100)
    {
        WaitForSingleObject(hEvent, INFINITE);
        cout << "Thread 2: " << number << endl;
        ++number;
        _sleep(100);
        SetEvent(hEvent);
    }
    return 0;
}

int main()
{
    // Auto-reset event, initially signaled. Create it before the threads
    // that wait on it, so they never wait on an invalid handle.
    hEvent = CreateEvent(NULL, FALSE, TRUE, "event");
    CreateThread(NULL, 0, ThreadProc1, NULL, 0, NULL);
    CreateThread(NULL, 0, ThreadProc2, NULL, 0, NULL);
    Sleep(10 * 1000);
    system("pause");
    return 0;
}

Operation Result:

The values are again printed strictly in order: thread synchronization has been achieved.

Semaphore

A semaphore is a synchronization object that maintains a count between 0 and a specified maximum. The semaphore is signaled while its count is greater than 0 and nonsignaled when the count is 0. Semaphores are useful for gating access to a resource that can support only a limited number of concurrent users.

The behavior of a semaphore can be summed up in four rules:
(1) if the current count is greater than 0, the semaphore is signaled;
(2) if the current count is 0, the semaphore is nonsignaled;
(3) the system never lets the count go negative;
(4) the count can never exceed the maximum.

Creating a semaphore

The function prototype is:

HANDLE CreateSemaphore(
    PSECURITY_ATTRIBUTES psa,   // semaphore security attributes
    LONG lInitialCount,         // initial count of available resources
    LONG lMaximumCount,         // maximum count
    PCTSTR pszName);            // name of the semaphore


Releasing a semaphore

A thread increments the semaphore's current count by calling the ReleaseSemaphore function. The function prototype is:

BOOL WINAPI ReleaseSemaphore(
    HANDLE hSemaphore,      // handle of the semaphore to increment
    LONG lReleaseCount,     // amount to add to the current count
    LPLONG lpPreviousCount  // receives the count before the increment
);
Opening a semaphore

Like other kernel objects, a semaphore can be accessed across processes by name. The API for opening a semaphore is:

HANDLE OpenSemaphore(
    DWORD fdwAccess,      // desired access
    BOOL bInheritHandle,  // TRUE if child processes may inherit the handle
    PCTSTR pszName        // name of the object to open
);

Code example:

#include "stdafx.h"
#include <windows.h>
#include <iostream>
using namespace std;

int number = 1;       // shared global variable
HANDLE hSemaphore;    // semaphore handle

unsigned long __stdcall ThreadProc1(void* lp)
{
    long count;
    while (number < 100)     // loop bound assumed; the literal was lost in the original
    {
        WaitForSingleObject(hSemaphore, INFINITE);   // wait for the semaphore to be signaled
        cout << "Thread 1: " << number << endl;
        ++number;
        _sleep(100);
        ReleaseSemaphore(hSemaphore, 1, &count);
    }
    return 0;
}

unsigned long __stdcall ThreadProc2(void* lp)
{
    long count;
    while (number < 100)
    {
        WaitForSingleObject(hSemaphore, INFINITE);
        cout << "Thread 2: " << number << endl;
        ++number;
        _sleep(100);
        ReleaseSemaphore(hSemaphore, 1, &count);
    }
    return 0;
}

int main()
{
    // Initial count 1, so only one thread proceeds at a time
    // (maximum count of 100 assumed; the literal was lost in the original).
    hSemaphore = CreateSemaphore(NULL, 1, 100, "sema");
    CreateThread(NULL, 0, ThreadProc1, NULL, 0, NULL);
    CreateThread(NULL, 0, ThreadProc2, NULL, 0, NULL);
    Sleep(10 * 1000);
    system("pause");
    return 0;
}


Operation Result:

The values are printed strictly in order: synchronization between the threads has been achieved.

Mutex

A mutex is a mutual-exclusion object: only the thread that currently owns the mutex may access the shared resource, and because there is only one mutex object, the resource can never be touched by two threads at once. A mutex can protect a shared resource not only among the threads of one application but, as a named kernel object, also among the threads of different applications.

Code example:

#include "stdafx.h"
#include <windows.h>
#include <iostream>
using namespace std;

int number = 1;    // shared global variable
HANDLE hMutex;     // mutex handle

unsigned long __stdcall ThreadProc1(void* lp)
{
    while (number < 100)     // loop bound assumed; the literal was lost in the original
    {
        WaitForSingleObject(hMutex, INFINITE);   // acquire ownership of the mutex
        cout << "Thread 1: " << number << endl;
        ++number;
        _sleep(100);
        ReleaseMutex(hMutex);                    // release ownership
    }
    return 0;
}

unsigned long __stdcall ThreadProc2(void* lp)
{
    while (number < 100)
    {
        WaitForSingleObject(hMutex, INFINITE);
        cout << "Thread 2: " << number << endl;
        ++number;
        _sleep(100);
        ReleaseMutex(hMutex);
    }
    return 0;
}

int main()
{
    hMutex = CreateMutex(NULL, FALSE, "mutex");  // create the mutex, initially unowned
    CreateThread(NULL, 0, ThreadProc1, NULL, 0, NULL);
    CreateThread(NULL, 0, ThreadProc2, NULL, 0, NULL);
    Sleep(10 * 1000);
    system("pause");
    return 0;
}

Operation Result:

Once again the values are printed strictly in order: thread synchronization has been achieved.

