In-depth introduction to thread communication in Win32 multi-threaded programming

Source: Internet
Author: User

Introduction

The two basic problems of inter-thread communication are mutual exclusion and synchronization.

Thread synchronization is a constraint relationship between threads: the execution of one thread depends on a message from another thread. Until that message arrives, the thread should wait; it is not awakened until the message is received.

Thread mutual exclusion refers to the sharing of operating-system resources (resources in the broad sense, not a Windows .res file; a global variable, for example, is a shared resource). When several threads use a shared resource, at most one thread can use it at any moment; the other threads that need the resource must wait until its current owner releases it.

Thread mutual exclusion is a special kind of thread synchronization.

In fact, mutual exclusion and synchronization correspond to two scenarios of inter-thread communication:

(1) multiple threads accessing a shared resource without corrupting it;

(2) one thread needing to notify one or more other threads that a task has been completed.

In Win32, the main synchronization mechanisms are:

(1) events (Event);

(2) semaphores (Semaphore);

(3) mutexes (Mutex);

(4) critical sections (Critical Section).

Global Variables

Because all threads in a process can access all of the process's global variables, global variables are the simplest means of Win32 multi-thread communication. For example:

int var;  // global variable

UINT ThreadFunction(LPVOID pParam)
{
    var = 0;
    while (var < MAXVALUE)
    {
        // thread processing
        ::InterlockedIncrement((LONG *)&var);
    }
    return 0;
}
Consider the following program:

#include <windows.h>
#include <stdio.h>

int globalFlag = FALSE;

DWORD WINAPI ThreadFunc(LPVOID n)
{
    Sleep(2000);
    globalFlag = TRUE;
    return 0;
}

int main()
{
    HANDLE hThrd;
    DWORD threadId;

    hThrd = CreateThread(NULL, 0, ThreadFunc, NULL, 0, &threadId);
    if (hThrd)
    {
        printf("Thread launched\n");
        CloseHandle(hThrd);
    }

    while (!globalFlag)
        ;
    printf("Exit\n");
    return 0;
}

The program above uses a global variable and a polling while loop for thread synchronization. This is in fact a technique to avoid, because:

(1) to synchronize itself with the completion of the ThreadFunc function, the main thread never puts itself to sleep. Because it never enters the sleep state, the operating system keeps scheduling CPU time for it, stealing precious cycles from other threads;

(2) if the priority of the main thread is higher than that of the thread executing ThreadFunc, globalFlag will never be assigned TRUE, because the system will never allocate a time slice to the ThreadFunc thread.

Event

Events are the most flexible inter-thread synchronization mechanism Win32 provides. An event is either in the signaled (TRUE) state or the non-signaled (FALSE) state. Events fall into two types according to how their state is reset:

(1) manual-reset: the event's state changes only when the program explicitly sets it, using SetEvent and ResetEvent;

(2) auto-reset: once the event is signaled and a wait on it is satisfied, it automatically returns to the non-signaled state and needs no explicit reset.

The function prototype for creating an event is:

HANDLE CreateEvent(
    LPSECURITY_ATTRIBUTES lpEventAttributes,
        // pointer to a SECURITY_ATTRIBUTES structure; may be NULL
    BOOL bManualReset,
        // manual (TRUE) / automatic (FALSE)
        // TRUE: after WaitForSingleObject returns, ResetEvent must be called
        //       manually to clear the signal
        // FALSE: after WaitForSingleObject returns, the system clears the
        //        event signal automatically
    BOOL bInitialState,  // initial state
    LPCTSTR lpName       // event name
);

Note the following when using the event mechanism:

(1) if an event is accessed across processes, it must be named; when naming it, take care not to conflict with other named objects in the system's global namespace;

(2) decide whether the event should reset automatically;

(3) set the event's initial state.

Because an event object is a kernel object, process B can call OpenEvent with the object's name to obtain a handle to an event created in process A, and then use that handle in functions such as ResetEvent, SetEvent, and WaitForMultipleObjects. This technique lets one process control the running of threads in another process. For example:

HANDLE hEvent = OpenEvent(EVENT_ALL_ACCESS, TRUE, "MyEvent");
ResetEvent(hEvent);
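Tying this back to the earlier busy-wait program: an auto-reset event removes the polling loop entirely. The following is a minimal sketch (the name hDoneEvent is illustrative and error handling is omitted), not the original author's code:

```c
#include <windows.h>
#include <stdio.h>

/* Auto-reset event used to signal completion; hDoneEvent is an
   illustrative name, not taken from the original program. */
HANDLE hDoneEvent;

DWORD WINAPI ThreadFunc(LPVOID n)
{
    Sleep(2000);           /* simulate work */
    SetEvent(hDoneEvent);  /* wake the waiting main thread */
    return 0;
}

int main(void)
{
    DWORD threadId;
    /* bManualReset = FALSE: auto-reset; bInitialState = FALSE: non-signaled */
    hDoneEvent = CreateEvent(NULL, FALSE, FALSE, NULL);

    HANDLE hThrd = CreateThread(NULL, 0, ThreadFunc, NULL, 0, &threadId);

    /* The main thread sleeps inside the kernel until the event is
       signaled -- no CPU is burned in a polling loop. */
    WaitForSingleObject(hDoneEvent, INFINITE);
    printf("Exit\n");

    CloseHandle(hThrd);
    CloseHandle(hDoneEvent);
    return 0;
}
```

Unlike the while (!globalFlag) loop, the blocked main thread here consumes no time slices, and priority differences between the two threads cannot starve ThreadFunc.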

Critical Section

Define a critical-section variable

CRITICAL_SECTION criticalSection;

In general, the CRITICAL_SECTION structure should be defined as a global variable, so that all threads in the process can conveniently reference it by name.

Initialize the critical section

VOID WINAPI InitializeCriticalSection(
    LPCRITICAL_SECTION lpCriticalSection
        // points to the CRITICAL_SECTION variable defined by the programmer
);

This function initializes the CRITICAL_SECTION structure pointed to by lpCriticalSection. Because it merely sets some member variables, it generally cannot fail, which is why its return type is void. The function must be called before any thread calls EnterCriticalSection; if a thread tries to enter an uninitialized CRITICAL_SECTION, the result is unpredictable.

Delete the critical section

VOID WINAPI DeleteCriticalSection(
    LPCRITICAL_SECTION lpCriticalSection
        // points to a CRITICAL_SECTION variable that is no longer needed
);

Enter the critical section

VOID WINAPI EnterCriticalSection(
    LPCRITICAL_SECTION lpCriticalSection
        // points to the CRITICAL_SECTION variable about to be locked
);

Exit the critical section

VOID WINAPI LeaveCriticalSection(
    LPCRITICAL_SECTION lpCriticalSection
        // points to the CRITICAL_SECTION variable about to be released
);

The general pattern for programming with a critical section is:

void UpdateData()
{
    EnterCriticalSection(&criticalSection);
    ... // do something
    LeaveCriticalSection(&criticalSection);
}

Note the following when using the critical section:

(1) use one CRITICAL_SECTION variable for each shared resource;

(2) do not hold a critical section around long-running code; while a long protected section runs, other threads sit waiting, which degrades the application's performance;

(3) if several resources must be accessed at the same time, EnterCriticalSection may be called consecutively, once per resource;

(4) a critical section is not an OS kernel object. If the thread inside a critical section dies, the critical resource is never released. The mutex remedies this shortcoming.
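As a concrete illustration of the pattern above, here is a minimal sketch (the counter and function names are illustrative, not from the original article) of two threads incrementing a shared counter under one critical section:

```c
#include <windows.h>
#include <stdio.h>

CRITICAL_SECTION cs;    /* one CRITICAL_SECTION per shared resource */
int sharedCounter = 0;  /* the shared resource */

DWORD WINAPI Worker(LPVOID n)
{
    for (int i = 0; i < 100000; i++)
    {
        EnterCriticalSection(&cs);  /* lock */
        sharedCounter++;            /* protected update */
        LeaveCriticalSection(&cs);  /* unlock as soon as possible */
    }
    return 0;
}

int main(void)
{
    HANDLE h[2];
    InitializeCriticalSection(&cs);  /* must precede any EnterCriticalSection */

    h[0] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    h[1] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForMultipleObjects(2, h, TRUE, INFINITE);

    printf("%d\n", sharedCounter);  /* no lost updates */

    CloseHandle(h[0]);
    CloseHandle(h[1]);
    DeleteCriticalSection(&cs);
    return 0;
}
```

Without the EnterCriticalSection/LeaveCriticalSection pair, the two increments could interleave and updates would be lost, as the interlocked-access section below demonstrates in disassembly.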

Mutex

A mutex guarantees that only one thread at a time can acquire it and continue executing. The CreateMutex function creates a mutex:

HANDLE CreateMutex(
    LPSECURITY_ATTRIBUTES lpMutexAttributes,
        // pointer to a security-attributes structure; may be NULL
    BOOL bInitialOwner,
        // initial ownership: TRUE = the calling thread owns the mutex,
        // FALSE = the mutex starts out unowned
    LPCTSTR lpName  // mutex name
);

A mutex is a kernel object and can therefore be accessed across processes. The following code shows how another process opens the mutex by name:

HANDLE hMutex;
hMutex = OpenMutex(MUTEX_ALL_ACCESS, FALSE, L"MutexName");
if (hMutex) {
    ...
}
else {
    ...
}

Related APIs:

BOOL WINAPI ReleaseMutex(
    HANDLE hMutex
);

The general pattern for mutex programming is:

void UpdateResource()
{
    WaitForSingleObject(hMutex, ...);
    ... // do something
    ReleaseMutex(hMutex);
}

The mutex kernel object guarantees that threads access a single resource mutually exclusively. A mutex behaves like a critical section, but the mutex is a kernel object while the critical section is a user-mode object. Consequently, mutexes and critical sections differ as follows:

(1) mutex operations are slower than critical sections;

(2) threads in different processes can access a single mutex object;

(3) a timeout value can be set while waiting for the resource.

In summary:

                     Critical section      Mutex
Object type          user-mode object      kernel object
Speed                fast                  slower
Cross-process use    no                    yes (named object)
Wait timeout         no                    yes (via WaitForSingleObject)

Semaphores

Semaphores are synchronization objects that maintain a count between 0 and a specified maximum. A semaphore is signaled while its count is greater than 0 and non-signaled when the count is 0. Semaphore objects control access to a limited pool of shared resources.

The behavior and use of semaphores follow these rules:

(1) if the current resource count is greater than 0, the semaphore is signaled;

(2) if the current resource count is 0, the semaphore is non-signaled;

(3) the system never lets the current resource count go negative;

(4) the current resource count can never exceed the maximum resource count.

Create a semaphore

HANDLE CreateSemaphore(
    PSECURITY_ATTRIBUTES psa,
    LONG lInitialCount,   // number of resources available initially
    LONG lMaximumCount,   // maximum number of resources
    PCTSTR pszName
);

Release a semaphore

A thread increases a semaphore's current resource count by calling ReleaseSemaphore. The prototype is:

BOOL WINAPI ReleaseSemaphore(
    HANDLE hSemaphore,
    LONG lReleaseCount,      // the current resource count increases by lReleaseCount
    LPLONG lpPreviousCount
);

Open a semaphore

Like other kernel objects, semaphores can be accessed across processes by name. The API for opening a semaphore is:

HANDLE OpenSemaphore(
    DWORD fdwAccess,
    BOOL bInheritHandle,
    PCTSTR pszName
);
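Putting the semaphore rules together, the following is a hedged sketch (names and counts are illustrative) of limiting concurrent access to a pool of two resources among five worker threads:

```c
#include <windows.h>
#include <stdio.h>

HANDLE hSemaphore;  /* illustrative name */

DWORD WINAPI Worker(LPVOID n)
{
    /* Blocks while the current resource count is 0; on success the
       count is decremented by 1. */
    WaitForSingleObject(hSemaphore, INFINITE);

    printf("Worker %d holds a resource\n", (int)(INT_PTR)n);
    Sleep(500);  /* simulate using the resource */

    /* Give the resource back: the current count increases by 1. */
    ReleaseSemaphore(hSemaphore, 1, NULL);
    return 0;
}

int main(void)
{
    HANDLE h[5];
    /* Initial count 2, maximum 2: at most two workers run concurrently. */
    hSemaphore = CreateSemaphore(NULL, 2, 2, NULL);

    for (int i = 0; i < 5; i++)
        h[i] = CreateThread(NULL, 0, Worker, (LPVOID)(INT_PTR)i, 0, NULL);
    WaitForMultipleObjects(5, h, TRUE, INFINITE);

    for (int i = 0; i < 5; i++)
        CloseHandle(h[i]);
    CloseHandle(hSemaphore);
    return 0;
}
```

A mutex is effectively the special case lInitialCount = lMaximumCount = 1, with the added notion of ownership.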

Interlocked Access

The interlocked functions are useful when a single value must be modified as one atomic operation. Atomic access means that while one thread accesses a resource, it is guaranteed that no other thread accesses the same resource at the same time.

See the following code:

int globalVar = 0;

DWORD WINAPI ThreadFunc1(LPVOID n)
{
    globalVar++;
    return 0;
}

DWORD WINAPI ThreadFunc2(LPVOID n)
{
    globalVar++;
    return 0;
}

The result of running the ThreadFunc1 and ThreadFunc2 threads is unpredictable, because globalVar++ does not correspond to a single machine instruction. Consider the disassembly of globalVar++:

00401038 mov eax, [globalvar (0042d3f0)]
0040103d add eax, 1
00401040 mov [globalvar (0042d3f0)], eax

A thread switch may occur between the "mov eax, [globalVar (0042d3f0)]" instruction and the "add eax, 1" instruction, or between "add eax, 1" and "mov [globalVar (0042d3f0)], eax", making the final value of globalVar indeterminate after the program runs. The InterlockedExchangeAdd function solves this problem:

int globalVar = 0;

DWORD WINAPI ThreadFunc1(LPVOID n)
{
    InterlockedExchangeAdd((LONG *)&globalVar, 1);
    return 0;
}

DWORD WINAPI ThreadFunc2(LPVOID n)
{
    InterlockedExchangeAdd((LONG *)&globalVar, 1);
    return 0;
}

InterlockedExchangeAdd guarantees that access to the globalVar variable is atomic. Interlocked access is also very fast: a call to an interlocked function usually costs fewer than 50 CPU cycles and requires no switch between user mode and kernel mode (such a switch typically costs about 1000 CPU cycles).

The drawback of the interlocked functions is that they can only perform atomic access on a single variable. If the resource to be accessed is more complex, a critical section or mutex is still needed.

Waitable Timers

A waitable timer is a kernel object that signals itself at a specified time or at regular intervals. It is typically used to execute an operation at a certain time.

Create a waitable timer

HANDLE CreateWaitableTimer(
    PSECURITY_ATTRIBUTES psa,
    BOOL fManualReset,  // manual-reset or auto-reset timer
    PCTSTR pszName
);

Set the waitable timer

A waitable timer is created in the inactive state; the programmer calls SetWaitableTimer to define when the timer fires:

BOOL SetWaitableTimer(
    HANDLE hTimer,                  // the timer to set
    const LARGE_INTEGER *pDueTime,  // when the timer first fires
    LONG lPeriod,                   // interval at which the timer fires thereafter
    PTIMERAPCROUTINE pfnCompletionRoutine,
    PVOID pvArgToCompletionRoutine,
    BOOL fResume
);

Cancel the waitable timer

BOOL CancelWaitableTimer(
    HANDLE hTimer  // the timer to cancel
);

Open the waitable timer

As a kernel object, a waitable timer can also be opened by name from other processes:

HANDLE OpenWaitableTimer(
    DWORD fdwAccess,
    BOOL bInheritHandle,
    PCTSTR pszName
);
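A hedged usage sketch of the calls above (a one-shot timer with a 2-second relative due time; the variable names are illustrative):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER dueTime;
    /* Auto-reset, unnamed timer. */
    HANDLE hTimer = CreateWaitableTimer(NULL, FALSE, NULL);

    /* A negative due time means a relative time, in 100-nanosecond
       units: -20000000 * 100 ns = 2 seconds from now. */
    dueTime.QuadPart = -20000000LL;

    /* lPeriod = 0: fire once; no completion routine. */
    SetWaitableTimer(hTimer, &dueTime, 0, NULL, NULL, FALSE);

    printf("Waiting for the timer...\n");
    WaitForSingleObject(hTimer, INFINITE);  /* returns when the timer fires */
    printf("Timer fired\n");

    CloseHandle(hTimer);
    return 0;
}
```

A positive QuadPart would instead be interpreted as an absolute FILETIME, and a nonzero lPeriod (in milliseconds) would make the timer periodic.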

Example

The following program contains a potential deadlock:

#include <windows.h>
#include <stdio.h>

CRITICAL_SECTION cs1, cs2;

long WINAPI ThreadFn(long);

int main()
{
    DWORD iThreadId;
    InitializeCriticalSection(&cs1);
    InitializeCriticalSection(&cs2);
    CloseHandle(CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)ThreadFn, NULL, 0, &iThreadId));
    while (TRUE)
    {
        EnterCriticalSection(&cs1);
        printf("\nThread 1 occupies critical section 1");
        EnterCriticalSection(&cs2);
        printf("\nThread 1 occupies critical section 2");

        printf("\nThread 1 occupies both critical sections");

        LeaveCriticalSection(&cs2);
        LeaveCriticalSection(&cs1);

        printf("\nThread 1 releases both critical sections");
        Sleep(20);
    }
    return 0;
}

long WINAPI ThreadFn(long lParam)
{
    while (TRUE)
    {
        EnterCriticalSection(&cs2);
        printf("\nThread 2 occupies critical section 2");
        EnterCriticalSection(&cs1);
        printf("\nThread 2 occupies critical section 1");

        printf("\nThread 2 occupies both critical sections");

        LeaveCriticalSection(&cs1);
        LeaveCriticalSection(&cs2);

        printf("\nThread 2 releases both critical sections");
        Sleep(20);
    }
}

Run this program. Sooner or later, output such as the following appears:

Thread 1 occupies critical section 1

Thread 2 occupies critical section 2

or

Thread 2 occupies critical section 2

Thread 1 occupies critical section 1

and the program is "dead": it cannot make further progress. At this point each thread is waiting for the other to release a critical section, that is, a deadlock has occurred.

If we change thread 2's control function so that it acquires the critical sections in the same order as thread 1:

long WINAPI ThreadFn(long lParam)
{
    while (TRUE)
    {
        EnterCriticalSection(&cs1);
        printf("\nThread 2 occupies critical section 1");
        EnterCriticalSection(&cs2);
        printf("\nThread 2 occupies critical section 2");

        printf("\nThread 2 occupies both critical sections");

        LeaveCriticalSection(&cs1);
        LeaveCriticalSection(&cs2);

        printf("\nThread 2 releases both critical sections");
        Sleep(20);
    }
}

Run the program again: the deadlock is gone and the program no longer hangs. This is because thread 2 now acquires critical sections 1 and 2 in the same order as thread 1, eliminating the circular wait in which threads 1 and 2 each hold one resource while waiting for the other's.

The conclusion: when using inter-thread synchronization mechanisms, pay special attention to avoiding deadlock.
