Synchronization and Mutual Exclusion: Four Control Methods for Processes and Threads in VC++


The process and thread synchronization and mutual-exclusion mechanisms in popular use today are all built on four primitive, basic methods. Combinations of these four methods yield the flexible thread and process control available under .NET and Java.

The four methods are defined as follows; a more detailed explanation can be found in the operating systems textbook ISBN 7-5053-6193-7.

1. Critical section: serializes access to a shared resource or section of code across multiple threads; it is fast and well suited to controlling access to data.

2. Mutex: designed to coordinate exclusive access to a single shared resource.

3. Semaphore: designed to control access to a resource that supports a limited number of users.

4. Event: used to notify a thread that some event has occurred, so that it can start a subsequent task.

1. Critical Section

 

A critical section is a convenient way to ensure that only one thread can access a piece of data at any point in time: only one thread is allowed to access the shared resource at a time. If multiple threads attempt to enter the critical section simultaneously, all but one are suspended until the thread inside leaves the critical section; the remaining threads can then compete to enter it, so the shared resource is accessed atomically.

The critical section involves two operation primitives: EnterCriticalSection() enters the critical section, and LeaveCriticalSection() leaves it.

After EnterCriticalSection() executes, no matter what happens to the code inside the critical section, you must make sure the matching LeaveCriticalSection() is executed; otherwise the shared resource protected by the critical section will never be released. Although critical-section synchronization is very fast, it can only synchronize threads within the current process, not threads across multiple processes.

MFC provides many functions and complete classes, and I use MFC to implement the critical section here. MFC wraps the critical section in the CCriticalSection class, which makes thread synchronization very easy: simply bracket the protected code in the thread function with the member functions Lock() and Unlock(). Resources used by the code after Lock() are considered protected by the critical section; after Unlock(), other threads can access them again.

// Critical section
CCriticalSection global_criticalsection;

// Shared resource
char global_array[256];

// Initialize the shared resource
void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}

// Writer thread
UINT Global_ThreadWrite(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Enter the critical section
    global_criticalsection.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'W';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Exit the critical section
    global_criticalsection.Unlock();
    return 0;
}

// Delete thread
UINT Global_ThreadDelete(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Enter the critical section
    global_criticalsection.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'D';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Exit the critical section
    global_criticalsection.Unlock();
    return 0;
}

// Create and start the threads
void CCriticalSectionsDlg::OnBnClickedButtonLock()
{
    // Start the first thread
    CWinThread *ptrWrite = AfxBeginThread(Global_ThreadWrite,
        &m_write,
        THREAD_PRIORITY_NORMAL,
        0,
        CREATE_SUSPENDED);
    ptrWrite->ResumeThread();

    // Start the second thread
    CWinThread *ptrDelete = AfxBeginThread(Global_ThreadDelete,
        &m_delete,
        THREAD_PRIORITY_NORMAL,
        0,
        CREATE_SUSPENDED);
    ptrDelete->ResumeThread();
}

In the test program, the Lock and Unlock buttons demonstrate, respectively, how the shared resource behaves when protected by the critical section and when left unprotected.

 

2. Mutex

A mutex works much like a critical section: only the thread that owns the mutex object has permission to access the resource. Because there is only one mutex object, the shared resource is guaranteed never to be accessed by multiple threads at the same time. After finishing its task, the thread occupying the resource must hand back the mutex object so that other threads can acquire it and access the resource. A mutex is more complex than a critical section: it can share resources safely not only among threads of the same application but also among threads of different applications.

The mutex involves several operation primitives:
CreateMutex() creates a mutex
OpenMutex() opens a mutex
ReleaseMutex() releases a mutex
WaitForMultipleObjects() waits for mutex objects

Similarly, MFC provides the CMutex class for mutexes. Using CMutex for mutual exclusion is very easy, but pay special attention when calling the CMutex constructor:

CMutex(BOOL bInitiallyOwn = FALSE, LPCTSTR lpszName = NULL, LPSECURITY_ATTRIBUTES lpsaAttribute = NULL)

Do not pass parameters carelessly; passing inappropriate values can produce unexpected results at run time.

// Create the mutex
CMutex global_mutex(0, 0);

// Shared resource
char global_array[256];

void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}
UINT Global_ThreadWrite(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    global_mutex.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'W';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    global_mutex.Unlock();
    return 0;
}

UINT Global_ThreadDelete(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    global_mutex.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'D';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    global_mutex.Unlock();
    return 0;
}

Similarly, in the test program the Lock and Unlock buttons demonstrate how the shared resource behaves with and without mutex protection.


3. Semaphore

 

A semaphore object synchronizes threads differently from the previous methods: it allows multiple threads to use the shared resource at the same time, just like the P and V operations in operating systems. It specifies the maximum number of threads that may access the shared resource simultaneously: multiple threads may access the same resource concurrently, but the number doing so at once is capped.

When creating a semaphore with CreateSemaphore(), you must specify both the maximum resource count and the initial available resource count; normally the initial count is set equal to the maximum. Each time a thread gains access to the shared resource, the available count is decremented by 1; as long as the available count is greater than 0, the semaphore remains signaled. When the available count drops to 0, the number of threads occupying the resource has reached the allowed maximum, other threads cannot enter, and the semaphore is no longer signaled. When a thread finishes with the shared resource, it should call ReleaseSemaphore() on the way out to increment the available count by 1. The available count may never exceed the maximum count.

The P and V operations, like semaphores themselves, were proposed by the Dutch scientist E. W. Dijkstra. A semaphore S is an integer: when S is greater than or equal to zero, it represents the number of resource units available to concurrent processes; when S is less than zero, its magnitude represents the number of processes waiting to use the shared resource.

P operation (resource request):
(1) decrement S by 1;
(2) if S is still greater than or equal to zero after the decrement, the process continues running;
(3) if S is less than zero after the decrement, the process blocks, enters the queue associated with the semaphore, and control transfers to the process scheduler.

V operation (resource release):
(1) increment S by 1;
(2) if the result is greater than zero, the process continues executing;
(3) if the result is less than or equal to zero, one waiting process is woken from the semaphore's wait queue, and then the original process either continues or control transfers to the process scheduler.
The semaphore involves several operation primitives:
CreateSemaphore() creates a semaphore
OpenSemaphore() opens a semaphore
ReleaseSemaphore() releases a semaphore
WaitForSingleObject() waits on a semaphore

// Semaphore handle
HANDLE global_semephore;

// Shared resource
char global_array[256];

void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}

// Thread 1
UINT Global_ThreadOne(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Wait for the shared resource; equivalent to a P operation
    WaitForSingleObject(global_semephore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'O';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Release the shared resource; equivalent to a V operation
    ReleaseSemaphore(global_semephore, 1, NULL);
    return 0;
}

UINT Global_ThreadTwo(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    WaitForSingleObject(global_semephore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'T';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    ReleaseSemaphore(global_semephore, 1, NULL);
    return 0;
}

UINT Global_ThreadThree(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    WaitForSingleObject(global_semephore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'H';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    ReleaseSemaphore(global_semephore, 1, NULL);
    return 0;
}

void CSemaphoreDlg::OnBnClickedButtonOne()
{
    // Semaphore with 1 resource unit: only one thread can access at a time
    global_semephore = CreateSemaphore(NULL, 1, 1, NULL);
    this->StartThread();
}

void CSemaphoreDlg::OnBnClickedButtonTwo()
{
    // Semaphore with 2 resource units: only two threads can access at a time
    global_semephore = CreateSemaphore(NULL, 2, 2, NULL);
    this->StartThread();
}

void CSemaphoreDlg::OnBnClickedButtonThree()
{
    // Semaphore with 3 resource units: only three threads can access at a time
    global_semephore = CreateSemaphore(NULL, 3, 3, NULL);
    this->StartThread();
}

The way semaphores work makes them especially suitable for synchronizing threads in socket programs. For example, if an HTTP server must limit the number of users accessing the same page simultaneously, it can start one thread per user request, with the page as the shared resource to protect. Synchronizing the threads with a semaphore ensures that no matter how many users arrive, only the configured maximum number of threads can access the page at any moment; all other attempts are suspended, and a new user can access the page only after a current one leaves.

 


4. Event

An event object can also keep threads synchronized by way of notifications, and it can synchronize threads in different processes.

The event involves several operation primitives:
CreateEvent() creates an event
OpenEvent() opens an event
SetEvent() signals an event (ResetEvent() returns it to the non-signaled state)
WaitForSingleObject() waits for one event
WaitForMultipleObjects() waits for multiple events

WaitForMultipleObjects function prototype:
DWORD WaitForMultipleObjects(
    IN DWORD nCount,            // number of handles to wait on
    IN CONST HANDLE *lpHandles, // pointer to the handle array
    IN BOOL bWaitAll,           // whether to wait for all of them
    IN DWORD dwMilliseconds     // wait timeout
);

The nCount parameter specifies the number of kernel objects to wait for, and lpHandles points to the array of those kernel objects. bWaitAll selects between two waiting modes for the nCount kernel objects: if TRUE, the function returns only when all of the objects are signaled; if FALSE, it returns as soon as any one of them is signaled. dwMilliseconds serves exactly the same purpose as in WaitForSingleObject; if the wait times out, the function returns WAIT_TIMEOUT.
// Event array
HANDLE global_events[2];

// Shared resource
char global_array[256];

void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}

UINT Global_ThreadOne(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'O';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }

    // Signal the event
    SetEvent(global_events[0]);
    return 0;
}

UINT Global_ThreadTwo(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'T';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }

    // Signal the event
    SetEvent(global_events[1]);
    return 0;
}

UINT Global_ThreadThree(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");

    // Wait until both events have been signaled
    WaitForMultipleObjects(2, global_events, TRUE, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'H';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    return 0;
}

void CEventDlg::OnBnClickedButtonStart()
{
    for (int i = 0; i < 2; i++)
    {
        // Create the events
        global_events[i] = CreateEvent(NULL, FALSE, FALSE, NULL);
    }
    // Start the first thread
    CWinThread *ptrOne = AfxBeginThread(Global_ThreadOne,
        &m_one,
        THREAD_PRIORITY_NORMAL,
        0,
        CREATE_SUSPENDED);
    ptrOne->ResumeThread();
    // Start the second thread
    CWinThread *ptrTwo = AfxBeginThread(Global_ThreadTwo,
        &m_two,
        THREAD_PRIORITY_NORMAL,
        0,
        CREATE_SUSPENDED);
    ptrTwo->ResumeThread();
    // Start the third thread
    CWinThread *ptrThree = AfxBeginThread(Global_ThreadThree,
        &m_three,
        THREAD_PRIORITY_NORMAL,
        0,
        CREATE_SUSPENDED);
    ptrThree->ResumeThread();
}

Events can synchronize threads in different processes and make it convenient to wait on multiple threads with different priorities; for example, writing several WaitForSingleObject calls in place of a single WaitForMultipleObjects call can make the program more flexible.

 

Summary:

1. A mutex is functionally very similar to a critical section, but a mutex can be named, which means it can be used across processes; creating one therefore consumes more resources. If the object is used only within a single process, the critical section offers a speed advantage and lower resource usage. Because a mutex is a cross-process object, once created it can be opened elsewhere by name.

2. Mutexes, semaphores, and events can all be used across processes for synchronization; the other kernel objects have nothing to do with data synchronization. As for processes and threads themselves: while running they are non-signaled, and after exiting they become signaled. You can therefore use WaitForSingleObject to wait for a process or thread to exit.

3. A mutex can specify that a resource is held exclusively, but it cannot handle the following kind of requirement. Suppose a user buys a database system licensed for three concurrent connections: the number of threads or processes allowed to perform database operations at the same time should be determined by the number of licenses purchased. A mutex has no way to express this requirement, but a semaphore can; the semaphore object is effectively a resource counter.

Reprinted statement: This article from http://hi.baidu.com/%B6%AC%D2%E2%BE%D3/blog/item/c34d7f398400aefe3a87cec9.html
