Both the auxiliary thread and the user-interface thread must protect shared resources when accessing them, in order to avoid conflicts and errors. The approach is similar to using the Win32 API functions, but MFC provides several C++ classes for synchronization objects: CSyncObject, CMutex, CSemaphore, CEvent, and CCriticalSection. CSyncObject is the base class of the other four, and those four correspond to the four Win32 API synchronization objects mentioned above.
Generally, shared resources are used inside the member functions of a C++ object, or the shared resources are encapsulated in a C++ class. We can encapsulate the thread-synchronization operations in the implementation of that class, so that when an application thread uses the C++ object it can be used like any ordinary object; this simplifies the code and is exactly the idea of object-oriented programming. Classes written in this way are called "thread-safe classes". When designing a thread-safe class, first add a synchronization-object data member to the class as the situation requires. Then, in the member functions of the class, every place where the shared data is modified or read must make the corresponding synchronization call. The usual pattern is to create a CSingleLock or CMultiLock object and call its Lock function; when that object goes out of scope, the Unlock function is called automatically in its destructor. Of course, Unlock can also be called explicitly wherever needed.
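A minimal sketch of such a thread-safe class, assuming a CCriticalSection data member as described above; the class name, member names, and the counter it protects are illustrative, not taken from the original sample.
#include <afxmt.h>   // MFC synchronization classes (CCriticalSection, CSingleLock, ...)
// Illustrative thread-safe class: the synchronization object is a data member,
// and every member function that touches the shared data locks it first.
class CSharedCounter
{
public:
    CSharedCounter() : m_value(0) {}
    int Increment()
    {
        CSingleLock lock(&m_cs);
        lock.Lock();          // acquire before touching the shared data
        return ++m_value;     // Unlock() runs automatically in the CSingleLock destructor
    }
    int GetValue()
    {
        CSingleLock lock(&m_cs);
        lock.Lock();
        return m_value;
    }
private:
    CCriticalSection m_cs;    // synchronization object data member
    int m_value;              // the protected shared data
};
Callers simply use the object like any other C++ object; the locking is hidden inside the member functions.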
If the shared resource is not used in a specific C++ object but in a specific function (such a function is called a "thread-safe function"), follow the steps described above: first create a synchronization object, then call the wait function until the resource can be accessed, and finally release control of the synchronization object.
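A hedged sketch of such a thread-safe function under the same assumptions; the mutex and the variable it guards are illustrative.
// Illustrative thread-safe function: the synchronization object lives outside
// the function, and the function waits on it before touching the shared data.
CMutex g_valueMutex;
int    g_sharedValue = 0;
int ReadSharedValue()
{
    CSingleLock lock(&g_valueMutex);
    lock.Lock(INFINITE);      // wait until the resource can be accessed
    int value = g_sharedValue;
    lock.Unlock();            // release control of the synchronization object
    return value;
}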
Next we will discuss the application scenarios of the four synchronization objects:
(1) If a thread must wait for some event before it can access the resource, use CEvent;
(2) If the application allows multiple threads to access the resource at the same time, use CSemaphore;
(3) If multiple applications (multiple processes) access the resource simultaneously, use CMutex; otherwise, use CCriticalSection.
Programming with thread-safe classes or thread-safe functions is more complex than programming without any thread-safety considerations, especially during debugging, when the debugging tools provided by Visual C++ must be used flexibly to ensure safe access to shared resources. Another disadvantage of thread-safe programming is lower running efficiency; it can cause some loss of efficiency even when only a single thread is running. Therefore, in practice we should analyze the specific problem and choose an appropriate programming approach.
Explanations of the operating-system concepts and their detailed programming usage:
1. Critical section: serializes multi-threaded access to shared resources or code segments; it is fast and suitable for controlling access to data.
2. Mutex: designed to coordinate exclusive access to a shared resource.
3. Semaphore: designed to control access to a resource by a limited number of users.
4. Event: used to notify a thread that some event has occurred so it can start a subsequent task.
Critical Section
A critical section is a convenient way to ensure that only one thread can access the data at any point in time: only one thread at a time is allowed to access the shared resource. If several threads attempt to enter the critical section simultaneously, all the others are suspended until the thread that entered leaves it; once the critical section is released, the other threads can compete to enter. In this way the shared resource is accessed atomically.
The critical section has two operation primitives: EnterCriticalSection() enters the critical section, and LeaveCriticalSection() leaves it.
After EnterCriticalSection() is executed, no matter what happens inside the critical section, you must make sure the matching LeaveCriticalSection() is executed; otherwise the shared resource protected by the critical section will never be released. Although synchronization with a critical section is very fast, it can only synchronize threads within the current process, not threads across multiple processes.
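Before moving on to the MFC version, here is a minimal Win32-level sketch of these two primitives; the variable names and the protected counter are illustrative.
#include <windows.h>
CRITICAL_SECTION g_cs;     // protects the shared counter below
int g_counter = 0;
void InitOnce()
{
    InitializeCriticalSection(&g_cs);   // initialize before first use
}
void SafeIncrement()
{
    EnterCriticalSection(&g_cs);        // enter the critical section
    ++g_counter;                        // access the protected shared resource
    LeaveCriticalSection(&g_cs);        // always execute the matching leave
}
void Cleanup()
{
    DeleteCriticalSection(&g_cs);       // delete when no longer needed
}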
MFC provides many useful functions and complete classes, and I use MFC to implement the critical section here. MFC wraps the critical section in the CCriticalSection class, and using this class for thread synchronization is very easy: simply bracket the protected code segment with the CCriticalSection member functions Lock() and Unlock() inside the thread function. The resources used by the code after Lock() are considered protected by the critical section; after Unlock(), other threads can access these resources again.
// Critical section
CCriticalSection global_CriticalSection;
// Shared resource
char global_array[256];
// Initialize the shared resource
void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}
// Write thread
UINT Global_ThreadWrite(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Enter the critical section
    global_CriticalSection.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'W';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Leave the critical section
    global_CriticalSection.Unlock();
    return 0;
}
// Delete thread
UINT Global_ThreadDelete(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Enter the critical section
    global_CriticalSection.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'D';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Leave the critical section
    global_CriticalSection.Unlock();
    return 0;
}
// Create and start the threads
void CCriticalSectionsDlg::OnBnClickedButtonLock()
{
    // Start the first thread
    CWinThread *ptrWrite = AfxBeginThread(Global_ThreadWrite,
                                          &m_Write,
                                          THREAD_PRIORITY_NORMAL,
                                          0,
                                          CREATE_SUSPENDED);
    ptrWrite->ResumeThread();
    // Start the second thread
    CWinThread *ptrDelete = AfxBeginThread(Global_ThreadDelete,
                                           &m_Delete,
                                           THREAD_PRIORITY_NORMAL,
                                           0,
                                           CREATE_SUSPENDED);
    ptrDelete->ResumeThread();
}
In the test program, Lock and Unlock buttons are implemented to show how the shared resource behaves when it is protected by the critical section and when it is not.
Program running result
Mutex
A mutex is similar to a critical section: only the thread that owns the mutex object has permission to access the resource. Because there is only one mutex object, the shared resource can never be accessed by several threads at the same time. The thread occupying the resource should hand over the mutex object after its task is finished, so that other threads can access the resource once they acquire it. A mutex is more complex than a critical section, because a mutex can provide safe resource sharing not only among threads of the same application, but also among threads of different applications.
The mutex involves several operation primitives:
CreateMutex() creates a mutex
OpenMutex() opens a mutex
ReleaseMutex() releases a mutex
WaitForMultipleObjects() waits on mutex objects
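A minimal Win32-level sketch of these primitives; the handle name and the protected operation are illustrative, and WaitForSingleObject is used here to wait on a single mutex.
#include <windows.h>
HANDLE g_hMutex = CreateMutex(NULL, FALSE, NULL);   // unnamed, not initially owned
void AccessSharedResource()
{
    WaitForSingleObject(g_hMutex, INFINITE);  // acquire ownership of the mutex
    // ... use the shared resource here ...
    ReleaseMutex(g_hMutex);                   // hand the mutex back to other threads
}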
Similarly, MFC provides the CMutex class for the mutex. Implementing mutual exclusion with the CMutex class is very easy, but pay special attention when calling the CMutex constructor:
CMutex(BOOL bInitiallyOwn = FALSE, LPCTSTR lpszName = NULL, LPSECURITY_ATTRIBUTES lpsaAttribute = NULL);
Do not pass parameters you do not need; passing unnecessary parameters may produce unexpected results.
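As a hedged illustration of these parameters (the mutex name below is not from the original sample): passing a name as the second argument creates a named mutex that threads in other processes can share, while the defaults create an unnamed mutex for use inside one process, which is what the sample below relies on.
// Unnamed mutex: synchronizes threads within the current process only.
CMutex localMutex;
// Named mutex (name is illustrative): another process constructing a CMutex
// with the same name refers to the same kernel object.
CMutex sharedMutex(FALSE, _T("MyAppSharedMutex"));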
// Create the mutex
CMutex global_Mutex(0, 0);
// Shared resource
char global_array[256];
void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}
UINT Global_ThreadWrite(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    global_Mutex.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'W';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    global_Mutex.Unlock();
    return 0;
}
UINT Global_ThreadDelete(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    global_Mutex.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'D';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    global_Mutex.Unlock();
    return 0;
}
Similarly, in the test program, Lock and Unlock buttons are implemented to show how the shared resource behaves when it is protected by the mutex and when no mutex is used.
Program running result
Semaphore
A semaphore object synchronizes threads differently from the previous methods: it allows multiple threads to use the shared resource at the same time, just like the PV operations in an operating system. It specifies the maximum number of threads that may access the shared resource simultaneously. When creating a semaphore with CreateSemaphore(), you must specify both the maximum resource count and the currently available resource count; the available count is generally set equal to the maximum. Each time a thread gains access to the shared resource, the available count is decreased by 1; as long as the available count is greater than 0, the semaphore remains signaled. When the available count drops to 0, the number of threads occupying the resource has reached the allowed maximum, no further threads can enter, and the semaphore is no longer signaled. After finishing with the shared resource, a thread should call ReleaseSemaphore() on its way out to increase the available count by 1. The available count can never exceed the maximum count.
PV operations and semaphores were both proposed by the Dutch scientist E. W. Dijkstra. The semaphore S is an integer. When S >= 0, it represents the number of resource entities available to concurrent processes; when S < 0, its absolute value represents the number of processes waiting to use the shared resource.
P operation (request a resource):
(1) decrement S by 1;
(2) if the result is still greater than or equal to zero, the process continues to run;
(3) if the result is less than zero, the process is blocked and placed in the queue associated with the semaphore, and control is transferred to the process scheduler.
V operation (release a resource):
(1) increment S by 1;
(2) if the result is greater than zero, the process continues to execute;
(3) if the result is less than or equal to zero, a waiting process is woken from the semaphore's wait queue, and then the original process either continues or control is transferred to the process scheduler.
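A minimal sketch of how P and V map onto the Win32 semaphore calls used later in this article; the wrapper names are illustrative.
#include <windows.h>
// P operation: request a resource; blocks while the available count is 0,
// otherwise decrements the count and continues.
void P(HANDLE hSemaphore)
{
    WaitForSingleObject(hSemaphore, INFINITE);
}
// V operation: release a resource; increases the available count by 1,
// waking one waiting thread if any.
void V(HANDLE hSemaphore)
{
    ReleaseSemaphore(hSemaphore, 1, NULL);
}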
The semaphore involves several operation primitives:
CreateSemaphore() creates a semaphore
OpenSemaphore() opens a semaphore
ReleaseSemaphore() releases a semaphore
WaitForSingleObject() waits on a semaphore
// Semaphore handle
HANDLE global_Semaphore;
// Shared resource
char global_array[256];
void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}
// Thread 1
UINT Global_ThreadOne(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Wait for the shared resource; equivalent to a P operation
    WaitForSingleObject(global_Semaphore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'O';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Release the shared resource; equivalent to a V operation
    ReleaseSemaphore(global_Semaphore, 1, NULL);
    return 0;
}
UINT Global_ThreadTwo(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    WaitForSingleObject(global_Semaphore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'T';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    ReleaseSemaphore(global_Semaphore, 1, NULL);
    return 0;
}
UINT Global_ThreadThree(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    WaitForSingleObject(global_Semaphore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'H';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    ReleaseSemaphore(global_Semaphore, 1, NULL);
    return 0;
}
void CSemaphoreDlg::OnBnClickedButtonOne()
{
    // Initial count 1, maximum count 1: only one thread may access at a time
    global_Semaphore = CreateSemaphore(NULL, 1, 1, NULL);
    this->StartThread();
    // TODO: Add your control notification handler code here
}
void CSemaphoreDlg::OnBnClickedButtonTwo()
{
    // Initial count 2, maximum count 2: at most two threads may access at a time
    global_Semaphore = CreateSemaphore(NULL, 2, 2, NULL);
    this->StartThread();
    // TODO: Add your control notification handler code here
}
void CSemaphoreDlg::OnBnClickedButtonThree()
{
    // Initial count 3, maximum count 3: at most three threads may access at a time
    global_Semaphore = CreateSemaphore(NULL, 3, 3, NULL);
    this->StartThread();
    // TODO: Add your control notification handler code here
}
The way semaphores are used makes them particularly suitable for synchronizing threads in socket programs. For example, if an HTTP server on a network needs to limit the number of users who can access the same page at the same time, it can create one thread for each user's request for that page; the page is the shared resource to be protected. By synchronizing the threads with a semaphore, no matter how many users arrive, at most the allowed maximum number of threads can access the page at any moment, while the other attempts are suspended; they can only access the page after one of the current users exits.
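A hedged sketch of that scenario; the thread function and ServePage() are hypothetical, and the limit of 3 is just an example.
// At most three request threads may serve the protected page at once.
HANDLE g_pageSemaphore = CreateSemaphore(NULL, 3, 3, NULL);
UINT PageRequestThread(LPVOID pParam)
{
    WaitForSingleObject(g_pageSemaphore, INFINITE);  // wait for a free slot
    ServePage(pParam);                               // hypothetical page-serving routine
    ReleaseSemaphore(g_pageSemaphore, 1, NULL);      // free the slot for the next user
    return 0;
}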
Program running result
Event
An event object can also synchronize threads by way of notifications, and it too can synchronize threads that belong to different processes.
The event involves several operation primitives:
CreateEvent() creates an event
OpenEvent() opens an event
SetEvent() sets (signals) an event
WaitForSingleObject() waits for a single event
WaitForMultipleObjects() waits for multiple events
WaitForMultipleObjects function prototype:
WaitForMultipleObjects(
    IN DWORD nCount,             // number of handles to wait on
    IN CONST HANDLE *lpHandles,  // pointer to the handle array
    IN BOOL bWaitAll,            // whether to wait for all of them
    IN DWORD dwMilliseconds      // timeout in milliseconds
);
The nCount parameter specifies the number of kernel objects to wait on, and lpHandles points to the array of those kernel objects. bWaitAll selects one of two waiting modes for the nCount kernel objects: if TRUE, the function returns only when all of the objects have been signaled; if FALSE, it returns as soon as any one of them is signaled. dwMilliseconds has exactly the same purpose as in WaitForSingleObject: if the wait times out, the function returns WAIT_TIMEOUT.
// Event array
HANDLE global_Events[2];
// Shared resource
char global_array[256];
void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}
UINT Global_ThreadOne(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'O';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Signal the first event
    SetEvent(global_Events[0]);
    return 0;
}
UINT Global_ThreadTwo(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'T';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Signal the second event
    SetEvent(global_Events[1]);
    return 0;
}
UINT Global_ThreadThree(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Wait until both events have been signaled
    WaitForMultipleObjects(2, global_Events, TRUE, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'H';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    return 0;
}
void CEventDlg::OnBnClickedButtonStart()
{
    for (int i = 0; i < 2; i++)
    {
        // Create the events (auto-reset, initially non-signaled)
        global_Events[i] = CreateEvent(NULL, FALSE, FALSE, NULL);
    }
    // Start the first thread
    CWinThread *ptrOne = AfxBeginThread(Global_ThreadOne,
                                        &m_One,
                                        THREAD_PRIORITY_NORMAL,
                                        0,
                                        CREATE_SUSPENDED);
    ptrOne->ResumeThread();
    // Start the second thread
    CWinThread *ptrTwo = AfxBeginThread(Global_ThreadTwo,
                                        &m_Two,
                                        THREAD_PRIORITY_NORMAL,
                                        0,
                                        CREATE_SUSPENDED);
    ptrTwo->ResumeThread();
    // Start the third thread
    CWinThread *ptrThree = AfxBeginThread(Global_ThreadThree,
                                          &m_Three,
                                          THREAD_PRIORITY_NORMAL,
                                          0,
                                          CREATE_SUSPENDED);
    ptrThree->ResumeThread();
    // TODO: Add your control notification handler code here
}
Events can be used to synchronize threads in different processes, and they make it convenient to arrange prioritized waits on multiple threads; for example, several WaitForSingleObject calls can be written instead of a single WaitForMultipleObjects, which makes the programming more flexible.
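A hedged sketch of that alternative for the third thread above: instead of a single WaitForMultipleObjects call, it could wait on the two events one after the other, choosing its own order (and, if desired, a different timeout) for each wait.
WaitForSingleObject(global_Events[0], INFINITE);  // wait for thread one first
WaitForSingleObject(global_Events[1], INFINITE);  // then wait for thread two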
Program running result
Summary:
1. A mutex is very similar in function to a critical section, but a mutex can be named, which means it can be used across processes; creating a mutex therefore consumes more resources. If the object is used only inside one process, a critical section brings a speed advantage and reduces resource usage. Because the mutex is a cross-process object, once created it can be opened by name.
2. Mutexes, semaphores, and events can all be used across processes for synchronization. The other kernel objects have nothing to do with data synchronization, but process and thread handles are non-signaled while the process or thread is running and become signaled after it exits, so WaitForSingleObject can be used to wait for a process or thread to exit.
3. A mutex lets you specify that a resource is accessed exclusively, but it cannot handle the following kind of requirement. For example, if a user buys a database system licensed for three concurrent connections and wants to decide, based on the number of licenses purchased, how many threads/processes can perform database operations at the same time, a mutex cannot express this requirement, but a semaphore object can; a semaphore is essentially a resource counter.
This article is from a CSDN blog; please cite the source when reposting: http://blog.csdn.net/boy8239/archive/2007/11/16/1888054.aspx