Understanding process and thread synchronization and mutual exclusion


The popular mechanisms for process and thread synchronization and mutual exclusion are in fact built on four primitive, basic methods. Combined, these four methods provide the flexible thread and process control facilities found in .NET and Java.

The four methods are defined as follows in the operating-system textbook ISBN 7-5053-6193-7, where more detailed explanations can be found:
1. Critical section: serializes access to a shared resource or code segment across multiple threads; it is fast and well suited to controlling access to data.
2. Mutex: designed to coordinate exclusive access to a shared resource.
3. Semaphore: designed to control access to a resource that allows a limited number of users.
4. Event: used to notify a thread that some event has occurred, so that a subsequent task can start.

Critical Section

A critical section is a convenient way to ensure that only one thread accesses a piece of data at a given point in time: at any moment, only one thread is allowed to access the shared resource. If several threads try to enter the critical section at the same time, all other threads attempting to enter are suspended, and they remain suspended until the thread inside the critical section leaves. Once the critical section is released, the remaining threads compete to enter it, so the shared resource is accessed in an atomic fashion.
The critical section provides two operation primitives:
EnterCriticalSection() enters the critical section
LeaveCriticalSection() leaves the critical section
After the EnterCriticalSection() statement executes, the code that follows is inside the critical section. You must make sure that the matching LeaveCriticalSection() is always executed; otherwise the shared resource protected by the critical section will never be released. Synchronization with a critical section is very fast, but it can only synchronize threads within a single process; it cannot synchronize threads across multiple processes.
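For reference, here is a minimal, self-contained sketch (not part of the original example) of the raw Win32 primitives just listed. The counter variable and thread function are hypothetical names chosen for illustration.

#include <windows.h>

CRITICAL_SECTION g_cs;          // the critical-section object
long g_counter = 0;             // shared resource protected by g_cs (hypothetical)

DWORD WINAPI CounterThread(LPVOID /*param*/)
{
    for (int i = 0; i < 1000; i++)
    {
        EnterCriticalSection(&g_cs);   // enter the critical section
        g_counter++;                   // only one thread executes this at a time
        LeaveCriticalSection(&g_cs);   // leave the critical section
    }
    return 0;
}

int main()
{
    InitializeCriticalSection(&g_cs);  // must be initialized before first use

    HANDLE threads[2];
    threads[0] = CreateThread(NULL, 0, CounterThread, NULL, 0, NULL);
    threads[1] = CreateThread(NULL, 0, CounterThread, NULL, 0, NULL);

    WaitForMultipleObjects(2, threads, TRUE, INFINITE);  // wait for both threads
    CloseHandle(threads[0]);
    CloseHandle(threads[1]);

    DeleteCriticalSection(&g_cs);      // release the critical-section object
    return 0;
}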
MFC provides many convenient functions and complete classes, and I use MFC here to implement the critical section. MFC wraps the critical section in the CCriticalSection class, which makes thread synchronization very easy: in the thread function, simply bracket the protected code segment with the member functions Lock() and Unlock() of a CCriticalSection object. The resources used by the code after Lock() are considered protected by the critical section, and only after Unlock() can other threads access those resources.

// Critical section object
CCriticalSection global_criticalsection;

// Shared resource
char global_array[256];

// Initialize the shared resource
void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}

// Write thread
UINT Global_ThreadWrite(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Enter the critical section
    global_criticalsection.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'W';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Leave the critical section
    global_criticalsection.Unlock();
    return 0;
}

// Delete thread
UINT Global_ThreadDelete(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Enter the critical section
    global_criticalsection.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'D';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Leave the critical section
    global_criticalsection.Unlock();
    return 0;
}

// Create and start the threads
void CCriticalSectionsDlg::OnBnClickedButtonLock()
{
    // Start the first thread
    CWinThread *ptrWrite = AfxBeginThread(Global_ThreadWrite,
                                          &m_write,
                                          THREAD_PRIORITY_NORMAL,
                                          0,
                                          CREATE_SUSPENDED);
    ptrWrite->ResumeThread();

    // Start the second thread
    CWinThread *ptrDelete = AfxBeginThread(Global_ThreadDelete,
                                           &m_delete,
                                           THREAD_PRIORITY_NORMAL,
                                           0,
                                           CREATE_SUSPENDED);
    ptrDelete->ResumeThread();
}

In the test program, the Lock and Unlock buttons are used to show, respectively, how the shared resource behaves when it is protected by the critical section and when it is not protected.
Program running result: (screenshot omitted)

Mutex

A mutex works much like a critical section: only the thread that owns the mutex object has permission to access the resource. Because there is only one mutex object, the shared resource can never be accessed by more than one thread at a time under any circumstances. The thread that occupies the resource must hand the mutex object back after it finishes its task, so that other threads can acquire it and access the resource in turn. A mutex is more complex than a critical section, because a mutex can be used not only to share resources safely among threads of the same application but also among threads of different applications.

The mutex provides several operation primitives (a minimal Win32 sketch follows this list):
CreateMutex() creates a mutex
OpenMutex() opens an existing mutex
ReleaseMutex() releases a mutex
WaitForMultipleObjects() waits for mutex objects
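The following is a minimal sketch (not from the original article) of these Win32 mutex primitives. The mutex name "Global_Demo_Mutex" is hypothetical; because the object is named, a second process could open the same mutex by calling OpenMutex() or CreateMutex() with the same name, which is what makes cross-process sharing possible.

#include <windows.h>

int main()
{
    // Create (or open, if it already exists) a named mutex, not initially owned.
    HANDLE hMutex = CreateMutex(NULL, FALSE, TEXT("Global_Demo_Mutex"));
    if (hMutex == NULL)
        return 1;

    // Acquire the mutex; blocks until it becomes available.
    if (WaitForSingleObject(hMutex, INFINITE) == WAIT_OBJECT_0)
    {
        // ... access the shared resource here ...

        // Hand the mutex back so other threads/processes can acquire it.
        ReleaseMutex(hMutex);
    }

    CloseHandle(hMutex);
    return 0;
}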

Similarly, MFC provides a CMutex class for the mutex. Using the CMutex class to implement mutual exclusion is very simple, but pay special attention when calling the CMutex constructor:
CMutex(BOOL bInitiallyOwn = FALSE, LPCTSTR lpszName = NULL,
       LPSECURITY_ATTRIBUTES lpsaAttribute = NULL)
Do not fill in parameters you do not need with arbitrary values; passing the wrong values may produce unexpected results.

// Create the mutex
CMutex global_mutex(0, 0);

// Shared resource
char global_array[256];

void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}

UINT Global_ThreadWrite(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    global_mutex.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'W';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    global_mutex.Unlock();
    return 0;
}

UINT Global_ThreadDelete(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    global_mutex.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'D';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    global_mutex.Unlock();
    return 0;
}
Similarly, in the test program, the Lock and Unlock buttons show, respectively, the shared resource protected by the mutex and the shared resource executed without mutex protection.
Program running result: (screenshot omitted)


Semaphore

The way a semaphore object synchronizes threads differs from the previous methods: a semaphore allows several threads to use the shared resource at the same time, which corresponds to the PV operations in operating-system theory. A semaphore specifies the maximum number of threads that may access the shared resource simultaneously: it allows multiple threads to access the same resource at once, but limits how many may do so. When you create a semaphore with CreateSemaphore(), you must specify both the maximum allowed resource count and the currently available resource count; usually the current count is set equal to the maximum count. Each time a thread accesses the shared resource, the available count is decreased by 1; as long as the available count is greater than 0, the semaphore remains signaled. When the available count drops to 0, the number of threads occupying the resource has reached the allowed maximum, no other thread may enter, and the semaphore is no longer signaled. After a thread finishes with the shared resource, it should call ReleaseSemaphore() to increase the available count by 1. The available count can never exceed the maximum count.
The PV operations and semaphores were both proposed by the Dutch scientist E. W. Dijkstra. A semaphore S is an integer: when S >= 0, it represents the number of resource entities available to concurrent processes; when S < 0, its absolute value represents the number of processes waiting for the shared resource.

P operation (request a resource):
(1) Decrease S by 1;
(2) If S is still greater than or equal to zero after the decrement, the process continues to run;
(3) If S is less than zero after the decrement, the process is blocked and placed in the queue associated with the semaphore, and control is transferred to the process scheduler.

V operation (release a resource):
(1) Increase S by 1;
(2) If the result is greater than zero, the process continues to execute;
(3) If the result is less than or equal to zero, one waiting process is woken up from the semaphore's waiting queue, and then the original process continues execution or control is transferred to the process scheduler.
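The following is a conceptual sketch (not from the original article) of the P and V logic described above. It only illustrates the counting rules; a real implementation would make the updates atomic and maintain an actual wait queue instead of the placeholder block()/wakeup_one() calls, which are hypothetical names.

struct Semaphore
{
    int s;   // >= 0: number of free resources; < 0: |s| processes are waiting

    void P()                 // request a resource
    {
        s = s - 1;
        if (s < 0)
            block();         // hypothetical: suspend the caller on this semaphore's queue
    }

    void V()                 // release a resource
    {
        s = s + 1;
        if (s <= 0)
            wakeup_one();    // hypothetical: wake one process from the waiting queue
    }

    void block() {}          // placeholders so the sketch compiles
    void wakeup_one() {}
};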

The semaphore provides several operation primitives:
CreateSemaphore() creates a semaphore
OpenSemaphore() opens an existing semaphore
ReleaseSemaphore() releases the semaphore
WaitForSingleObject() waits for the semaphore

// Semaphore handle
HANDLE global_semaphore;

// Shared resource
char global_array[256];

void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}

// Thread 1
UINT Global_ThreadOne(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Wait for the shared resource; equivalent to the P operation
    WaitForSingleObject(global_semaphore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'O';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Release the shared resource; equivalent to the V operation
    ReleaseSemaphore(global_semaphore, 1, NULL);
    return 0;
}

UINT Global_ThreadTwo(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    WaitForSingleObject(global_semaphore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'T';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    ReleaseSemaphore(global_semaphore, 1, NULL);
    return 0;
}

UINT Global_ThreadThree(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    WaitForSingleObject(global_semaphore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'H';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    ReleaseSemaphore(global_semaphore, 1, NULL);
    return 0;
}

void CSemaphoreDlg::OnBnClickedButtonOne()
{
    // Semaphore with 1 resource: only one thread may access at a time
    global_semaphore = CreateSemaphore(NULL, 1, 1, NULL);
    this->StartThread();
    // TODO: add your control notification handler code here
}

void CSemaphoreDlg::OnBnClickedButtonTwo()
{
    // Semaphore with 2 resources: only two threads may access at a time
    global_semaphore = CreateSemaphore(NULL, 2, 2, NULL);
    this->StartThread();
    // TODO: add your control notification handler code here
}

void CSemaphoreDlg::OnBnClickedButtonThree()
{
    // Semaphore with 3 resources: only three threads may access at a time
    global_semaphore = CreateSemaphore(NULL, 3, 3, NULL);
    this->StartThread();
    // TODO: add your control notification handler code here
}
The counting behavior of semaphores makes them particularly suitable for synchronizing threads in socket programs. For example, an HTTP server on the network needs to limit the number of users accessing the same page at the same time. The server creates one thread for each user's page request, and the page is the shared resource to be protected. Synchronizing the threads with a semaphore guarantees that, at any moment and no matter how many users arrive, only a number of threads not exceeding the configured maximum can access the page; the other attempts are suspended, and those users can access the page only after an active user exits.
Program running result: (screenshot omitted)
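Here is a minimal sketch (not from the original article) of the HTTP-server idea described above: at most MAX_VIEWERS request threads may render the page at once. MAX_VIEWERS, g_pageSemaphore, and HandlePageRequest are hypothetical names.

#include <windows.h>

const LONG MAX_VIEWERS = 10;
HANDLE g_pageSemaphore;   // created once at server start-up

DWORD WINAPI HandlePageRequest(LPVOID /*param*/)
{
    // P operation: blocks when MAX_VIEWERS threads are already serving the page.
    WaitForSingleObject(g_pageSemaphore, INFINITE);

    // ... build and send the page (the protected shared resource) ...

    // V operation: give the slot back to the next waiting request.
    ReleaseSemaphore(g_pageSemaphore, 1, NULL);
    return 0;
}

void ServerStartup()
{
    // Initial count == maximum count, so all MAX_VIEWERS slots start free.
    g_pageSemaphore = CreateSemaphore(NULL, MAX_VIEWERS, MAX_VIEWERS, NULL);
}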


Event

An event object can also keep threads synchronized by means of notification operations, and it can synchronize threads that belong to different processes.
The event provides several operation primitives:
CreateEvent() creates an event
OpenEvent() opens an existing event
SetEvent() sets the event to the signaled state
WaitForSingleObject() waits for one event
WaitForMultipleObjects() waits for multiple events
WaitForMultipleObjects function prototype:
DWORD WaitForMultipleObjects(
    IN DWORD nCount,             // number of handles to wait on
    IN CONST HANDLE *lpHandles,  // pointer to the handle array
    IN BOOL bWaitAll,            // whether to wait for all handles
    IN DWORD dwMilliseconds      // wait timeout
);
The nCount parameter specifies the number of kernel objects to wait on, and lpHandles points to the array of handles to those kernel objects. bWaitAll selects one of two waiting modes for the nCount kernel objects: if it is TRUE, the function returns only when all of the objects have been signaled; if it is FALSE, it returns as soon as any one of them is signaled. dwMilliseconds plays exactly the same role here as in WaitForSingleObject(); if the wait times out, the function returns WAIT_TIMEOUT.
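The following minimal sketch (not from the original article) shows how the return value of WaitForMultipleObjects() is typically interpreted in the two waiting modes; the two event handles and the function name are hypothetical, and error handling such as WAIT_ABANDONED is omitted.

#include <windows.h>

void WaitExample(HANDLE hEventA, HANDLE hEventB)
{
    HANDLE handles[2] = { hEventA, hEventB };

    // bWaitAll = FALSE: return as soon as ANY one of the handles is signaled.
    DWORD result = WaitForMultipleObjects(2, handles, FALSE, 5000);

    if (result == WAIT_TIMEOUT)
    {
        // Five seconds elapsed and neither event was signaled.
    }
    else if (result != WAIT_FAILED)
    {
        // The return value is WAIT_OBJECT_0 + index of the signaled handle.
        int index = (int)(result - WAIT_OBJECT_0);
        (void)index;
    }

    // bWaitAll = TRUE: return only after BOTH handles have been signaled.
    WaitForMultipleObjects(2, handles, TRUE, INFINITE);
}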

// Event array
HANDLE global_events[2];

// Shared resource
char global_array[256];

void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = i;
    }
}

UINT Global_ThreadOne(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'O';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Signal the first event
    SetEvent(global_events[0]);
    return 0;
}

UINT Global_ThreadTwo(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'T';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    // Signal the second event
    SetEvent(global_events[1]);
    return 0;
}

UINT Global_ThreadThree(LPVOID pParam)
{
    CEdit *ptr = (CEdit *)pParam;
    ptr->SetWindowText("");
    // Wait until both events have been signaled
    WaitForMultipleObjects(2, global_events, TRUE, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_array[i] = 'H';
        ptr->SetWindowText(global_array);
        Sleep(10);
    }
    return 0;
}

void CEventDlg::OnBnClickedButtonStart()
{
    for (int i = 0; i < 2; i++)
    {
        // Create the events (auto-reset, initially non-signaled)
        global_events[i] = CreateEvent(NULL, FALSE, FALSE, NULL);
    }
    // Start the first thread
    CWinThread *ptrOne = AfxBeginThread(Global_ThreadOne,
                                        &m_one,
                                        THREAD_PRIORITY_NORMAL,
                                        0,
                                        CREATE_SUSPENDED);
    ptrOne->ResumeThread();

    // Start the second thread
    CWinThread *ptrTwo = AfxBeginThread(Global_ThreadTwo,
                                        &m_two,
                                        THREAD_PRIORITY_NORMAL,
                                        0,
                                        CREATE_SUSPENDED);
    ptrTwo->ResumeThread();

    // Start the third thread
    CWinThread *ptrThree = AfxBeginThread(Global_ThreadThree,
                                          &m_three,
                                          THREAD_PRIORITY_NORMAL,
                                          0,
                                          CREATE_SUSPENDED);
    ptrThree->ResumeThread();
    // TODO: add your control notification handler code here
}
Events can synchronize threads in different processes, and they also make it easy to control the relative ordering or priority of multiple threads, for example by issuing several WaitForSingleObject() calls in place of a single WaitForMultipleObjects() call, which makes the programming more flexible.
Program running result: (screenshot omitted)
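The following minimal sketch (not from the original article) illustrates that idea: replacing one WaitForMultipleObjects() call with several WaitForSingleObject() calls forces the events to be consumed in a fixed order, which gives a simple form of prioritization. The handle names are hypothetical.

#include <windows.h>

HANDLE g_eventHigh;   // hypothetical: signaled by the "high priority" producer
HANDLE g_eventLow;    // hypothetical: signaled by the "low priority" producer

UINT SequencedThread(LPVOID /*pParam*/)
{
    // Handle the high-priority event first, then the low-priority one,
    // instead of waiting for both at once with WaitForMultipleObjects().
    WaitForSingleObject(g_eventHigh, INFINITE);
    // ... high-priority work ...

    WaitForSingleObject(g_eventLow, INFINITE);
    // ... low-priority work ...

    return 0;
}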

Summary:
1. A mutex behaves very much like a critical section, but a mutex can be named, which means it can be used across processes; creating a mutex therefore consumes more resources. If mutual exclusion is only needed inside a single process, using a critical section gives a speed advantage and reduces resource usage. Because a mutex is a cross-process object, once it has been created it can be opened by name from another process.
2. Mutexes, semaphores, and events can all be used across processes to synchronize operations on data, whereas other kernel objects are unrelated to data synchronization. For processes and threads, note that while a process or thread is running its handle is in the non-signaled state, and after it exits the handle becomes signaled, so WaitForSingleObject() can be used to wait for a process or thread to exit.
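A minimal sketch (not from the original article) of point 2: a thread handle is non-signaled while the thread runs and becomes signaled when it exits, so WaitForSingleObject() can be used to wait for the thread to finish. The Worker function is a hypothetical placeholder.

#include <windows.h>

DWORD WINAPI Worker(LPVOID /*param*/)
{
    Sleep(1000);          // simulate some work
    return 0;
}

int main()
{
    HANDLE hThread = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    if (hThread != NULL)
    {
        WaitForSingleObject(hThread, INFINITE);  // returns once the thread has exited
        CloseHandle(hThread);
    }
    return 0;
}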
3. A mutex lets you specify that a resource is used exclusively, but some requirements cannot be expressed that way. For example, a customer buys a database system licensed for three concurrent connections: the number of threads/processes allowed to perform database operations at the same time must match the number of access licenses purchased. A mutex alone cannot satisfy this requirement, but a semaphore, which is essentially a resource counter, can.
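A minimal sketch (not from the original article) of point 3: a semaphore acting as a resource counter for a database that allows only three concurrent connections. The variable and function names are hypothetical.

#include <windows.h>

HANDLE g_dbLicenses;   // counts the available access licenses

void InitLicenses()
{
    // Three licenses purchased: initial count = maximum count = 3.
    g_dbLicenses = CreateSemaphore(NULL, 3, 3, NULL);
}

DWORD WINAPI DatabaseWorker(LPVOID /*param*/)
{
    // Take one license; blocks if all three are already in use.
    WaitForSingleObject(g_dbLicenses, INFINITE);

    // ... perform database operations ...

    // Return the license so another thread/process may connect.
    ReleaseSemaphore(g_dbLicenses, 1, NULL);
    return 0;
}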
Questions:
In Linux there are two kinds of semaphores. The first kind is the SVR4 (System V Release 4) semaphore exposed through the semget/semop/semctl API; the second kind is the POSIX interface defined by sem_init/sem_wait/sem_post. They offer the same functionality through different interfaces. In the 2.4.x kernel, the semaphore data structure is defined in include/asm/semaphore.h.
Linux, however, has no object explicitly named "mutex"; a mutex can be seen as a special case of a semaphore in which the maximum resource count is 1, so that only one thread may access the shared resource at a time. The notion of a critical section is likewise vague there, and I found no material on using events to synchronize threads or processes. On Linux, standard C++ code compiled with gcc/g++ that relies on semaphore operations is almost identical to VC7 programming on Windows and ports with essentially no changes, whereas code that relies on mutexes, events, or critical sections fails to port.
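For comparison, here is a minimal sketch (not from the original article) of the POSIX interface mentioned above (sem_init / sem_wait / sem_post), compiled on Linux with g++ and linked with -pthread.

#include <semaphore.h>
#include <pthread.h>

sem_t g_sem;             // unnamed POSIX semaphore
int   g_shared = 0;      // shared resource (hypothetical)

void* Worker(void* /*arg*/)
{
    sem_wait(&g_sem);    // P operation
    g_shared++;          // work on the shared resource
    sem_post(&g_sem);    // V operation
    return NULL;
}

int main()
{
    sem_init(&g_sem, 0, 1);   // pshared = 0 (threads of one process), initial value 1

    pthread_t t1, t2;
    pthread_create(&t1, NULL, Worker, NULL);
    pthread_create(&t2, NULL, Worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    sem_destroy(&g_sem);
    return 0;
}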

All the sample programs in this article were compiled and run successfully under Windows XP SP2 + VC7.
If you disagree with any of the views in this article, you are welcome to discuss them at http://www.zhangsichu.com/blogview.asp?Content_id=14#postcomment. When reprinting, please indicate that the article comes from Cool Network Power (www.aspcool.com).
