Four basic control methods for process and thread synchronization and mutual exclusion


I would like to tidy up my understanding of process and thread synchronization and mutual exclusion. During the week of Children's Day I had dinner with some classmates who had just returned to school. Over the meal, two of them got into a heated argument: one thought the process and thread control model in .NET is more reasonable, the other thought Java's thread-pool strategy is better than .NET's. The discussion eventually came around to the problem of process and thread synchronization and mutual exclusion. When I got home, I thought it over and wrote this up.

The process and thread synchronization and mutual-exclusion mechanisms that are popular today are, in fact, all built from the four most primitive and basic methods. Combining and refining these four methods yields the flexible, easy-to-program thread and process control facilities found in .NET and Java.

These four methods are defined below; a more detailed explanation can be found in the "Operating System Tutorial" (ISBN 7-5053-6193-7).

1. Critical section: serializes access by multiple threads to a public resource or a section of code; it is fast and well suited to controlling access to data.

2. Mutex: designed to coordinate exclusive access to a shared resource.

3. Semaphore: designed to control access to a resource that supports only a limited number of users.

4. Event: used to notify a thread that some event has occurred, so that it can start its successor task.

Critical Section

A simple way to ensure that only one thread can access the data at a given point in time. Only one thread is allowed to access the shared resource at any moment. If several threads try to enter the critical section at the same time, all the other threads are suspended after one thread enters, and they stay suspended until the thread inside the critical section leaves. Once the critical section is released, the other threads can compete to enter it, so the shared resource is operated on in an atomic manner.

The critical section involves two operational primitives: EnterCriticalSection() enters the critical section, and LeaveCriticalSection() leaves the critical section.

After EnterCriticalSection() executes, the code that follows is inside the critical section, and no matter what happens, you must make sure the matching LeaveCriticalSection() is executed; otherwise, the shared resource protected by the critical section will never be released. Although critical sections synchronize quickly, they can only synchronize threads within the same process; they cannot synchronize threads across multiple processes.
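
Before the MFC version below, here is a minimal plain-Win32 sketch (my own illustration, not code from the article) of how these two primitives are normally used together with InitializeCriticalSection() and DeleteCriticalSection(); the counter is a hypothetical shared resource:

#include <windows.h>

CRITICAL_SECTION g_cs;       // must be initialized before first use
int g_sharedCounter = 0;     // hypothetical shared resource

DWORD WINAPI Worker(LPVOID /*param*/)
{
    EnterCriticalSection(&g_cs);    // enter the critical section
    ++g_sharedCounter;              // only one thread executes this at a time
    LeaveCriticalSection(&g_cs);    // always pair with Leave, or the lock is never released
    return 0;
}

int main()
{
    InitializeCriticalSection(&g_cs);
    HANDLE h = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    DeleteCriticalSection(&g_cs);
    return 0;
}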

MFC provides a number of full-featured classes, and I used MFC to implement the critical section. MFC wraps the critical section in the CCriticalSection class, which makes thread synchronization very easy: simply call the member functions Lock() and Unlock() around the protected code fragment in the thread function. The resources used by the code after Lock() are treated as protected inside the critical section, and they can be accessed by other threads again after Unlock().

// Critical section
CCriticalSection global_CriticalSection;

// Shared resource
char global_Array[256];

// Initialize the shared resource
void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = i;
    }
}

// Write thread
UINT Global_ThreadWrite(LPVOID pParam)
{
    CEdit *ptr = (CEdit*)pParam;
    ptr->SetWindowText("");
    // Enter the critical section
    global_CriticalSection.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = 'W';
        ptr->SetWindowText(global_Array);
        Sleep(10);
    }
    // Leave the critical section
    global_CriticalSection.Unlock();
    return 0;
}

// Delete thread
UINT Global_ThreadDelete(LPVOID pParam)
{
    CEdit *ptr = (CEdit*)pParam;
    ptr->SetWindowText("");
    // Enter the critical section
    global_CriticalSection.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = 'D';
        ptr->SetWindowText(global_Array);
        Sleep(10);
    }
    // Leave the critical section
    global_CriticalSection.Unlock();
    return 0;
}

// Create and start the threads
void CCriticalSectionsDlg::OnBnClickedButtonLock()
{
    // Start the first thread
    CWinThread *ptrWrite = AfxBeginThread(Global_ThreadWrite,
        &m_write,
        THREAD_PRIORITY_NORMAL,
        0,
        CREATE_SUSPENDED);
    ptrWrite->ResumeThread();

    // Start the second thread
    CWinThread *ptrDelete = AfxBeginThread(Global_ThreadDelete,
        &m_delete,
        THREAD_PRIORITY_NORMAL,
        0,
        CREATE_SUSPENDED);
    ptrDelete->ResumeThread();
}

In the test program, the Lock and Unlock buttons are implemented separately: one runs the threads with the shared resource protected by the critical section, the other runs them with no critical-section protection, so the difference in execution can be observed.

Program Run Results

Mutex

A mutex is very similar to a critical section: only the thread that owns the mutex has access to the resource, and because there is only one mutex, the shared resource cannot be accessed by multiple threads at the same time. The thread currently occupying the resource should hand over the mutex it owns once its task is done, so that other threads can acquire it and access the resource. A mutex is more complicated than a critical section, because a mutex can not only safely share a resource among threads within the same application, but can also safely share a resource between threads of different applications.

The mutex involves these operational primitives:
CreateMutex() creates a mutex
OpenMutex() opens a mutex
ReleaseMutex() releases a mutex
WaitForSingleObject() / WaitForMultipleObjects() wait for the mutex object
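
As a minimal sketch of how these raw primitives fit together (plain Win32, invented for illustration; the article's own example below uses the MFC CMutex wrapper instead):

#include <windows.h>

HANDLE g_hMutex;              // mutex handle
int    g_sharedCounter = 0;   // hypothetical shared resource

DWORD WINAPI Worker(LPVOID /*param*/)
{
    WaitForSingleObject(g_hMutex, INFINITE);   // acquire the mutex
    ++g_sharedCounter;                         // only the owning thread gets here
    ReleaseMutex(g_hMutex);                    // hand the mutex back
    return 0;
}

int main()
{
    g_hMutex = CreateMutex(NULL, FALSE, NULL); // unnamed mutex, not initially owned
    HANDLE h = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForSingleObject(h, INFINITE);          // wait for the worker to finish
    CloseHandle(h);
    CloseHandle(g_hMutex);
    return 0;
}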

MFC also provides a CMutex class for the mutex. Using the CMutex class to implement mutual exclusion is very simple, but pay special attention to the call to the CMutex constructor:

CMutex(BOOL bInitiallyOwn = FALSE, LPCTSTR lpszName = NULL, LPSECURITY_ATTRIBUTES lpsaAttribute = NULL);

Parameters you do not use should not be filled in arbitrarily; filling them in at random can produce unexpected results at run time.

// Create the mutex
CMutex global_Mutex(FALSE, NULL, NULL);

// Shared resource
char global_Array[256];

void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = i;
    }
}

UINT Global_ThreadWrite(LPVOID pParam)
{
    CEdit *ptr = (CEdit*)pParam;
    ptr->SetWindowText("");
    global_Mutex.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = 'W';
        ptr->SetWindowText(global_Array);
        Sleep(10);
    }
    global_Mutex.Unlock();
    return 0;
}

UINT Global_ThreadDelete(LPVOID pParam)
{
    CEdit *ptr = (CEdit*)pParam;
    ptr->SetWindowText("");
    global_Mutex.Lock();
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = 'D';
        ptr->SetWindowText(global_Array);
        Sleep(10);
    }
    global_Mutex.Unlock();
    return 0;
}

As before, the test program implements the Lock and Unlock buttons separately: one runs the threads with the shared resource protected by the mutex, the other with no mutex protection.

Program Run Results

Semaphore

A semaphore object synchronizes threads differently from the previous methods: it allows multiple threads to use the shared resource at the same time, which corresponds to the PV operations in operating systems. It specifies the maximum number of threads that may access the shared resource simultaneously. It allows multiple threads to access the same resource at the same time, but limits that number to a maximum. When creating a semaphore with CreateSemaphore(), you specify both the maximum resource count and the initial count of currently available resources. Typically the current available count is set equal to the maximum; each time another thread accesses the shared resource, the available count decreases by 1, and the semaphore remains signaled as long as the available count is greater than 0. When the available count drops to 0, the number of threads currently using the resource has reached the allowed maximum and no further threads are admitted; at that point the semaphore is no longer signaled. After a thread finishes with the shared resource, it should call ReleaseSemaphore() on the way out to increase the available count by 1. The current available count can never exceed the maximum resource count.

The concept of PV operations and semaphores was proposed by the Dutch scientist E. W. Dijkstra. A semaphore S is an integer: when S is greater than or equal to zero, it represents the number of resource instances available to concurrent processes; when S is less than zero, its absolute value represents the number of processes waiting to use the shared resource.

P operation (request a resource):
(1) decrement S by 1;
(2) if the result is greater than or equal to zero, the process continues to execute;
(3) if the result is less than zero, the process is blocked and placed in the wait queue associated with the semaphore, and control transfers to the process scheduler.

V operation (release a resource):
(1) increment S by 1;
(2) if the result is greater than zero, the process continues to execute;
(3) if the result is less than or equal to zero, a waiting process is woken from the semaphore's wait queue, and then control either returns to the original process or transfers to the process scheduler.
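
To make the P/V semantics concrete, here is a minimal portable C++ sketch of a counting semaphore (my own illustration, not part of the article); unlike the description above, it never lets S go negative and simply blocks the caller in P() while no resource is available:

#include <mutex>
#include <condition_variable>

class CountingSemaphore
{
public:
    explicit CountingSemaphore(int initial) : s_(initial) {}

    // P operation: request a resource, blocking while none are available
    void P()
    {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return s_ > 0; });
        --s_;
    }

    // V operation: release a resource and wake one waiting thread
    void V()
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            ++s_;
        }
        cv_.notify_one();
    }

private:
    int s_;                       // number of available resource instances
    std::mutex m_;
    std::condition_variable cv_;
};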

The semaphore involves these operational primitives:
CreateSemaphore() creates a semaphore
OpenSemaphore() opens a semaphore
ReleaseSemaphore() releases a semaphore
WaitForSingleObject() waits for a semaphore

// Semaphore handle
HANDLE global_Semaphore;

// Shared resource
char global_Array[256];

void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = i;
    }
}

// Thread 1
UINT Global_ThreadOne(LPVOID pParam)
{
    CEdit *ptr = (CEdit*)pParam;
    ptr->SetWindowText("");
    // Wait for the shared resource; equivalent to the P operation
    WaitForSingleObject(global_Semaphore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = 'O';
        ptr->SetWindowText(global_Array);
        Sleep(10);
    }
    // Release the shared resource; equivalent to the V operation
    ReleaseSemaphore(global_Semaphore, 1, NULL);
    return 0;
}

UINT Global_ThreadTwo(LPVOID pParam)
{
    CEdit *ptr = (CEdit*)pParam;
    ptr->SetWindowText("");
    WaitForSingleObject(global_Semaphore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = 'T';
        ptr->SetWindowText(global_Array);
        Sleep(10);
    }
    ReleaseSemaphore(global_Semaphore, 1, NULL);
    return 0;
}

UINT Global_ThreadThree(LPVOID pParam)
{
    CEdit *ptr = (CEdit*)pParam;
    ptr->SetWindowText("");
    WaitForSingleObject(global_Semaphore, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = 'H';
        ptr->SetWindowText(global_Array);
        Sleep(10);
    }
    ReleaseSemaphore(global_Semaphore, 1, NULL);
    return 0;
}

void CSemaphoreDlg::OnBnClickedButtonOne()
{
    // Semaphore with a count of 1: only one thread can access the resource at a time
    global_Semaphore = CreateSemaphore(NULL, 1, 1, NULL);
    this->StartThread();

    // TODO: Add your control notification handler code here
}

void CSemaphoreDlg::OnBnClickedButtonTwo()
{
    // Semaphore with a count of 2: at most two threads can access the resource at a time
    global_Semaphore = CreateSemaphore(NULL, 2, 2, NULL);
    this->StartThread();

    // TODO: Add your control notification handler code here
}

void CSemaphoreDlg::OnBnClickedButtonThree()
{
    // Semaphore with a count of 3: at most three threads can access the resource at a time
    global_Semaphore = CreateSemaphore(NULL, 3, 3, NULL);
    this->StartThread();

    // TODO: Add your control notification handler code here
}

Semaphores are well suited to synchronizing the threads of socket programs. For example, if an HTTP server on the network limits the number of users who may access the same page at the same time, the server can start a thread for each user's page request, and the page is the shared resource to be protected. Synchronizing the threads with a semaphore guarantees that, no matter how many users try to access the page at any moment, only at most the configured number are allowed through, while the other access attempts are suspended and can proceed only after some user has left the page.
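
A hypothetical sketch of that idea (the names and the limit of three concurrent visitors are invented for illustration): the server creates one semaphore for the page and one thread per request, and each request thread waits on the semaphore before serving the page:

#include <windows.h>

HANDLE g_hPageSemaphore;    // hypothetical semaphore guarding one page

DWORD WINAPI HandlePageRequest(LPVOID /*requestContext*/)
{
    WaitForSingleObject(g_hPageSemaphore, INFINITE);   // wait for a free visitor slot (P)
    // ... serve the page to this user ...
    ReleaseSemaphore(g_hPageSemaphore, 1, NULL);       // give the slot back (V)
    return 0;
}

void StartServer()
{
    // at most 3 users may view the page at the same time (hypothetical limit)
    g_hPageSemaphore = CreateSemaphore(NULL, 3, 3, NULL);
    // a thread like this would be created for every incoming request:
    // CreateThread(NULL, 0, HandlePageRequest, requestContext, 0, NULL);
}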

 Program Run Results

Event

An event object can also keep threads synchronized, in this case through a notification operation, and it can be used to synchronize threads in different processes.

The event involves these operational primitives:
CreateEvent() creates an event
OpenEvent() opens an event
SetEvent() sets (signals) an event
WaitForSingleObject() waits for one event
WaitForMultipleObjects() waits for multiple events

WaitForMultipleObjects() function prototype:

DWORD WaitForMultipleObjects(
    IN DWORD nCount,             // number of handles to wait for
    IN CONST HANDLE *lpHandles,  // pointer to the array of handles
    IN BOOL bWaitAll,            // whether to wait for all of them
    IN DWORD dwMilliseconds      // time-out interval
);

The parameter nCount specifies the number of kernel objects to wait for, and lpHandles points to the array holding those kernel objects. bWaitAll selects between the two ways of waiting for the nCount kernel objects: if TRUE, the function returns only when all of the objects have been signaled; if FALSE, it returns as soon as any one of them is signaled. dwMilliseconds plays exactly the same role as in WaitForSingleObject(): if the wait times out, the function returns WAIT_TIMEOUT.

// Event array
HANDLE global_Events[2];

// Shared resource
char global_Array[256];

void InitializeArray()
{
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = i;
    }
}

UINT Global_ThreadOne(LPVOID pParam)
{
    CEdit *ptr = (CEdit*)pParam;
    ptr->SetWindowText("");
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = 'O';
        ptr->SetWindowText(global_Array);
        Sleep(10);
    }
    // Signal the first event
    SetEvent(global_Events[0]);
    return 0;
}

UINT Global_ThreadTwo(LPVOID pParam)
{
    CEdit *ptr = (CEdit*)pParam;
    ptr->SetWindowText("");
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = 'T';
        ptr->SetWindowText(global_Array);
        Sleep(10);
    }
    // Signal the second event
    SetEvent(global_Events[1]);
    return 0;
}

UINT Global_ThreadThree(LPVOID pParam)
{
    CEdit *ptr = (CEdit*)pParam;
    ptr->SetWindowText("");

    // Wait until both events have been signaled
    WaitForMultipleObjects(2, global_Events, TRUE, INFINITE);
    for (int i = 0; i < 256; i++)
    {
        global_Array[i] = 'H';
        ptr->SetWindowText(global_Array);
        Sleep(10);
    }
    return 0;
}

void CEventDlg::OnBnClickedButtonStart()
{
    for (int i = 0; i < 2; i++)
    {
        // Create the events (auto-reset, initially non-signaled)
        global_Events[i] = CreateEvent(NULL, FALSE, FALSE, NULL);
    }

    // Start the first thread
    CWinThread *ptrOne = AfxBeginThread(Global_ThreadOne,
        &m_one,
        THREAD_PRIORITY_NORMAL,
        0,
        CREATE_SUSPENDED);
    ptrOne->ResumeThread();

    // Start the second thread
    CWinThread *ptrTwo = AfxBeginThread(Global_ThreadTwo,
        &m_two,
        THREAD_PRIORITY_NORMAL,
        0,
        CREATE_SUSPENDED);
    ptrTwo->ResumeThread();

    // Start the third thread
    CWinThread *ptrThree = AfxBeginThread(Global_ThreadThree,
        &m_three,
        THREAD_PRIORITY_NORMAL,
        0,
        CREATE_SUSPENDED);
    ptrThree->ResumeThread();

    // TODO: Add your control notification handler code here
}

Events can synchronize threads in different processes, and they also make it easy to wait on multiple events with a priority: for example, by writing several sequential WaitForSingleObject() calls instead of a single WaitForMultipleObjects() call, which makes the programming more flexible.
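
For instance, here is a minimal sketch (my own variation on the Global_ThreadThree function above, not code from the article) in which two sequential WaitForSingleObject() calls replace the single WaitForMultipleObjects() call, so the two events are consumed in a fixed order:

UINT Global_ThreadThreeOrdered(LPVOID /*pParam*/)
{
    // the first event must be observed before the second one is even considered
    WaitForSingleObject(global_Events[0], INFINITE);
    WaitForSingleObject(global_Events[1], INFINITE);
    // ... start the successor task here ...
    return 0;
}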

   Program Run Results

  Summary:

1. A mutex is very similar to a critical section, but a mutex can be named, which means it can be used across processes. Creating a mutex therefore consumes more resources, so if mutual exclusion is needed only inside a single process, a critical section gives a speed advantage and uses fewer resources. Because a mutex is a cross-process object, once it has been created it can be opened by name from another process.
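
A minimal sketch of that cross-process use of a named mutex (the name "Global\MyAppMutex" is invented for illustration):

#include <windows.h>

// In the first process: create the named mutex
HANDLE CreateSharedMutex()
{
    return CreateMutex(NULL, FALSE, TEXT("Global\\MyAppMutex"));
}

// In a second process: open the same mutex by name and use it
void UseSharedMutex()
{
    HANDLE hMutex = OpenMutex(MUTEX_ALL_ACCESS, FALSE, TEXT("Global\\MyAppMutex"));
    if (hMutex != NULL)
    {
        WaitForSingleObject(hMutex, INFINITE);
        // ... operate on the resource shared between the two processes ...
        ReleaseMutex(hMutex);
        CloseHandle(hMutex);
    }
}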

2. Mutexes, semaphores, and events can all be used across processes to synchronize operations on data, whereas the other kernel objects are not related to data synchronization. As for process and thread objects themselves: while a process or thread is running, its handle is non-signaled, and after it exits the handle becomes signaled, so WaitForSingleObject() can be used to wait for a process or thread to exit.
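
A minimal sketch of that signaled-on-exit behavior (my own illustration): the creating thread waits on the worker thread's handle, which only becomes signaled when the worker returns:

#include <windows.h>

DWORD WINAPI ShortTask(LPVOID /*param*/)
{
    Sleep(100);            // pretend to do some work
    return 0;              // the thread handle becomes signaled here
}

int main()
{
    HANDLE hThread = CreateThread(NULL, 0, ShortTask, NULL, 0, NULL);
    // blocks until the thread exits, i.e. until its handle is signaled
    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    return 0;
}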

3. A mutex lets you declare that a resource is used exclusively, but it cannot handle the following situation. Suppose a user buys a database system licensed for three concurrent accesses: the number of threads or processes allowed to operate on the database at the same time is determined by the number of licenses purchased. A mutex has no way to express this requirement, whereas a semaphore can; a semaphore object is essentially a resource counter.


