Mutex, Semaphore, spinlock

Source: Internet
Author: User
Tags: semaphore

A mutex is like the key to a room: a person takes the key to enter, and on leaving hands it to the first person waiting in the queue. Its typical use is to serialize access to a critical section, ensuring that the code inside is never run in parallel.
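In user space, the key-to-the-room idea can be sketched with Python's `threading.Lock` (the counter and the thread counts here are purely illustrative):

```python
import threading

counter = 0
lock = threading.Lock()  # the "key" to the room

def worker(n):
    global counter
    for _ in range(n):
        with lock:        # take the key; everyone else queues here
            counter += 1  # critical section: never runs in parallel

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: the lock serialized every increment
```

Without the lock, the four threads could interleave their read-modify-write steps and lose updates; with it, the final count is always exact.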

A semaphore is like a room that can hold n people: as long as the room is not full, people can go in; once it is full, newcomers must wait for someone to come out. The case n = 1 is called a binary semaphore. Its typical use is to limit the number of simultaneous accesses to a resource.

Differences between a binary semaphore and a mutex:
On some systems there is no difference between a binary semaphore and a mutex. On others, the main difference is that a mutex must be released by the task that acquired it, whereas a semaphore can be released by a different task (at that point the semaphore is really just an atomic counter that anyone may increment or decrement), so semaphores can be used for inter-process synchronization. Because this signaling use of semaphores is supported by all systems, while releasing a mutex from another task is not possible, the usual advice is: use a mutex only to protect a critical section, and use a semaphore to protect a counted resource or to synchronize tasks.
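The "released by another task" property is exactly what makes a semaphore usable for synchronization. A sketch with a semaphore that starts at 0 (the names `producer`/`consumer` and the values are illustrative): the consumer sleeps on it, and a different thread releases it to signal that data is ready, something a strictly-owned mutex cannot express.

```python
import threading

done = threading.Semaphore(0)  # starts at 0: acquiring blocks until someone releases
results = []

def producer():
    results.append(42)         # prepare the data first
    done.release()             # V: a *different* thread releases the semaphore

def consumer():
    done.acquire()             # P: sleeps until the producer signals
    results.append(results[0] * 2)

c = threading.Thread(target=consumer)
c.start()
p = threading.Thread(target=producer)
p.start()
c.join()
p.join()
print(results)  # [42, 84]: the semaphore ordered the two threads
```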

For example, .NET exposes a named mutex that works across processes:

```csharp
using System.Threading;

public class MutexTask
{
    /// <summary>
    /// Acquire a named mutex.
    /// </summary>
    /// <param name="lockName">name of the mutex</param>
    /// <param name="timeoutMill">time-out (milliseconds)</param>
    /// <returns>true if the lock was acquired within the timeout</returns>
    public static bool LockUp(string lockName, int timeoutMill)
    {
        Mutex mutex;
        if (!Mutex.TryOpenExisting(lockName, out mutex))
        {
            mutex = new Mutex(false, lockName);
        }
        return mutex.WaitOne(timeoutMill);
    }

    /// <summary>
    /// Release the named mutex.
    /// </summary>
    /// <param name="lockName">name of the mutex</param>
    public static void UnLock(string lockName)
    {
        Mutex mutex;
        if (Mutex.TryOpenExisting(lockName, out mutex))
        {
            mutex.ReleaseMutex();
        }
    }
}
```

Another concept is the spin lock, a kernel-level primitive. The main difference between a spin lock and a semaphore is that a spin lock busy-waits while a semaphore sleeps. For code that is allowed to sleep, busy-waiting is pointless; on a single-CPU system it is doubly so (no other CPU is running that could release the lock). Spin locks are therefore used only in kernel contexts that must not sleep, such as interrupt handlers. On non-SMP builds, the Linux kernel's spin lock reduces to little more than disabling IRQs, which guarantees the critical section is not interrupted. Its role is thus similar to a mutex: serializing access to a critical section. A mutex, however, does not protect against interrupts and cannot be used in an interrupt handler, while a spin lock is generally unnecessary in process context, which is allowed to sleep.
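Kernel spin locks cannot run in user space, but the busy-waiting behavior can be sketched by looping on a non-blocking try-acquire. This `SpinLock` class is a toy for illustration only, not how the kernel implements it:

```python
import threading

class SpinLock:
    """Toy spin lock: busy-waits on a try-acquire instead of sleeping.
    Only sensible on multiprocessors and for very short hold times."""
    def __init__(self):
        self._flag = threading.Lock()  # used only as an atomic test-and-set

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass                        # spin: burn CPU until the flag is free

    def release(self):
        self._flag.release()

spin = SpinLock()
shared = []

def worker(label):
    for _ in range(100):
        spin.acquire()
        shared.append(label)            # critical section stays serialized
        spin.release()

threads = [threading.Thread(target=worker, args=(k,)) for k in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))  # 200: every append happened under the lock
```

The `while not ... acquire(blocking=False)` loop is the defining trait: the waiter never sleeps, it just keeps retrying, which is why a real spin lock is only worth it when the hold time is shorter than the cost of putting a task to sleep and waking it.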

Kernel synchronization measures

To avoid concurrency and prevent races, the kernel provides a set of synchronization methods that protect shared data. Our focus here is not the detailed usage of these methods, but why they exist and how they differ.
The synchronization mechanisms used by Linux have been evolving from 2.0 to 2.6: from the initial atomic operations, to semaphores, and from the big kernel lock to today's spin lock. This development accompanied Linux's move from uniprocessor to symmetric multiprocessing, and from a non-preemptible kernel to a preemptible one. The locking mechanisms have become both more effective and more complex.
Today, atomic operations in the kernel are used mostly for counting; beyond that, the most common mechanisms are two kinds of locks and their variants: the spin lock and the semaphore. These two are described below.

Spin lock

The spin lock was introduced to prevent multiprocessor concurrency, and it is widely used in interrupt handling and other parts of the kernel (on a uniprocessor, concurrency with interrupt handlers can be prevented simply by disabling interrupts, with no need for a spin lock).
A spin lock can be held by only one kernel task at a time. If a kernel task tries to acquire a spin lock that is already held (contended), the task busy-waits, spinning until the lock becomes available again. If the lock is uncontended, the requesting task acquires it immediately and continues. A spin lock thus keeps more than one kernel task out of the critical section at any time, which prevents concurrently running kernel tasks on different processors from racing on shared resources.
In fact, the spin lock is designed as a lightweight lock for short hold times. A contended spin lock makes the requesting thread spin (wasting processor time) until the lock becomes available, so a spin lock should not be held for long. If the lock must be held for a long time, a semaphore is the better choice.
The basic form of a spin lock is as follows:

```c
spin_lock(&mr_lock);
/* critical section */
spin_unlock(&mr_lock);
```

Since a spin lock can be held by at most one kernel task at a time, only one thread is ever inside the critical section, which provides exactly the locking service that symmetric multiprocessing machines require. On a uniprocessor, a spin lock is merely a switch that disables kernel preemption; if kernel preemption is not configured, spin locks are compiled out of the kernel entirely.
Simply put, spin locks are used in the kernel to prevent concurrent access to critical sections from multiple processors and to guard against races introduced by kernel preemption. In addition, a task holding a spin lock must not sleep (sleeping would cause the lock-holding task to be rescheduled and then re-apply for the lock it already holds, a self-deadlock), which is what makes spin locks usable in interrupt context.
Deadlock: suppose there are one or more kernel tasks and one or more resources, with each task waiting for one of the resources while every resource is already occupied. All the tasks wait on each other, and none releases what it already holds, so no task can obtain the resource it needs to continue running: a deadlock has occurred. Self-deadlock is the special case of a task that already holds a resource and then requests that same resource again; the request can obviously never be satisfied, so the task has tied its own hands.

Semaphore
The semaphore in Linux is a sleeping lock. If a task tries to acquire a semaphore that is already held, the semaphore places it on a wait queue and puts it to sleep; the processor is then free to execute other code. When the holder releases the semaphore, one task on the wait queue is woken and acquires it.
This sleeping behavior makes semaphores suitable for locks that are held for a long time, but it also means they can be used only in process context, because interrupt context cannot be scheduled. And code that is holding a spin lock must not acquire a semaphore, since acquiring one may sleep.

The basic use of a semaphore is:

```c
static DECLARE_MUTEX(mr_sem);  /* declare a mutual-exclusion semaphore */

if (down_interruptible(&mr_sem)) {
    /* the sleep was interrupted by a signal; the lock was not acquired */
}
/* critical section */
up(&mr_sem);
```

Differences between semaphores and spin locks
Although the conditions for using them sound complicated, in practice semaphores and spin locks are not easy to confuse. Note the following principles:
If the code needs to sleep, which is often the case when synchronizing with user space, a semaphore is the only option; and since semaphores are not constrained by the no-sleep rule, they are generally simpler to use. If either would work, the choice depends on how long the lock is held: ideally every lock should be held as briefly as possible, but if the hold time is long, a semaphore is the better choice. In addition, unlike a spin lock, a semaphore does not disable kernel preemption, so code holding a semaphore can be preempted; this means semaphores do not hurt scheduling latency.

Spin lock vs. semaphore: recommended lock by requirement

  • Low locking overhead: use a spin lock
  • Short lock hold time: use a spin lock
  • Long lock hold time: use a semaphore
  • Locking in interrupt context: use a spin lock
  • Need to sleep or reschedule while holding the lock: use a semaphore


Critical section

A simple way to ensure that only one thread at a time can access the data: at any moment, only one thread is allowed to access the shared resource. If several threads try to enter the critical section at once, all threads that arrive after one has entered are suspended, and they continue only when the thread inside leaves. Once the critical section is released, the other threads can compete to enter it, and in this way atomic manipulation of the shared resource is achieved.

A thread should not stay in a critical section for long: as long as the thread inside has not left, every other thread trying to enter is suspended in the waiting state, which degrades the program's performance to some extent. In particular, do not put operations that wait for user input or other external intervention inside a critical section. Entering a critical section and never releasing it likewise makes the other threads wait indefinitely. Although critical-section synchronization is fast, it can synchronize only threads within the same process, not threads across processes.

Mutex

The mutex is a very versatile kernel object. It guarantees mutually exclusive access to a shared resource by multiple threads. As with a critical section, only the thread that owns the mutex has permission to access the resource; because there is only one mutex object, the shared resource can never be accessed by multiple threads at the same time. The thread currently occupying the resource should hand over the mutex once its work is done, so that other threads can acquire it and access the resource in turn. Unlike several other kernel objects, the mutex has special code in the operating system and is managed by it; the operating system even allows it unconventional operations that other kernel objects cannot perform. The mutex is more complex than the critical section: using a mutex enables safe resource sharing not only among the threads of one application but also among the threads of different applications.

Semaphore

A semaphore object synchronizes threads differently from the previous methods: like the PV operations in operating-system theory, it allows multiple threads to use the shared resource at the same time, up to a stated maximum number of concurrent accessors. When you create a semaphore with CreateSemaphore(), you specify both the maximum resource count and the currently available resource count. In general, the available count starts equal to the maximum. Each time a thread accesses the shared resource, the available count is decreased by 1; the semaphore remains signaled as long as the available count is greater than 0. When the available count drops to 0, the number of threads occupying the resource has reached the allowed maximum, the semaphore becomes non-signaled, and no further thread may enter. When a thread finishes with the shared resource, it should call ReleaseSemaphore() on the way out, which increases the available count by 1. At no time can the available count exceed the maximum count. The semaphore controls thread access to a resource by counting; indeed, the semaphore is sometimes called a Dijkstra counter.

The concepts of the PV operation and the semaphore were introduced by the Dutch scientist E. W. Dijkstra. The semaphore S is an integer: S >= 0 represents the number of resource units available to the concurrent processes, while a negative S means that |S| processes are waiting to use the shared resource.

P operation (request a resource):
(1) decrease S by 1;
(2) if the result is still greater than or equal to zero, the process continues to execute;
(3) if the result is less than zero, the process blocks on the queue associated with the semaphore, and control transfers to the process scheduler.
V operation (release a resource):
(1) increase S by 1;
(2) if the result is greater than zero, the process continues to execute;
(3) if the result is less than or equal to zero, one waiting process is woken from the semaphore's wait queue, and then the original process either continues or control transfers to the process scheduler.
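The P/V rules can be sketched almost line for line with a counter guarded by a condition variable. This `PVSemaphore` class is a teaching toy (real implementations must also handle interruption and fairness), but each branch matches a numbered step above:

```python
import threading

class PVSemaphore:
    """Textbook semaphore: a negative s means -s processes are waiting."""
    def __init__(self, s):
        self.s = s
        self._cond = threading.Condition()

    def P(self):                # request a resource
        with self._cond:
            self.s -= 1         # (1) decrease S by 1
            if self.s < 0:      # (3) result < 0: block on the wait queue
                self._cond.wait()
                                # (2) otherwise: continue executing

    def V(self):                # release a resource
        with self._cond:
            self.s += 1         # (1) increase S by 1
            if self.s <= 0:     # (3) someone is still waiting: wake one
                self._cond.notify()

sem = PVSemaphore(1)            # one resource unit: behaves as a binary semaphore
order = []

def task(name):
    sem.P()
    order.append(name)          # critical section
    sem.V()

threads = [threading.Thread(target=task, args=(n,)) for n in ("t1", "t2", "t3")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(order))  # ['t1', 't2', 't3'], and sem.s is back to 1
```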

These properties make semaphores well suited to synchronizing the threads of a socket program. For example, an HTTP server that limits how many users may access the same page at the same time can start a thread for each user's page request, with the page itself being the shared resource to protect. Using a semaphore to synchronize the threads guarantees that at any moment no more than the configured maximum number of threads can access the page; the other threads attempting access are suspended, and one of them may enter only after a current user exits the page.


1. The mutex is very similar to the critical section, but a mutex can be named, which means it can be used across processes. Creating a mutex therefore requires more resources, so when synchronization is needed only within one process, using a critical section brings a speed advantage and reduces resource usage. Because the mutex is cross-process, once created it can be opened by name.

2. The mutex and the semaphore can be used across processes to synchronize data operations, while the other kernel objects are unrelated to data synchronization; process and thread objects, for instance, are non-signaled while running and become signaled after they exit.

3. A mutex can mark a resource as exclusive, but some cases cannot be handled by mutual exclusion alone. For example, suppose a user buys a database system licensed for three concurrent accesses: the number of threads/processes allowed to perform database operations at the same time is determined by the number of licenses purchased. Mutual exclusion cannot express this requirement, but a semaphore can; the semaphore object acts, in effect, as a resource counter.
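The three-license scenario is a counting semaphore with a non-blocking acquire: a fourth caller is simply refused rather than queued. A sketch (the function names and license count are illustrative):

```python
import threading

licenses = threading.BoundedSemaphore(3)  # 3 concurrent access licenses

def open_connection():
    """Return True if a license was available, False if all are in use."""
    return licenses.acquire(blocking=False)

def close_connection():
    licenses.release()  # hand the license back

granted = [open_connection() for _ in range(5)]
print(granted)  # [True, True, True, False, False]: only 3 licenses exist

for ok in granted:
    if ok:
        close_connection()
```

`BoundedSemaphore` additionally raises an error if released more times than acquired, which catches license-accounting bugs that a plain counter would silently absorb.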

