Synchronization and mutual exclusion between multiple processes (threads) in the Linux kernel

Source: Internet
Author: User
Tags: semaphore
One problem that must be solved in Linux device drivers is concurrent access to shared resources by multiple processes; concurrent access leads to race conditions. Linux provides a variety of mechanisms for dealing with race conditions, and each is suited to different application scenarios.

The Linux kernel is a multi-process, multi-threaded operating system and provides a complete set of kernel synchronization mechanisms. The list of kernel synchronization mechanisms is as follows:
Interrupt masking
Atomic operation
Spin lock
Read/write spin lock
Sequential lock
Semaphores
Read/write semaphores
BKL (Big Kernel Lock)
Seq lock (seqlock)
I. Concurrency and race conditions:
Definition:
Concurrency means that multiple execution units run simultaneously or in parallel; when those concurrent execution units access shared resources (hardware resources, or software resources such as global and static variables), race conditions easily arise.
In Linux, races mainly arise in the following situations:
1. Multiple CPUs in a symmetric multi-processor (SMP) system
The CPUs share a common system bus and can therefore access the same peripherals and memory.
2. Processes on a single CPU and the processes that preempt them
3. Interrupts (hard interrupts, softirqs, tasklets, and bottom halves) and the processes they interrupt
Whenever multiple concurrent execution units access shared resources, races may occur.
If an interrupt handler accesses a resource that a process is in the middle of accessing, a race also occurs.
Multiple interrupts may themselves run concurrently and race with one another (an interrupt can be interrupted by another of higher priority).

The way to resolve a race condition is to guarantee mutually exclusive access to the shared resource: while one execution unit is accessing a shared resource, all other execution units are prevented from accessing it.

The region of code that accesses a shared resource is called the critical section. Critical sections must be protected by a mutual-exclusion mechanism; interrupt masking, atomic operations, spin locks, and semaphores are the mutual-exclusion mechanisms available in Linux device drivers.

Critical sections and race conditions:
A critical section (critical region) is the code segment that accesses and manipulates shared data. To avoid concurrent access within the critical section, the programmer must ensure that the code executes atomically, that is, it cannot be interrupted before it finishes, as if the entire critical section were one indivisible instruction. If two threads of execution can be inside the same critical section at the same time, the program contains a bug; when that happens we call it a race condition. Avoiding such situations is what synchronization is for.

Deadlock:
A deadlock involves one or more threads of execution and one or more resources. Each thread waits for one of the resources, but all the resources are already held, and all the threads wait for one another while never releasing what they already hold, so none of them can ever make progress: a deadlock has occurred.
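To make the picture concrete, here is a minimal sketch of the classic ABBA deadlock, written with the spin-lock API described later in this article; the two locks lock_a and lock_b and the ordering of events are hypothetical:

/* thread 1 (on CPU 0) */
spin_lock(&lock_a);
spin_lock(&lock_b);   /* spins forever: thread 2 already holds lock_b */

/* thread 2 (on CPU 1), taking the same two locks in the opposite order */
spin_lock(&lock_b);
spin_lock(&lock_a);   /* spins forever: thread 1 already holds lock_a */

Neither thread can release the lock the other one needs, so neither ever makes progress. The standard cure is to define a global lock ordering and always acquire locks in that order.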

II. Interrupt masking
A simple way to avoid races within a single CPU is to mask (disable) interrupts before entering the critical section.
Because process scheduling and the kernel's other operations all depend on interrupts, masking interrupts also prevents the concurrency that kernel preemption creates between processes.
How to use interrupt masking:
local_irq_disable()  // mask interrupts
// critical section
local_irq_enable()   // unmask interrupts
Features:
In Linux, asynchronous I/O, process scheduling, and many other important operations depend on interrupts; while interrupts are masked, none of them can be handled. Masking interrupts for a long time is therefore dangerous and may lead to data loss or even a system crash, so the kernel code path that has masked interrupts must finish the critical section as quickly as possible.
Interrupt masking only disables interrupts on the local CPU, so it cannot resolve races caused by multiple CPUs. For this reason, masking interrupts on its own is not a recommended way to avoid races; it is generally used in combination with a spin lock.
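As a concrete illustration, drivers usually save and restore the interrupt state instead of unconditionally re-enabling interrupts, since the critical section may be entered with interrupts already disabled. A minimal sketch, where the shared counter shared_count is a made-up example:

unsigned long flags;

local_irq_save(flags);     /* save the interrupt state and mask local interrupts */
shared_count++;            /* critical section: touch the shared data */
local_irq_restore(flags);  /* restore the previous interrupt state */

Remember that this only protects against code running on the same CPU; on SMP it is normally combined with a spin lock (spin_lock_irqsave, described below).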

III. Atomic operations
Definition: an atomic operation is an operation that cannot be interrupted by any other code path while it executes.
(The word atom originally meant an indivisible particle, so atomic operations are operations that cannot be split.)
(They guarantee that the operation executes in an "atomic" manner and cannot be interrupted part way through.)
Atomic operations are indivisible and are not interrupted by any other task or event once they begin. On a uniprocessor system, any operation that completes in a single instruction can be regarded as atomic, because interrupts can only occur between instructions; this is also why some CPU instruction sets provide instructions such as test_and_set and test_and_clear for mutual exclusion on critical resources. On a symmetric multi-processor system the situation is different: several processors run independently, so even an operation that completes in a single instruction can be interfered with. Take decl (a decrement instruction) as an example: it is a typical "read-modify-write" sequence involving two memory accesses.
An intuitive view:
Atomic operations, as the name implies, cannot be subdivided, just as atoms were once thought indivisible. Saying that an operation is atomic means it executes in an atomic fashion: it must complete in one go, its execution cannot be interrupted by other operating system activity, and no other OS behavior can be interleaved with it.
Classification: the Linux kernel provides a family of functions for performing atomic operations, divided into integer atomic operations and bitwise atomic operations. What they have in common is that the operations are atomic under all circumstances, so kernel code can call them safely without being interrupted.

Atomic integer operations:
Atomic operations on integers work only on data of type atomic_t. A special type is used here instead of the C int type for two main reasons:
First, having the atomic functions accept only operands of type atomic_t ensures that atomic operations are used only with this special data type and, at the same time, that data of this type is never passed to any non-atomic function.
Second, the atomic_t type guarantees that the compiler does not optimize access to the value, so the atomic operation always receives the correct memory address rather than an alias. Finally, atomic_t hides the differences between atomic-operation implementations on different architectures.
The most common use of atomic integer operations is to implement counters.
It must also be noted that an atomic operation only guarantees that the operation itself is atomic: it either completes or it does not; it can never be half done. Atomic operations do not guarantee ordering, that is, they cannot ensure that two operations complete in a particular order. To enforce ordering between atomic operations, memory barrier instructions must be used.
Definitions of atomic_t and ATOMIC_INIT(i):
typedef struct { volatile int counter; } atomic_t;
#define ATOMIC_INIT(i) { (i) }

When writing code, prefer atomic operations over more complicated locking mechanisms wherever they suffice. On most architectures, atomic operations impose less overhead on the system than more complex synchronization methods and have less impact on cache lines. For performance-critical code, however, testing and comparing several synchronization methods is the wise approach.
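For illustration, a simple reference counter built on the atomic integer API might look like the sketch below; the counter name refcount is hypothetical:

static atomic_t refcount = ATOMIC_INIT(0);   /* statically initialized to 0 */

atomic_inc(&refcount);                 /* atomically increment the counter */
if (atomic_dec_and_test(&refcount)) {  /* atomically decrement; true if the result is 0 */
    /* the last user is gone, release the resource here */
}
printk(KERN_INFO "count is %d\n", atomic_read(&refcount));  /* read the current value */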

Atomic bit operations:
The functions that operate on data at the bit level work on ordinary memory addresses; their parameters are a pointer and a bit number.

For convenience, the kernel also provides a set of non-atomic functions corresponding to the operations above. They behave exactly like the atomic functions but do not guarantee atomicity, and their names carry a two-underscore prefix; for example, the non-atomic counterpart of test_bit() is __test_bit(). If atomicity is not required (for example, because a lock already protects the data), these non-atomic bit functions may execute faster than their atomic counterparts.
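As an example, a driver could keep a set of status flags in a single unsigned long and manipulate them atomically; the flag word dev_flags and the bit number BUSY_BIT below are illustrative only:

#define BUSY_BIT 0                      /* illustrative bit number */
static unsigned long dev_flags;         /* ordinary memory, no special type needed */

if (test_and_set_bit(BUSY_BIT, &dev_flags))  /* atomically set the bit, return its old value */
    return -EBUSY;                      /* the bit was already set: the device is busy */
/* ... use the device ... */
clear_bit(BUSY_BIT, &dev_flags);        /* atomically clear the busy flag */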

IV. Spin locks
Why spin locks are needed:
It would be nice if every critical section were as simple as incrementing a variable, but unfortunately that is rarely the case: a critical section can span multiple functions. For example, data may have to be removed from one data structure, reformatted and parsed, and then added to another data structure, and the entire sequence must be atomic; no other code may read the data before the update is complete. Simple atomic operations are clearly powerless here (on a uniprocessor system, operations that complete in a single instruction can be considered atomic because interrupts only occur between instructions), so a more complex synchronization method is needed: a lock.

Introduction to spin locks:
The most common lock in the Linux kernel is the spin lock. A spin lock can be held by at most one thread of execution at a time. If a thread tries to acquire a contended (already held) spin lock, it busy-loops, spinning, until the lock becomes available again. If the lock is not contended, the requesting thread acquires it immediately and continues. At any time the spin lock prevents more than one thread of execution from entering the critical section. Note that the same lock can be used in several places, so that all accesses to a given piece of data are protected and synchronized.
A contended spin lock makes the requesting thread spin while waiting for the lock to become available again, which wastes processor time, so spin locks should not be held for long; indeed, lightweight locking held for a short time is exactly what spin locks were designed for. An alternative way to handle contention is to put the requesting thread to sleep until the lock becomes available, so the processor does not busy-wait and can run other code. That, however, brings its own overhead: two obvious context switches, since the blocked thread must be swapped out and later back in. Holding a spin lock is therefore preferable whenever the hold time is shorter than the time needed for two context switches; and since most of us will not bother measuring context-switch time, the practical rule is simply to hold spin locks for as short a time as possible. The semaphore provides the second mechanism: under contention the waiting thread sleeps instead of spinning.
Spin locks can be used in interrupt handlers (semaphores cannot, because they may sleep). When using a spin lock in an interrupt handler, local interrupts (interrupt requests on the current processor) must be disabled before the lock is acquired. Otherwise, an interrupt handler could interrupt kernel code that already holds the lock and then try to take the same lock: the handler would spin waiting for the lock to become free, but the lock holder cannot run until the handler completes. This is the double-acquire deadlock mentioned in the previous chapter. Note that interrupts need to be disabled only on the current processor: if the interrupt occurs on a different processor, its handler spinning on the same lock does not prevent the lock holder (on the other processor) from eventually releasing the lock.

A simple understanding of spin locks:
The simplest way to understand a spin lock is to treat it as a variable that marks a critical section as either "I am currently in use, please wait a moment" or "I am currently free and may be taken." If execution unit A enters the routine first, it holds the spin lock; when execution unit B tries to enter the same routine, it sees that the spin lock is already held and must wait until A releases it.

Spin lock API functions:

In fact, the semaphore and mutex mechanisms introduced here are implemented on top of spin locks in their underlying source code, so they can be regarded as re-packaging of the spin lock. This also explains why spin locks generally offer higher performance than semaphores.
A spin lock is a mutual-exclusion device with two values, "locked" and "unlocked", usually implemented as a single bit in an integer.
The "test and set" of that bit must be performed atomically.
At any time, as long as kernel code holds a spin lock, preemption is disabled on the relevant CPU.
Core rules that apply to spin locks:
(1) Any code that holds a spin lock must be atomic: it cannot sleep and cannot relinquish the CPU for anything other than servicing interrupts (and in some cases not even then; the interrupt path may also need to take the lock). To avoid this trap, interrupts must be disabled while a spin lock is held, and the CPU must never be given up (sleep can occur in many unexpected places); otherwise, other CPUs may spin forever, deadlocked.
(2) The shorter the time a spin lock is held, the better.

It should be emphasized that spin locks are designed as a synchronization mechanism for multiprocessors. On a uniprocessor, non-preemptible kernel, spin locks do nothing and are not compiled into the kernel at all. On a preemptible uniprocessor kernel, a spin lock only controls kernel preemption: locking and unlocking effectively become disabling and enabling kernel preemption. If the kernel does not support preemption, the spin lock is not compiled into the kernel.
Within the kernel, the spinlock_t type represents a spin lock. It is defined in <linux/spinlock_types.h>:
typedef struct {
    raw_spinlock_t raw_lock;
#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
    unsigned int break_lock;
#endif
} spinlock_t;

For kernels without SMP support, raw_spinlock_t contains nothing; it is an empty structure. For a multiprocessor kernel, raw_spinlock_t is defined as:
typedef struct {
    unsigned int slock;
} raw_spinlock_t;

slock holds the state of the spin lock: 1 means the lock is unlocked, 0 means it is locked.
break_lock indicates whether another process is currently waiting for the spin lock; obviously it exists only on SMP kernels with preemption support.
Implementing a spin lock is an involved affair, not because of the amount of code or logic required (there is actually very little implementation code), but because the implementation is closely tied to the architecture. The core code is written mostly in assembly language, and the architecture-dependent code lives in the relevant <asm/> directories, for example <asm/spinlock.h>. As driver developers we do not need to understand the internals of the spin lock; if you are interested, refer to the Linux kernel source. To use the spin lock interface in a driver, we only need to include the header file <linux/spinlock.h>. Before looking at the spin lock API in detail, here is the basic usage pattern:
#include <linux/spinlock.h>
spinlock_t lock = SPIN_LOCK_UNLOCKED;

spin_lock(&lock);
...
spin_unlock(&lock);

In terms of usage, the spin lock API is quite simple. The commonly used interfaces are listed below; they are all defined in <linux/spinlock.h>:
#include <linux/spinlock.h>
SPIN_LOCK_UNLOCKED
DEFINE_SPINLOCK
spin_lock_init(spinlock_t *)
spin_lock(spinlock_t *)
spin_unlock(spinlock_t *)
spin_lock_irq(spinlock_t *)
spin_unlock_irq(spinlock_t *)
spin_lock_irqsave(spinlock_t *, unsigned long flags)
spin_unlock_irqrestore(spinlock_t *, unsigned long flags)
spin_trylock(spinlock_t *)
spin_is_locked(spinlock_t *)

• Initialization
A spin lock can be initialized either statically or dynamically. A static spinlock_t object is initialized with SPIN_LOCK_UNLOCKED, which is a macro; alternatively, the declaration and initialization can be combined with the DEFINE_SPINLOCK macro. The following two lines of code are therefore equivalent:
DEFINE_SPINLOCK(lock);
spinlock_t lock = SPIN_LOCK_UNLOCKED;

The spin_lock_init function is generally used to initialize a dynamically created spinlock_t object; its parameter is a pointer to the spinlock_t object. It can, of course, also initialize a statically allocated spinlock_t object that has not yet been initialized.
spinlock_t *lock;
......
spin_lock_init(lock);

• Lock acquisition
The kernel provides three functions for acquiring a spin lock:
spin_lock: acquire the specified spin lock.
spin_lock_irq: disable local interrupts and acquire the spin lock.
spin_lock_irqsave: save the local interrupt state, disable local interrupts, acquire the spin lock, and return the saved interrupt state through the flags argument.

Spin locks may be used in interrupt handlers; in that case, the variants that disable local interrupts must be used. We recommend spin_lock_irqsave, because it saves the interrupt state before taking the lock, so the state can be restored correctly on unlock. If spin_lock_irq is used while interrupts were already disabled, unlocking will incorrectly re-enable them.

Two other functions related to spin locks are:
spin_trylock(): tries to acquire the spin lock without spinning; it returns a non-zero value if the lock was obtained and 0 if it was not.
spin_is_locked(): checks whether the specified spin lock is currently held; it returns non-zero if it is, 0 otherwise.
• Releasing locks
The kernel provides three corresponding functions for releasing a spin lock:
spin_unlock: release the specified spin lock.
spin_unlock_irq: release the spin lock and enable local interrupts.
spin_unlock_irqrestore: release the spin lock and restore the saved local interrupt state.
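Putting these calls together, a rough sketch of the typical pattern for data shared between process context and an interrupt handler is shown below; the lock my_lock and the counter shared_count are made-up names:

static DEFINE_SPINLOCK(my_lock);    /* declare and initialize the lock statically */
static int shared_count;            /* shared data protected by my_lock */

unsigned long flags;

spin_lock_irqsave(&my_lock, flags);       /* mask local interrupts and take the lock */
shared_count++;                           /* critical section: touch the shared data */
spin_unlock_irqrestore(&my_lock, flags);  /* drop the lock and restore the interrupt state */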

V. Read/write spin locks
Sometimes the data protected by a critical section is both read and written: as long as no write is in progress, concurrent reads could safely proceed, and only write operations need to be mutually exclusive. A plain spin lock obviously cannot express this requirement (it needlessly serializes readers), so the kernel provides another lock, the read/write spin lock. The read lock is also called a shared spin lock, and the write lock an exclusive spin lock.
The read/write spin lock is a finer-grained locking mechanism than the plain spin lock. It keeps the idea of "spinning", but allows at most one writer at a time while permitting multiple readers to execute simultaneously. Reads and writes, of course, cannot proceed at the same time.
Read/write spin locks are used much like plain spin locks. First, initialize the read/write lock object:
// static initialization
rwlock_t rwlock = RW_LOCK_UNLOCKED;
// dynamic initialization
rwlock_t *rwlock;
...
rwlock_init(rwlock);

In the read-side code, acquire the read lock before accessing the shared data:
read_lock(&rwlock);
...
read_unlock(&rwlock);

In the write-side code, acquire the write lock before accessing the shared data:
write_lock(&rwlock);
...
write_unlock(&rwlock);

Note that if there are many readers, a writer may spin on the write lock for a long time and suffer write starvation (it must wait until every read lock has been released), because readers are free to keep acquiring the read lock.

The read/write spin lock functions parallel the plain spin lock functions; they are listed in the table below, followed by a short sketch of the interrupt-safe variants.
RW_LOCK_UNLOCKED
rwlock_init(rwlock_t *)
read_lock(rwlock_t *)
read_unlock(rwlock_t *)
read_lock_irq(rwlock_t *)
read_unlock_irq(rwlock_t *)
read_lock_irqsave(rwlock_t *, unsigned long)
read_unlock_irqrestore(rwlock_t *, unsigned long)
write_lock(rwlock_t *)
write_unlock(rwlock_t *)
write_lock_irq(rwlock_t *)
write_unlock_irq(rwlock_t *)
write_lock_irqsave(rwlock_t *, unsigned long)
write_unlock_irqrestore(rwlock_t *, unsigned long)
rw_is_locked(rwlock_t *)
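For instance, if the shared data is also read from an interrupt handler, the process-context writer must keep local interrupts masked while holding the write lock, roughly as in this sketch (the rwlock object is the one initialized above):

unsigned long flags;

/* process context: update the data with local interrupts masked */
write_lock_irqsave(&rwlock, flags);
...
write_unlock_irqrestore(&rwlock, flags);

/* interrupt handler: a plain read lock is sufficient for readers */
read_lock(&rwlock);
...
read_unlock(&rwlock);
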
VI. Sequence locks (seqlocks)
The seqlock is an optimization of the read/write lock: with a seqlock, readers are never blocked by writers. That is, readers can keep reading while a writer is writing to the seqlock-protected shared resource; they do not have to wait for the writer to finish, and the writer does not have to wait for all readers to finish.
Writers, however, remain mutually exclusive with one another: if one writer is in progress, any other writer must spin until the first one releases the lock.
If a write occurs while a reader is in the middle of its read, the reader must re-read the data to make sure it obtained a consistent copy. Because simultaneous reads and writes are rare in practice, this lock performs very well, and by allowing reads and writes to proceed at the same time it greatly improves concurrency.
Note one restriction on seqlocks: the protected shared resource must not contain pointers, because a writer may invalidate a pointer just as a reader is about to dereference it, causing an oops.
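A typical reader/writer pair looks roughly like the sketch below; the lock my_seqlock and the two protected values are example names only:

static seqlock_t my_seqlock;   /* initialize once with seqlock_init(&my_seqlock); */
static unsigned long a, b;     /* shared data that must be read as a consistent pair */

/* writer: serialized against other writers, never blocked by readers */
write_seqlock(&my_seqlock);
a++;
b++;
write_sequnlock(&my_seqlock);

/* reader: retry if a write happened while we were reading */
unsigned int seq;
unsigned long x, y;
do {
    seq = read_seqbegin(&my_seqlock);
    x = a;
    y = b;
} while (read_seqretry(&my_seqlock, seq));
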
VII. Semaphores
In Linux, a semaphore is a sleeping lock. If a task tries to acquire a semaphore that is already held, the semaphore puts it on a wait queue and sends it to sleep; the processor is then free to execute other code. When the process holding the semaphore releases it, a task on the wait queue is woken up and acquires the semaphore.
A semaphore, or flag, implements the classic P/V primitive operations taught in operating systems courses:
P: if the semaphore value is greater than 0, decrement it and continue; otherwise sleep until the semaphore value becomes greater than 0.
V: increment the semaphore value; if the new value is greater than 0, wake up a process waiting on the semaphore.

The semaphore value determines how many processes may enter the critical section simultaneously. If the semaphore's initial value is 1, it is a mutual-exclusion semaphore (mutex). A semaphore whose initial value is greater than 1 is called a counting semaphore. The semaphores used by ordinary drivers are mutex semaphores.
Like spin locks, the implementation of semaphores is closely tied to the architecture; the concrete implementation is defined in the header file <asm/semaphore.h>. For x86_32 it is defined as follows:
struct semaphore {
    atomic_t count;
    int sleepers;
    wait_queue_head_t wait;
};

The semaphore's count is of type atomic_t, the atomic-operation type that is itself a kernel synchronization primitive, so semaphores are built on top of atomic operations; atomic operations were introduced in detail in the atomic-operations section above.

Semaphores are used much like spin locks: they are created, acquired, and released. First, a basic usage example:
static DECLARE_MUTEX(my_sem);
......
if (down_interruptible(&my_sem))
{
    return -ERESTARTSYS;
}
......
up(&my_sem);

The semaphore function interfaces in the Linux kernel are as follows:
static __DECLARE_SEMAPHORE_GENERIC(name, count);
static DECLARE_MUTEX(name);
sema_init(struct semaphore *, int);
init_MUTEX(struct semaphore *);
init_MUTEX_LOCKED(struct semaphore *);
down_interruptible(struct semaphore *);
down(struct semaphore *);
down_trylock(struct semaphore *);
up(struct semaphore *);
• Initializing semaphores
Semaphores can be initialized statically or dynamically. Static initialization declares the semaphore and initializes it in one step:
static __DECLARE_SEMAPHORE_GENERIC(name, count);
static DECLARE_MUTEX(name);

A dynamically declared or created semaphore can be initialized with the following functions:
sema_init(sem, count);
init_MUTEX(sem);
init_MUTEX_LOCKED(sem);

Obviously, the functions whose names contain MUTEX initialize mutex semaphores, and LOCKED means the semaphore starts out locked.
• Using semaphores
Once a semaphore has been initialized, we can use it:
down_interruptible(struct semaphore *);
down(struct semaphore *);
down_trylock(struct semaphore *);
up(struct semaphore *);

The down function tries to acquire the specified semaphore; if the semaphore is already held, the process enters uninterruptible sleep. down_interruptible puts the process into interruptible sleep instead. Process states are covered in detail in the discussion of kernel process management.

down_trylock tries to acquire the semaphore without sleeping; it returns 0 on success and a non-zero value immediately on failure.

When leaving the critical section, use the up function to release the semaphore; if the semaphore's sleep queue is not empty, one of the waiting processes is woken up.
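As an example, a driver that allows only one process at a time to update its open count could protect that count with a mutex semaphore; the names my_sem, open_count, and my_open are illustrative only:

#include <linux/fs.h>
#include <asm/semaphore.h>

static DECLARE_MUTEX(my_sem);   /* mutex semaphore, initial count 1 */
static int open_count;          /* shared state protected by my_sem */

static int my_open(struct inode *inode, struct file *filp)
{
    if (down_interruptible(&my_sem))   /* sleep until the semaphore is free */
        return -ERESTARTSYS;           /* interrupted by a signal before acquiring it */
    open_count++;                      /* critical section */
    up(&my_sem);                       /* release the semaphore, waking any sleeper */
    return 0;
}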

VIII. Read/write semaphores
Like spin locks, semaphores also come in a read/write variant. The read/write semaphore API is declared in the header file <linux/rwsem.h>; its definition is also architecture-dependent, so the concrete implementation lives in the header file <asm/rwsem.h>. The x86 version is shown as an example:
struct rw_semaphore {
    signed long count;
    spinlock_t wait_lock;
    struct list_head wait_list;
};

The first thing to note is that all read/write semaphores are mutex semaphores. The read lock is a shared lock: any number of reading processes may hold the semaphore at the same time. The write lock is an exclusive lock: only one writer may hold it, and while it is held both other writers and all readers are excluded. Because the read lock is shared, readers keep acquiring it successfully as long as no process holds the write lock, which can starve writers.

Read/write semaphores must be initialized before use; as you might expect, this looks almost the same as for read/write spin locks. Creating and initializing a read/write semaphore:
// static initialization
static DECLARE_RWSEM(rwsem_name);

// dynamic initialization
static struct rw_semaphore rw_sem;
init_rwsem(&rw_sem);

A reading process acquires the semaphore before accessing the protected data in the critical section:
down_read(&rw_sem);
...
up_read(&rw_sem);

A writing process acquires the semaphore before accessing the protected data in the critical section:
down_write(&rw_sem);
...
up_write(&rw_sem);

More of the read/write semaphore API is listed below:
#include <linux/rwsem.h>

DECLARE_RWSEM(name);
init_rwsem(struct rw_semaphore *);
void down_read(struct rw_semaphore *sem);
void down_write(struct rw_semaphore *sem);
void up_read(struct rw_semaphore *sem);
int down_read_trylock(struct rw_semaphore *sem);
int down_write_trylock(struct rw_semaphore *sem);
void downgrade_write(struct rw_semaphore *sem);
void up_write(struct rw_semaphore *sem);
Like spin_trylock, down_read_trylock and down_write_trylock try to acquire the semaphore without sleeping; they return 1 on success and 0 otherwise. Curiously, this is the opposite convention to down_trylock for ordinary semaphores, so be careful when using these functions.
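One function in the table deserves a short illustration: downgrade_write converts a held write lock into a read lock without ever releasing the semaphore, which lets a writer publish its update and then keep reading alongside other readers. A sketch, reusing rw_sem from above; update_shared_data and consume_shared_data are hypothetical helpers:

down_write(&rw_sem);        /* exclusive access while the shared data is modified */
update_shared_data();       /* hypothetical update of the protected data */
downgrade_write(&rw_sem);   /* atomically turn the write lock into a read lock */
consume_shared_data();      /* hypothetical read-only use; other readers may now run too */
up_read(&rw_sem);           /* finally release the read lock */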

IX. Differences between spin locks and semaphores
In a driver, when multiple threads access the same resource at the same time (a global variable in the driver is a typical shared resource), a race condition may result, so concurrent access to shared resources must be controlled. The most common mechanisms for concurrency control in the Linux kernel are the spin lock and the semaphore (which is usually used as a mutex).

Spin locks and semaphores are both alike and unlike: they are similar in function, yet completely different in nature and implementation mechanism; they do not belong to the same category.

A spin lock never causes its caller to sleep. If the lock is already held by another execution unit, the caller loops, repeatedly checking whether the holder has released it; "spin" literally means turning in place. A semaphore, by contrast, puts the caller to sleep: the process is taken off the run queue until the lock is obtained. This is how they are "unlike".

However, with either a semaphore or a spin lock, at most one holder can own the lock at any point in time, that is, at most one execution unit can hold the lock at any moment. This is how they are "alike".

Given these characteristics, a spin lock suits very short hold times and can be used in any context, whereas a semaphore suits longer hold times and can be used only in process context. If the protected shared resource is accessed only in process context, a semaphore can protect it; if the access time is very short, a spin lock is also a good choice. But if the protected shared resource must be accessed from interrupt context (both the top half, i.e. the interrupt handler, and the bottom half, i.e. softirqs and tasklets), a spin lock must be used.
The differences can be summarized as follows:
1. Because a process contending for a semaphore sleeps while waiting for the lock to become available again, semaphores are suited to locks that are held for a long time.
2. Conversely, semaphores are not appropriate when a lock is held only briefly, because the time spent sleeping and waking may well exceed the entire time the lock is occupied.
3. Because the executing thread sleeps under lock contention, a semaphore can only be acquired in process context; scheduling is impossible in interrupt context (use a spin lock there).
4. A holder of a semaphore may sleep (though it does not have to), because no deadlock results when another process tries to acquire the same semaphore: that process simply sleeps and eventually continues.
5. You must not acquire a semaphore while holding a spin lock, because you may sleep while waiting for the semaphore, and sleeping is not allowed while a spin lock is held.
6. A critical section protected by a semaphore may contain code that can block, whereas a spin lock must never protect a critical section containing such code: blocking implies a process switch, and if another process then tries to acquire the spin lock, a deadlock results.
7. Unlike spin locks, semaphores do not disable kernel preemption (the kernel cannot be preempted while a spin lock is held), so code holding a semaphore can be preempted; this means semaphores do not adversely affect scheduling latency.
Besides the synchronization mechanisms described above, the kernel also provides the BKL (Big Kernel Lock) and the seq lock.
The BKL is a global spin lock whose main purpose was to ease the transition from Linux's original SMP implementation to fine-grained locking.
The seq lock handles read/write access to shared data; its implementation relies on nothing more than a sequence counter.
