Linux kernel design and implementation - kernel synchronization
Kernel Synchronization
Synchronization Introduction
Synchronization Concept

Critical section (also called a critical region): the code segment that accesses and manipulates shared data.

Race condition: when two or more threads of execution are inside the critical section at the same time, a race condition occurs.

Synchronization, then, is really about preventing race conditions from forming in critical sections.

If the operation in the critical section is atomic (that is, it cannot be interrupted before it completes), there is naturally no race. In practice, however, critical-section code is rarely that simple, so a locking mechanism is introduced to maintain synchronization. Locks, in turn, bring problems of their own, such as deadlock.

Conditions for deadlock: there are one or more threads of execution and one or more resources; each thread is waiting for one of the resources, but all the resources are already held. The threads wait on one another, none ever releases what it holds, no thread can proceed, and deadlock results.

Self-deadlock: if a thread of execution tries to acquire a lock it already holds, it must wait for the lock to be released; but since it is busy waiting for that lock, it will never get the chance to release it, and deadlock results.

Starvation: a thread cannot obtain the resources it needs for a long time and therefore cannot make progress.

 

Causes of concurrency

Interrupts: an interrupt can occur asynchronously at almost any time, so the currently running code can be interrupted at any point.

Softirqs and tasklets: the kernel can raise or schedule a softirq or tasklet at almost any time, interrupting the code currently executing.

Kernel preemption: because the kernel is preemptible, a task in the kernel may be preempted by another task.

Sleeping and user-space synchronization: a process executing in the kernel may sleep, which invokes the scheduler and leads to the scheduling of a new user process.

Symmetric multiprocessing: two or more processors can execute kernel code at exactly the same time.

Simple rules to avoid deadlocks

Lock ordering is key. When nested locks are used, they must always be acquired in the same order; this prevents the deadly-embrace type of deadlock. It is best to document the lock order so that others follow it too.

Prevent starvation. Ask whether the code will always finish executing: if A never occurs, will B wait forever?

Do not repeatedly request the same lock.

The more complicated the locking scheme, the more likely deadlock becomes, so keep the design simple.
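The lock-ordering rule can be sketched in userspace with POSIX mutexes rather than kernel locks (the names lock_a, lock_b, and run_ordered_demo are illustrative). Because every code path takes lock_a before lock_b, the "thread 1 holds a and wants b while thread 2 holds b and wants a" embrace cannot form:

```c
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
static long shared;

/* Every path takes lock_a before lock_b, never the reverse. */
static void *ordered_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);
        shared++;
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
    }
    return NULL;
}

long run_ordered_demo(void)
{
    pthread_t t1, t2;

    shared = 0;
    pthread_create(&t1, NULL, ordered_worker, NULL);
    pthread_create(&t2, NULL, ordered_worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared;
}
```

If one worker instead took lock_b first, the two threads could each grab one lock and wait forever for the other.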

Lock Granularity

Lock granularity describes the amount of data a lock protects. A coarse-grained lock protects a large block of data, such as all the data structures of a subsystem; a fine-grained lock protects a small block, such as a single element of a large data structure.

When locking, we should not only avoid deadlocks, but also consider the granularity of locking.

Lock granularity strongly affects the scalability of the system. When adding a lock, consider whether it will be contended frequently by multiple threads; if so, the lock should be made finer grained. Finer locking improves performance on multiprocessors.


Synchronization Method
Atomic operation

An atomic operation is one that cannot be interleaved with other code paths while it executes; kernel code can call atomic operations safely, without them being cut short midway.

Atomic operations come in two flavors: integer atomic operations and bitwise atomic operations.
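In the kernel these are the atomic_t and bit-operation interfaces. As a rough userspace analogy, not the kernel API, C11 atomics show the same idea: the read-modify-write happens as one indivisible step, so two threads incrementing concurrently lose no updates:

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_int counter;

static void *inc_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 10000; i++)
        atomic_fetch_add(&counter, 1);  /* one indivisible read-modify-write */
    return NULL;
}

int run_atomic_demo(void)
{
    pthread_t t1, t2;

    atomic_store(&counter, 0);
    pthread_create(&t1, NULL, inc_worker, NULL);
    pthread_create(&t2, NULL, inc_worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return atomic_load(&counter);       /* no lost updates: always 20000 */
}
```

With a plain (non-atomic) int, the two increments could interleave and the final count would usually fall short of 20000.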

Spinlock

The defining feature of a spin lock is that while one thread holds it, any other thread trying to acquire it busy-waits (spins) until the lock becomes available again.

Because the spinning thread burns CPU time while repeatedly trying to take the lock, spin locks are best reserved for critical sections that can be processed quickly.

Note the following when using a spin lock:

1. Spin locks are not recursive: requesting a spin lock you already hold deadlocks you on yourself.

2. Before taking a spin lock that is also used in interrupt context, the thread must disable interrupts on the current processor, to keep the lock holder from racing with an interrupt. Otherwise, after the current thread takes the spin lock, it can be interrupted inside the critical section by an interrupt handler that needs the same lock; the handler then spins waiting for the thread to release the lock, while the interrupted thread can never run again to finish the critical section and release it.
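The kernel's interface is spin_lock()/spin_unlock(), with spin_lock_irqsave() when interrupts must also be disabled. The spinning behavior itself can be sketched in userspace on a C11 atomic_flag (names are illustrative, and a real kernel spin lock does considerably more):

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_flag spinflag = ATOMIC_FLAG_INIT;
static long protected_count;

static void my_spin_lock(void)
{
    /* test_and_set returns the old value, so the loop exits only for
     * the one thread that flips the flag from clear to set. */
    while (atomic_flag_test_and_set_explicit(&spinflag, memory_order_acquire))
        ;                                /* busy-wait: this is the "spin" */
}

static void my_spin_unlock(void)
{
    atomic_flag_clear_explicit(&spinflag, memory_order_release);
}

static void *spin_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 5000; i++) {
        my_spin_lock();
        protected_count++;               /* keep the critical section short */
        my_spin_unlock();
    }
    return NULL;
}

long run_spin_demo(void)
{
    pthread_t t1, t2;

    protected_count = 0;
    pthread_create(&t1, NULL, spin_worker, NULL);
    pthread_create(&t2, NULL, spin_worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return protected_count;
}
```

The empty busy-wait loop is exactly why long critical sections waste CPU time under a spin lock.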

Take special care when spin locks interact with bottom-half processing:

1. When a bottom half shares data with process context, the bottom half can preempt process-context code, so the process context must disable bottom halves before locking the shared data and re-enable them when unlocking.

2. When an interrupt handler (top half) shares data with a bottom half, the top half can preempt the bottom half, so the bottom half must disable interrupts before locking the shared data and re-enable them when unlocking.

3. Two instances of the same tasklet never run simultaneously, so data shared only within a single tasklet needs no protection.

4. When data is shared between different tasklets, the tasklet taking the lock need not disable other tasklets, because tasklets never preempt one another on the same processor.

5. When softirqs, whether of the same or different types, share data, there is no need to disable bottom halves while holding the lock, because softirqs never preempt one another on the same processor.

Read-write spin lock

If the data protected by a critical section is both read and written, concurrent reads are safe as long as no write is in progress; only writes need to be mutually exclusive. An ordinary spin lock cannot express this (it needlessly serializes readers), so the kernel provides another kind of lock: the read/write spin lock. The read lock is also called a shared spin lock, and the write lock an exclusive spin lock.

The read/write spin lock is a finer-grained locking mechanism than the plain spin lock. It keeps the idea of spinning, but allows at most one writer at a time while permitting multiple concurrent readers; reads and writes, of course, cannot proceed simultaneously.

Spin locks provide a fast and simple way to lock. If the lock is held only briefly and the code does not sleep, a spin lock is the best choice. If the lock may be held for a long time, or the code may sleep while holding the lock, a semaphore is the better tool.

Semaphores

In Linux, a semaphore is a sleeping lock. If a task tries to acquire a semaphore that is already held, the semaphore puts it on a wait queue and puts it to sleep; the processor is then free to execute other code. When the task holding the semaphore releases it, a task on the wait queue is woken and acquires the semaphore.

1) Because a process contending for a semaphore sleeps while waiting for the lock to become available, semaphores suit locks that are held for a long time. Conversely, when a lock is held only briefly, a semaphore is a poor fit: the cost of sleeping, maintaining the wait queue, and waking up can exceed the entire time the lock is held.

2) Because the contending thread sleeps, a semaphore can only be acquired in process context; interrupt context cannot be scheduled.

3) You may sleep while holding a semaphore, because other processes attempting to acquire the same semaphore simply sleep and later resume; they do not deadlock.

4) You cannot acquire a semaphore while holding a spin lock, because you may sleep while waiting for the semaphore, and sleeping is not allowed while holding a spin lock.

5) A semaphore can allow multiple holders at the same time, while a spin lock allows at most one task to hold it at a time. This is because a semaphore has a count value: if the count is 5, for example, five threads can be in the critical section at once. A semaphore whose initial count is 1 is a binary semaphore, or mutex; a semaphore whose count is greater than 1 is called a counting semaphore. The semaphores used by ordinary drivers are mutexes.

Semaphores support two atomic operations: P/V primitive operations (also called down operations and up operations ):

P (down): if the semaphore value is greater than 0, decrement it and continue execution; otherwise sleep until the value becomes greater than 0.

V (up): increment the semaphore value; if any task is sleeping on the semaphore, wake it.

The down operation comes in two versions, one whose sleep can be interrupted by a signal and one whose sleep cannot.
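In the kernel the operations are down()/down_interruptible() and up(). The counting behavior of P and V can be demonstrated with POSIX semaphores in userspace (an analogy, not the kernel API; count_acquires is an illustrative helper): a semaphore initialized to n allows exactly n acquisitions before the next P would block.

```c
#include <semaphore.h>

/* How many non-blocking P operations succeed on a semaphore whose
 * initial count is n?  Exactly n. */
int count_acquires(unsigned int n)
{
    sem_t s;
    int got = 0;

    sem_init(&s, 0, n);            /* second arg 0: shared between threads */
    while (sem_trywait(&s) == 0)   /* P: succeeds while the count is > 0 */
        got++;                     /* count is now 0; a blocking P would sleep */
    sem_post(&s);                  /* V: raise the count from 0 back to 1 */
    sem_destroy(&s);
    return got;
}
```

With n = 1 this is exactly the mutex case: one holder at a time; with n > 1 it is a counting semaphore.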

Read-write semaphore

The relationship between a read/write semaphore and a semaphore is similar to that between a read/write spin lock and a general spin lock.

Read/write semaphores are mutexes at heart, that is, their usage count is at most 1. Adding a reader leaves the count unchanged, while adding a writer decrements it by one. In other words, the critical section protected by a read/write semaphore admits only one writer at a time, but can admit multiple readers.

All read/write semaphores use uninterruptible sleep, so each has only one version of the down operation.

Knowing when to use a spin lock versus a semaphore is important for writing good code, but in most cases the choice needs little deliberation: only a spin lock can be used in interrupt context, while only a semaphore can be held while a task sleeps.


Recommended locking method

Requirement                              Recommended lock
Low overhead locking                     Spin lock is preferred
Short lock hold time                     Spin lock is preferred
Long lock hold time                      Semaphore is preferred
Need to lock in interrupt context        Spin lock is required
Need to sleep while holding the lock     Semaphore is required

Completion variables

If one task in the kernel needs to signal another task that a particular event has occurred, a completion variable is a simple way to synchronize the two. One task performs some work while the other waits on the completion variable; when the work finishes, the first task uses the completion variable to wake the waiter. For example, the vfork() system call uses a completion variable to wake the parent process when the child process execs or exits.
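The kernel interface is init_completion(), wait_for_completion(), and complete(). A minimal userspace sketch of the same idea can be built on a mutex and condition variable; the names mirror the kernel's, but the bodies here are only an analogy:

```c
#include <pthread.h>
#include <stdbool.h>

struct completion {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    bool            done;
};

void init_completion(struct completion *c)
{
    pthread_mutex_init(&c->lock, NULL);
    pthread_cond_init(&c->cond, NULL);
    c->done = false;
}

void wait_for_completion(struct completion *c)
{
    pthread_mutex_lock(&c->lock);
    while (!c->done)                      /* sleep until the event is signalled */
        pthread_cond_wait(&c->cond, &c->lock);
    pthread_mutex_unlock(&c->lock);
}

void complete(struct completion *c)
{
    pthread_mutex_lock(&c->lock);
    c->done = true;
    pthread_cond_broadcast(&c->cond);     /* wake every waiter */
    pthread_mutex_unlock(&c->lock);
}
```

One task calls wait_for_completion() and sleeps; the worker task calls complete() when the event has occurred, and the waiter resumes.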

Sequential locks (seq locks)

This lock provides a very simple mechanism for reading and writing shared data. It is implemented mainly with a sequence counter. When the protected data is written, a lock is taken and the sequence value is incremented. Before and after reading the data, the sequence number is sampled; if the two values are equal, the read was not interrupted by a write. Furthermore, if the values are even, no write was in progress (the counter starts at 0, so taking the write lock makes the value odd and releasing it makes it even again).

When many readers share the data with only a few writers, the seq lock provides a very lightweight and scalable lock. The seq lock favors writers, however: as long as no other writer holds the lock, a write lock can always be acquired. Pending readers keep retrying the read loop described above until no writer holds the lock.
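The kernel interface is write_seqlock()/write_sequnlock() for writers and a read_seqbegin()/read_seqretry() loop for readers. Here is a stripped-down, single-writer sketch of the even/odd counter protocol (an illustration only; the memory barriers a real seqlock needs are omitted for brevity):

```c
#include <stdatomic.h>

/* The counter is even when no write is in progress, odd while one is. */
static atomic_uint seq;
static int data_x, data_y;

void write_pair(int x, int y)
{
    atomic_fetch_add(&seq, 1);    /* odd: write in progress */
    data_x = x;
    data_y = y;
    atomic_fetch_add(&seq, 1);    /* even again: write finished */
}

/* Readers retry until they see the same even sequence before and after,
 * which proves no write overlapped the read. */
void read_pair(int *x, int *y)
{
    unsigned int start;

    do {
        start = atomic_load(&seq);
        *x = data_x;
        *y = data_y;
    } while ((start & 1) || atomic_load(&seq) != start);
}
```

Note that the writer never waits for readers, which is exactly the "favors writers" property described above.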

Preemption prohibited

Because the kernel is preemptible, a process in the kernel may be stopped at any point so that a higher-priority process can run. This means a task can start running in the same critical section as the task it preempted. To avoid this, kernel preemption code uses spin locks to mark non-preemptible regions: holding a spin lock both prevents true concurrency on multiprocessor machines and disables kernel preemption.

In practice there are situations that do not need to guard against true concurrency on a multiprocessor but do need to prevent kernel preemption; these do not require a spin lock. For them, kernel preemption can be disabled with preempt_disable(). The calls nest: preempt_disable() can be called any number of times, and each call requires a matching preempt_enable(). After the final preempt_enable(), kernel preemption is re-enabled.

Order and barrier

For a given piece of code, the compiler and the processor may reorder operations during compilation and execution as an optimization, so that the actual execution order differs from the order written in the source.

Usually this causes no problem, but under concurrency the value a thread observes may differ from what is expected, as in the following code:

/*
 * Variables a and b are shared by thread A and thread B.
 * Initial values: a = 1, b = 2.
 */
int a = 1, b = 2;

/* Operations on a and b in thread A */
void thread_a(void)
{
        a = 5;
        b = 4;
}

/* Operations on a and b in thread B */
void thread_b(void)
{
        if (b == 4)
                printf("a = %d\n", a);
}

Because of compiler or processor reordering, the stores in thread A may execute with b assigned before a.

So if thread A has completed b = 4 but not yet a = 5 when thread B runs, thread B prints the initial value of a.

This contradicts our expectation: we want a to be assigned before b, so thread B should either print nothing or, if it does print, show a = 5.

 

For such concurrent cases, the kernel provides a family of barrier methods that prevent the compiler and the processor from reordering across them, guaranteeing execution order:

Method                         Description
rmb()                          Prevents loads from being reordered across the barrier
read_barrier_depends()         Prevents data-dependent loads from being reordered across the barrier
wmb()                          Prevents stores from being reordered across the barrier
mb()                           Prevents loads and stores from being reordered across the barrier
smp_rmb()                      rmb() on SMP, barrier() on UP
smp_read_barrier_depends()     read_barrier_depends() on SMP, barrier() on UP
smp_wmb()                      wmb() on SMP, barrier() on UP
smp_mb()                       mb() on SMP, barrier() on UP
barrier()                      Prevents the compiler from reordering loads or stores across the barrier

To make the small example above run correctly, thread A can be modified with a function from the table:

/* Operations on a and b in thread A */
void thread_a(void)
{
        a = 5;
        /*
         * mb() ensures that every load and store before the barrier
         * completes before any load or store after it, so a = 5 is
         * finished before b = 4.  As long as a is assigned before b,
         * thread B behaves as expected.
         */
        mb();
        b = 4;
}

References:

http://www.cnblogs.com/wang_yb/archive/2013/05/01/3052865.html
http://www.cnblogs.com/pennant/archive/2012/12/28/2833383.html
Linux kernel design and implementation


Linux kernel design and implementation

This is a good book. It treats basic operating-system concepts at a fairly theoretical level, but always in the context of Linux, so it conveys the Linux kernel's overall design ideas well. Understanding the Linux Kernel goes deeper: it quotes a great deal of code and analyzes it in fine detail, which makes it denser and harder to approach. Both are excellent. Start with Linux kernel design and implementation, read it alongside the source code (otherwise it is hard to get started or to understand anything in depth), and move on to the deeper book once you have some footing.

Understanding the Linux Kernel

Both are worth serious study, and the English editions are better still, provided your English is up to it.
 
