"Linux kernel design and implementation" reading notes (ix)-Kernel synchronization Introduction


When a shared resource exists (a shared file, a region of memory, and so on), a synchronization mechanism is introduced to keep the shared data from becoming inconsistent during concurrent access.

Main content:

    1. The concept of synchronization
    2. Methods of synchronization - locking
    3. Deadlock
    4. Lock granularity

1. The concept of synchronization

Before discussing synchronization, two related concepts need to be understood:

    • Critical section (also called a critical region) - the code segment that accesses and manipulates shared data.
    • Race condition - when two or more threads execute in the same critical section at the same time, a race condition is formed.

Synchronization, in essence, means preventing race conditions from forming in critical sections.

If the critical section is atomic (that is, the whole operation cannot be interrupted before it completes), then naturally no race condition can arise.

In practice, however, the code in a critical section is rarely that simple, so a locking mechanism is introduced to maintain synchronization.

2. Methods of synchronization - locking

Before locking critical sections to keep shared data consistent, it helps to understand which situations in the kernel give rise to concurrency.

Causes of race conditions in the kernel:

    • Interrupts - an interrupt can occur at almost any time, interrupting the currently executing code. If the interrupt handler and the interrupted code are in the same critical section, a race condition is created.
    • Soft interrupts and tasklets - the kernel can raise or schedule softirqs and tasklets at almost any time, interrupting the currently executing code in much the same way as interrupts do.
    • Kernel preemption - the kernel is preemptive, so one kernel task can preempt another; if the preempting task and the preempted task are in the same critical section, a race condition is created.
    • Sleeping and user-space synchronization - when a user process sleeps, the scheduler wakes up another process; the newly scheduled process and the sleeping process may be in the same critical section.
    • Symmetric multiprocessing - two or more processors can execute the same code at exactly the same time.

To avoid race conditions when writing kernel code, work out where the critical sections are and how to lock them before writing the code.

Adding locks after the code is written is very difficult and is likely to force parts of the code to be rewritten.

When writing kernel code, always keep these questions in mind:

    1. Is this data global? Can threads other than the current one access it?
    2. Is this data shared between process context and interrupt context? Is it shared between two different interrupt handlers?
    3. Can the current process be preempted while accessing the data? Will the newly scheduled process access the same data?
    4. Can the current process sleep (block) on some resource, and if so, in what state does that leave the shared data?
    5. What prevents the data from being freed out from under the current access?
    6. What happens if this function is scheduled to run on another processor?

3. Deadlock

A deadlock occurs when every thread is waiting for a resource held by another, so none of them can proceed.

Here are some simple rules that help avoid deadlocks:

    1. If multiple locks are needed, make sure every thread acquires them in the same order and releases them in the reverse order. (i.e. lock a -> b -> c, unlock c -> b -> a)
    2. Prevent starvation: set a timeout so that a thread does not wait forever.
    3. Do not acquire the same lock twice.
    4. Keep the design simple. The more complex the locking scheme, the more likely it is to deadlock.

4. Lock granularity

When locking, it is not enough to avoid deadlocks; the granularity of the lock must also be considered.

Lock granularity has a great impact on the scalability of the system; when locking, consider whether the lock will be frequently contended by multiple threads.

If a lock is likely to be frequently contended, its granularity needs to be refined.

Finer-grained locks can improve performance on multiprocessor systems.

For example, suppose a whole linked list is protected by a single lock, while three threads A, B, and C frequently access the list.

When A, B, and C access the list at the same time, if A acquires the lock, then B and C can only wait for A to release it before they can touch the list.

Yet if A, B, and C are actually accessing different nodes of the list (say A modifies one node, B deletes a second, and C appends a third), and those nodes are not adjacent, the three threads could safely run at the same time.

In that case the lock can be refined: remove the single lock on the whole list and instead give each node its own lock. (i.e. refine the lock granularity)

Then A, B, and C can access their respective nodes simultaneously, and performance improves significantly, especially on multiprocessor systems.

Finally, a reminder: the finer the lock granularity, the greater the system overhead and the more complex the program, so there is no need to refine locks for data that is not frequently contended.

"Linux kernel design and implementation" reading notes (ix)-Kernel synchronization Introduction
