Linux Kernel Design and Implementation, Reading Notes (9): Introduction to Kernel Synchronization


Wherever shared resources exist (a shared file, a region of memory, etc.), a synchronization mechanism is needed to keep the shared data from becoming inconsistent during concurrent access.

Main content:

  1. Concept of Synchronization
  2. Synchronization Method: Locks
  3. Deadlock
  4. Lock Granularity

 

1. Concept of Synchronization

Before learning about synchronization, first understand two related concepts:

  • Critical section (also called a critical region): the segment of code that accesses and manipulates shared data.
  • Race condition: when two or more threads of execution are inside a critical section at the same time, a race condition arises.

 

Synchronization, then, is simply preventing race conditions from arising in critical sections.

If the work in a critical section is performed as an atomic operation (that is, an operation that cannot be interrupted before it completes), no race can occur.

In practice, however, the code in a critical section is rarely that simple, so a locking mechanism is introduced to maintain synchronization.

 

2. Synchronization Method: Locks

To lock critical sections correctly and keep their data synchronized, we first need to know which situations in the kernel give rise to concurrency.

 

Causes of concurrency (and thus races) in the kernel:

  • Interrupts: an interrupt can occur asynchronously at almost any time, interrupting the code currently executing. If the interrupt handler and the interrupted code manipulate the same shared data, they form a race.
  • Softirqs and tasklets: the kernel can raise or schedule a softirq or tasklet at almost any time, interrupting the currently executing code in much the same way as a hardware interrupt.
  • Kernel preemption: the kernel is preemptive, so one kernel task can preempt another. If the preempting and the preempted tasks share a critical section, a race can occur.
  • Sleeping and user-space synchronization: a process in the kernel can sleep, at which point the scheduler runs another process. The newly scheduled process may enter the same critical section as the sleeping one.
  • Symmetric multiprocessing (SMP): two or more processors can execute kernel code at exactly the same time.

 

To avoid races, work out where the critical sections are and how to lock them before writing the code.

Retrofitting locks into code that has already been written is much harder and often forces parts of it to be rewritten.

 

When writing kernel code, always keep the following questions in mind:

  1. Is the data global? Can a thread other than the current one access it?
  2. Is the data shared between process context and interrupt context? Does it need to be shared between two different interrupt handlers?
  3. Can the process be preempted while accessing the data? Could the newly scheduled process access the same data?
  4. Can the current process sleep (block) on some resource? If it does, in what state does that leave the shared data?
  5. What prevents the data from being freed or changed out from under the current code?
  6. What happens if this function runs at the same time on another processor?

 

3. Deadlock

A deadlock arises when every thread is waiting for a resource held by one of the others, so none of them can ever proceed.

The following simple rules can help us avoid deadlocks:

  1. If multiple locks are needed, make sure every thread acquires them in the same order, and release them in the reverse order of acquisition. (Lock A -> B -> C, unlock C -> B -> A.)
  2. Prevent starvation: make sure every wait can end, for example by bounding it with a timeout.
  3. Do not acquire the same lock twice.
  4. Keep the design simple: the more complicated the locking scheme, the more likely a deadlock becomes.

 

4. Lock Granularity

When locking, we should not only avoid deadlocks, but also consider the granularity of locking.

Lock granularity strongly affects the scalability of the system. When adding a lock, consider whether it will be heavily contended by multiple threads.

If it will be, the granularity of the lock should be refined.

A finer-grained lock improves performance on multiprocessor systems.

 

For example, suppose an entire linked list is protected by a single lock, and three threads A, B, and C frequently access it.

When A, B, and C try to use the list at the same time and A acquires the lock, B and C must wait until A releases it before they can touch the list at all.

 

Even if the three threads access different nodes of the list (for example, A modifies node list_a, B deletes node list_b, and C appends node list_c), and those nodes are not even adjacent, the single list-wide lock still prevents the threads from running at the same time.

 

In this case the lock can be refined: remove the single lock on the whole list and give each node its own lock. (Finer lock granularity.)

Threads A, B, and C can then access their respective nodes simultaneously, and on a multiprocessor machine in particular, performance improves noticeably.

 

Finally, note that the finer the lock granularity, the higher the overhead and the more complex the program. There is no need to refine a lock that is not heavily contended.
