Linux spin lock

This article is from a CSDN blog. Please indicate the source when reproducing: http://blog.csdn.net/yunsongice/archive/2010/05/18/5605264.aspx

Locking is a widely used synchronization technique. When a kernel control path must access a shared data structure or enter a critical section, it needs to acquire a "lock" for itself. A resource protected by a lock is much like a resource confined in a room: when someone enters the room, the door is locked behind them. A kernel control path that wants to access the resource tries to "open the door" with the key, and it succeeds only when the resource is free. Then, as long as it still wants to use the resource, the door stays locked. When the kernel control path releases the lock, the door opens and another kernel control path can enter the room.

One type of lock widely used by Linux, particularly in multiprocessor environments, is the spin lock. If a kernel control path finds the spin lock "open", it acquires the lock and continues executing. Conversely, if it finds the lock "closed" by a kernel control path running on another CPU, it "spins" around it, repeatedly executing a tight loop instruction, until the lock is released.

The looping instruction of the spin lock denotes "busy waiting". Even though the waiting kernel control path has nothing to do (other than waste time), it keeps running on the CPU. Nevertheless, spin locks are usually very convenient, because many kernel resources are locked for only about a millisecond at a time or less; waiting for a spin lock to be released therefore does not consume much CPU time.
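To make the usage concrete, here is a minimal sketch of how kernel code typically protects shared data with a spin lock; the my_lock and my_counter names are invented for illustration:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(my_lock);    /* hypothetical lock protecting my_counter */
    static unsigned long my_counter;

    static void my_update(void)
    {
            spin_lock(&my_lock);        /* busy-waits until the lock is free, then takes it */
            my_counter++;               /* critical section: at most one CPU runs this at a time */
            spin_unlock(&my_lock);      /* releases the lock so another CPU can enter */
    }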

In general, kernel preemption is disabled in every critical region protected by a spin lock. On a uniprocessor system the lock itself does nothing: the spin lock primitives merely disable and re-enable kernel preemption. Note that kernel preemption remains enabled while a kernel control path is still busy-waiting for a spin lock, because it has not yet entered the critical section; the process waiting for the spin lock to be released can therefore be replaced by a higher-priority process. This design is reasonable because it keeps a busy-waiting process from monopolizing the CPU for too long.
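As a rough sketch only (not the kernel's exact macros, whose real definitions live in include/linux/spinlock_api_up.h), the uniprocessor variants effectively reduce to a preemption bracket around the critical section; the my_up_* names below are invented:

    #include <linux/preempt.h>

    /* simplified model of the uniprocessor case: the lock variable itself
     * is never touched, only kernel preemption is switched off and on */
    #define my_up_spin_lock(lock)    do { preempt_disable(); (void)(lock); } while (0)
    #define my_up_spin_unlock(lock)  do { (void)(lock); preempt_enable(); } while (0)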

In Linux, each spin lock is represented by a spinlock_t structure:
typedef struct {
        raw_spinlock_t raw_lock;
#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
        unsigned int break_lock;
#endif
#ifdef CONFIG_DEBUG_SPINLOCK
        unsigned int magic, owner_cpu;
        void *owner;
#endif
#ifdef CONFIG_DEBUG_LOCK_ALLOC
        struct lockdep_map dep_map;
#endif
} spinlock_t;

typedef struct {
        volatile unsigned int slock;
} raw_spinlock_t;

Two important fields deserve a description:

slock: this field encodes the state of the spin lock. The value 1 means "unlocked"; 0 or any negative value means "locked".
break_lock: flags that a process is busy-waiting for the lock (this field is used only when the kernel supports both SMP and kernel preemption).

The kernel provides six macros for initializing, testing, and setting spin locks. All of them are based on atomic operations, which guarantees that a spin lock is updated correctly even when several processes running on different CPUs try to modify it at the same time.

1. spin_lock_init -- initializes the spin lock, setting the lock's raw_lock to 1 (unlocked)

#define spin_lock_init(lock)                            \
do {                                                    \
        static struct lock_class_key __key;             \
                                                        \
        __spin_lock_init((lock), #lock, &__key);        \
} while (0)

void __spin_lock_init(spinlock_t *lock, const char *name,
                      struct lock_class_key *key)
{
#ifdef CONFIG_DEBUG_LOCK_ALLOC
        /*
         * Make sure we are not reinitializing a held lock:
         */
        debug_check_no_locks_freed((void *)lock, sizeof(*lock));
        lockdep_init_map(&lock->dep_map, name, key, 0);
#endif
        lock->raw_lock = (raw_spinlock_t)__RAW_SPIN_LOCK_UNLOCKED;
        lock->magic = SPINLOCK_MAGIC;
        lock->owner = SPINLOCK_OWNER_INIT;
        lock->owner_cpu = -1;
}

#define __RAW_SPIN_LOCK_UNLOCKED        { 1 }
#define SPINLOCK_MAGIC                  0xdead4ead
#define SPINLOCK_OWNER_INIT             ((void *)-1L)
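For comparison, here is a sketch of how spin_lock_init is typically used for a lock embedded in a dynamically allocated object; statically allocated locks are usually declared with DEFINE_SPINLOCK instead. The struct my_object type and my_object_create function are made up for illustration:

    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct my_object {                       /* hypothetical object embedding a lock */
            spinlock_t lock;
            int value;
    };

    static struct my_object *my_object_create(void)
    {
            struct my_object *obj = kmalloc(sizeof(*obj), GFP_KERNEL);

            if (!obj)
                    return NULL;
            spin_lock_init(&obj->lock);      /* raw_lock.slock starts out as 1 (unlocked) */
            obj->value = 0;
            return obj;
    }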

2. spin_unlock -- releases the spin lock, setting it back to 1 (unlocked)

#if defined(CONFIG_DEBUG_SPINLOCK) || defined(CONFIG_PREEMPT) || \
        !defined(CONFIG_SMP)
# define spin_unlock(lock)      _spin_unlock(lock)
#else   /* let's focus on this branch */
# define spin_unlock(lock)      __raw_spin_unlock(&(lock)->raw_lock)
#endif

void __lockfunc _spin_unlock(spinlock_t *lock)
{
        spin_release(&lock->dep_map, 1, _RET_IP_);
        _raw_spin_unlock(lock);
        preempt_enable();
}

#define _raw_spin_unlock(lock)  __raw_spin_unlock(&(lock)->raw_lock)

static inline void __raw_spin_unlock(raw_spinlock_t *lock)
{
        __asm__ __volatile__(
                __raw_spin_unlock_string
        );
}

#define __raw_spin_unlock_string \
        "movb $1,%0" \
                : "+m" (lock->slock) : : "memory"

The spin_unlock macro releases a previously acquired spin lock. The code above essentially executes the assembly instruction

        movb $1, lock->slock

and then calls preempt_enable() (if kernel preemption is not supported, preempt_enable() does nothing). Note that an 80x86 microprocessor always performs an aligned write-only memory access atomically, so no lock prefix is needed here.

3. spin_unlock_wait -- waits until the spin lock becomes 1 (unlocked)

#define spin_unlock_wait(lock)  __raw_spin_unlock_wait(&(lock)->raw_lock)

#define __raw_spin_unlock_wait(lock) \
        do { while (__raw_spin_is_locked(lock)) cpu_relax(); } while (0)

/* slock greater than 0 means unlocked, so the while loop above exits */
#define __raw_spin_is_locked(x) \
                (*(volatile signed char *)(&(x)->slock) <= 0)

/* cpu_relax() executes a "rep; nop" (the PAUSE instruction) in the wait loop: */
#define cpu_relax()     rep_nop()

static inline void rep_nop(void)
{
        __asm__ __volatile__("rep; nop" ::: "memory");
}

4. spin_is_locked -- returns 0 if the spin lock is set to 1 (unlocked); otherwise returns 1

#define spin_is_locked(lock)    __raw_spin_is_locked(&(lock)->raw_lock)
#define __raw_spin_is_locked(x) \
                (*(volatile signed char *)(&(x)->slock) <= 0)
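As an illustration only, spin_is_locked is mostly useful for sanity checks on SMP builds, for example asserting that the caller already holds the lock (on uniprocessor kernels without debugging the raw lock does not exist, so such a check is not meaningful there). The table_lock and table_add_locked names are invented:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(table_lock);      /* hypothetical lock */
    static int table_entries;

    /* must be called with table_lock held; this check only makes sense on SMP */
    static void table_add_locked(void)
    {
            WARN_ON(!spin_is_locked(&table_lock));
            table_entries++;
    }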

5. spin_trylock -- tries to set the spin lock to 0 (locked); returns 1 if the lock's previous value was 1 (the lock was acquired), and 0 otherwise.
#define spin_trylock(lock)      __cond_lock(_spin_trylock(lock))

int __lockfunc _spin_trylock(spinlock_t *lock)
{
        preempt_disable();
        if (_raw_spin_trylock(lock)) {
                spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
                return 1;
        }

        preempt_enable();
        return 0;
}
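A usage sketch: because _spin_trylock returns with preemption disabled and the lock held only when it returns 1, the caller unlocks only on success. The stats_lock and stats_dropped names below are made up for illustration:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(stats_lock);      /* hypothetical lock */
    static unsigned long stats_dropped;

    static void my_try_update(void)
    {
            if (spin_trylock(&stats_lock)) { /* 1: lock acquired, preemption disabled */
                    stats_dropped++;
                    spin_unlock(&stats_lock);
            }
            /* on 0 the lock is not held, so there is nothing to unlock */
    }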

6. spin_lock -- acquires the lock: loops until the spin lock becomes 1 (unlocked), then sets it to 0 (locked)

spin_lock is the most important macro. First, in the include/linux/spinlock.h header file we find:

#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
# include <linux/spinlock_api_smp.h>    /* multiprocessor case */
#else
# include <linux/spinlock_api_up.h>     /* uniprocessor case */
#endif

#define spin_lock(lock)         _spin_lock(lock)

#ifdef __LINUX_SPINLOCK_API_UP_H
#define _spin_lock(lock)        __LOCK(lock)    /* uniprocessor case */
#else

In the SMP case, note that the definition of _spin_lock sits under a

#if !defined(CONFIG_PREEMPT) || !defined(CONFIG_SMP) || \
        defined(CONFIG_DEBUG_LOCK_ALLOC)

Don't let this condition worry you; the English comment below explains it. Even when kernel preemption and SMP are configured, the non-preemption spin-ops are used as long as lockdep is active, that is, as long as the CONFIG_DEBUG_LOCK_ALLOC we just saw in the definition of spinlock_t is defined, because lockdep assumes that interrupts are not re-enabled while a lock is being acquired. In other words, this branch covers the cases where kernel preemption is disabled, SMP is disabled, or lock debugging is enabled. Keep that in mind.
/*
 * If lockdep is enabled then we use the non-preemption spin-ops
 * even on CONFIG_PREEMPT, because lockdep assumes that interrupts are
 * not re-enabled during lock-acquire (which the preempt-spin-ops do):
 */
/* the SMP version of _spin_lock: */
void __lockfunc _spin_lock(spinlock_t *lock)
{
        /* disable kernel preemption */
        preempt_disable();
        /* an empty function when spin-lock debugging is not configured */
        spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
        /* equivalent to _raw_spin_lock(lock) */
        LOCK_CONTENDED(lock, _raw_spin_trylock, _raw_spin_lock);
}

When spin-lock debugging is not configured, the LOCK_CONTENDED macro is simply defined as:
#define LOCK_CONTENDED(_lock, try, lock) \
        lock(_lock)

So the _raw_spin_lock macro is called (in include/linux/spinlock.h):

#define _raw_spin_lock(lock)    __raw_spin_lock(&(lock)->raw_lock)

This brings us to include/asm-i386/spinlock.h:

static inline void __raw_spin_lock(raw_spinlock_t *lock)
{
        asm(__raw_spin_lock_string : "+m" (lock->slock) : : "memory");
}

Expanding __raw_spin_lock_string:

#define __raw_spin_lock_string \
        "\n1:\t" \
        /* atomically decrement the lock byte; if the result is not negative */ \
        /* (the lock was 1), jump to label 3 -- nothing follows label 3, so  */ \
        /* the routine simply returns with the lock held                     */ \
        LOCK_PREFIX " ; decb %0\n\t" \
        "jns 3f\n" \
        "2:\t" \
        /* "rep; nop" is the x86 PAUSE instruction, a cheap short delay      */ \
        "rep;nop\n\t" \
        /* compare lock->slock with 0; while it is not greater than 0, jump  */ \
        /* back to label 2 and keep pausing                                  */ \
        "cmpb $0,%0\n\t" \
        "jle 2b\n\t" \
        /* once lock->slock is greater than 0, jump back to label 1 and try  */ \
        /* the atomic decrement again                                         */ \
        "jmp 1b\n" \
        "3:\n\t"

In the code above, the "jmp 1b" may seem puzzling. The intuitive expectation is that acquiring a lock decrements its value by 1 and releasing it increments the value by 1. In this spin-lock implementation, however, lock->slock is 1 when the lock is free and 0 or negative when it is held (each waiter's decb can push it further below zero); when the lock is released, lock->slock is not incremented but simply set back to 1, as we saw in the spin_unlock code analyzed above.
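As an aside, the protocol implemented by that assembly can be modelled in ordinary user-space C with GCC's __atomic builtins. This is only an illustration of the algorithm under the stated assumptions, not the kernel's code; sched_yield() stands in for the "rep; nop" pause, and the model_* names are invented:

    #include <sched.h>                       /* sched_yield() */

    static volatile signed char slock = 1;   /* 1 = free, 0 or negative = held */

    static void model_spin_lock(void)
    {
            for (;;) {
                    /* like "lock; decb %0" + "jns 3f": atomically decrement and
                     * look at the old value; a positive old value means we got it */
                    if (__atomic_fetch_sub(&slock, 1, __ATOMIC_ACQUIRE) > 0)
                            return;
                    /* like the "2:" loop: pause while the byte is still <= 0 */
                    while (__atomic_load_n(&slock, __ATOMIC_RELAXED) <= 0)
                            sched_yield();
                    /* like "jmp 1b": go back and retry the atomic decrement */
            }
    }

    static void model_spin_unlock(void)
    {
            /* like "movb $1,%0": release by storing 1, not by incrementing */
            __atomic_store_n(&slock, 1, __ATOMIC_RELEASE);
    }

Like the real byte-wide x86 lock, this model tolerates only a bounded number of simultaneous waiters before the lock byte would wrap around.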
