Implementation of the Linux 2.6 Mutex: Source Code Analysis


http://blog.csdn.net/tq02h2a/article/details/4317211

Looking at the Linux 2.6 kernel source, the following analysis walks through how a mutex (mutual exclusion lock) is implemented on the x86 architecture.

Code Analysis

1. First, the data structure used by the mutex:

struct mutex {
	/*
	 * Reference counter:
	 *   1: the lock is free and can be acquired
	 *   less than or equal to 0: the lock is held; contenders must wait
	 */
	atomic_t count;

	/* Spinlock protecting the wait queue against concurrent access on multiple CPUs. */
	spinlock_t wait_lock;

	/* Wait queue: if the lock is already taken, the task sleeps on this list until woken. */
	struct list_head wait_list;
};
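Before diving into the lock and unlock paths, a quick illustration of how kernel code typically uses this API (an illustrative snippet, not from the analyzed source; DEFINE_MUTEX statically initializes count to 1):

#include <linux/mutex.h>

static DEFINE_MUTEX(my_lock);	/* statically initialized: count = 1 (free) */
static int shared_counter;	/* the data the mutex protects */

static void update_counter(void)
{
	mutex_lock(&my_lock);	/* may sleep, so never call from interrupt context */
	shared_counter++;	/* critical section */
	mutex_unlock(&my_lock);
}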

2. The mutex lock function

void inline __sched mutex_lock(struct mutex *lock)

calls the macro:

	__mutex_fastpath_lock(&lock->count, __mutex_lock_slowpath);

What the macro does: it decrements the mutex's reference counter by 1 and returns if the result is not negative. If the result is negative, the fallback function __mutex_lock_slowpath must be called; that function is analyzed further below. First, the macro itself:
#define __mutex_fastpath_lock(count, fail_fn)				\
do {									\
	unsigned int dummy;						\
									\
	/* Check the validity of the parameter types. */		\
	typecheck(atomic_t *, count);					\
	typecheck_fn(void (*)(atomic_t *), fail_fn);			\
									\
	/* count goes in through eax, dummy comes back out of it;	\
	 * the locked decl atomically decrements *count by 1. */	\
	asm volatile(LOCK_PREFIX "decl (%%eax)\n"			\
		     "jns 1f\n"						\
		     /* The decrement went negative: call the		\
		      * fallback, which may block the process. */	\
		     "call " #fail_fn "\n"				\
		     "1:\n"						\
		     : "=a" (dummy)					\
		     : "a" (count)					\
		     : "memory", "ecx", "edx");				\
} while (0)
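For comparison, architectures without a hand-optimized assembly fastpath get a plain C version; the sketch below follows the generic helper in include/asm-generic/mutex-dec.h (quoted from memory, so treat the exact form as approximate). It makes the semantics of the x86 assembly obvious: one locked decrement plus a sign check.

static inline void
__mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
	/* Atomically decrement; a negative result means the lock is
	 * contended, so fall back to the (possibly sleeping) slowpath. */
	if (unlikely(atomic_dec_return(count) < 0))
		fail_fn(count);
}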

3. The callback (slowpath) function

(The source shown here is the killable variant, __mutex_lock_killable_slowpath; __mutex_lock_slowpath, the callback named above, has the same structure but passes TASK_UNINTERRUPTIBLE.)

static noinline int __sched
__mutex_lock_killable_slowpath(atomic_t *lock_count)
{
	/* Recover the enclosing structure's address from the member's address. */
	struct mutex *lock = container_of(lock_count, struct mutex, count);

	/* This function is described in detail below. */
	return __mutex_lock_common(lock, TASK_KILLABLE, 0, _RET_IP_);
}
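container_of() deserves a brief aside, since both the lock and unlock slowpaths rely on it. A simplified sketch of the idea (the real kernel macro also type-checks the member via typeof; my_container_of is a hypothetical name used here for illustration):

#include <stddef.h>	/* offsetof */

/* Given a pointer to a member, subtract the member's byte offset
 * within the enclosing type to recover the enclosing object. */
#define my_container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* So inside the slowpath, with lock_count == &some_mutex->count:
 *   struct mutex *lock = my_container_of(lock_count, struct mutex, count);
 * yields &some_mutex. */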

4. Where a blocked process actually acquires the lock

static inline int __sched
__mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
		    unsigned long ip)
{
	/* Address of the current process's task_struct. */
	struct task_struct *task = current;
	struct mutex_waiter waiter;
	unsigned int old_val;
	unsigned long flags;

	/* Take the spinlock on the lock's wait queue to guard against other CPUs. */
	spin_lock_mutex(&lock->wait_lock, flags);

	/* Add this task to the lock's wait queue. */
	list_add_tail(&waiter.list, &lock->wait_list);
	waiter.task = task;

	/*
	 * An atomic exchange instruction swaps in the value -1: lock->count
	 * becomes -1 and old_val receives its previous value, atomically.
	 */
	old_val = atomic_xchg(&lock->count, -1);
	/* If lock->count was previously 1, the lock was free and is now ours. */
	if (old_val == 1)
		goto done;

	lock_contended(&lock->dep_map, ip);

	for (;;) {
		/* Try to take the lock again, the same way as above. */
		old_val = atomic_xchg(&lock->count, -1);
		if (old_val == 1)
			break;

		/*
		 * If the process is interruptible (or killable) and a signal
		 * (or fatal signal) has been delivered to it, remove it from
		 * the wait queue and bail out.
		 */
		if (unlikely((state == TASK_INTERRUPTIBLE &&
			      signal_pending(task)) ||
			     (state == TASK_KILLABLE &&
			      fatal_signal_pending(task)))) {
			mutex_remove_waiter(lock, &waiter,
					    task_thread_info(task));
			mutex_release(&lock->dep_map, 1, ip);
			spin_unlock_mutex(&lock->wait_lock, flags);

			debug_mutex_free_waiter(&waiter);
			/* Return "interrupted by signal". */
			return -EINTR;
		}
		__set_task_state(task, state);

		/*
		 * The lock is still unavailable: drop the spinlock, let the
		 * scheduler run something else, and retake the spinlock when
		 * schedule() returns. Then repeat the steps above.
		 */
		spin_unlock_mutex(&lock->wait_lock, flags);
		schedule();
		spin_lock_mutex(&lock->wait_lock, flags);
	}

	/* Reaching here means the lock has been acquired. */
done:
	lock_acquired(&lock->dep_map);
	/* Remove the task from the wait queue. */
	mutex_remove_waiter(lock, &waiter, task_thread_info(task));
	debug_mutex_set_owner(lock, task_thread_info(task));

	/* If the wait queue is now empty, set lock->count to 0 (held, no waiters). */
	if (likely(list_empty(&lock->wait_list)))
		atomic_set(&lock->count, 0);

	spin_unlock_mutex(&lock->wait_lock, flags);

	debug_mutex_free_waiter(&waiter);

	return 0;
}
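The heart of the function is the atomic_xchg(&lock->count, -1) trick: unconditionally write -1 ("held, possible waiters") and inspect the old value. A minimal user-space model of that acquire loop, using C11 atomics instead of the kernel API (toy_mutex and its functions are hypothetical names, and sched_yield() crudely stands in for sleeping on the wait queue via schedule()):

#include <stdatomic.h>
#include <sched.h>

typedef struct {
	atomic_int count;	/* 1 = free, -1 = held/contended */
} toy_mutex;

static void toy_mutex_lock(toy_mutex *m)
{
	/* Swap -1 in; only the caller that saw the old value 1 wins. */
	while (atomic_exchange(&m->count, -1) != 1)
		sched_yield();	/* the kernel sleeps on the wait queue instead */
}

static void toy_mutex_unlock(toy_mutex *m)
{
	atomic_store(&m->count, 1);	/* free again; a waiter's exchange can now win */
}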

5. The unlocking process

void __sched mutex_unlock(struct mutex *lock)
{
	/* On unlock, the counter is incremented; in the uncontended case
	 * lock->count goes from 0 back to 1. */
	__mutex_fastpath_unlock(&lock->count, __mutex_unlock_slowpath);
}

The macro increments the reference counter by 1. If the result is still less than or equal to 0, there are tasks on the wait queue waiting to acquire the lock, so the __mutex_unlock_slowpath function is called.

#define __mutex_fastpath_unlock(count, fail_fn)				\
do {									\
	unsigned int dummy;						\
									\
	typecheck(atomic_t *, count);					\
	typecheck_fn(void (*)(atomic_t *), fail_fn);			\
									\
	asm volatile(LOCK_PREFIX "incl (%%eax)\n"			\
		     "jg 1f\n"						\
		     "call " #fail_fn "\n"				\
		     "1:\n"						\
		     : "=a" (dummy)					\
		     : "a" (count)					\
		     : "memory", "ecx", "edx");				\
} while (0)
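Again, a generic C counterpart (along the lines of include/asm-generic/mutex-dec.h, quoted from memory) makes the logic plain: increment, and if the result is still not positive there must be waiters.

static inline void
__mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
	/* A result <= 0 after the increment means tasks are queued,
	 * so take the slowpath to wake one of them up. */
	if (unlikely(atomic_inc_return(count) <= 0))
		fail_fn(count);
}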

That slowpath in turn calls __mutex_unlock_common_slowpath:

static noinline void
__mutex_unlock_slowpath(atomic_t *lock_count)
{
	__mutex_unlock_common_slowpath(lock_count, 1);
}

static inline void
__mutex_unlock_common_slowpath(atomic_t *lock_count, int nested)
{
	/* Recover the enclosing structure's address from the member's address. */
	struct mutex *lock = container_of(lock_count, struct mutex, count);
	unsigned long flags;

	/* Take the spinlock protecting the wait queue. */
	spin_lock_mutex(&lock->wait_lock, flags);
	mutex_release(&lock->dep_map, nested, _RET_IP_);
	debug_mutex_unlock(lock);

	if (__mutex_slowpath_needs_to_unlock())
		atomic_set(&lock->count, 1);

	/*
	 * Check whether the wait queue is empty. If it is, nothing more
	 * needs to be done; otherwise wake up the first task on the queue.
	 */
	if (!list_empty(&lock->wait_list)) {
		struct mutex_waiter *waiter =
			list_entry(lock->wait_list.next,
				   struct mutex_waiter, list);

		debug_mutex_wake_waiter(lock, waiter);

		wake_up_process(waiter->task);
	}

	debug_mutex_clear_owner(lock);

	spin_unlock_mutex(&lock->wait_lock, flags);
}

Summary: a mutex maintains a wait queue and a reference counter. Before acquiring the lock, the reference counter is decremented by 1; if the result is non-negative, the lock has been obtained and the task enters the critical section. Otherwise, the task is put to sleep on the wait queue until a lock holder releases the lock and wakes it.
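To make the counter's state machine concrete, here is a worked trace (derived from the code above) of two tasks, A and B, contending for a mutex whose count starts at 1:

  • A calls mutex_lock: the fastpath decl takes count from 1 to 0 (not negative), so A owns the lock without ever entering the slowpath.

  • B calls mutex_lock: decl takes count from 0 to -1 (negative), so B enters __mutex_lock_common, queues itself on wait_list, fails its atomic_xchg attempts (the old value is -1, not 1), and sleeps in schedule().

  • A calls mutex_unlock: incl takes count from -1 to 0 (not greater than 0), so the slowpath runs, sets count back to 1, and wakes B.

  • B resumes inside the for (;;) loop: atomic_xchg(&lock->count, -1) now returns 1, so B breaks out, removes itself from the now-empty wait_list, and sets count to 0: locked, no waiters.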

