Locking and unlocking in UNIX operating systems: waiting for events and wakeup


The basic idea of locking and unlocking is that a process entering a critical section holds a lock of some kind (in UNIX this is generally a semaphore; in Linux it is generally a semaphore, an atomic variable, or a spinlock). When another process tries to take the lock for the critical section before it has been released, that process is put to sleep and placed, with a certain priority, on the queue of processes waiting for the lock.
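
The same idea can be sketched in user space. Below is a minimal analogue (not kernel code, and not the mechanism described later in this article): a POSIX mutex guards the bookkeeping and a condition variable plays the role of the "event" slept on. The names resource_busy, acquire_resource, and release_resource are invented for illustration.

/* Minimal user-space analogue of sleeping on a lock and being woken up.
 * Illustrative only; the kernel versions discussed below operate on processes. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  event = PTHREAD_COND_INITIALIZER;
static int resource_busy = 0;              /* the "lock" on the shared resource */

static void acquire_resource(void)
{
    pthread_mutex_lock(&lock);
    while (resource_busy)                  /* already held: sleep on the event */
        pthread_cond_wait(&event, &lock);
    resource_busy = 1;                     /* we now own the critical section */
    pthread_mutex_unlock(&lock);
}

static void release_resource(void)
{
    pthread_mutex_lock(&lock);
    resource_busy = 0;                     /* release the lock ... */
    pthread_cond_broadcast(&event);        /* ... and wake up every waiter */
    pthread_mutex_unlock(&lock);
}

static void *worker(void *arg)
{
    acquire_resource();
    printf("thread %ld in critical section\n", (long) arg);
    release_resource();
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *) 1L);
    pthread_create(&t2, NULL, worker, (void *) 2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}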

When the lock is released, that is, when the wakeup event occurs, the kernel picks a process from the priority queue of processes waiting on the lock and sets it to the ready state, where it waits to be scheduled. In System V, waiting for an event is called sleeping (sleep on an event), so "sleep" is used in the sections below; waiting for an event can equally be described as waiting for a lock. Note that the sleep discussed in this article and the sleep() system call are different things.

The implementation maps a set of events to a set of kernel virtual addresses (the locks), and an event does not record how many processes are waiting on it. This leads to two seemingly irregular behaviors:

1. When an event occurs, every process waiting for that event is awakened (not just one process) and set to the ready-to-run state. The kernel then selects (schedules) one of them to execute. Because the System V kernel is not preemptible (the Linux kernel can be), the other processes either stay ready and wait to be scheduled, go back to sleep (the lock may now be held by the process that got to run, or that process may itself sleep waiting for another event), or simply wait until the running process returns to user mode, where it can be preempted. A user-space sketch of this wake-all-and-recheck pattern follows after this list.

2. Multiple events can be mapped to the same address (lock). Suppose events e1 and e2 are mapped to the same address addr, one group of processes is waiting for e1, and another group is waiting for e2. The events they wait for are different, but the corresponding lock is the same. If e2 occurs, all processes sleeping on addr are awakened to the ready state; the processes that were waiting for e1 find that e1 has still not occurred and go back to sleep. It might seem that a one-to-one mapping of events to addresses would be more efficient, but in practice this hardly matters: System V is a non-preemptible kernel, such many-to-one mappings are rare, and the running process usually releases the resource quickly, before the other processes are scheduled. So the mapping does not noticeably reduce performance.
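
Here is a small user-space sketch of point 2, again using POSIX threads rather than kernel code. Two logically different events (e1_done and e2_done, names invented here) share one condition variable, so posting one event wakes waiters of both; each waiter simply rechecks its own predicate and sleeps again if its event has not occurred.

/* Two different events mapped onto one wait "address" (one condition variable).
 * Illustrative sketch only; names are invented. */
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  addr = PTHREAD_COND_INITIALIZER;   /* shared "sleep address" */
static int e1_done = 0, e2_done = 0;

static void wait_for_e1(void)
{
    pthread_mutex_lock(&lock);
    while (!e1_done)                    /* woken because e2 happened? sleep again */
        pthread_cond_wait(&addr, &lock);
    pthread_mutex_unlock(&lock);
}

static void post_event(int *flag)       /* post e1 or e2 on the shared address */
{
    pthread_mutex_lock(&lock);
    *flag = 1;
    pthread_cond_broadcast(&addr);      /* wakes waiters of both events */
    pthread_mutex_unlock(&lock);
}

static void *waiter(void *arg)
{
    (void) arg;
    wait_for_e1();
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);
    post_event(&e2_done);               /* wakes the e1 waiter, which re-sleeps */
    post_event(&e1_done);               /* now the waiter really proceeds */
    pthread_join(t, NULL);
    return 0;
}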

The following describes the sleep and wakeup algorithms.

// Pseudocode

sleep(address(event), priority)

Return value: 1 if the process is awakened by a signal it catches; the longjmp algorithm is executed if it is awakened by a signal it does not catch; 0 otherwise.

{
    raise the processor execution level to block all interrupts;   // avoid race conditions
    set the process state to sleep;
    put the process on a sleep hash queue, based on the event;     // each event (sleep address) generally has its own wait queue
    save the input priority in the process table entry;
    if (the sleep is not interruptible)
        // There are two kinds of sleep: interruptible and uninterruptible. Uninterruptible sleep
        // means nothing except the awaited event (not even a signal) can wake the process.
        // It is not used often.
    {
        do a context switch;    // the process context is saved and the kernel switches to another process;
                                // the switch back happens elsewhere, when the kernel selects this
                                // context to run again, i.e. when the process is awakened
        restore the processor level to allow interrupts;
        return 0;
    }
    // here, the sleep can be interrupted by signals
    if (no signal is pending against the process)
    {
        do a context switch;
        if (no signal is pending against the process)
        {
            restore the processor level to allow interrupts;
            return 0;
        }
    }
    // there is a pending signal
    if (the process is still on the sleep hash queue)
        remove it from the queue;
    restore the processor level to allow interrupts;
    if (the process catches the signal)
        return 1;
    do the longjmp algorithm;   // (the original author notes this step was not fully understood)
}

void wakeup(address(event))

{
    raise the processor execution level to block all interrupts;
    find the sleep hash queue based on the address (event);
    for (every process sleeping on the event)
    {
        remove the process from the hash queue;
        set its state to ready to run;
        put the process on the scheduler's run list;
        clear the sleep address (event) field in its process table entry;
        if (the process is not loaded in memory)
            wake up the swapper process;
        else if (the awakened process is more eligible to run than the current one)
            set the scheduler flag;
    }
    restore the processor level to allow interrupts;
}

After wakeup() runs, the awakened process is not put into execution immediately; it merely becomes eligible to run. It will not actually run until the scheduler selects it in a later scheduling pass.

Sample Code:

The UNIX source code could not be found, so Linux source code is used instead. It handles things more simply than the pseudocode above; Linux 0.01 is simpler still.

// Linux 0.01 implementation:

void sleep_on(struct task_struct **p)
{
    struct task_struct *tmp;

    if (!p)
        return;
    if (current == &(init_task.task))       // the 'current' macro yields the task_struct of the running process
        panic("task[0] trying to sleep");
    tmp = *p;                                // save the process (p2) already sleeping here, if any
    *p = current;                            // the current process (p1) becomes the head of the wait
                                             // "queue" (only one pointer is actually stored)
    current->state = TASK_UNINTERRUPTIBLE;   // put p1 into uninterruptible sleep
    schedule();                              // context switch: run other processes
    // execution continues here after p1 is awakened
    if (tmp)
        tmp->state = 0;                      // set p2 runnable so it gets scheduled in turn
}

void interruptible_sleep_on(struct task_struct **p)
{
    struct task_struct *tmp;

    if (!p)
        return;
    if (current == &(init_task.task))
        panic("task[0] trying to sleep");
    tmp = *p;                                // save the process (p2) already waiting here, if any
    *p = current;                            // the current process (p1) becomes the head of the wait queue
repeat:
    current->state = TASK_INTERRUPTIBLE;     // put p1 into interruptible sleep
    schedule();                              // context switch
    if (*p && *p != current) {               // another process went to sleep on *p after p1
        (**p).state = 0;                     // wake that newer sleeper first
        goto repeat;                         // and go back to sleep ourselves
    }
    *p = NULL;
    if (tmp)
        tmp->state = 0;                      // set p2 runnable, waiting for scheduling
}

These two functions are not easy to understand, mainly because of the last two statements. Before schedule() is called, the current process's context is switched out; after control switches back, the process that was previously sleeping is set to the ready state. Before schedule() executes, the pointers look like this (the original image was lost, so the diagram is redrawn in ASCII):

         p
         |
         v
     +------+    Step 3    +---------+
     |  *p  |------------->| current |
     +------+              +---------+
         |
         X   Step 1
         |
         v
 +--------------+   Step 2   +-----+
 | Wait Process |<-----------| tmp |
 +--------------+            +-----+

Things change after schedule() returns to this code. Because of step 3, the current process is now the one that goes to sleep, while tmp keeps a pointer to the process that was sleeping before. When schedule() eventually switches back, the code continues to run as current, and tmp still points to that earlier waiting process, which the code then sets to the ready state so it can wait for the next scheduling pass.
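
To make the pointer game more concrete, here is a small hypothetical user-space model (the structures and names are invented for illustration, and nothing actually sleeps): it traces two processes A and B calling sleep_on() on the same pointer, and shows how the chain of stack-held tmp pointers is unwound in reverse order.

/* Toy model of the Linux 0.01 chain: each new sleeper saves the previous head of
 * the "queue" in a local variable on its own stack, so the wait queue is really a
 * chain of stack frames. */
#include <stdio.h>

struct task { const char *name; int state; };   /* 0 = runnable, 1 = sleeping */

/* What sleep_on() does to the pointers before calling schedule(). */
static struct task *go_to_sleep(struct task **p, struct task *self)
{
    struct task *tmp = *p;      /* remember the previous sleeper (may be NULL) */
    *p = self;                  /* the wait pointer holds only the newest sleeper */
    self->state = 1;
    return tmp;                 /* lives on the caller's stack, like 'tmp' above */
}

/* What the tail of sleep_on() does once the process runs again. */
static void after_wakeup(struct task *tmp)
{
    if (tmp)
        tmp->state = 0;         /* pass the wakeup down to the previous sleeper */
}

int main(void)
{
    struct task A = { "A", 0 }, B = { "B", 0 };
    struct task *wait = NULL;                   /* plays the role of *p */

    struct task *tmpA = go_to_sleep(&wait, &A); /* wait -> A, tmpA = NULL */
    struct task *tmpB = go_to_sleep(&wait, &B); /* wait -> B, tmpB = A    */

    /* wake_up(&wait) only touches the newest sleeper, B ... */
    wait->state = 0;
    wait = NULL;

    /* ... and B, back from schedule(), wakes its predecessor A via tmp. */
    after_wakeup(tmpB);
    after_wakeup(tmpA);

    printf("A: %d, B: %d\n", A.state, B.state); /* prints "A: 0, B: 0" */
    return 0;
}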

Compared with the previous two functions, wake_up() is quite simple:

// The awakened process is not run immediately; it only becomes eligible to run.
void wake_up(struct task_struct **p)
{
    if (p && *p) {
        (**p).state = 0;    // set the process to be awakened to the ready state
        *p = NULL;          // take it off the wait pointer
    }
}

With sleep_on() and wake_up(), a resource can be locked. The hard-disk buffer is a typical example: a process waits until the buffer becomes available, and the releaser wakes up the waiting process:

// Locking a buffer head (hd.c):
static inline void lock_buffer(struct buffer_head *bh)
{
    if (bh->b_lock)
        printk("hd.c: buffer multiply locked\n");
    bh->b_lock = 1;
}

static inline void unlock_buffer(struct buffer_head *bh)
{
    if (!bh->b_lock)
        printk("hd.c: free buffer being unlocked\n");
    bh->b_lock = 0;
    wake_up(&bh->b_wait);
}

static inline void wait_on_buffer(struct buffer_head *bh)
{
    cli();                      // disable interrupts
    while (bh->b_lock)
        sleep_on(&bh->b_wait);
    sti();                      // re-enable interrupts
}

// The sleep and wake_up implementation of Linux 0.99.15 (with real wait queues):

static inline void __sleep_on(struct wait_queue **p, int state)
{
    unsigned long flags;
    struct wait_queue wait = { current, NULL };

    if (!p)
        return;
    if (current == task[0])
        panic("task[0] trying to sleep");
    current->state = state;
    add_wait_queue(p, &wait);       // add the current process to the wait queue
    save_flags(flags);              // save the interrupt flags
    sti();                          // enable interrupts
    schedule();                     // context switch
    remove_wait_queue(p, &wait);    // remove the current process from the wait queue
    restore_flags(flags);           // restore the interrupt flags
}

void wake_up(struct wait_queue **q)
{
    struct wait_queue *tmp;
    struct task_struct *p;

    if (!q || !(tmp = *q))
        return;
    do {                            // wake every process on the wait queue
        if ((p = tmp->task) != NULL) {
            if ((p->state == TASK_UNINTERRUPTIBLE) ||
                (p->state == TASK_INTERRUPTIBLE)) {
                p->state = TASK_RUNNING;
                if (p->counter > current->counter)
                    need_resched = 1;
            }
        }
        if (!tmp->next) {
            printk("wait_queue is bad (eip = %08lx)\n",
                   ((unsigned long *) q)[-1]);
            printk("        q = %p\n", q);
            printk("       *q = %p\n", *q);
            printk("      tmp = %p\n", tmp);
            break;
        }
        tmp = tmp->next;
    } while (tmp != *q);
}
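
For context, here is a hedged sketch of how the buffer helpers above are typically used: the requesting process locks the buffer, issues the I/O, and sleeps on it; the disk interrupt path unlocks the buffer, which wakes the sleeper. The functions read_block_sketch(), start_disk_io(), and on_disk_interrupt() are invented placeholders, not the actual hd.c request code, and the sketch assumes the declarations from the block above are in scope.

// Hypothetical usage pattern (simplified; not the actual hd.c code):
void read_block_sketch(struct buffer_head *bh)
{
    lock_buffer(bh);        // mark the buffer busy for the duration of the I/O
    start_disk_io(bh);      // placeholder: hand the request to the driver
    wait_on_buffer(bh);     // sleep on &bh->b_wait until the I/O completes
    // the buffer contents are now valid
}

void on_disk_interrupt(struct buffer_head *bh)
{
    // placeholder completion path
    unlock_buffer(bh);      // clears b_lock and calls wake_up(&bh->b_wait)
}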
