Preemption scheduling, which is how the scheduler supports the needs of real-time systems, is a fast-response rescheduling mechanism. Since it is tied to rescheduling, let us first review scheduling and rescheduling.
There are two types of scheduling:
1. Voluntary scheduling: the code actively calls schedule() to yield the CPU, for example sleep, or a blocking wait on a mutex or semaphore. In addition, when a process (or thread) exits (do_exit), it also calls schedule(), but it must never resume execution afterwards; if it does, that is a bug.
2. Involuntary rescheduling: the current thread is marked as needing rescheduling when an interrupt occurs or a system call is made, and schedule() is then called passively on the return path in entry.S. Preemption scheduling is also rescheduling, but it does not have to wait for an interrupt to occur or a system call to be made and then leave through entry.S to schedule; instead, it schedules immediately when the code leaves a preemption-protected critical section.
In general, before scheduling voluntarily, the current thread hooks its own task_struct onto a wait queue (such as a wait_queue_t), sets its own state to TASK_INTERRUPTIBLE or TASK_UNINTERRUPTIBLE (whether a signal is allowed to wake it up or not), and finally calls schedule(). In this case, schedule() removes the current task_struct from the run queue it resides on.
For rescheduling and preemption scheduling, by contrast, schedule() does not remove the current task_struct from the run queue it resides on.
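To make the voluntary pattern concrete, here is a minimal sketch using the standard wait-queue API; the wait queue head my_wq and the flag condition are assumptions made up for the example:

    #include <linux/sched.h>
    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(my_wq);  /* assumed wait queue head */
    static int condition;                   /* assumed wakeup condition */

    static void wait_for_condition(void)
    {
            DEFINE_WAIT(wait);

            /* Hook our task_struct onto the wait queue and set our state
             * to TASK_INTERRUPTIBLE (a signal may wake us up). */
            prepare_to_wait(&my_wq, &wait, TASK_INTERRUPTIBLE);
            if (!condition)
                    schedule();     /* voluntary: yield the CPU; schedule()
                                       takes us off the run queue */
            /* A wake_up(&my_wq) elsewhere, or a signal, puts us back. */
            finish_wait(&my_wq, &wait);
    }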
Rescheduling is divided into two steps:
1. Set TIF_NEED_RESCHED on the currently running thread; that thread may at this moment be inside a preemption-protected critical section. Usually, when the thread being woken up belongs to the current CPU, the reschedule flag can be set directly.
2. A. On leaving a preemption-protected critical section, immediately check whether rescheduling is needed, and if so, schedule right away.
B. At the end of interrupt handling, softirq handling, or a system call, reschedule before returning to the original code (a sketch follows this list).
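Case B is implemented in assembly in entry.S, but conceptually it is just a flag check; rendered as C it amounts to the following sketch (the function name is hypothetical):

    /* Conceptual C rendering of the entry.S return-path check (step 2.B). */
    static void return_path_check(void)         /* hypothetical name */
    {
            if (test_thread_flag(TIF_NEED_RESCHED))
                    schedule();                 /* the actual context switch */
    }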
To put it simply:
At some point, resched_curr() is called on the run queue of the scheduling policy the current thread belongs to; this merely marks that rescheduling is needed, without actually performing it.
When it is time to switch context, for example at the end of interrupt handling or of a system call, schedule() is called to do the actual scheduling work.
Or, on leaving a preemption-protected critical section, preempt_schedule() is called directly to schedule immediately (see the preempt_enable() sketch below).
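That last path is wired into preempt_enable() itself. On a kernel built with CONFIG_PREEMPT it has roughly this shape (simplified from include/linux/preempt.h; details vary across kernel versions):

    #define preempt_enable() \
    do { \
            barrier(); \
            if (unlikely(preempt_count_dec_and_test())) \
                    __preempt_schedule(); \
    } while (0)

preempt_count_dec_and_test() drops one level of the preemption "lock" and reports whether the count has reached zero with a reschedule pending; only then does __preempt_schedule(), the arch wrapper around preempt_schedule(), run.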
Our well-known wakeup and time-slice round-robin scheduling in fact come down to calls to resched_curr().
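For instance, the CFS periodic tick reaches resched_curr() roughly like this (modeled on check_preempt_tick() in kernel/sched/fair.c, heavily simplified):

    /* If the current entity has used up its ideal slice,
     * mark its run queue for rescheduling. */
    static void check_preempt_tick(struct cfs_rq *cfs_rq,
                                   struct sched_entity *curr)
    {
            u64 ideal_runtime = sched_slice(cfs_rq, curr);
            u64 delta_exec = curr->sum_exec_runtime -
                             curr->prev_sum_exec_runtime;

            if (delta_exec > ideal_runtime)
                    resched_curr(rq_of(cfs_rq));    /* only marks the flag */
    }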
resched_curr() operates on the run queue of some CPU. It works in three steps (a sketch follows the list):
1. If the run queue's current thread is already marked for rescheduling, return.
2. Is the CPU that owns the target run queue the one we are currently running on?
If yes, take the following two steps:
A. Set the reschedule flag TIF_NEED_RESCHED.
B. Record PREEMPT_NEED_RESCHED in that CPU's preempt_count.
3. Otherwise, send a rescheduling IPI (inter-processor interrupt) to the remote CPU.
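Put together, resched_curr() looks roughly like this (modeled on kernel/sched/core.c, with the idle-polling and lockdep details omitted):

    void resched_curr(struct rq *rq)
    {
            struct task_struct *curr = rq->curr;
            int cpu = cpu_of(rq);

            if (test_tsk_need_resched(curr))        /* step 1: already marked */
                    return;

            if (cpu == smp_processor_id()) {        /* step 2: our own CPU */
                    set_tsk_need_resched(curr);     /* 2.A: TIF_NEED_RESCHED */
                    set_preempt_need_resched();     /* 2.B: into preempt_count */
                    return;
            }

            smp_send_reschedule(cpu);               /* step 3: IPI to remote CPU */
    }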
Steals protection "lock", is a protection critical area is not being re-dispatched demand and steals to dispatch. This lock independently protects each CPU (the currently running thread), rather than protecting the thread concurrency critical section. It is recursive, has no competitive state, does not block, and does not exist race condition. Each CPU has a steals protection lock. This lock is just a 32-bit named __preempt_count shaping variable. It provides nesting depth for interrupts, soft interrupts, common steals protection, while also used as soft interrupt lock, to protect soft interrupt handling without recursive entry. Interrupt processing, soft interrupt processing is also a critical area of steals protection, not allowed to be re-dispatched steals.
Steals protection "lock", in addition to protect the critical area is not re-dispatch inference, but also for the rapid response rescheduling provides the time to respond, that is the steals critical area when the time to leave. Because entry. The response to rescheduling on S is often passive and must wait for an indeterminate event, such as an interrupt occurrence or system call, to entry the kernel. S and will not respond to rescheduling when it leaves. And the steals the dispatch is the code when leaves the critical area the initiative to respond the rescheduling. Without the steals protection lock __preempt_count, the code is unclear when it is appropriate to respond to rescheduling. Had to be passively postponed to after entry. S, thus not responding to rescheduling as quickly as possible.
Preemption scheduling is performed by the function preempt_schedule(), which must be invoked actively. It is embedded in every piece of code that leaves a preemption critical section, such as the kernel's ubiquitous lock-release primitives: spin_unlock, the rwlock unlock variants, rcu_read_unlock, spin_trylock (on failure), pagefault_enable, and so on. All of these embed preempt_enable().
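For example, the unlock path of a spinlock ends like this (simplified from include/linux/spinlock_api_smp.h, with the lockdep annotation dropped):

    static inline void __raw_spin_unlock(raw_spinlock_t *lock)
    {
            do_raw_spin_unlock(lock);       /* release the lock itself */
            preempt_enable();               /* leave the preemption critical
                                               section; may end up in
                                               preempt_schedule() */
    }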
Inside preempt_schedule(), PREEMPT_ACTIVE is set in the CPU's __preempt_count to tell __schedule() that we came in through the preemption path. PREEMPT_ACTIVE also keeps preemption rescheduling from nesting: until the preempted thread resumes running, it cannot in turn preempt other threads running on the current CPU.
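On kernels that still use PREEMPT_ACTIVE (later versions folded it into other mechanisms), preempt_schedule() is approximately:

    asmlinkage __visible void __sched notrace preempt_schedule(void)
    {
            /* If preemption is disabled or irqs are off, do not switch. */
            if (likely(!preemptible()))
                    return;

            do {
                    __preempt_count_add(PREEMPT_ACTIVE); /* preempt path marker */
                    __schedule();
                    __preempt_count_sub(PREEMPT_ACTIVE);
                    barrier();
                    /* A new resched request may have arrived meanwhile. */
            } while (need_resched());
    }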
It must be made clear here that the softirq protection lock also lives inside the preemption-protection lock: leaving softirq handling is equivalent to leaving one level of the preemption critical section, and at that point, as long as no critical section remains nested, preemption scheduling can take place. __local_bh_enable_ip() embeds preempt_check_resched().
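A simplified sketch of that path (modeled on __local_bh_enable_ip() in kernel/softirq.c, omitting the lockdep and irqs-off sanity checks):

    void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
    {
            /* Keep one softirq-lock level while flushing, so that
             * do_softirq() cannot be entered recursively. */
            preempt_count_sub(cnt - 1);

            if (unlikely(!in_interrupt() && local_softirq_pending()))
                    do_softirq();

            preempt_count_dec();            /* drop the last level */
            preempt_check_resched();        /* left a preemption critical
                                               section: may reschedule now */
    }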