Kernel Preemption Implementation (preempt) [Repost]

Source: Internet
Author: User

Transferred from: http://blog.chinaunix.net/uid-12461657-id-3353217.html

1. What is preemption

Preemption, plainly put, is a forced process switch.
In Linux user space: process A is executing when a hardware interrupt arrives. When the interrupt handler returns, if a higher-priority process B is sitting on the run queue, process B runs instead of A. In user space a process can always be preempted in this way.


In Linux kernel space this is not necessarily the case: the 2.4 kernel is not preemptible, so real-time responsiveness suffers. How the kernel implements preemption is described below.


2. The preemption API

preempt_enable()    enable kernel preemption
preempt_disable()   disable kernel preemption

The kernel keeps a counter, preempt_count, in each process's thread_info structure.
Enabling and disabling preemption manipulates the current process's preempt_count.
When the kernel is about to reschedule, the running task may be preempted only if its preempt_count is 0.
struct thread_info {
	struct task_struct	*task;		/* main task structure */
	...					/* other fields omitted */
	int			cpu;		/* cpu we're on */
	int			preempt_count;	/* 0 => preemptable, <0 => BUG */
};

#define preempt_enable() \
do { \
	preempt_enable_no_resched(); \
	barrier(); \
	preempt_check_resched(); \
} while (0)

#define preempt_disable() \
do { \
	inc_preempt_count(); \
	barrier(); \
} while (0)

#define preempt_enable_no_resched() \
do { \
	barrier(); \
	dec_preempt_count(); \
} while (0)

#define inc_preempt_count()	add_preempt_count(1)
#define dec_preempt_count()	sub_preempt_count(1)
#define add_preempt_count(val)	do { preempt_count() += (val); } while (0)
#define sub_preempt_count(val)	do { preempt_count() -= (val); } while (0)
#define preempt_count()		(current_thread_info()->preempt_count)
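
As a concrete usage illustration (a hypothetical sketch, not kernel code): a driver-style function that disables preemption while it touches per-CPU data, using the 2.6-era per-CPU accessors. The names my_counters and bump_local_counter are made up for this example.

#include <linux/percpu.h>
#include <linux/preempt.h>

/* Illustrative per-CPU counter; disabling preemption pins the task to its
 * current CPU for the duration of the critical section. */
static DEFINE_PER_CPU(unsigned long, my_counters);

static void bump_local_counter(void)
{
	preempt_disable();                /* preempt_count++, no preemption from here on */
	__get_cpu_var(my_counters)++;     /* safe: we cannot migrate to another CPU */
	preempt_enable();                 /* preempt_count--, then check for a pending reschedule */
}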

3. When preemption occurs

The core function of Linux process scheduling is schedule(); all process switching is done there.
Calls to schedule() fall into two classes: active and passive.
An active call means the kernel calls schedule() directly, for example when the current process calls a function that sleeps, which in turn calls schedule().
A passive call means schedule() is invoked from the exit path after a system call, interrupt, or exception has been handled; that path checks whether the current process may be preempted before calling schedule().

The places where schedule() is called actively are too numerous to cover one by one (process exit, pause(), and so on); a minimal sketch of one such caller follows, and the rest of this section then focuses on the passive call made when returning from an interrupt.
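
To make the active-call case concrete, here is a minimal hypothetical sketch in the classic 2.6 style: a function that sleeps on a wait queue is exactly the kind of dormant function that ends up calling schedule() directly. The wait queue my_wq and the flag my_condition are illustrative, not kernel symbols.

#include <linux/wait.h>
#include <linux/sched.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);   /* illustrative wait queue */
static int my_condition;                 /* set elsewhere, followed by wake_up(&my_wq) */

static void wait_for_my_event(void)
{
	DEFINE_WAIT(wait);

	prepare_to_wait(&my_wq, &wait, TASK_INTERRUPTIBLE);
	if (!my_condition)
		schedule();                  /* active call: voluntarily give up the CPU */
	finish_wait(&my_wq, &wait);
}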

3.1 Returning from an interrupt

The interrupt handler do_IRQ() returns through ret_from_except() (see the article on PowerPC interrupt handling).
ret_from_except() first checks whether the interrupted code was running in user space or in kernel space, and then decides whether to return to user space or to kernel space.

Returning to user space (this is why user-space code can always be preempted):
ret_from_except
  -> user_exc_return
    -> do_work
      -> calls do_signal and schedule

Returning to kernel space (the kernel preemption option, CONFIG_PREEMPT, must be enabled at compile time; a C-level paraphrase of this path follows the call chains):
ret_from_except
  -> resume_kernel
    -> preempt_schedule_irq
      -> schedule
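
Before reading the assembly, here is a rough C-level paraphrase of the check that ret_from_except performs. It is a readability aid only: the names mirror the assembly labels and flags, and the body is pseudocode rather than real kernel source.

/* Pseudocode paraphrase of ret_from_except; not compilable kernel code. */
void ret_from_except_paraphrase(struct pt_regs *regs)
{
	/* interrupts are already disabled here (MTMSRD(r10)) */

	if (regs->msr & MSR_PR) {
		/* we interrupted user mode: user_exc_return */
		if (current_thread_info()->flags &
		    (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK | _TIF_NEED_RESCHED)) {
			/* do_work: deliver signals and/or call schedule() */
		}
	} else {
		/* we interrupted kernel mode: resume_kernel */
#ifdef CONFIG_PREEMPT
		if (current_thread_info()->preempt_count == 0 &&
		    (current_thread_info()->flags & _TIF_NEED_RESCHED) &&
		    (regs->msr & MSR_EE))        /* interrupted context had interrupts enabled */
			preempt_schedule_irq();
#endif
	}
	/* restore registers and return from the exception */
}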

	.globl	ret_from_except
ret_from_except:
	/* load the MSR_KERNEL constant; writing it to MSR disables external interrupts */
	LOAD_MSR_KERNEL(r10,MSR_KERNEL)
	SYNC			/* Some chip revs have problems here... */
	MTMSRD(r10)		/* disable interrupts */

	lwz	r3,_MSR(r1)	/* read MSR from the stack; MSR[PR] set => returning to user mode */
	andi.	r0,r3,MSR_PR
	beq	resume_kernel

user_exc_return:		/* r10 contains MSR_KERNEL here */
	rlwinm	r9,r1,0,0,(31-THREAD_SHIFT)	/* check current_thread_info()->flags */
	lwz	r9,TI_FLAGS(r9)
	andi.	r0,r9,(_TIF_SIGPENDING|_TIF_RESTORE_SIGMASK|_TIF_NEED_RESCHED)
	bne	do_work
restore_user:

#ifdef CONFIG_PREEMPT
	b	restore

resume_kernel:
	rlwinm	r9,r1,0,0,(31-THREAD_SHIFT)	/* check current_thread_info->preempt_count */
	lwz	r0,TI_PREEMPT(r9)
	cmpwi	0,r0,0		/* if non-zero, just restore regs and return */
	bne	restore
	lwz	r0,TI_FLAGS(r9)
	andi.	r0,r0,_TIF_NEED_RESCHED
	beq+	restore
	andi.	r0,r3,MSR_EE	/* interrupts off? */
	beq	restore		/* don't schedule if so */
1:	bl	preempt_schedule_irq
	rlwinm	r9,r1,0,0,(31-THREAD_SHIFT)
	lwz	r3,TI_FLAGS(r9)
	andi.	r0,r3,_TIF_NEED_RESCHED
	bne-	1b
#else
resume_kernel:
#endif /* CONFIG_PREEMPT */
////////////////////////////////////////////////////////////////////////////////////
do_work:			/* r10 contains MSR_KERNEL here */
	andi.	r0,r9,_TIF_NEED_RESCHED
	beq	do_user_signal

do_resched:			/* r10 contains MSR_KERNEL here */
	ori	r10,r10,MSR_EE
	SYNC
	MTMSRD(r10)		/* hard-enable interrupts */
	bl	schedule
recheck:
	LOAD_MSR_KERNEL(r10,MSR_KERNEL)
	SYNC
	MTMSRD(r10)		/* disable interrupts */
	rlwinm	r9,r1,0,0,(31-THREAD_SHIFT)
	lwz	r9,TI_FLAGS(r9)
	andi.	r0,r9,_TIF_NEED_RESCHED
	bne-	do_resched
	andi.	r0,r9,_TIF_SIGPENDING
	beq	restore_user
do_user_signal:			/* r10 contains MSR_KERNEL here */


asmlinkage void __sched preempt_schedule_irq(void)
{
	struct thread_info *ti = current_thread_info();

	BUG_ON(ti->preempt_count || !irqs_disabled());

	do {
		add_preempt_count(PREEMPT_ACTIVE);
		local_irq_enable();
		schedule();
		local_irq_disable();
		sub_preempt_count(PREEMPT_ACTIVE);
		barrier();
	} while (unlikely(test_thread_flag(TIF_NEED_RESCHED)));
}

asmlinkage void __sched preempt_schedule(void)
{
	struct thread_info *ti = current_thread_info();

	/* if preempt_count is non-zero or interrupts are disabled, do not schedule */
	if (likely(ti->preempt_count || irqs_disabled()))
		return;

	do {
		add_preempt_count(PREEMPT_ACTIVE);
		schedule();
		sub_preempt_count(PREEMPT_ACTIVE);
		barrier();
	} while (unlikely(test_thread_flag(TIF_NEED_RESCHED)));
}
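
For completeness: the path from preempt_enable() back to preempt_schedule() goes through preempt_check_resched(), which the section 2 macros reference but do not define. In 2.6-era kernels it looks roughly like the following (check your kernel version for the exact form):

#define preempt_check_resched() \
do { \
	if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \
		preempt_schedule(); \
} while (0)

So re-enabling preemption is itself a preemption point: if a reschedule was requested while preemption was disabled, preempt_schedule() runs as soon as preempt_count drops back to 0.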
The main execution paths in the kernel are:
1. The kernel side of a user process, i.e. process context, mostly executing system calls on behalf of the process; this also covers the kernel's own threads such as ksoftirqd.
2. Interrupts, exceptions and traps; conceptually there is no process context here, so a context switch cannot be performed.
3. Bottom halves; conceptually there is no process context here either.
4. In addition, the same execution path may be running on other CPUs at the same time.

This is why, in the Linux 2.6 network code, the preempt_enable()/preempt_disable() calls were moved to where softirqs are run.
Part of the softirq work is executed right after ISR processing;
since that code runs in bottom-half context, it must finish before execution returns to process context (for example, back into a system call),
so preempt_disable(); ... preempt_enable(); pairs are meaningless for it.
The other part of the softirq work runs in the ksoftirqd kernel thread.
That is equivalent to running in a process's kernel context, and a softirq is the continuation of the top half of an interrupt,
so the work should be finished as quickly as possible. Preemption is therefore disabled while ksoftirqd runs softirqs (see the sketch below);
this guarantees that no other process is scheduled until the softirqs have finished, and none of the functions inside a softirq may sleep.
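
As a rough illustration of that last point, the sketch below follows the general shape of the 2.6 ksoftirqd() loop. It is simplified and not a verbatim copy of any particular kernel version: preemption is disabled around do_softirq(), so no other task can be scheduled on this CPU until the pending softirqs have been handled.

/* Simplified sketch of the ksoftirqd loop; approximate, not verbatim kernel source. */
static int ksoftirqd_sketch(void *unused)
{
	set_current_state(TASK_INTERRUPTIBLE);

	while (!kthread_should_stop()) {
		preempt_disable();
		if (!local_softirq_pending()) {
			preempt_enable_no_resched();
			schedule();                      /* nothing pending: sleep */
			preempt_disable();
		}
		__set_current_state(TASK_RUNNING);

		while (local_softirq_pending()) {
			do_softirq();                    /* softirq handlers never sleep */
			preempt_enable_no_resched();
			cond_resched();                  /* be fair to other runnable tasks */
			preempt_disable();
		}

		preempt_enable();
		set_current_state(TASK_INTERRUPTIBLE);
	}
	__set_current_state(TASK_RUNNING);
	return 0;
}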
