Linux Kernel Interrupt Subsystem (IX): Tasklet


I. Preface

For interrupt handling, Linux divides the work into two parts: the interrupt handler (the top half), and the less urgent processing that is deferred, which we call deferrable tasks, or the bottom half. How execution is deferred falls into the following cases:

1. Deferred until the top half has completed

2. Deferred until a specified time (e.g., 40 ms) has elapsed

3. Deferred until some kernel thread is scheduled to run

For the first case, the kernel provides the SOFTIRQ mechanism and the tasklet mechanism. The second case is an application scenario of the SOFTIRQ mechanism (the timer-type SOFTIRQ), which is described in this site's series of documents on the time subsystem. The third case includes the threaded IRQ handler and the generic workqueue mechanism, and also covers creating a driver-specific kernel thread (a deprecated practice). This article mainly describes the tasklet mechanism: chapter two covers some background knowledge and thoughts on tasklets, and chapter three describes the principles of tasklets together with the code.

Note: the Linux kernel version discussed in this article is 4.0.

II. Why do we need tasklets?

1. Basic thinking

Does our driver or kernel module really need a tasklet? Everyone has their own opinion. Let's set aside the existing mechanisms in the Linux kernel for a moment and begin with a logical line of thought.

Dividing interrupt handling into a top half (the interaction between the CPU and the peripheral: acquiring status, ACKing status, sending and receiving data, and so on) and a bottom half (the subsequent data processing) has long been the accepted design. For any OS, deferring the less urgent work to the bottom half is fine, and deferred execution comes in two kinds: with a specific time requirement (corresponding to the low-resolution and high-resolution timers in the Linux kernel) and with no specific time requirement. The latter kind again splits into two types:

(1) The "faster the better" type. This one actually does have a performance requirement: apart from the interrupt top half, which may preempt its execution, no process context (no matter how high the process priority) can affect its execution. In a word, without hurting interrupt latency, the OS processes it as soon as it can.

(2) The "happy-go-lucky" type. This one has no performance requirement; when it executes depends entirely on the system's scheduler.

In essence, bottom halves of the "faster the better" type should be few, and a tasklet callback function must not run for too long; otherwise it causes excessive, possibly very long and unpredictable, process scheduling delay, which is bad news for a real-time system.

2. Thoughts on the bottom-half mechanisms in Linux

In the Linux kernel there are two mechanisms of the "faster the better" type, SOFTIRQ and tasklet, and two of the "happy-go-lucky" type, workqueue and the threaded IRQ handler. Could the "faster the better" type keep only SOFTIRQ? Programmers who advocate simplicity and elegance would certainly hope so. To answer that question, let's look at the advantages of tasklet over SOFTIRQ:

(1) A tasklet can be allocated dynamically as well as statically, and the number of tasklets is unlimited.

(2) The same tasklet never executes in parallel on multiple CPUs, which makes the tasklet callback easier to write, reducing the concurrency the programmer must consider (at some cost in performance, of course).

The first advantage in fact opens the door to indiscriminate use of tasklets: many driver engineers reach for the tasklet mechanism without carefully considering whether their driver actually has performance requirements. As for the second advantage, thinking about concurrency is the software engineer's own responsibility anyway. So tasklets seem to bring no special benefit; and, like SOFTIRQ, they cannot sleep, which limits how conveniently a handler can be written, so there seems to be no need for them to exist at all. Grepping the 4.0 kernel code for tasklet users yields a very long list, but with a little sorting those users could be migrated away: the ones with performance requirements could be folded into SOFTIRQ, and the rest could use workqueue instead. Steven Rostedt attempted exactly this (http://lwn.net/Articles/239484/), but the patch never entered mainline.

III. The basic principles of tasklets

1. How is a tasklet abstracted?

The kernel uses the following data structure to represent a tasklet:

struct tasklet_struct
{
	struct tasklet_struct *next;
	unsigned long state;
	atomic_t count;
	void (*func)(unsigned long);
	unsigned long data;
};

Each CPU maintains a linked list of the tasklets that it needs to handle; the next member points to the next tasklet in that list. The func and data members describe the tasklet's callback: func is the function to be called and data is the argument passed to func. The state member holds the status of the tasklet: TASKLET_STATE_SCHED means the tasklet has been scheduled for execution on some CPU, and TASKLET_STATE_RUN means the tasklet is currently executing on some CPU. The count member relates to the tasklet's enable/disable state: if count equals 0 the tasklet is enabled; if it is greater than 0, the tasklet is disabled. From the SOFTIRQ document we know that the local_bh_disable/enable functions disable/enable the bottom half as a whole, covering both SOFTIRQs and tasklets. However, sometimes a kernel synchronization scenario does not need to disable every SOFTIRQ and tasklet but only one particular tasklet; that is where tasklet_disable and tasklet_enable come in handy.

static inline void tasklet_disable(struct tasklet_struct *t)
{
	tasklet_disable_nosync(t);	/* increment the tasklet's count */
	tasklet_unlock_wait(t);		/* if the tasklet is running, wait until it finishes */
	smp_mb();
}

static inline void tasklet_enable(struct tasklet_struct *t)
{
	smp_mb__before_atomic();
	atomic_dec(&t->count);		/* decrement the tasklet's count */
}

tasklet_disable and tasklet_enable support nesting, but they must be used in pairs.
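
As an illustration, here is a minimal sketch, using hypothetical names (my_tasklet, my_tasklet_func, my_reconfigure_hw), of the typical pairing: disable the tasklet around a section that must not race with its callback, then re-enable it.

#include <linux/interrupt.h>

static void my_tasklet_func(unsigned long data);	/* hypothetical callback */
static DECLARE_TASKLET(my_tasklet, my_tasklet_func, 0);

static void my_reconfigure_hw(void)
{
	tasklet_disable(&my_tasklet);	/* bumps count; waits if the callback is running */
	/* ... touch state shared with the tasklet callback ... */
	tasklet_enable(&my_tasklet);	/* must balance the disable above */
}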

2. How does the system manage tasklets?

Each CPU in the system maintains its own tasklet linked lists, defined as follows:

static DEFINE_PER_CPU(struct tasklet_head, tasklet_vec);
static DEFINE_PER_CPU(struct tasklet_head, tasklet_hi_vec);
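
For reference, the tasklet_head structure used by these per-CPU variables is, as far as I can tell from the 4.0 source (kernel/softirq.c), simply a singly linked list with a tail pointer:

struct tasklet_head {
	struct tasklet_struct *head;
	struct tasklet_struct **tail;	/* points at the last node's next field */
};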

In the Linux kernel there are two tasklet-related SOFTIRQs: HI_SOFTIRQ for high-priority tasklets and TASKLET_SOFTIRQ for ordinary tasklets. For SOFTIRQs, priority is the order of appearance in the SOFTIRQ pending register (__softirq_pending), with bit 0 having the highest priority. That is, if several SOFTIRQs of different types trigger at the same time, the execution order is determined by bit position in the pending register: the kernel scans from the lowest bit upward and executes whichever SOFTIRQs have their bit set. HI_SOFTIRQ occupies bit 0, giving it a priority even higher than the timer SOFTIRQ, so it must be used with care (in fact, grepping the kernel code, I cannot seem to find any user of HI_SOFTIRQ). Of course HI_SOFTIRQ and TASKLET_SOFTIRQ work the same way, so this article discusses only TASKLET_SOFTIRQ; the reader can extrapolate to the other.
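
For reference, my reading of the 4.0 kernel's include/linux/interrupt.h gives the following SOFTIRQ priority order (lower number = higher priority); note HI_SOFTIRQ at bit 0 and TASKLET_SOFTIRQ well below the timer:

enum
{
	HI_SOFTIRQ = 0,		/* high-priority tasklets */
	TIMER_SOFTIRQ,
	NET_TX_SOFTIRQ,
	NET_RX_SOFTIRQ,
	BLOCK_SOFTIRQ,
	BLOCK_IOPOLL_SOFTIRQ,
	TASKLET_SOFTIRQ,	/* ordinary tasklets */
	SCHED_SOFTIRQ,
	HRTIMER_SOFTIRQ,
	RCU_SOFTIRQ,
	NR_SOFTIRQS
};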

3. How to define a tasklet

You can statically define a tasklet with the following macros:

#define DECLARE_TASKLET(name, func, data) \
struct tasklet_struct name = { NULL, 0, ATOMIC_INIT(0), func, data }

#define DECLARE_TASKLET_DISABLED(name, func, data) \
struct tasklet_struct name = { NULL, 0, ATOMIC_INIT(1), func, data }

Both macros statically define a struct tasklet_struct variable; the only difference is that one initializes the tasklet in the enabled state and the other in the disabled state. Of course, you can also allocate a tasklet dynamically and then call tasklet_init to initialize it.
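
As a sketch of the dynamic variant, assuming a hypothetical per-device structure my_dev with an rx_tasklet member:

#include <linux/interrupt.h>

struct my_dev {				/* hypothetical device structure */
	struct tasklet_struct rx_tasklet;
	/* ... */
};

static void my_rx_tasklet_func(unsigned long data)
{
	struct my_dev *dev = (struct my_dev *)data;
	/* ... bottom-half processing for dev ... */
}

static void my_dev_setup(struct my_dev *dev)
{
	/* tasklet_init leaves the tasklet enabled (count = 0) */
	tasklet_init(&dev->rx_tasklet, my_rx_tasklet_func, (unsigned long)dev);
}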

4. How to schedule a tasklet

To schedule a tasklet for execution, we use the tasklet_schedule interface:

static inline void tasklet_schedule(struct tasklet_struct *t)
{
	if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state))
		__tasklet_schedule(t);
}

A program may schedule the same tasklet multiple times and from several contexts (possibly from multiple CPU cores), but in practice the tasklet is hung only once, on the tasklet list of the CPU where it was first scheduled. That is, even across multiple tasklet_schedule calls, the tasklet is attached to exactly one CPU's tasklet queue, and only once, meaning that only a single execution is scheduled. This is achieved through the TASKLET_STATE_SCHED flag, which we can describe with the following scenario:

Suppose the driver of HW block A uses the tasklet mechanism and schedules a statically defined tasklet for execution in its interrupt handler (top half), i.e., it calls the tasklet_schedule function (this tasklet is shared by all CPUs, not per-CPU). When HW block A detects a hardware condition (for example, its receive FIFO becoming half full), it asserts a level or edge signal on the IRQ line; the GIC detects the signal and distributes the interrupt to some CPU to run the top-half handler. Assume that CPU is CPU0: the driver's tasklet is then hung on CPU0's tasklet list (tasklet_vec) and its state is set to TASKLET_STATE_SCHED. If, while the driver's tasklet is scheduled but not yet executed, the hardware raises another interrupt and it is handled on CPU1, then although tasklet_schedule is called again, TASKLET_STATE_SCHED is already set, so HW block A's tasklet is not attached to CPU1's tasklet list.
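
A minimal sketch of such a top half, reusing the hypothetical my_dev/rx_tasklet names from the earlier example, might look like this:

#include <linux/interrupt.h>

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	struct my_dev *dev = dev_id;

	/* top half: ACK the hardware, fetch minimal state, etc. */
	tasklet_schedule(&dev->rx_tasklet);	/* sets TASKLET_STATE_SCHED; a second
						 * call before the callback runs is a no-op */
	return IRQ_HANDLED;
}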

Let's take a closer look at the underlying __tasklet_schedule function:

void __tasklet_schedule(struct tasklet_struct *t)
{
	unsigned long flags;

	local_irq_save(flags);				/* (1) */
	t->next = NULL;					/* (2) */
	*__this_cpu_read(tasklet_vec.tail) = t;
	__this_cpu_write(tasklet_vec.tail, &(t->next));
	raise_softirq_irqoff(TASKLET_SOFTIRQ);		/* (3) */
	local_irq_restore(flags);
}

(1) The list operations below are per-CPU, so disabling local interrupts is enough to shut out all concurrent access.

(2) These three lines of code hang the tasklet on the tail of the list.

(3) Raise a SOFTIRQ of type TASKLET_SOFTIRQ.

5. When is a tasklet executed?

The above describes scheduling a tasklet. Of course, scheduling a tasklet is not the same as executing it: the system runs the tasklet callback at a suitable point in time. Since tasklets are built on SOFTIRQ, let's first summarize the scenarios in which SOFTIRQs execute:

(1) On return from an interrupt to user space (process context), if there is a pending SOFTIRQ, its handler is executed. Restricting this to interrupt returns to user space means that SOFTIRQs are not triggered for execution in the following two scenarios:

(a) an interrupt returning to hard interrupt context, i.e., the nested-interrupt scenario

(b) an interrupt returning to software interrupt context, i.e., the scenario where an interrupt preempts a running soft interrupt context

(2) The description above leaves out one scenario, an interrupt returning to a kernel-mode process context, which deserves elaboration. When local_bh_enable is called in process context, if there is a pending SOFTIRQ, that SOFTIRQ's handler is executed. For kernel synchronization, process context may call local_bh_disable/enable to protect a critical section. While the critical-section code runs, an interrupt can still arrive at any time and preempt the process in kernel mode (note: only the bottom half is disabled here, not interrupts). Will the SOFTIRQ handler execute when that interrupt returns? Of course not: bottom-half execution is disabled, so the SOFTIRQ handler cannot run. But in essence a bottom half has higher priority than process context and should preempt it as soon as conditions permit. Therefore, immediately on leaving the critical section, when local_bh_enable is called, pending SOFTIRQs are checked and, since the bottom half is now enabled, the pending SOFTIRQ handlers are executed (see the sketch after this list).

(3) The system is so busy that interrupts keep arriving and raising SOFTIRQs; because the bottom half has high priority, ordinary processes would never get to run. In that case, SOFTIRQ execution is deferred to the ksoftirqd kernel thread.
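
To illustrate scenario (2), here is a minimal, hypothetical sketch of process-context code protecting data shared with a tasklet (shared_counter is an assumed field on the hypothetical my_dev):

static void my_update_shared_state(struct my_dev *dev)
{
	local_bh_disable();	/* bottom half off; interrupts stay enabled */
	dev->shared_counter++;	/* critical section vs. the tasklet callback */
	local_bh_enable();	/* executes any SOFTIRQ that went pending meanwhile */
}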

For a SOFTIRQ of type TASKLET_SOFTIRQ, the handler is tasklet_action. Let's see how each tasklet gets executed:

static void tasklet_action(struct softirq_action *a)
{
	struct tasklet_struct *list;

	local_irq_disable();					/* (1) */
	list = __this_cpu_read(tasklet_vec.head);
	__this_cpu_write(tasklet_vec.head, NULL);
	__this_cpu_write(tasklet_vec.tail, this_cpu_ptr(&tasklet_vec.head));
	local_irq_enable();

	while (list) {			/* walk the detached tasklet list */
		struct tasklet_struct *t = list;

		list = list->next;

		if (tasklet_trylock(t)) {			/* (2) */
			if (!atomic_read(&t->count)) {		/* (3) */
				if (!test_and_clear_bit(TASKLET_STATE_SCHED,
							&t->state))
					BUG();
				t->func(t->data);
				tasklet_unlock(t);
				continue;	/* handle the next tasklet */
			}
			tasklet_unlock(t);	/* clear TASKLET_STATE_RUN */
		}

		local_irq_disable();				/* (4) */
		t->next = NULL;
		*__this_cpu_read(tasklet_vec.tail) = t;
		__this_cpu_write(tasklet_vec.tail, &(t->next));
		__raise_softirq_irqoff(TASKLET_SOFTIRQ);	/* re-raise; run at the next opportunity */
		local_irq_enable();
	}
}

(1) Remove all tasklets from this CPU's tasklet list, saving them in the temporary variable list, and reinitialize this CPU's tasklet list so that it is empty. Because the bottom half runs with interrupts enabled, and an interrupt's top half may also manipulate the same per-CPU tasklet list (via tasklet_schedule), the list must be operated on with interrupts disabled.

(2) tasklet_trylock sets the tasklet's state to TASKLET_STATE_RUN and, at the same time, determines whether the tasklet was already in the executing state. This status is very important: it drives the subsequent code logic.

static inline int tasklet_trylock(struct tasklet_struct *t)
{
	return !test_and_set_bit(TASKLET_STATE_RUN, &(t)->state);
}
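
For completeness, its counterpart tasklet_unlock, which clears TASKLET_STATE_RUN when the callback finishes, reads as follows in my copy of the 4.0 source:

static inline void tasklet_unlock(struct tasklet_struct *t)
{
	smp_mb__before_atomic();
	clear_bit(TASKLET_STATE_RUN, &(t)->state);
}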

You may wonder: the tasklets this CPU is about to handle were just removed from its own tasklet list, so how can a tasklet on that list already be in the running state? Well, return to the hardware/software scenario described above. As before, the driver of HW block A uses the tasklet mechanism and schedules its statically defined tasklet in the interrupt handler (top half). An interrupt from HW block A is first delivered to CPU0, so the driver's tasklet is hung on CPU0's tasklet list and, at a suitable point in time, its callback starts executing there. Now suppose a hardware interrupt arrives on CPU0 and preempts the callback; the tasklet remains in the running state. Meanwhile, HW block A triggers another interrupt that is handled on CPU1. At this moment the driver's tasklet is in the running state and TASKLET_STATE_SCHED has already been cleared, so calling tasklet_schedule attaches the driver's tasklet to CPU1's tasklet list. Since CPU0 is busy handling other hardware interrupts, CPU1 may get to its tasklet_action call first, and when it removes HW block A's tasklet from CPU1's list for processing, that tasklet is in fact still executing on CPU0.

When the tasklet mechanism was designed, it was stipulated that the same tasklet may execute on only one CPU at a time, and tasklet_trylock is what enforces that.

(3) Check whether the tasklet is enabled; if it is, the tasklet can actually enter the executing state. The main actions are clearing the TASKLET_STATE_SCHED bit and invoking the tasklet's callback function.

(4) If the tasklet is already executing on another CPU, we hang it back on the tail of this CPU's tasklet list, so that the kernel will try it again when the next tasklet execution opportunity arrives; by then the run on the other CPU may have completed. This code logic ensures that a given tasklet executes on only one CPU and never concurrently on several.

Original article; please credit the source when reposting: Snail Nest Technology.
