I. Soft interrupts and tasklets
1. Delayed processing of interrupts
During interrupt handling there are non-critical tasks that can be deferred. An interrupt service routine should finish quickly, since new interrupts of the same kind are not delivered while it is still running; the deferrable work can instead be executed later, with interrupts enabled. Extracting such work from the interrupt service routine therefore reduces the kernel's response time. Linux supports two kinds of non-urgent, interruptible kernel functions: deferred functions (soft interrupts and tasklets) and functions executed through work queues. Soft interrupts and tasklets are closely related, and tasklets are implemented on top of soft interrupts. There are, however, differences:
Soft interrupts are allocated statically, whereas tasklets can be allocated and initialized at run time. Soft interrupts of the same type can run concurrently on multiple CPUs, so soft interrupt code must be reentrant and its data structures must be protected with spin locks. Tasklets of the same type are always serialized: the same tasklet can never run concurrently on two CPUs, although different tasklets can run concurrently on different CPUs. Because of this property, tasklet functions do not have to be reentrant. In general, four kinds of operations can be performed on deferred functions:
Initialization: defines a new deferred function. This is typically done during kernel initialization or when a module is loaded.
Activation: marks a deferred function as pending, so that it will be executed the next time deferred functions are scheduled. Activation can be performed at any time.
Masking: selectively disables a deferred function so that the kernel will not execute it even if it has been activated.
Execution: executes a pending deferred function together with all other pending deferred functions of the same type. Execution is performed at well-defined points in time.
2. Soft interrupts
Linux uses a limited, fixed number of soft interrupts. In most cases tasklets are sufficient and are easier to write.
Linux defines the following soft interrupts: HI_SOFTIRQ, TIMER_SOFTIRQ, NET_TX_SOFTIRQ, NET_RX_SOFTIRQ, BLOCK_SOFTIRQ, BLOCK_IOPOLL_SOFTIRQ, TASKLET_SOFTIRQ, SCHED_SOFTIRQ, HRTIMER_SOFTIRQ, RCU_SOFTIRQ.
These values are members of an enumeration that starts at 0 and ends with NR_SOFTIRQS, which gives the number of soft interrupts currently supported by the system. The enumeration order also defines the priority of the corresponding soft interrupts: the smaller the value, the higher the priority. Soft interrupts are stored in the softirq_vec array; each element contains a pointer to the handler function and a generic data pointer used as its argument.
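For reference, a minimal sketch of how these declarations look in kernels of the 3.x era (kernel/softirq.c and include/linux/interrupt.h); in 2.6-era kernels struct softirq_action also carried the void *data argument mentioned above, but that field was removed in later versions:

enum {
        HI_SOFTIRQ = 0,         /* highest priority: high-priority tasklets */
        TIMER_SOFTIRQ,
        /* ... NET_TX_SOFTIRQ through HRTIMER_SOFTIRQ, as listed above ... */
        RCU_SOFTIRQ,            /* lowest priority */
        NR_SOFTIRQS             /* number of soft interrupts supported */
};

struct softirq_action {
        void (*action)(struct softirq_action *);
};

static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp;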
However, note that the priority of a soft interrupt only defines the order in which soft interrupts are executed; it does not affect their priority relative to other tasks or how often they run.
1. preempt_count
The preempt_count field is used to track kernel preemption and the nesting of kernel control paths; it is stored in the thread_info structure of each process descriptor. It is divided into bit fields with different meanings:
0-7: kernel preemption counter (maximum value 255)
8-15: soft interrupt (SOFTIRQ) counter (maximum value 255)
16-27: hardware interrupt (HARDIRQ) counter (maximum value 4096)
28: PREEMPT_ACTIVE flag
The first field records how many times kernel preemption has been explicitly disabled on the local CPU; the second records the nesting level at which deferred functions are disabled; the third records the nesting of interrupt handlers on the local CPU. Kernel preemption must be prevented while preemption is disabled or while the code is running in interrupt context. With this field, the kernel only has to check a single value to know the current state.
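A minimal sketch of the preempt_count bit layout described above; the exact widths vary between kernel versions and architectures, and these values simply follow the 0-7 / 8-15 / 16-27 / bit-28 split given in the text (compare include/linux/hardirq.h and include/linux/preempt.h):

#define PREEMPT_SHIFT   0
#define SOFTIRQ_SHIFT   8
#define HARDIRQ_SHIFT   16

#define PREEMPT_MASK    (0xffUL  << PREEMPT_SHIFT)    /* preemption-disable count   */
#define SOFTIRQ_MASK    (0xffUL  << SOFTIRQ_SHIFT)    /* softirq disable / nesting  */
#define HARDIRQ_MASK    (0xfffUL << HARDIRQ_SHIFT)    /* hardirq nesting            */
#define PREEMPT_ACTIVE  (1UL << 28)                   /* preemption in progress     */

/* With this layout, checks such as in_irq(), in_softirq() and in_interrupt()
 * reduce to masking a single word; roughly: */
static inline int sketch_in_interrupt(unsigned long preempt_count)
{
        return preempt_count & (HARDIRQ_MASK | SOFTIRQ_MASK);
}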
2. Soft interrupt processing
Each CPU has a 32-bit mask describing which soft interrupts are pending; it is located in the __softirq_pending field of the irq_cpustat_t data structure. The kernel periodically checks whether there are pending soft interrupts to process; the checks are performed at fixed points, typically when leaving interrupt handling (irq_exit) and in the ksoftirqd kernel thread.
While a soft interrupt function is executing, a new soft interrupt may be raised. To keep the latency of deferred functions low, soft interrupt processing keeps running until all pending soft interrupts have been handled or until a time budget is exhausted (2 ms in kernel 3.9.4). If soft interrupts are still unhandled when processing stops, ksoftirqd takes care of them (each CPU has its own ksoftirqd kernel thread).
Soft interrupts run with hardware interrupts enabled but with soft interrupts disabled locally, which means do_softirq can be entered only once at a time on each CPU. When processing soft interrupts, the local CPU's pending mask is saved to a local variable and then cleared, with local interrupts disabled; local interrupts are then enabled, each pending soft interrupt function is executed, and local interrupts are disabled again. In other words, local interrupts are enabled only while the handler functions are being invoked (while local soft interrupts remain disabled). Once the saved pending soft interrupts have been processed, the local CPU's pending mask is read again; if anything is still pending, processing continues until all pending soft interrupts have been handled or the time budget is exhausted.
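A highly simplified sketch of the logic just described, modeled on __do_softirq() in kernel/softirq.c (the restart-count limit, accounting and tracing are omitted, and details vary between versions):

void sketch_do_softirq(void)            /* entered with local interrupts disabled */
{
        unsigned long end = jiffies + MAX_SOFTIRQ_TIME;     /* ~2 ms budget */
        __u32 pending = local_softirq_pending();            /* read the pending mask */
        struct softirq_action *h;

restart:
        set_softirq_pending(0);          /* clear the mask while irqs are still off */
        local_irq_enable();              /* handlers run with hard irqs enabled */

        for (h = softirq_vec; pending; h++, pending >>= 1)
                if (pending & 1)
                        h->action(h);    /* run pending soft interrupts in priority order */

        local_irq_disable();
        pending = local_softirq_pending();   /* new soft interrupts may have been raised */
        if (pending) {
                if (time_before(jiffies, end))
                        goto restart;        /* keep going while within the time budget */
                wakeup_softirqd();           /* otherwise hand the rest over to ksoftirqd */
        }
}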
3. Soft interrupt related functions
void open_softirq(int nr, void (*action)(struct softirq_action *))
This function initializes a soft interrupt: action is the soft interrupt handler and nr is the soft interrupt number.
void raise_softirq(unsigned int nr)
This function activates a soft interrupt; nr is the soft interrupt number.
asmlinkage void do_softirq(void)
This function executes the pending soft interrupt handlers.
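As an illustration, this is roughly how the kernel itself wires up the tasklet soft interrupts at boot (see softirq_init() in kernel/softirq.c); drivers normally do not define new soft interrupt numbers but use tasklets instead, and kick_tasklet_softirq below is only an illustrative name:

void __init softirq_init_sketch(void)
{
        open_softirq(TASKLET_SOFTIRQ, tasklet_action);
        open_softirq(HI_SOFTIRQ, tasklet_hi_action);
}

/* Marking a soft interrupt as pending so that it runs at the next
 * checkpoint (irq_exit(), ksoftirqd, ...): */
static void kick_tasklet_softirq(void)
{
        raise_softirq(TASKLET_SOFTIRQ);
}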
2. Tasklets
Tasklets are the preferred way to implement deferred functions in I/O drivers. Tasklets are built on top of two soft interrupts, HI_SOFTIRQ and TASKLET_SOFTIRQ. Several tasklets can be associated with the same soft interrupt, and each tasklet has its own function. There is no real difference between the two kinds of tasklet, except that tasklets queued on HI_SOFTIRQ are executed before those queued on TASKLET_SOFTIRQ.
1. Representation of a tasklet
Tasklets and high-priority tasklets are stored in the tasklet_vec and tasklet_hi_vec arrays respectively. Both arrays contain NR_CPUS elements of type tasklet_head, and each element holds a pointer to a linked list of tasklet descriptors. The fields of a tasklet descriptor are:
next: pointer to the next descriptor in the list
state: the status of the tasklet
count: an atomic_t lock counter; if its value is not 0 the tasklet is disabled and will not be executed the next time tasklets are run
func: the tasklet function
data: the argument passed to the tasklet function
The state field of the descriptor contains two flags:
TASKLET_STATE_SCHED: when set, the tasklet is pending (scheduled for execution)
TASKLET_STATE_RUN: when set, the tasklet is currently being executed; it ensures that only one instance of a given tasklet runs at a time across all CPUs
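For reference, the tasklet descriptor as it is declared in include/linux/interrupt.h in kernels of this era (the callback prototype was changed in much later kernels):

struct tasklet_struct {
        struct tasklet_struct *next;   /* next pending tasklet in the per-CPU list */
        unsigned long state;           /* TASKLET_STATE_SCHED / TASKLET_STATE_RUN  */
        atomic_t count;                /* non-zero: the tasklet is disabled         */
        void (*func)(unsigned long);   /* the tasklet function                      */
        unsigned long data;            /* argument passed to func                   */
};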
2. Tasklet execution
A tasklet is a special function that is scheduled to run in soft interrupt context. It may be scheduled multiple times, but scheduling does not accumulate: the tasklet runs only once, even if it is scheduled repeatedly before it gets a chance to execute. No two instances of the same tasklet ever run concurrently; however, a tasklet can run in parallel with other tasklets on an SMP system. Therefore, if several tasklets use the same resources, they must use some form of locking to avoid conflicting with each other. Each activation of a tasklet causes exactly one execution, unless the tasklet reactivates itself.
In the HI_SOFTIRQ and TASKLET_SOFTIRQ handlers, the tasklet_vec or tasklet_hi_vec element associated with the current CPU is saved to a local variable and the element itself is set to NULL (with local interrupts disabled). The handler then walks the list and, for each tasklet, checks whether it is already running (on another CPU) and whether it is disabled; if it is neither, and the tasklet is activated, the tasklet is run.
3. Tasklet related APIs
The corresponding header file for this API is linux/interrupt.h.
DECLARE_TASKLET(name, func, data); declares and defines a tasklet.
DECLARE_TASKLET_DISABLED(name, func, data); declares and defines a tasklet whose initial state is disabled.
void tasklet_init(struct tasklet_struct *t, void (*func)(unsigned long), unsigned long data);
To use a tasklet dynamically, you must first allocate a tasklet_struct and initialize it with tasklet_init.
static inline void tasklet_disable(struct tasklet_struct *t)
Disables the tasklet.
static inline void tasklet_enable(struct tasklet_struct *t)
Enables the tasklet.
static inline void tasklet_schedule(struct tasklet_struct *t)
static inline void tasklet_hi_schedule(struct tasklet_struct *t)
These activate a tasklet: tasklet_schedule for a tasklet associated with the normal-priority soft interrupt (TASKLET_SOFTIRQ), tasklet_hi_schedule for one associated with the high-priority soft interrupt (HI_SOFTIRQ).
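A hypothetical usage sketch of the API above; my_tasklet, my_tasklet_fn and my_irq_handler are illustrative names, not existing kernel symbols:

#include <linux/interrupt.h>

static void my_tasklet_fn(unsigned long data)
{
        /* Deferred work: runs in soft interrupt context, so it must not sleep. */
        pr_info("tasklet ran, data=%lu\n", data);
}

/* Statically declared and initialized tasklet (tasklet_init would be the
 * run-time alternative): */
static DECLARE_TASKLET(my_tasklet, my_tasklet_fn, 0);

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
        /* Do only the urgent part here, then defer the rest: */
        tasklet_schedule(&my_tasklet);
        return IRQ_HANDLED;
}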
void tasklet_kill(struct tasklet_struct *t); ensures that the tasklet will no longer be scheduled for execution; it is normally called when a device is being shut down or a module is being unloaded. If the tasklet is currently scheduled to run, the function waits for it to finish before proceeding. If the tasklet reschedules itself, it must contain a check that stops the rescheduling, otherwise tasklet_kill could wait forever. The code of tasklet_kill is as follows:
void tasklet_kill(struct tasklet_struct *t)
{
        if (in_interrupt())
                printk("Attempt to kill tasklet from interrupt\n");

        while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
                do {
                        yield();
                } while (test_bit(TASKLET_STATE_SCHED, &t->state));
        }
        tasklet_unlock_wait(t);
        clear_bit(TASKLET_STATE_SCHED, &t->state);
}
The core code for tasklet execution is as follows (this fragment is taken from the list-walking loop of the tasklet soft interrupt handler, which is why it can use continue):
if (tasklet_trylock(t)) {
        if (!atomic_read(&t->count)) {
                if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
                        BUG();
                trace_irq_tasklet_low_entry(t);
                t->func(t->data);
                trace_irq_tasklet_low_exit(t);
                tasklet_unlock(t);
                continue;
        }
        tasklet_unlock(t);
}
II. Work queues
1. Concept of work queues
Work queues were introduced to replace the older task queues. Both allow kernel functions to be activated (queued) and later executed by special kernel threads called worker threads.
Work queues and deferred functions are very similar, but there are differences:
Deferred functions run in interrupt context, while work queue functions run in process context. Only code running in process context can block (no process switch can take place in interrupt context), so functions in a work queue may block, whereas deferred functions may not. Like deferred functions, functions in a work queue cannot access the user-mode address space of a process: work queues are executed by kernel threads, which have no user-mode address space.
To use a work queue, you define the queue and then insert into it the functions whose execution you want to defer. Each worker thread loops inside the worker_thread function; most of the time the thread sleeps, waiting for work to be inserted into the queue. Once woken up, the worker thread calls the run_workqueue() function, which removes each work_struct from the worker thread's work queue list and executes the corresponding function. Worker threads may block and may sleep.
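A highly simplified sketch of the worker-thread loop just described, modeled on the classic worker_thread()/run_workqueue() pair (the internals were reworked in later kernels, but the idea is unchanged; the pending list below stands in for the per-queue list of queued work, and the work function receives the data pointer stored in the work_struct, as in the older API used in this article):

static LIST_HEAD(pending);               /* stand-in for the per-queue work list */

static int sketch_worker_thread(void *unused)
{
        for (;;) {
                set_current_state(TASK_INTERRUPTIBLE);
                if (list_empty(&pending))
                        schedule();                    /* sleep until work is queued */
                __set_current_state(TASK_RUNNING);

                while (!list_empty(&pending)) {        /* the run_workqueue() part */
                        struct work_struct *w =
                                list_first_entry(&pending, struct work_struct, entry);
                        list_del_init(&w->entry);
                        w->func(w->data);              /* process context: may sleep */
                }
        }
        return 0;
}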
The system has predefined work queues that can be used in most cases directly:
Predefined work queue function              Equivalent standard function
schedule_work(w)                            queue_work(keventd_wq, w)
schedule_delayed_work(w, d)                 queue_delayed_work(keventd_wq, w, d) (on any CPU)
schedule_delayed_work_on(cpu, w, d)         queue_delayed_work(keventd_wq, w, d) (on the given CPU)
flush_scheduled_work()                      flush_workqueue(keventd_wq)
However, note that if your function may block for a long time, you should not use the predefined work queue, since it would delay the other functions that share it.
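A hypothetical sketch of using the predefined work queue through the shortcut functions above; my_work and my_work_fn are illustrative names, and the sketch follows the older three-argument DECLARE_WORK form used in this article (newer kernels pass the work_struct itself to the handler):

#include <linux/workqueue.h>

static void my_work_fn(void *data)
{
        /* Runs in process context in a worker thread, so it may sleep. */
        pr_info("deferred work executed\n");
}

static DECLARE_WORK(my_work, my_work_fn, NULL);

static void kick_deferred_work(void)
{
        schedule_work(&my_work);      /* queue on the predefined (events) work queue */
}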
From an execution point of view, the functions in a work queue are called by kernel threads.
The workqueue_struct data structure represents a work queue, and work_struct describes a single piece of work to be performed by the queue's worker thread.
2. Related APIs
The corresponding header file is linux/workqueue.h.
1. Work queue APIs
create_workqueue(name)
This is actually a macro that returns the address of the descriptor of the newly created work queue. It also creates n worker threads (where n is the number of CPUs in the system) and names them after name.
create_singlethread_workqueue(name)
This is also a macro; it does the same as create_workqueue but creates only one worker thread.
void destroy_workqueue(struct workqueue_struct *wq);
This function destroys a work queue; wq is a pointer to the work queue's data structure.
2. Work-related APIs
DECLARE_WORK(name, void (*function)(void *), void *data);
INIT_WORK(struct work_struct *work, void (*function)(void *), void *data);
PREPARE_WORK(struct work_struct *work, void (*function)(void *), void *data);
The three macros above are used to initialize a work structure.
int queue_work(struct workqueue_struct *wq, struct work_struct *work);
This function inserts work into a work queue; wq is a pointer to the work queue descriptor and work is the work to be queued.
int queue_work_on(int cpu, struct workqueue_struct *wq, struct work_struct *work);
Similar to queue_work, but specifies which CPU the work should be executed on.
int queue_delayed_work(struct workqueue_struct *wq, struct delayed_work *work, unsigned long delay);
This function inserts work into a work queue after the specified delay (in jiffies).
int queue_delayed_work_on(int cpu, struct workqueue_struct *wq, struct delayed_work *work, unsigned long delay);
Similar to queue_delayed_work, but specifies which CPU the work should be executed on.
void flush_workqueue(struct workqueue_struct *wq);
This function forces all work queued before the call to be executed, and blocks until it has all completed.
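Finally, a hypothetical end-to-end sketch of the work queue API above; my_wq, my_task and my_task_fn are illustrative names, and as before it uses the older three-argument INIT_WORK form described in this article:

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *my_wq;
static struct work_struct my_task;

static void my_task_fn(void *data)
{
        /* May block: we run in the context of a dedicated worker thread. */
}

static int __init my_module_init(void)
{
        my_wq = create_singlethread_workqueue("my_wq");   /* one worker thread */
        if (!my_wq)
                return -ENOMEM;
        INIT_WORK(&my_task, my_task_fn, NULL);
        queue_work(my_wq, &my_task);                      /* submit the work */
        return 0;
}

static void __exit my_module_exit(void)
{
        flush_workqueue(my_wq);       /* wait for everything queued so far */
        destroy_workqueue(my_wq);
}

module_init(my_module_init);
module_exit(my_module_exit);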