Linux Soft Interrupt Summary


I. Soft Interrupt Overview
  
Soft interrupts borrow the concept of hardware interrupts to achieve asynchronous execution in software. In many ways a soft interrupt resembles a "signal", while the contrast with hard interrupts is often put this way: "a hard interrupt is an interruption of the CPU by an external device", "a soft interrupt is usually an interruption of the kernel by a hard interrupt service routine", and "a signal is an interruption of a process by the kernel (or by another process)" (chapter 3 of Linux Kernel Source Code Scenario Analysis). The typical application of soft interrupts is the so-called "bottom half". The name comes from splitting hardware interrupt handling into two phases, a "top half" and a "bottom half": the top half runs with interrupts blocked and performs only the critical work, while the bottom half handles the less urgent and usually time-consuming work, so the system schedules its execution on its own rather than running it in the interrupt service context. The bottom half is also what motivated the kernel to develop its current soft interrupt mechanism, so we start with the bottom half implementation.
  
II. Bottom Half
  
In the Linux kernel, the bottom half is usually written "bh". It was initially used to perform the non-critical, time-consuming part of interrupt service in a lower-priority context; now it is used for any asynchronous action that can run in a lower-priority context. The earliest bottom half implementation borrowed the idea of an interrupt vector table, and in the 2.4.x kernel we can still see it:
  
static void (*bh_base[32])(void);  /* kernel/softirq.c */
The system defines an array of 32 function pointers, accessed by index, together with a set of functions that operate on it:
  
void init_bh(int nr, void (*routine)(void));
Assigns routine to function pointer number nr.
  
void remove_bh(int nr);
The opposite of init_bh(): clears function pointer number nr.
  
void mark_bh(int nr);
Marks bottom half number nr as ready to run.
  
For historical reasons, the slots of bh_base[] have predefined meanings. In the 2.4.2 kernel there is this enumeration:
  
enum {
	TIMER_BH = 0,
	TQUEUE_BH,
	DIGI_BH,
	SERIAL_BH,
	RISCOM8_BH,
	SPECIALIX_BH,
	AURORA_BH,
	ESP_BH,
	SCSI_BH,
	IMMEDIATE_BH,
	CYCLADES_BH,
	CM206_BH,
	JS_BH,
	MACSERIAL_BH,
	ISICOM_BH
};
  
Each slot is assigned by convention to a particular driver; the serial interrupt, for example, uses SERIAL_BH. Today only TIMER_BH, TQUEUE_BH, and IMMEDIATE_BH are still used, and their semantics have changed considerably, because the use of the bottom half as a whole has changed: these three are kept only for backward compatibility at the interface level, and their implementation has evolved along with the kernel's soft interrupt mechanism. In the 2.4.x kernel they are implemented with the tasklet mechanism.
  
III. Task Queue
  
Before introducing tasklets it is worth looking at the earlier task queue mechanism. The original bottom half mechanism has two major limitations. The most important is that the number of bottom halves is fixed at 32; as the amount of system hardware grows and the range of soft interrupt applications widens, this is clearly not enough. In addition, each bottom half can hold only one function. The task queue was therefore introduced as an extension in the 2.0.x kernels; the 2.4.2 implementation is described here.
  
The task queue is built on the kernel's list data structure. Its element, tq_struct, is defined in include/linux/tqueue.h:
  
struct tq_struct {
	struct list_head list;   /* linked list */
	unsigned long sync;      /* initially 0; atomically set to 1 on enqueue to avoid double-queueing */
	void (*routine)(void *); /* function called on activation */
	void *data;              /* argument: routine(data) */
};
  
typedef struct list_head task_queue;
  
Typical use proceeds as follows:
  
DECLARE_TASK_QUEUE(my_tqueue); /* defines my_tqueue, actually a list_head queue whose elements are tq_structs */
Declare and initialize a tq_struct variable my_task;
queue_task(&my_task, &my_tqueue); /* enqueue my_task on my_tqueue */
run_task_queue(&my_tqueue); /* start my_tqueue manually when appropriate */
In most cases there is no need to call DECLARE_TASK_QUEUE() to define your own task queue, because the system predefines three:
  
tq_timer, started by the clock interrupt service routine;
tq_immediate, started before interrupt return and in the schedule() function;
tq_disk, used internally by the memory management module.
Generally, tq_immediate suffices for most asynchronous tasks.
  
The function run_task_queue(task_queue *list) starts all tasks attached to list. It can be called manually, or it can be mounted in the bottom half vector table described above: using run_task_queue() as the function behind bh_base[nr] in effect gives each bottom half an arbitrary number of handlers, and the predefined tq_timer and tq_immediate are attached to TQUEUE_BH and IMMEDIATE_BH respectively (note that TIMER_BH is not used this way, but TQUEUE_BH is also started from do_timer()). This is how the number of bottom halves is effectively expanded. When using the queues this way, you do not call run_task_queue() yourself (which would be inappropriate here); instead you call mark_bh(IMMEDIATE_BH) and let the bottom half mechanism schedule the queue as appropriate.
  
IV. Tasklet
  
From the above we can see that the task queue is based on the bottom half, and in v2.4.x the bottom half is in turn based on the newly introduced tasklet.
  
The main motivation for introducing the tasklet was better SMP support, improving the utilization of multiple CPUs: different tasklets can run on different CPUs at the same time. The comments in its source code list several properties, which come down to one point: a given tasklet runs on only one CPU at a time.
  
struct tasklet_struct
{
	struct tasklet_struct *next; /* queue pointer */
	unsigned long state;         /* tasklet state, operated on by bit; two bits are defined:
	                                TASKLET_STATE_SCHED (bit 0) and TASKLET_STATE_RUN (bit 1) */
	atomic_t count;              /* reference count; nonzero usually means disabled */
	void (*func)(unsigned long); /* function pointer */
	unsigned long data;          /* argument: func(data) */
};
  
Comparing this structure with tq_struct, the tasklet adds a little functionality, mainly the state field, which is used for synchronization between CPUs.
  
Using a tasklet is quite simple:
  
Define a handler void my_tasklet_func(unsigned long);
DECLARE_TASKLET(my_tasklet, my_tasklet_func, data); /*
defines a tasklet structure my_tasklet associated with my_tasklet_func(data); the counterpart of DECLARE_TASK_QUEUE() */
tasklet_schedule(&my_tasklet); /*
registers my_tasklet and lets the system run it when appropriate; the counterpart of queue_task(&my_task, &tq_immediate) plus mark_bh(IMMEDIATE_BH) */
As you can see, the tasklet is easier to use than the task queue, and it supports SMP better, so it is the recommended asynchronous execution mechanism in the 2.4.x kernels. Besides the calls above, the tasklet mechanism provides further interfaces:
  
DECLARE_TASKLET_DISABLED(name, function, data); /*
like DECLARE_TASKLET(), but the tasklet will not run even if scheduled; it must first be enabled */
tasklet_enable(struct tasklet_struct *);  /* enable the tasklet */
tasklet_disable(struct tasklet_struct *); /* disable the tasklet; as long as it is not currently running, execution is postponed until it is enabled */
tasklet_init(struct tasklet_struct *, void (*func)(unsigned long), unsigned long); /* like DECLARE_TASKLET() */
tasklet_kill(struct tasklet_struct *); /* clear the tasklet's scheduled bit, i.e. forbid scheduling it; the tasklet itself is not freed */
  
As mentioned above, in the 2.4.x kernel the bottom half is implemented on top of the tasklet mechanism: every bottom half action runs as a tasklet, although these tasklets are kept separate from the ones we use directly.
  
In 2.4.x the system defines two vector tables of tasklet queues, one entry per CPU (the table size is the maximum number of CPUs the system supports; in the SMP configuration of 2.4.2 it is 32), each entry organized as a tasklet linked list:
  
struct tasklet_head tasklet_vec[NR_CPUS] __cacheline_aligned;
struct tasklet_head tasklet_hi_vec[NR_CPUS] __cacheline_aligned;
  
In addition, for the 32 bottom halves the system defines 32 corresponding tasklet structures:
  
struct tasklet_struct bh_task_vec[32];
When the soft interrupt subsystem is initialized, the action of each of these tasklets is set to bh_action(nr), and bh_action(nr) calls the function behind bh_base[nr]; this is how the bottom half semantics are preserved. mark_bh(nr) is implemented as a call to tasklet_hi_schedule(bh_task_vec + nr), which links bh_task_vec[nr] onto the tasklet_hi_vec[cpu] chain (where cpu is the number of the CPU that issued the bottom half request) and then raises the HI_SOFTIRQ soft interrupt, so that the bottom half runs from the HI_SOFTIRQ handler.
  
Similarly, tasklet_schedule(&my_tasklet) links my_tasklet onto tasklet_vec[cpu] and raises TASKLET_SOFTIRQ, so it executes from the TASKLET_SOFTIRQ handler. HI_SOFTIRQ and TASKLET_SOFTIRQ are terms of the softirq subsystem, which is introduced in the next section.
  
V. softirq
  
From the discussion so far: the task queue is based on the bottom half, the bottom half is based on the tasklet, and the tasklet is based on the softirq.
  
In a sense the softirq returns to the earliest bottom half idea, but on top of this "bottom half" mechanism it builds a larger and more complex soft interrupt subsystem.
  
struct softirq_action
{
	void (*action)(struct softirq_action *);
	void *data;
};
static struct softirq_action softirq_vec[32] __cacheline_aligned;
Compared with bh_base[], softirq_vec[] merely adds an argument to the action() function; at execution time, however, the softirq is subject to fewer restrictions than the bottom half.
  
As with the bottom half, the system predefines several softirq_vec[] slots, represented by the following enumeration:
  
enum
{
	HI_SOFTIRQ = 0,
	NET_TX_SOFTIRQ,
	NET_RX_SOFTIRQ,
	TASKLET_SOFTIRQ
};
HI_SOFTIRQ is used to implement the bottom half, TASKLET_SOFTIRQ serves the public tasklets, and NET_TX_SOFTIRQ and NET_RX_SOFTIRQ serve packet transmission and reception in the network subsystem. When the soft interrupt subsystem is initialized (softirq_init()), open_softirq() is called to set up HI_SOFTIRQ and TASKLET_SOFTIRQ:
  
void open_softirq(int nr, void (*action)(struct softirq_action *), void *data)
  
open_softirq() fills in softirq_vec[nr], setting action and data from its arguments. TASKLET_SOFTIRQ is filled with tasklet_action and NULL, and HI_SOFTIRQ with tasklet_hi_action and NULL. These two functions are called from do_softirq() and run the tasklets on the tasklet_vec[cpu] and tasklet_hi_vec[cpu] chains respectively.
  
static inline void __cpu_raise_softirq(int cpu, int nr)
  
This function activates a soft interrupt: it sets bit nr of the given CPU's soft interrupt active mask, which do_softirq() later tests. Both tasklet_schedule() and tasklet_hi_schedule() call it.
  
do_softirq() is invoked at four points: on return from a system call (arch/i386/kernel/entry.S, the ret_from_sys_call label), on return from an exception (arch/i386/kernel/entry.S, the ret_from_exception label), in the scheduler (kernel/sched.c, schedule()), and after hard interrupt handling (kernel/irq.c, do_IRQ()). It traverses softirq_vec[] and starts the pending action()s in order. Note that a soft interrupt service routine is never executed inside a hard interrupt service routine, nor nested inside another soft interrupt service routine; however, soft interrupt service routines can run concurrently on multiple CPUs.

 

Source: http://blog.csdn.net/yuanyufei/archive/2006/06/06/776263.aspx
