Linux Tasklet and Workqueue Learning

Interrupt service routines typically run with the interrupt request line disabled, to avoid nesting and to keep interrupt control simple. An interrupt, however, is a random event that can arrive at any time; if interrupts stay disabled for too long, the CPU cannot respond to other interrupt requests promptly and interrupts get lost. The Linux kernel's goal is therefore to handle the interrupt request as quickly as possible and to defer as much of the work as it can. For example, suppose a block of data arrives over the network. When the interrupt controller delivers the interrupt request signal, the Linux kernel simply notes that the data is there and restores the processor to its previous state; the rest of the processing happens later (for example, moving the data into a buffer where the receiving process can find it). The kernel therefore divides interrupt processing into two parts: the top half and the bottom half. The top half (the interrupt service routine) is executed by the kernel immediately, while the bottom half (a set of kernel functions) is left for later processing.

First comes a fast "top half" that handles the request issued by the hardware; it must finish before a new interrupt is generated. In general this part does little work besides moving or transferring data between the device and some memory buffer (even less if your device uses DMA) and determining whether the hardware is in a healthy state.

The bottom half runs with interrupts enabled, while the top half runs with interrupts disabled; this is the main difference between the two.

But exactly when does the kernel execute the bottom half, and how is the bottom half organized? This is the bottom-half mechanism we are going to discuss. It has been reworked several times over the evolution of the kernel: earlier kernels called it the bottom half (BH), and the 2.4 series brought new developments and improvements whose goal was to let bottom halves run in parallel on multiprocessor machines and to make life easier for driver developers. The tasklet mechanism and the work queue mechanism used in the 2.6 kernel are described below.

1.Tasklet

The tasklet here refers to a mechanism for organizing functions whose execution is to be deferred. Its data structure is tasklet_struct; each structure represents one independent tasklet and is defined as follows:

struct tasklet_struct {
    struct tasklet_struct *next;  /* points to the next structure in the list */
    unsigned long state;          /* state of the tasklet */
    atomic_t count;               /* reference counter */
    void (*func)(unsigned long);  /* the function to invoke */
    unsigned long data;           /* argument passed to the function */
};
The func field in the structure is the function whose execution is deferred to the bottom half, and data is its only argument.
The state field takes the value TASKLET_STATE_SCHED or TASKLET_STATE_RUN. TASKLET_STATE_SCHED means the tasklet has been scheduled and is ready to run; TASKLET_STATE_RUN means the tasklet is currently running. TASKLET_STATE_RUN is only used on multiprocessor systems, because a uniprocessor system always knows whether a tasklet is running (it is either the code currently executing, or it is not). The count field is the tasklet's reference counter. If it is non-zero, the tasklet is disabled and is not allowed to execute; only when it is zero is the tasklet enabled, and it can then run once it has been scheduled.

1) Declaring and using tasklets. In most cases, when you want to control an ordinary hardware device, the tasklet mechanism is the best choice for implementing the bottom half.

Tasklets can be created dynamically, are easy to use, and execute fairly quickly.

We can create a tasklet either statically or dynamically. Which one you choose depends on whether you want a direct or an indirect reference to the tasklet.

If you are going to create a tasklet statically (that is, refer to it directly), use one of the following two macros:
DECLARE_TASKLET(name, func, data)
DECLARE_TASKLET_DISABLED(name, func, data)
Both macros statically create a tasklet_struct structure with the given name. When the tasklet is scheduled, the given function func is executed with the argument data. The difference between the two macros is the initial value given to the reference counter: the first sets the counter of the created tasklet to 0, so the tasklet is enabled; the other sets it to 1, so the tasklet starts out disabled. For example:

DECLARE_TASKLET(my_tasklet, my_tasklet_handler, dev);
This line of code is actually equivalent to
struct tasklet_struct my_tasklet = { NULL, 0, ATOMIC_INIT(0), my_tasklet_handler, dev };
This creates a tasklet named my_tasklet whose handler is my_tasklet_handler; the tasklet is enabled, and dev is passed to the handler when it is called.

2) Writing your own tasklet handler. The tasklet handler must have the following function type:
void tasklet_handler(unsigned long data)
Because tasklets cannot sleep, you cannot use semaphores or other blocking functions inside them. Interrupts, however, remain enabled while a tasklet runs.
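As an illustration only (the names below are made up and are not part of the later example; assumes <linux/spinlock.h>), here is a sketch of a handler that protects shared state with a spinlock rather than a semaphore, since it must never sleep:

/* Hypothetical driver state protected by a spinlock; a tasklet may take
 * a spinlock, but must not call down(), mutex_lock() or anything that sleeps. */
static DEFINE_SPINLOCK(mylock);
static unsigned long pending_bytes;

static void my_tasklet_handler(unsigned long data)
{
    unsigned long bytes;

    spin_lock(&mylock);
    bytes = pending_bytes;
    pending_bytes = 0;
    spin_unlock(&mylock);

    printk("tasklet processed %lu bytes\n", bytes);
}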


3) Scheduling your tasklet. Call the tasklet_schedule() function and pass it a pointer to the corresponding tasklet_struct; the tasklet will then be scheduled to run at a suitable time:
tasklet_schedule(&my_tasklet);    /* mark my_tasklet as pending */
Once a tasklet has been scheduled, it runs as soon as it gets the chance. If the same tasklet is scheduled again before it has had a chance to run, it still runs only once.
You can call the tasklet_disable() function to disable a given tasklet. If the tasklet is currently executing, the function waits until it finishes before returning.

Call the tasklet_enable() function to enable a tasklet; you must call it if you want to activate a tasklet created with DECLARE_TASKLET_DISABLED(). For example:
tasklet_disable(&my_tasklet);    /* the tasklet is now disabled and cannot run */
tasklet_enable(&my_tasklet);     /* the tasklet is now enabled */
You can also call the tasklet_kill() function to remove a tasklet from the pending queue. Its argument is a pointer to the tasklet's tasklet_struct. Removing a scheduled tasklet from the pending queue is useful when a tasklet keeps rescheduling itself. The function first waits for the tasklet to finish executing and then removes it.
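For example, here is a minimal sketch (not a complete driver) of how these calls are typically paired around the my_tasklet declared above:

/* temporarily keep the tasklet from running while shared data is changed */
tasklet_disable(&my_tasklet);     /* waits if the handler is currently executing */
/* ... reconfigure data that the handler also touches ... */
tasklet_enable(&my_tasklet);

/* in the module exit path, make sure no instance is pending or running */
tasklet_kill(&my_tasklet);

The complete example module below requests a shared interrupt line and schedules a tasklet from its top half: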

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/interrupt.h>

/* device id used as the dev_id cookie for the shared IRQ line */
static int mydev = 1119;
static int irq;
static char *devname = NULL;

/* module parameters: the IRQ number to attach to and a device name */
module_param(irq, int, 0644);
module_param(devname, charp, 0644);

/* the tasklet that performs the deferred (bottom-half) work */
static struct tasklet_struct mytasklet;

static void mytasklet_handler(unsigned long data)
{
    printk("This is the tasklet handler.\n");
}

static irqreturn_t myirq_handler(int irq, void *dev)
{
    static int count = 0;

    if (count < 10) {
        printk("----------- %d start -----------\n", count + 1);
        printk("The interrupt handler is working.\n");
        printk("Most of the interrupt work will be done by the following tasklet.\n");
        tasklet_init(&mytasklet, mytasklet_handler, 0);
        tasklet_schedule(&mytasklet);
        printk("The top half is done; the bottom half will be processed later.\n");
    }
    count++;
    return IRQ_HANDLED;
}

static int __init mytasklet_init(void)
{
    /* request a shared IRQ line */
    printk("My module is working.\n");
    if (request_irq(irq, myirq_handler, IRQF_SHARED, devname, &mydev) != 0) {
        printk("mytasklet_init: cannot request IRQ %d for %s\n", irq, devname);
        return -1;
    }
    printk("%s requested IRQ %d successfully.\n", devname, irq);
    return 0;
}

static void __exit mytasklet_exit(void)
{
    printk("My module is leaving.\n");
    free_irq(irq, &mydev);
    printk("Freed IRQ %d.\n", irq);
}

module_init(mytasklet_init);
module_exit(mytasklet_exit);
MODULE_LICENSE("GPL");

2.Workqueue

A work queue is another way of deferring work, and it differs from the tasklet discussed above: the work queue hands the deferred work to a kernel thread, which means the bottom half runs in process context. Code executed through a work queue can therefore take advantage of everything process context offers; most importantly, work queues are allowed to be rescheduled and even to sleep.

So when should you use a work queue, and when a tasklet? If the deferred task needs to sleep, choose the work queue; if it does not need to sleep, choose the tasklet. Also, if your bottom-half processing needs to run in an entity that can be rescheduled, you should use the work queue: it is the only bottom-half mechanism that runs in process context, and the only one that can sleep. That makes it very useful when you need to allocate a lot of memory, acquire a semaphore, or perform blocking I/O. If you do not need a kernel thread to defer your work, consider a tasklet instead.

1). Work, work queues, and worker threads

As mentioned earlier, we call the deferred task work, described by the data structure work_struct. Work items are organized into a queue, the work queue, whose data structure is workqueue_struct, and worker threads are responsible for executing the work on the work queue. The system's default worker threads are the events threads, and you can also create your own worker threads.

struct workqueue_struct *create_workqueue (const char *name);
struct workqueue_struct *create_singlethread_workqueue (const char *name);
void destroy_workqueue (struct workqueue_struct *queue);

Each work queue has one or more dedicated processes ("kernel threads") that run the functions submitted to the queue. If you use create_workqueue, you get a work queue with one dedicated thread for each processor in the system. In many cases all those threads simply hurt system performance; if a single thread is sufficient, create the queue with create_singlethread_workqueue instead. When you are finished with a work queue, remove it with destroy_workqueue.
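As a minimal sketch of that lifecycle (my_wq is a made-up driver-private name; assumes <linux/workqueue.h> and the 2.6-era API quoted above):

static struct workqueue_struct *my_wq;

/* at initialization time: one worker thread is enough for this hypothetical driver */
my_wq = create_singlethread_workqueue("my_wq");
if (!my_wq)
    return -ENOMEM;

/* ... queue work to my_wq while the driver is active ... */

/* at teardown: flush any remaining work, then stop the worker thread */
destroy_workqueue(my_wq);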

int queue_work (struct workqueue_struct *queue, struct work_struct *work);
int queue_delayed_work (struct workqueue_struct *queue, struct work_struct *work, unsigned long delay);
These are the functions that submit work to a work queue; queue_delayed_work does not actually queue the work until at least delay clock ticks have elapsed.


int cancel_delayed_work (struct work_struct *work);
void flush_workqueue (struct workqueue_struct *queue);
Use cancel_delayed_work to remove a pending entry from a work queue; flush_workqueue does not return until every work queue entry submitted before the call has finished running anywhere in the system.
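Continuing the sketch above (my_wq and my_work are illustrative names; my_work is assumed to have been initialized as described in the next subsection):

static struct work_struct my_work;   /* assumed already set up with INIT_WORK */

/* submit the work so that it runs roughly one second (HZ ticks) from now */
queue_delayed_work(my_wq, &my_work, HZ);

/* on shutdown: try to cancel it; if it has already started, wait for it to finish */
if (!cancel_delayed_work(&my_work))
    flush_workqueue(my_wq);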

2). Data structure representing the work

Work is represented by the work_struct structure, defined in <linux/workqueue.h>:

struct work_struct {
    unsigned long pending;       /* is this work waiting to be processed? */
    struct list_head entry;      /* list linking all work items */
    void (*func)(void *);        /* the function to execute */
    void *data;                  /* argument passed to the function */
    void *wq_data;               /* for internal use */
    struct timer_list timer;     /* timer used for delayed work queues */
};

These structures are linked into a list. When a worker thread is awakened, it performs all the work on its list, removing the corresponding work_struct object from the list as each item completes. When there are no more objects on the list, it goes back to sleep.

3). Creating deferred work

To use a work queue, the first thing to do is create some work to defer. The structure can be built statically at compile time with DECLARE_WORK:

DECLARE_WORK(name, void (*func)(void *), void *data);

This statically creates a work_struct structure named name; the function to execute is func, and its argument is data.

Similarly, you can create work at run time through a pointer:

INIT_WORK(struct work_struct *work, void (*func)(void *), void *data);

This dynamically initializes the work item that work points to.

4). Functions to be executed in the work queue

The prototype of the function executed by the work queue is:

void work_handler(void *data)

This function is run by a worker thread, so it executes in process context. By default interrupts are enabled and no locks are held. If necessary, the function may sleep. Note that although the function runs in process context, it cannot access user space, because kernel threads have no user-space memory mapping. The kernel can only access user space when it runs on behalf of a user-space process (as during a system call), since only then is the process's user-space memory mapped in.
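For instance, here is a sketch of a work handler (hypothetical names; assumes <linux/slab.h> and <linux/delay.h>) that does exactly what a tasklet must not, blocking on a memory allocation and sleeping:

static void my_work_handler(void *data)
{
    char *buf;

    /* GFP_KERNEL may block while memory is reclaimed; that is fine in process context */
    buf = kmalloc(1024, GFP_KERNEL);
    if (!buf)
        return;

    msleep(10);    /* sleeping is also allowed here */

    /* ... fill and process the buffer ... */
    kfree(buf);
}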

5). Scheduling the work

Now that the work has been created, we can schedule it. To hand a work item's handler function to the system's default events worker threads, simply call:

schedule_work(&work);

The work is scheduled immediately and runs as soon as the events worker thread on the current processor wakes up.

Sometimes you do not want the work to run right away, but only after some delay. In that case, you can schedule it to run at a later time:

schedule_delayed_work(&work, delay);

In this case, the work_struct pointed to by &work will not execute until at least the number of clock ticks given by delay has elapsed.
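For example (a sketch; msecs_to_jiffies() from <linux/jiffies.h> converts milliseconds into ticks):

/* run the handler no sooner than about half a second from now */
schedule_delayed_work(&work, msecs_to_jiffies(500));

The complete example module below mirrors the tasklet example, but hands its bottom half to the default events worker thread: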

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>

/* device id used as the dev_id cookie for the shared IRQ line */
static int mydev = 1119;
static int irq;
static char *devname = NULL;

/* module parameters: the IRQ number to attach to and a device name */
module_param(irq, int, 0644);
module_param(devname, charp, 0644);

/* the work item that performs the deferred (bottom-half) work */
static struct work_struct mywork;

/* old-style (pre-2.6.20) work handler, matching the work_struct shown above */
static void mywork_handler(void *data)
{
    printk("This is the work handler.\n");
}

static irqreturn_t myirq_handler(int irq, void *dev)
{
    static int count = 0;

    if (count < 10) {
        printk("----------- %d start -----------\n", count + 1);
        printk("The interrupt handler is working.\n");
        printk("Most of the interrupt work will be done by the following work item.\n");
        INIT_WORK(&mywork, mywork_handler, NULL);
        schedule_work(&mywork);
        printk("The top half is done; the bottom half will be processed later.\n");
    }
    count++;
    return IRQ_HANDLED;
}

static int __init mywork_init(void)
{
    /* request a shared IRQ line */
    printk("My module is working.\n");
    if (request_irq(irq, myirq_handler, IRQF_SHARED, devname, &mydev) != 0) {
        printk("mywork_init: cannot request IRQ %d for %s\n", irq, devname);
        return -1;
    }
    printk("%s requested IRQ %d successfully.\n", devname, irq);
    return 0;
}

static void __exit mywork_exit(void)
{
    printk("My module is leaving.\n");
    free_irq(irq, &mydev);
    printk("Freed IRQ %d.\n", irq);
}

module_init(mywork_init);
module_exit(mywork_exit);
MODULE_LICENSE("GPL");

3. Differences

Tasklet: runs in atomic context and cannot sleep.
Workqueue: runs in process context and can sleep.

Tasklet: runs in interrupt context, where the OS cannot perform process scheduling.
Workqueue: runs in process context, where the OS can schedule processes.

Tasklet: runs on the same CPU on which it was scheduled.
Workqueue: runs on the same CPU by default.

Tasklet: cannot be given a time at which to run.
Workqueue: can be queued with a delay, after which it becomes eligible to run.

Tasklet: can only be handed to the softirq daemon (ksoftirqd/0).
Workqueue: can be submitted to the default events threads (events/0) or to a custom workqueue.

The different application environments of Tasklet and Workqueue are summarized as follows:

(1) A very small number of tasks that must be dealt with urgently and immediately are placed in the interrupt's top half. While the top half runs, interrupts of the same type are masked; because the amount of work is small, the urgent task is handled quickly and without interruption.

(2) Urgent tasks of moderate size that do not take much time are placed in a tasklet. No interrupts are masked at this point (including interrupts of the same type as the corresponding top half), so top-half handling of urgent work is not affected; at the same time, user processes are not scheduled, which guarantees that the urgent task finishes quickly.

(3) Large, time-consuming, non-urgent tasks (for which the operating system may take the CPU away) are placed in a workqueue. The operating system will still run this work as soon as it reasonably can, but if the workload is large, the operating system also gets the chance to schedule other user processes, so no user process is starved just because this task needs a long run time.

(4) Tasks that may sleep are placed in a workqueue, because sleeping is safe there.
