Linux Kernel Practice: Work Queues


The work queue is another mechanism for deferring work, and it differs from tasklets. Work placed on a work queue is handed to a kernel thread for execution, which means the bottom half runs in process context. Code executed through a work queue therefore enjoys all the benefits of process context; most importantly, work queue code is allowed to be rescheduled and even to sleep.

So when should you use a work queue and when a tasklet? If the deferred task needs to sleep, choose the work queue; if it does not need to sleep, choose a tasklet. More generally, if you need a schedulable entity to perform your bottom-half processing, you need the work queue. It is the only bottom-half mechanism that runs in process context, and the only one that can sleep. That makes it very useful when you need to allocate large amounts of memory, acquire a semaphore, or perform blocking I/O. If you do not need a kernel thread to defer your work, consider a tasklet instead.

1. Work, work queues, and worker threads

As mentioned earlier, we call a deferred task a work item; its data structure is struct work_struct. Work items are organized into a queue, the work queue, whose data structure is struct workqueue_struct. Worker threads are responsible for executing the work on a work queue. The system's default worker threads are the events threads, and you can also create your own worker threads.

2. Data structure representing the work

The work_struct structure is defined in <linux/workqueue.h> as follows:

struct work_struct {
        atomic_long_t data;
#define WORK_STRUCT_PENDING 0           /* T if work item pending execution */
#define WORK_STRUCT_FLAG_MASK (3UL)
#define WORK_STRUCT_WQ_DATA_MASK (~WORK_STRUCT_FLAG_MASK)
        struct list_head entry;
        work_func_t func;
#ifdef CONFIG_LOCKDEP
        struct lockdep_map lockdep_map;
#endif
};

These structures are linked together into a list. When a worker thread is woken up, it executes all of the work on its list; as each item completes, the corresponding work_struct object is removed from the list. When the list is empty, the thread goes back to sleep.
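
Conceptually, a worker thread's main loop looks something like the simplified sketch below. This is purely illustrative and not the actual kernel source (the real loop in kernel/workqueue.c also deals with locking, per-CPU queues, and thread freezing), and the names worklist and more_work are made up for the example.

#include <linux/list.h>
#include <linux/wait.h>
#include <linux/workqueue.h>

/* Hypothetical per-thread state, for illustration only. */
static LIST_HEAD(worklist);
static DECLARE_WAIT_QUEUE_HEAD(more_work);

/* Simplified sketch of a worker thread's main loop. */
static int worker_thread_sketch(void *unused)
{
        for (;;) {
                /* Sleep until some work is queued on our list. */
                if (wait_event_interruptible(more_work, !list_empty(&worklist)))
                        continue;       /* interrupted by a signal, retry */

                /* Run every pending work item, removing each as we go. */
                while (!list_empty(&worklist)) {
                        struct work_struct *work =
                                list_first_entry(&worklist, struct work_struct, entry);

                        list_del_init(&work->entry);    /* take it off the list */
                        work->func(work);               /* run the handler */
                }
        }
        return 0;
}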

3. Create a deferred job

To use a work queue, the first thing to do is create some work to defer. A work_struct can be created statically at compile time with DECLARE_WORK:

DECLARE_WORK(name, func);

This statically creates a work_struct named name whose handler function is func. (On kernels before 2.6.20 the macro also took a third void *data argument that was passed to the handler; on current kernels the handler instead receives a pointer to the work_struct itself.)

Alternatively, a work item can be set up at run time through a pointer:

INIT_WORK(work, func);

This dynamically initializes the work_struct pointed to by work, again with func as its handler.
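
As a small illustration of both forms (the names my_handler, my_static_work, my_dynamic_work, and setup_work are made up for this sketch):

#include <linux/kernel.h>
#include <linux/workqueue.h>

static void my_handler(struct work_struct *work)
{
        pr_info("my_handler ran\n");
}

/* Statically, at compile time */
static DECLARE_WORK(my_static_work, my_handler);

/* Or dynamically, at run time */
static struct work_struct my_dynamic_work;

static void setup_work(void)
{
        INIT_WORK(&my_dynamic_work, my_handler);
}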

4. Functions to be executed in the work queue

The prototype of the function executed by the work queue is:

void work_handler(struct work_struct *work);

This function is run by a worker thread and therefore executes in process context. By default interrupts are enabled and no locks are held. If necessary, the function is allowed to sleep. Note that even though it runs in process context, it cannot access user space, because kernel threads have no user-space memory mapping. The kernel can only access user space when it runs on behalf of a user-space process (for example during a system call), since only then is that process's user-space memory mapped.
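
Because the handler only receives a pointer to the work_struct, the usual pattern is to embed the work_struct in your own structure and recover the enclosing object with container_of. A minimal sketch, using a hypothetical struct my_device:

#include <linux/kernel.h>
#include <linux/workqueue.h>

/* Hypothetical structure that embeds a work item */
struct my_device {
        struct work_struct my_work;
        int value;
};

static void my_work_handler(struct work_struct *work)
{
        /* Recover the enclosing my_device from the embedded work_struct */
        struct my_device *dev = container_of(work, struct my_device, my_work);

        pr_info("my_device value = %d\n", dev->value);

        /* Process context: it is legal to sleep here, e.g. to allocate
         * memory with GFP_KERNEL, take a mutex, or do blocking I/O. */
}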

5. Scheduling the work

Now that the work is created, we can schedule it. To submit a work item's handler to the default events worker threads, simply call:

schedule_work(&work);

The work is scheduled immediately and runs as soon as the events worker thread on the current processor wakes up.

Sometimes you do not want the work to run right away, but only after some delay. In that case you can schedule it to run at a later time:

schedule_delayed_work(&dwork, delay);

Here the work will not run until at least the number of clock ticks given by delay has elapsed. (On current kernels the first argument is a struct delayed_work, set up with DECLARE_DELAYED_WORK or INIT_DELAYED_WORK, which pairs a work_struct with a timer; older kernels passed a plain work_struct.)
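
A minimal sketch of both calls (the names are made up; note again that delayed work on current kernels uses struct delayed_work, declared with DECLARE_DELAYED_WORK):

#include <linux/kernel.h>
#include <linux/jiffies.h>
#include <linux/workqueue.h>

static void my_handler(struct work_struct *work)
{
        pr_info("work ran\n");
}

static DECLARE_WORK(my_work, my_handler);
static DECLARE_DELAYED_WORK(my_dwork, my_handler);

static void kick_off_work(void)
{
        /* Runs as soon as the events worker thread on this CPU wakes up */
        schedule_work(&my_work);

        /* Runs no sooner than roughly two seconds (2 * HZ ticks) from now */
        schedule_delayed_work(&my_dwork, 2 * HZ);
}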

The content above is excerpted from: http://blog.csdn.net/zyhorse2010/article/details/6455026

6. Simple application of the work queue

The workqueue mechanism provides a system default workqueue, keventd_wq, which is created by Linux at initialization time. For simple uses, you can just initialize a work_struct object and schedule it on that queue directly.

When the user calls one of the workqueue initialization interfaces, create_workqueue or create_singlethread_workqueue, the kernel allocates a workqueue_struct object for the user and links it onto a global list of workqueues. Linux then allocates one cpu_workqueue_struct object per CPU for that workqueue, and each cpu_workqueue_struct has its own task list. Next, Linux creates a kernel thread for each cpu_workqueue_struct object; these kernel daemons process the tasks on their respective lists. At that point the initialization interface has finished setting up the workqueue and returns a pointer to it.

During workqueue initialization the kernel also has to start these kernel threads. The job of a registered kernel thread is fairly simple: it scans the task list in its cpu_workqueue_struct, takes a valid task from it, and executes it. If the task list is empty, the kernel daemon sleeps on the wait queue in the cpu_workqueue_struct until someone wakes it up to process the list.

Once the workqueue is initialized, the context in which tasks run is in place, but there are no tasks to execute yet. The user therefore defines a concrete work_struct object, adds it to the task list, and Linux wakes the daemon to handle it.
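
A minimal sketch of that flow with our own queue (names are illustrative; create_singlethread_workqueue could be used instead when a single worker thread is enough):

#include <linux/kernel.h>
#include <linux/workqueue.h>

static void my_handler(struct work_struct *work)
{
        pr_info("work on private queue ran\n");
}

static DECLARE_WORK(my_work, my_handler);
static struct workqueue_struct *my_wq;

static int start_my_queue(void)
{
        /* create_workqueue() spawns one worker thread per CPU;
         * create_singlethread_workqueue("my_wq") would spawn just one. */
        my_wq = create_workqueue("my_wq");
        if (!my_wq)
                return -ENOMEM;

        /* Queue the work on our own queue instead of the default keventd_wq */
        queue_work(my_wq, &my_work);
        return 0;
}

static void stop_my_queue(void)
{
        /* Wait for any pending work, then release the queue and its threads */
        flush_workqueue(my_wq);
        destroy_workqueue(my_wq);
}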

The above is excerpted from: http://hi.baidu.com/%CD%EA%C3%C0%CB%C4%C4%EA/blog/item/412b8833ca91b2e61b4cff5b.html

7. Additions and experiments

To use the kernel's built-in default queue, we only need to initialize the work and then call schedule_work to add it to the system default workqueue, keventd_wq, and have it dispatched. To use a work queue we create ourselves, we first call create_workqueue to create the queue, then initialize the work, and finally call queue_work to add the work to the queue we created and have it scheduled.

The following example demonstrates both approaches:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/slab.h>

MODULE_AUTHOR("Mike Feng");
MODULE_LICENSE("GPL");

/* Test data structure: a work item embedded in our own data */
struct my_data {
        struct work_struct my_work;
        int value;
};

static struct workqueue_struct *wq;

/* Allocate and initialize our test data */
static struct my_data *init_data(void)
{
        struct my_data *md;

        md = kmalloc(sizeof(struct my_data), GFP_KERNEL);
        if (!md)
                return NULL;
        md->value = 1;
        return md;
}

/* Work queue handler function */
static void work_func(struct work_struct *work)
{
        struct my_data *md = container_of(work, struct my_data, my_work);

        printk("<2>The value of my_data is: %d\n", md->value);
}

static int __init work_init(void)
{
        struct my_data *md = init_data();
        struct my_data *md2 = init_data();

        if (!md || !md2)
                return -ENOMEM;

        md->value = 10;
        md2->value = 20;

        /* First way: use the system default workqueue (keventd_wq)
         * and dispatch the work directly */
        INIT_WORK(&md->my_work, work_func);
        schedule_work(&md->my_work);

        /* Second way: create our own work queue and add the work to it
         * (the kernel then schedules it for execution) */
        wq = create_workqueue("test");
        INIT_WORK(&md2->my_work, work_func);
        queue_work(wq, &md2->my_work);

        return 0;
}

static void __exit work_exit(void)
{
        /* Destroy the work queue we created */
        destroy_workqueue(wq);
}

module_init(work_init);
module_exit(work_exit);

Experimental results: when the module is loaded, the messages printed by work_func for both work items appear in the kernel log.
