Linux kernel tasklet mechanism and work queue


http://blog.jobbole.com/107110/

1. Tasklet mechanism Analysis

The previous article introduced the soft interrupt (softirq) mechanism, so why does the Linux kernel also introduce the tasklet mechanism? The main reason is that the softirq pending flag has only 32 bits, and in general new softirqs are not added arbitrarily; the kernel also does not provide a common interface for registering new softirqs. Second, softirq handlers must be reentrant and have to take many race conditions into account, which demands relatively high programming skill. So the kernel provides the tasklet as a general-purpose mechanism built on top of softirqs.

In fact, every time I write one of these summary articles I want to get all the details completely clear, so they keep getting longer. The benefit is a real understanding of the mechanism; the drawback of so much content is that it is hard to remember. So while explaining the details, I will also summarize the essence. The essential characteristics of a tasklet are: a tasklet cannot sleep; the same tasklet cannot run on two CPUs at the same time; but different tasklets may run simultaneously on different CPUs, so shared data must still be protected.

Main data structures

static DEFINE_PER_CPU(struct tasklet_head, tasklet_vec);

static DEFINE_PER_CPU(struct tasklet_head, tasklet_hi_vec);
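
For reference, the two structures involved here look roughly like the following in kernels of this era (a sketch from memory, not quoted from the article's kernel tree; exact definitions vary by version):

struct tasklet_head
{
	struct tasklet_struct *head;
	struct tasklet_struct **tail;	/* points at the last element's next pointer */
};

struct tasklet_struct
{
	struct tasklet_struct *next;	/* singly linked per-CPU list */
	unsigned long state;		/* TASKLET_STATE_SCHED / TASKLET_STATE_RUN */
	atomic_t count;			/* non-zero means the tasklet is disabled */
	void (*func)(unsigned long);	/* the deferred callback */
	unsigned long data;		/* argument passed to func */
};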

How to use Tasklet

Using a tasklet is simple: you only need to initialize a tasklet_struct and then call tasklet_schedule(); the tasklet mechanism will later execute the func callback that was set at initialization.
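
As an illustration (my own sketch, not from the original article), a minimal driver-style use of the old-style tasklet API might look like this; my_tasklet, my_tasklet_func and my_irq_handler are made-up names:

#include <linux/interrupt.h>

/* Hypothetical example: the callback runs in softirq context and must not sleep. */
static void my_tasklet_func(unsigned long data)
{
	printk(KERN_INFO "tasklet ran, data=%lu\n", data);
}

/* Declare statically ... */
static DECLARE_TASKLET(my_tasklet, my_tasklet_func, 0);
/* ... or initialize at runtime with tasklet_init(&my_tasklet, my_tasklet_func, 0); */

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	/* do the minimal urgent work here, then defer the rest to the tasklet */
	tasklet_schedule(&my_tasklet);
	return IRQ_HANDLED;
}

The tasklet_schedule() entry point itself is shown below.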

static inline void tasklet_schedule(struct tasklet_struct *t)
{
	if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state))
		__tasklet_schedule(t);
}

The processing in tasklet_schedule() is also relatively simple: the tasklet_struct is linked onto the per-CPU tasklet_vec list (or the tasklet_hi_vec list), and the TASKLET_SOFTIRQ (or HI_SOFTIRQ) soft interrupt is raised.

void __tasklet_schedule(struct tasklet_struct *t)
{
	unsigned long flags;

	local_irq_save(flags);
	t->next = NULL;
	*__get_cpu_var(tasklet_vec).tail = t;
	__get_cpu_var(tasklet_vec).tail = &(t->next);
	raise_softirq_irqoff(TASKLET_SOFTIRQ);
	local_irq_restore(flags);
}
EXPORT_SYMBOL(__tasklet_schedule);

void __tasklet_hi_schedule(struct tasklet_struct *t)
{
	unsigned long flags;

	local_irq_save(flags);
	t->next = NULL;
	*__get_cpu_var(tasklet_hi_vec).tail = t;
	__get_cpu_var(tasklet_hi_vec).tail = &(t->next);
	raise_softirq_irqoff(HI_SOFTIRQ);
	local_irq_restore(flags);
}
EXPORT_SYMBOL(__tasklet_hi_schedule);


Tasklet Execution Process


tasklet_action() is executed when the TASKLET_SOFTIRQ soft interrupt runs. It takes the tasklet_struct entries off the per-CPU tasklet_vec list and executes them one by one. If t->count is non-zero, the tasklet was disabled after it was scheduled, so the tasklet is put back onto the tasklet_vec list and TASKLET_SOFTIRQ is raised again; the tasklet will be executed later, once it has been re-enabled.

static void tasklet_action(struct softirq_action *a)
{
	struct tasklet_struct *list;

	local_irq_disable();
	list = __get_cpu_var(tasklet_vec).head;
	__get_cpu_var(tasklet_vec).head = NULL;
	__get_cpu_var(tasklet_vec).tail = &__get_cpu_var(tasklet_vec).head;
	local_irq_enable();

	while (list) {
		struct tasklet_struct *t = list;

		list = list->next;

		if (tasklet_trylock(t)) {
			if (!atomic_read(&t->count)) {
				if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
					BUG();
				t->func(t->data);
				tasklet_unlock(t);
				continue;
			}
			tasklet_unlock(t);
		}

		local_irq_disable();
		t->next = NULL;
		*__get_cpu_var(tasklet_vec).tail = t;
		__get_cpu_var(tasklet_vec).tail = &(t->next);
		__raise_softirq_irqoff(TASKLET_SOFTIRQ);
		local_irq_enable();
	}
}
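
To illustrate the disable/enable interplay described above (this snippet is mine, not from the article): tasklet_disable() increments t->count, so a tasklet that is already scheduled is deferred by tasklet_action() until the matching tasklet_enable() brings the count back to zero.

tasklet_disable(&my_tasklet);	/* t->count becomes non-zero; pending runs are deferred */
/* ... safely update data shared with the tasklet callback ... */
tasklet_enable(&my_tasklet);	/* count back to zero; a deferred run can now execute */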


2. Linux Work Queues

The tasklet mechanism has been introduced above, so why does the kernel also need the work queue mechanism? My understanding is that the tasklet mechanism has inherent limitations: the callback functions executed as tasklets run in softirq context and are therefore restricted in many ways, for example they cannot sleep. With the work queue mechanism, the functions that need to be processed are called in process context, so sleeping is allowed. However, the latency of a work queue is worse than that of a tasklet; a routine submitted to a work queue may not be invoked for quite some time.

Data structure Description

The first two data structures to explain are workqueue_struct and cpu_workqueue_struct. Creating a work queue first creates a workqueue_struct, and then a cpu_workqueue_struct management structure can be created on each CPU.

struct cpu_workqueue_struct {
	spinlock_t lock;

	struct list_head worklist;
	wait_queue_head_t more_work;
	struct work_struct *current_work;

	struct workqueue_struct *wq;
	struct task_struct *thread;

	int run_depth;		/* Detect run_workqueue() recursion depth */
} ____cacheline_aligned;

/*
 * The externally visible workqueue abstraction is an array of
 * per-CPU workqueues:
 */
struct workqueue_struct {
	struct cpu_workqueue_struct *cpu_wq;
	struct list_head list;
	const char *name;
	int singlethread;
	int freezeable;		/* Freeze threads during suspend */
	int rt;
#ifdef CONFIG_LOCKDEP
	struct lockdep_map lockdep_map;
#endif
};

work_struct represents a unit of work that will be submitted for processing.
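
For reference, in kernels of this era work_struct looks roughly like the following (sketched from memory; the exact definition varies by version):

typedef void (*work_func_t)(struct work_struct *work);

struct work_struct {
	atomic_long_t data;		/* pending flag bits plus a pointer back to the cwq */
	struct list_head entry;		/* links the work into cwq->worklist */
	work_func_t func;		/* the deferred callback, run in process context */
};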

The relationship between the above three data structures is shown in a figure in the original article (not reproduced here).

The purpose of introducing the main data structures is not to pin down every detail of the work queue implementation, but to give a general outline of the architecture; the detailed analysis follows. From the relationships between the main data structures above, the following questions need to be analyzed:

1. How a workqueue is created, including the creation of the events/0 kernel thread

2. How a work_struct is submitted to the work queue

3. How the events/0 kernel thread handles the work submitted to the queue

Creation of a Workqueue

First, memory for the workqueue_struct is allocated, together with the per-CPU cpu_workqueue_struct structures. Each cpu_workqueue_struct is then initialized in the init_cpu_workqueue() function, and create_workqueue_thread() is called to create the kernel thread that services the work queue.

The kernel thread is created in create_workqueue_thread() as follows:

p = kthread_create(worker_thread, cwq, fmt, wq->name, cpu);

Finally, start_workqueue_thread() is called to start the newly created thread.

struct workqueue_struct *__create_workqueue_key(const char *name,
						int singlethread,
						int freezeable,
						int rt,
						struct lock_class_key *key,
						const char *lock_name)
{
	struct workqueue_struct *wq;
	struct cpu_workqueue_struct *cwq;
	int err = 0, cpu;

	wq = kzalloc(sizeof(*wq), GFP_KERNEL);
	if (!wq)
		return NULL;

	wq->cpu_wq = alloc_percpu(struct cpu_workqueue_struct);
	if (!wq->cpu_wq) {
		kfree(wq);
		return NULL;
	}

	wq->name = name;
	lockdep_init_map(&wq->lockdep_map, lock_name, key, 0);
	wq->singlethread = singlethread;
	wq->freezeable = freezeable;
	wq->rt = rt;
	INIT_LIST_HEAD(&wq->list);

	if (singlethread) {
		cwq = init_cpu_workqueue(wq, singlethread_cpu);
		err = create_workqueue_thread(cwq, singlethread_cpu);
		start_workqueue_thread(cwq, -1);
	} else {
		cpu_maps_update_begin();
		/*
		 * We must place this wq on list even if the code below fails.
		 * cpu_down(cpu) can remove cpu from cpu_populated_map before
		 * destroy_workqueue() takes the lock, in this case we leak
		 * cwq[cpu]->thread.
		 */
		spin_lock(&workqueue_lock);
		list_add(&wq->list, &workqueues);
		spin_unlock(&workqueue_lock);
		/*
		 * We must initialize cwqs for each possible cpu even if we
		 * are going to call destroy_workqueue() finally. Otherwise
		 * cpu_up() can hit the uninitialized cwq once we drop the
		 * lock.
		 */
		for_each_possible_cpu(cpu) {
			cwq = init_cpu_workqueue(wq, cpu);
			if (err || !cpu_online(cpu))
				continue;
			err = create_workqueue_thread(cwq, cpu);
			start_workqueue_thread(cwq, cpu);
		}
		cpu_maps_update_done();
	}

	if (err) {
		destroy_workqueue(wq);
		wq = NULL;
	}
	return wq;
}
EXPORT_SYMBOL_GPL(__create_workqueue_key);
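
As an illustration (my own sketch, not from the article), a driver that wants its own work queue rather than the global one might use the old API of this era like this; my_wq and my_work are made-up names:

static struct workqueue_struct *my_wq;

/* in module init: one worker thread per CPU (or create_singlethread_workqueue()) */
my_wq = create_workqueue("my_wq");
if (!my_wq)
	return -ENOMEM;

/* later: submit work to this queue instead of the global keventd queue */
queue_work(my_wq, &my_work);

/* in module exit: wait for pending work and tear the queue down */
flush_workqueue(my_wq);
destroy_workqueue(my_wq);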


Adding work to the work queue

The schedule_work() function adds a work item to the global work queue. This interface is relatively simple, essentially just a few queue operations, so it is not described further.

/**
 * schedule_work - put work task in global workqueue
 * @work: job to be done
 *
 * This puts a job in the kernel-global workqueue.
 */
int schedule_work(struct work_struct *work)
{
	return queue_work(keventd_wq, work);
}
EXPORT_SYMBOL(schedule_work);
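
For illustration (my own sketch, not from the article), submitting work to the global queue on a kernel of this era looks roughly like this; my_work and my_work_handler are made-up names:

#include <linux/workqueue.h>

/* Hypothetical example: the handler runs in process context, so it may sleep. */
static void my_work_handler(struct work_struct *work)
{
	printk(KERN_INFO "deferred work running\n");
}

static DECLARE_WORK(my_work, my_work_handler);
/* or at runtime: INIT_WORK(&my_work, my_work_handler); */

/* from interrupt or other atomic context: */
schedule_work(&my_work);	/* queues onto keventd_wq, handled by the events/N threads */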


Processing in the work queue kernel thread

When the work queue was created, one or more kernel threads were created to handle the work placed on the queue. The main body of such a kernel thread is worker_thread(). Note that it sets its own nice value with set_user_nice(current, -5), but it remains an ordinary schedulable thread, so when the system is heavily loaded the execution of work queue items may be noticeably delayed.

The execution flow of the function is really simple: take a work item from the queue, remove it from the list, clear its pending flag, and execute the callback function set in the work_struct.

static int worker_thread(void *__cwq)
{
	struct cpu_workqueue_struct *cwq = __cwq;
	DEFINE_WAIT(wait);

	if (cwq->wq->freezeable)
		set_freezable();

	set_user_nice(current, -5);

	for (;;) {
		prepare_to_wait(&cwq->more_work, &wait, TASK_INTERRUPTIBLE);
		if (!freezing(current) &&
		    !kthread_should_stop() &&
		    list_empty(&cwq->worklist))
			schedule();
		finish_wait(&cwq->more_work, &wait);

		try_to_freeze();

		if (kthread_should_stop())
			break;

		run_workqueue(cwq);
	}

	return 0;
}

static void run_workqueue(struct cpu_workqueue_struct *cwq)
{
	spin_lock_irq(&cwq->lock);
	cwq->run_depth++;
	if (cwq->run_depth > 3) {
		/* morton gets to eat his hat */
		printk("%s: recursion depth exceeded: %d\n",
			__func__, cwq->run_depth);
		dump_stack();
	}
	while (!list_empty(&cwq->worklist)) {
		struct work_struct *work = list_entry(cwq->worklist.next,
						struct work_struct, entry);
		work_func_t f = work->func;
#ifdef CONFIG_LOCKDEP
		/*
		 * It is permissible to free the struct work_struct
		 * from inside the function that is called from it,
		 * this we need to take into account for lockdep too.
		 * To avoid bogus "held lock freed" warnings as well
		 * as problems when looking into work->lockdep_map,
		 * make a copy and use that here.
		 */
		struct lockdep_map lockdep_map = work->lockdep_map;
#endif

		cwq->current_work = work;
		list_del_init(cwq->worklist.next);
		spin_unlock_irq(&cwq->lock);

		BUG_ON(get_wq_data(work) != cwq);
		work_clear_pending(work);
		lock_map_acquire(&cwq->wq->lockdep_map);
		lock_map_acquire(&lockdep_map);
		f(work);
		lock_map_release(&lockdep_map);
		lock_map_release(&cwq->wq->lockdep_map);

		if (unlikely(in_atomic() || lockdep_depth(current) > 0)) {
			printk(KERN_ERR "BUG: workqueue leaked lock or atomic: "
					"%s/0x%08x/%d\n",
					current->comm, preempt_count(),
					task_pid_nr(current));
			printk(KERN_ERR "    last function: ");
			print_symbol("%s\n", (unsigned long)f);
			debug_show_held_locks(current);
			dump_stack();
		}

		spin_lock_irq(&cwq->lock);
		cwq->current_work = NULL;
	}
	cwq->run_depth--;
	spin_unlock_irq(&cwq->lock);
}


