In the previous section we talked about softirqs and tasklets. Next up is the third deferral mechanism: the work queue.
The work queue is different from the other mechanisms discussed so far: it defers the work and hands it to a kernel thread for execution, so the work always runs in process context. The most important advantage of deferring work through a work queue is therefore that the code is allowed to reschedule and even sleep. Compared with the two earlier mechanisms, the choice is easy: as I said, the first two are not allowed to sleep, while this one is, simple as that. This means work queues are very useful when you need to allocate a lot of memory, acquire a semaphore, or perform blocking I/O (to be fair, that is not my insight, it comes straight from the book).
The work queue subsystem is an interface for creating kernel threads; the threads it creates execute work queued by other parts of the kernel, and are called worker threads. Work queues let your driver create a dedicated worker thread to handle its deferred work. The subsystem also provides default worker threads to handle this work, so in its most basic form a work queue simply hands work that needs deferred execution to a generic, shared thread. The default worker threads are named events/n, one per processor, where n is the processor number. Unless a driver or subsystem absolutely must have a kernel thread of its own, it is best to use the default threads.
1. The data structure representing worker threads. A worker thread is represented by the following structure:
struct workqueue_struct {
    struct cpu_workqueue_struct cpu_wq[NR_CPUS];
};
Each item in the array corresponds to one CPU in the system. Next, let's look at the core data structure, cpu_workqueue_struct, in kernel/workqueue.c:
struct cpu_workqueue_struct {
    spinlock_t lock;              /* lock protecting this structure */
    atomic_t nr_queued;           /* number of work items queued */
    struct list_head worklist;    /* list of work */
    wait_queue_head_t more_work;
    wait_queue_head_t work_done;
    struct workqueue_struct *wq;  /* associated workqueue_struct */
    task_t *thread;               /* associated worker thread */
    struct completion exit;
};
2. The data structure representing the work. All worker threads are implemented as ordinary kernel threads, and they all run the worker_thread() function. After initialization, this function enters an endless loop and goes to sleep. When work is inserted into the queue, the thread is woken up to process it; when no work is left, it goes back to sleep. A piece of work is represented by the work_struct structure, defined in <linux/workqueue.h>:
struct work_struct {
    unsigned long pending;     /* is this work pending? */
    struct list_head entry;    /* link in the list of all work */
    void (*func)(void *);      /* handler function */
    void *data;                /* argument passed to the handler */
    void *wq_data;             /* used internally */
    struct timer_list timer;   /* timer used by delayed work queues */
};
When a worker thread is woken up, it executes all the work on its list. As each piece of work completes, the corresponding work_struct object is removed from the list, and when the list is empty the thread goes back to sleep. The core of the worker_thread() function looks like this:
for (;;) {
    set_task_state(current, TASK_INTERRUPTIBLE);
    add_wait_queue(&cwq->more_work, &wait);
    if (list_empty(&cwq->worklist))
        schedule();
    else
        set_task_state(current, TASK_RUNNING);
    remove_wait_queue(&cwq->more_work, &wait);
    if (!list_empty(&cwq->worklist))
        run_workqueue(cwq);
}
Looking at the code above: first the thread marks itself as sleeping and adds itself to the wait queue. If the work list is empty, it calls schedule() and goes to sleep. If the list has entries, the thread marks itself runnable again and removes itself from the wait queue. Then it calls run_workqueue() to execute the deferred work. So the real action is in run_workqueue(), which is what actually carries out the deferred work:
while (!list_empty(&cwq->worklist)) {
    struct work_struct *work = list_entry(cwq->worklist.next,
                                          struct work_struct, entry);
    void (*f)(void *) = work->func;
    void *data = work->data;
    list_del_init(cwq->worklist.next);
    clear_bit(0, &work->pending);
    f(data);
}
This function loops over every pending entry on the list and executes the func member of each work_struct node:
1. While the list is not empty, take the next node.
2. Get the handler function func and its argument data.
3. Remove the node from the list and clear the pending bit.
4. Call the handler.
5. Repeat.
As the saying goes, all talk and no practice is just for show. Now let's see how to actually use work queues:
1. Creating the work. First, you can statically create the work you want to defer at compile time:
DECLARE_WORK(name, void (*func)(void *), void *data);
Of course, if you prefer, you can use a pointer and create the work dynamically at run time (a sketch of both styles follows the two prototypes):
INIT_WORK(struct work_struct *work, void (*func)(void *), void *data);
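To make this concrete, here is a minimal sketch of both creation styles, using the three-argument interface shown above (which matches the 2.6-era API this article describes); the names my_work_handler, my_work, my_dyn_work and my_dev are made up for illustration:

#include <linux/workqueue.h>

static void my_work_handler(void *data);   /* defined in step 2 below */
static struct my_device *my_dev;           /* hypothetical driver state */

/* statically, at compile time */
static DECLARE_WORK(my_work, my_work_handler, NULL);

/* or dynamically, at run time */
static struct work_struct my_dyn_work;

static void my_setup(void)
{
    INIT_WORK(&my_dyn_work, my_work_handler, my_dev);
}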
2. The work queue handler. The handler is executed by a worker thread, so it runs in process context. By default, interrupts are enabled and no locks are held. If needed, the handler may sleep. Note that although it runs in process context, it cannot access user space, because kernel threads have no user-space memory mapping. The prototype is as follows:
void work_handler(void *data);
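For example, a handler matching this prototype might look like the following sketch (the names and the message are invented; note that it may sleep, but must not touch user-space memory):

static void my_work_handler(void *data)
{
    struct my_device *dev = data;   /* hypothetical argument set up earlier */

    /* we are in process context, so blocking is fine;
     * msleep() comes from <linux/delay.h> */
    msleep(10);                     /* e.g. wait briefly for hardware */

    printk(KERN_INFO "deferred work ran, dev=%p\n", dev);
}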
3. Scheduling the work. Once the preparation is done, you can schedule it. To have the work run on the default worker threads, just call schedule_work(&work); the work is scheduled immediately and will run as soon as the worker thread on that processor wakes up. If you do not want it to run right away, but only after some delay, use schedule_delayed_work(&work, delay), where delay is the number of timer ticks to wait.
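Continuing the sketch from above (my_work is the statically declared work item from the earlier hypothetical example):

/* run it as soon as the events/n thread on this CPU wakes up */
schedule_work(&my_work);

/* or run it no sooner than roughly five seconds from now */
schedule_delayed_work(&my_work, 5 * HZ);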
4. Flushing the work. Queued work runs when the worker thread next wakes up. Sometimes you must make sure certain work has completed before moving on. For this reason, the kernel provides a function to flush the default work queue: void flush_scheduled_work(void); it does not return until every entry already in the queue has executed. While waiting for the pending work to finish, it sleeps, so it can only be called from process context. Note that this function does not cancel delayed work that has not yet started; to cancel pending delayed work associated with a work_struct, call int cancel_delayed_work(struct work_struct *work).
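A typical teardown sequence, again only a sketch built on the earlier hypothetical names:

/* wait until everything already queued on the default queue has run */
flush_scheduled_work();

/* for delayed work: cancel it if it has not started yet; if cancelling
 * fails, it may already be running, so fall back to waiting for it */
if (!cancel_delayed_work(&my_work))
    flush_scheduled_work();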
5. Creating a new work queue. Earlier I said it is best to use the default threads, but what if you insist on using threads of your own? In that case you create a new work queue and its worker threads. The method is simple; use the following function: struct workqueue_struct *create_workqueue(const char *name); where name is the name of the new kernel threads. This creates all the worker threads (one per processor in the system) and does all the setup needed before work can be processed. Once the queue exists, call the following functions:
int queue_work(struct workqueue_struct *wq, struct work_struct *work);
int queue_delayed_work(struct workqueue_struct *wq, struct work_struct *work,
                       unsigned long delay);
These two work just like schedule_work() and schedule_delayed_work(); the only difference is that they operate on the given work queue rather than on the default events queue.
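Putting it together, a driver might manage its own queue roughly like this sketch (my_wq, my_work, my_dyn_work and the name "mydrv" are all invented for illustration):

static struct workqueue_struct *my_wq;

static int my_init(void)
{
    my_wq = create_workqueue("mydrv");   /* spawns one mydrv/n thread per CPU */
    if (!my_wq)
        return -ENOMEM;

    queue_work(my_wq, &my_work);                  /* run as soon as possible */
    queue_delayed_work(my_wq, &my_dyn_work, HZ);  /* run about a second later */
    return 0;
}

static void my_exit(void)
{
    flush_workqueue(my_wq);      /* wait for anything still queued */
    destroy_workqueue(my_wq);    /* tear down the queue and its threads */
}

Here flush_workqueue() is the queue-specific counterpart of flush_scheduled_work(), and destroy_workqueue() releases the queue and its worker threads when the driver unloads.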
That wraps up work queues. Drawing on the previous article as well, let me compare the three bottom-half mechanisms to help with choosing among them later.
First, tasklets are built on top of softirqs, so the two are similar, while the work queue mechanism is completely different: it is implemented with kernel threads. Softirqs provide the weakest serialization guarantees, which means softirq handlers must take extra care to keep shared data safe, because two or more softirqs of the same type can run simultaneously on different processors. If the code in question is already well threaded and relies entirely on per-processor data, softirqs are a good choice; for timing-critical, high-frequency work they are the fastest. Otherwise tasklets make more sense: the tasklet interface is simpler, and because two tasklets of the same type cannot run at the same time, they are easier to implement correctly. If the deferred task must run in process context, the work queue is the only option; if sleeping is not required, a softirq or tasklet may be more appropriate. The work queue also has the highest overhead, although this is relative, and in most situations it performs well enough. In terms of ease of use, the order is work queue, then tasklet, then softirq. When choosing among the three bottom-half implementations in a driver, two questions matter: first, do you need a schedulable entity to run the deferred work, that is, do you need to sleep? If so, the work queue is the only choice; otherwise tasklets are preferable, and if performance is paramount, softirqs.
Finally, here are the functions for disabling and enabling bottom halves:
Function | Description
void local_bh_disable() | Disables softirq and tasklet processing on the local processor
void local_bh_enable() | Enables softirq and tasklet processing on the local processor
These functions can be nested: only the final (outermost) call to local_bh_enable() actually re-enables bottom halves. They use preempt_count to maintain a per-task counter; when the counter drops to zero, bottom halves can be processed again. Because bottom halves may have been raised while processing was disabled, local_bh_enable() also checks for and runs any pending bottom halves.
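A small sketch of that nesting behaviour (just an illustration of the counting, not taken from any real driver):

local_bh_disable();    /* bottom halves off on this CPU, count = 1    */
local_bh_disable();    /* nested call, still off, count = 2           */

/* ... touch data shared with a softirq or tasklet ... */

local_bh_enable();     /* count = 1, still disabled                   */
local_bh_enable();     /* count = 0, any pending bottom halves run    */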
Okay, that's the end of this lecture. Over these last two posts we touched in passing on some concurrency issues, such as mutually exclusive access to shared data. That is the territory of kernel synchronization, and we will talk about it later.