Time delay
How to measure time differences and compare times
How to get the current time
How to defer an operation for a specified period of time
How to schedule an asynchronous function to execute after a specified time
Measure time differences
The kernel tracks the flow of time through timer interrupts.
Timer interrupts are generated by the system's timing hardware at regular intervals; the interval is set by the kernel according to HZ, an architecture-dependent constant.
Each time a timer interrupt occurs, an internal kernel counter is incremented by one.
The counter is initialized to 0 at system boot, so its value is the number of clock ticks since the system last booted.
Driver developers normally access this counter through the jiffies variable.
When comparing a cached jiffies value with the current value, you should use the following macros (they handle counter wrap-around correctly):
#include <linux/jiffies.h>
int time_after(unsigned long a, unsigned long b);
int time_before(unsigned long a, unsigned long b);
int time_after_eq(unsigned long a, unsigned long b);
int time_before_eq(unsigned long a, unsigned long b);
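A minimal sketch (the one-second limit and the printk message are assumptions for illustration) of caching jiffies and comparing it safely with these macros:
#include <linux/jiffies.h>
#include <linux/kernel.h>
static void check_elapsed(void)
{
    unsigned long timeout = jiffies + HZ;    /* one second from now */
    /* ... perform some work here ... */
    if (time_after(jiffies, timeout))
        printk(KERN_INFO "operation took longer than one second\n");
}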
User space represents time with struct timeval and struct timespec.
The kernel provides the following helper functions to convert between jiffies values and these structures:
#include <linux/time.h>
unsigned long timespec_to_jiffies(struct timespec *value);
void jiffies_to_timespec(unsigned long jiffies, struct timespec *value);
unsigned long timeval_to_jiffies(struct timeval *value);
void jiffies_to_timeval(unsigned long jiffies, struct timeval *value);
#include <linux/jiffies.h>
u64 get_jiffies_64(void);
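A minimal sketch (the 1.5 second value is an assumption) converting in both directions with these helpers:
#include <linux/time.h>
#include <linux/jiffies.h>
static void convert_example(void)
{
    struct timespec delay = { .tv_sec = 1, .tv_nsec = 500000000 };   /* 1.5 s */
    unsigned long j = timespec_to_jiffies(&delay);    /* number of jiffies in 1.5 s */
    struct timespec back;
    jiffies_to_timespec(j, &back);                    /* and back again, rounded to ticks */
}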
The actual clock frequency is almost completely invisible to user space.
When a user-space program includes param.h, the HZ macro always expands to 100.
A user who wants to know the timer interrupt frequency must derive it from /proc/interrupts.
For example, the kernel's real HZ value can be obtained by dividing the timer interrupt count from /proc/interrupts by the system uptime reported in /proc/uptime.
The best-known counter register is the TSC (timestamp counter), a 64-bit register that counts CPU clock cycles and can be read from both kernel space and user space.
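The kernel wraps such cycle counters in the architecture-independent get_cycles function (declared in <asm/timex.h>); a minimal sketch of timing a code region with it (the measured region is a placeholder):
#include <asm/timex.h>
static void measure_cycles(void)
{
    cycles_t start, end;
    start = get_cycles();
    /* ... code being measured ... */
    end = get_cycles();
    /* on platforms without a cycle counter, get_cycles() always returns 0 */
}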
Get current time
The kernel usually represents the current time with the jiffies value, which measures the time elapsed since the system last booted.
Its lifetime is therefore limited to the system's uptime.
A driver can use the current value of jiffies to calculate time intervals between events.
Handling real-world (wall-clock) time is usually best left to user space, where the C library provides better support.
The kernel does provide a function that turns a wall-clock time into a jiffies value:
#include <linux/time.h>
unsigned long mktime (unsigned int year, unsigned int mon, unsigned int day, unsigned int hour, unsigned int min, unsigned int sec);
Dealing directly with wall-clock time in a driver usually means that policy is being implemented.
<linux/time.h> exports the do_gettimeofday function, which fills a struct timeval (pointed to by its argument) with second and microsecond values.
These are the same values used by the gettimeofday system call.
The prototype of do_gettimeofday:
#include <linux/time.h>
void do_gettimeofday(struct timeval *tv);
The current time is also available (though with less precision) from the xtime variable, of type struct timespec.
The kernel provides the auxiliary function current_kernel_time for accessing it:
#include <linux/time.h>
struct timespec current_kernel_time(void);
current_kernel_time returns the time with nanosecond representation, but its resolution is only that of a clock tick;
do_gettimeofday, by contrast, reports the current time with resolution finer than a clock tick.
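A minimal sketch reading the current time with both interfaces (the printk formatting is only for illustration):
#include <linux/time.h>
#include <linux/kernel.h>
static void show_times(void)
{
    struct timeval tv;
    struct timespec ts;

    do_gettimeofday(&tv);            /* seconds + microseconds */
    ts = current_kernel_time();      /* seconds + nanoseconds, tick resolution */
    printk(KERN_INFO "gettimeofday: %ld.%06ld  kernel_time: %ld.%09ld\n",
           (long)tv.tv_sec, (long)tv.tv_usec,
           (long)ts.tv_sec, ts.tv_nsec);
}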
Deferred execution
One approach is to actively release the CPU when it is not needed, which can be done by calling the schedule function.
Timeout
A better way to implement a delay is to let the kernel do the work for us.
There are two jiffies-based timeout constructs; which one to use depends on whether the driver is also waiting for some other event.
If the driver uses a wait queue to wait for some other event, but we also want to be sure it runs within a given period of time, it can use wait_event_timeout or wait_event_interruptible_timeout:
#include <linux/wait.h>
long wait_event_timeout(wait_queue_head_t q, condition, long timeout);
long wait_event_interruptible_timeout(wait_queue_head_t q, condition, long timeout);
These functions sleep on the given wait queue, but return when the timeout expires.
The timeout value represents the number of jiffies to wait, not an absolute time value.
When wait_event_timeout or wait_event_interruptible_timeout is used in a hardware driver, execution resumes in one of two ways:
either someone calls wake_up on the wait queue, or the timeout expires.
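A minimal sketch, assuming a driver-private wait queue my_queue, a flag data_ready set elsewhere (for example by an interrupt handler), and a one-second limit:
#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

static DECLARE_WAIT_QUEUE_HEAD(my_queue);
static int data_ready;                 /* set to 1 by the interrupt handler */

static int wait_for_data(void)
{
    long ret = wait_event_interruptible_timeout(my_queue, data_ready, HZ);
    if (ret == 0)
        return -ETIMEDOUT;             /* one second passed, condition still false */
    if (ret < 0)
        return ret;                    /* interrupted by a signal (-ERESTARTSYS) */
    return 0;                          /* condition became true with time to spare */
}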
For the case where a delay is needed without waiting for any particular event, the kernel provides the schedule_timeout function, which avoids declaring and using a superfluous wait queue head:
#include <linux/sched.h>
signed long schedule_timeout(signed long timeout);
timeout is the number of jiffies to delay; the return value is normally 0, unless the function returns before the given timeout expires (for example, in response to a signal).
schedule_timeout requires the caller to first set the current process state.
Typical calling code looks like this:
set_current_state(TASK_INTERRUPTIBLE);
schedule_timeout(delay);
wait_event_interruptible_timeout relies internally on schedule_timeout.
The scheduler will not run this process again until the timeout expires, at which point its state is set back to TASK_RUNNING.
To implement a non-interruptible delay, use TASK_UNINTERRUPTIBLE instead.
If you forget to set the process state, a call to schedule_timeout behaves like a plain call to schedule: the timer the kernel sets up for us is never actually used.
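A minimal sketch of the idiom above, assuming a delay of roughly 100 ms is wanted (msecs_to_jiffies converts milliseconds to jiffies):
#include <linux/sched.h>
#include <linux/jiffies.h>
static void sleep_about_100ms(void)
{
    set_current_state(TASK_INTERRUPTIBLE);      /* must be set before schedule_timeout */
    schedule_timeout(msecs_to_jiffies(100));    /* returns after ~100 ms, or earlier on a signal */
}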
Short delay
The kernel functions ndelay, udelay, and mdelay are well suited to short delays; they delay execution for the specified number of nanoseconds, microseconds, and milliseconds, respectively.
#include <linux/delay.h>
void ndelay(unsigned long nsecs);
void udelay(unsigned long usecs);
void mdelay(unsigned long msecs);
The implementation of these functions is in <asm/delay.h> and is architecture-specific.
All three delay functions are busy-waiting functions.
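A minimal sketch of a typical udelay use (the register, the reset sequence, and the 10 microsecond requirement are assumptions about a hypothetical device):
#include <linux/delay.h>
#include <linux/io.h>
static void hypothetical_reset(void __iomem *reg)
{
    iowrite8(0x01, reg);     /* assert the (hypothetical) reset bit */
    udelay(10);              /* busy-wait: assume the datasheet requires >= 10 us */
    iowrite8(0x00, reg);     /* release reset */
}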
Kernel timers
Kernel timers are used to schedule a function to run at some point in the future (based on the clock tick); they can be used to accomplish a variety of tasks.
The kernel itself also uses timers in many places, including in the implementation of schedule_timeout.
A kernel timer is a data structure that tells the kernel to execute a user-defined function, with a user-defined argument, at a user-defined time. The implementation lives in <linux/timer.h> and kernel/timer.c.
The scheduled functions almost certainly do not run while the process that registered them is executing; they run asynchronously.
Kernel timers are run as the result of a "software interrupt" (softirq).
Code running outside of process context (for example, in interrupt context) must observe the following rules:
Access to user space is not allowed;
The current pointer is not meaningful;
No sleeping or scheduling may be performed: atomic code may not call schedule or wait_event, nor any function that could sleep (for example, kmalloc(..., GFP_KERNEL) or taking a semaphore).
The function in_interrupt() can be used to tell whether code is running in interrupt context; it returns nonzero in both hardware and software interrupt context.
The function in_atomic() likewise returns nonzero whenever scheduling is not allowed.
Situations where scheduling is not allowed include hardware and software interrupt context and any time a spinlock is held.
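A minimal sketch (the helper is hypothetical; the headers shown are the usual locations) of using these tests to choose a safe allocation flag:
#include <linux/hardirq.h>
#include <linux/slab.h>
static void *hypothetical_alloc(size_t size)
{
    /* GFP_KERNEL may sleep, which is forbidden in interrupt/atomic context */
    if (in_interrupt() || in_atomic())
        return kmalloc(size, GFP_ATOMIC);
    return kmalloc(size, GFP_KERNEL);
}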
Another important feature of kernel timers is that a timer function can re-register itself to run again at a later time. This is possible because each timer_list structure is unlinked from the list of active timers before its function runs, so it can immediately be linked into another list.
On SMP systems, the timer function is executed by the same CPU that registered it, in order to obtain cache locality wherever possible. A timer that re-registers itself therefore always runs on the same CPU.
Timers are also a potential source of race conditions: any data structure accessed by the timer function must be protected against concurrent access.
Timing API
The kernel provides a set of functions for the driver to declare, register, and delete kernel timers.
#include <linux/timer.h>
struct timer_list{
unsigned long expires;
void (*function) (unsigned long);
unsigned long data;
};
void init_timer(struct timer_list *timer);
struct timer_list TIMER_INITIALIZER(_function, _expires, _data);
void add_timer(struct timer_list *timer);
int del_timer(struct timer_list *timer);
The expires field holds the jiffies value at which the timer is expected to fire; when that value is reached, the function function is called with data as its argument.
If you need to pass multiple items of data in this argument, you can bundle them into a single structure and pass a pointer to it, cast to unsigned long.
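A minimal sketch of the classic timer API described above (the device structure, the one-second period, and the re-arming behavior are assumptions for illustration):
#include <linux/timer.h>
#include <linux/jiffies.h>

struct my_dev {                                     /* hypothetical per-device data */
    struct timer_list timer;
    int ticks;
};

static void my_timer_fn(unsigned long data)
{
    struct my_dev *dev = (struct my_dev *)data;     /* unbundle the argument */

    dev->ticks++;
    dev->timer.expires = jiffies + HZ;              /* re-register: fire again in 1 s */
    add_timer(&dev->timer);
}

static void my_timer_start(struct my_dev *dev)
{
    init_timer(&dev->timer);
    dev->timer.function = my_timer_fn;
    dev->timer.data = (unsigned long)dev;
    dev->timer.expires = jiffies + HZ;              /* first expiration one second from now */
    add_timer(&dev->timer);
}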
Implementation of the kernel timer
Whenever kernel code registers a timer, the operation is ultimately performed by internal_add_timer (defined in kernel/timer.c), which adds the new timer to a doubly linked list within a "cascading table" associated with the current CPU.
The cascading table works as follows: if the timer expires within the next 0 to 255 jiffies, it is added to one of 256 lists dedicated to short-range timers.
When __run_timers is fired, it executes all pending timers for the current tick. If jiffies is currently a multiple of 256, the function also rehashes one of the next-level lists of timers into the 256 short-range lists, and may cascade the other levels as well, based on the bit splitting of jiffies just described.
The __run_timers function runs in atomic context.
Timers expire at just the right time, even on a non-preemptible kernel and even when the CPU is busy in kernel space.
Even while a system call busy-waits with the processor locked, kernel timers still work well.
Kernel timers are, however, subject to jitter caused by hardware interrupts, other timers, and other asynchronous tasks.
They are therefore not suitable for hard real-time work in industrial environments; such tasks need some kind of real-time kernel extension.
Tasklets (the "small task" mechanism)
This mechanism is used extensively in interrupt management.
Similarities with kernel timers:
They run at interrupt time, always run on the same CPU on which they were scheduled, and receive an unsigned long argument.
They also execute atomically, in software interrupt context.
Differences from kernel timers:
A tasklet cannot be asked to execute at a specific time.
Scheduling a tasklet simply asks the kernel to execute the given function at some later time of its own choosing.
Interrupt handlers must deal with the hardware as quickly as possible, while most data management can safely be deferred to a later time.
Software interrupts (softirqs) are a kernel mechanism for performing such asynchronous tasks with hardware interrupts enabled.
A tasklet exists as a data structure that must be initialized before use.
Initialization can be done either by calling a specific function or by declaring the structure with one of the following macros:
#include <linux/interrupt.h>
struct tasklet_struct{
void (*func) (unsigned long);
unsigned long data;
};
void tasklet_init(struct tasklet_struct *t, void (*func)(unsigned long), unsigned long data);
DECLARE_TASKLET(name, func, data);
DECLARE_TASKLET_DISABLED(name, func, data);
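A minimal sketch showing both initialization styles (the handler, the data value 42, and the setup function are assumptions):
#include <linux/interrupt.h>
#include <linux/kernel.h>

static void my_tasklet_fn(unsigned long data)
{
    /* runs later in softirq context: must be atomic, must not sleep */
    printk(KERN_INFO "tasklet ran with data %lu\n", data);
}

/* compile-time declaration ... */
DECLARE_TASKLET(my_tasklet, my_tasklet_fn, 42);

/* ... or run-time initialization of an embedded structure */
static struct tasklet_struct my_other_tasklet;

static void my_tasklet_setup(void)
{
    tasklet_init(&my_other_tasklet, my_tasklet_fn, 42);
}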
Features of tasklets:
A tasklet can be disabled and re-enabled later; it will not execute until it has been enabled as many times as it has been disabled;
Like timers, a tasklet can re-register itself;
A tasklet can be scheduled to execute at normal priority or high priority; high-priority tasklets are always executed first;
If the system is not heavily loaded, tasklets run immediately, but never later than the next timer tick;
A tasklet may run concurrently with other tasklets, but it is strictly serialized with respect to itself: the same tasklet never runs simultaneously on more than one processor;
A tasklet always runs on the same CPU that scheduled it.
The kernel provides a set of ksoftirqd kernel threads, one per CPU, to run software interrupt handlers such as the tasklet_action function.
Tasklet-related kernel interfaces:
/* Disables the specified tasklet.
The tasklet can still be scheduled with tasklet_schedule, but its execution is deferred until it is enabled again.
If the tasklet is currently running, this function busy-waits until it exits;
after calling tasklet_disable you can be sure that the tasklet is not running anywhere in the system. */
void tasklet_disable(struct tasklet_struct *t);
/* Disables the specified tasklet, but does not wait for any currently running instance to exit.
When it returns, the tasklet is disabled and will not be scheduled again until re-enabled,
but it may still be running on another CPU. */
void tasklet_disable_nosync(struct tasklet_struct *t);
/* Enables a previously disabled tasklet.
If the tasklet has already been scheduled, it will run soon.
Each call to tasklet_enable must match a call to tasklet_disable,
since the kernel keeps a "disable count" for each tasklet. */
void tasklet_enable(struct tasklet_struct *t);
/* Schedules the specified tasklet for execution.
If the same tasklet is scheduled again before it has had a chance to run, it runs only once.
If it is scheduled while it is running, it runs again after it completes; this guarantees that events occurring while other events are being processed are not missed. It also allows a tasklet to reschedule itself. */
void tasklet_schedule(struct tasklet_struct *t);
/* Schedules the specified tasklet for execution at high priority.
The software interrupt handler processes high-priority tasklets before other softirq tasks. */
void tasklet_hi_schedule(struct tasklet_struct *t);
/* Ensures that the specified tasklet will not be scheduled to run again;
usually called when a device is being closed or a module removed.
If the tasklet is currently scheduled to run, the function waits until it has executed.
If the tasklet reschedules itself, you must prevent it from rescheduling itself before calling tasklet_kill, much as with del_timer_sync. */
void tasklet_kill(struct tasklet_struct *t);
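A minimal sketch of the typical lifecycle (the interrupt handler and cleanup function are assumptions; my_tasklet is the tasklet declared in the earlier sketch):
#include <linux/interrupt.h>

static irqreturn_t my_isr(int irq, void *dev_id)
{
    /* do the minimum at hardware interrupt time, defer the rest */
    tasklet_schedule(&my_tasklet);
    return IRQ_HANDLED;
}

static void my_cleanup(void)
{
    /* on device close / module unload: make sure the tasklet cannot run again */
    tasklet_kill(&my_tasklet);
}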
The tasklet implementation is in kernel/softirq.c.
There are two tasklet lists (normal priority and high priority), declared as per-CPU data structures, using a CPU-affinity mechanism similar to that of kernel timers.
Work queue
Work queues are similar to tasklets: they allow kernel code to request that a function be called at some later time.
Differences:
Tasklets run in software interrupt context, so all tasklet code must be atomic. Work queue functions run in the context of a special kernel process, so they may sleep.
A tasklet always runs on the processor that originally submitted it; for work queues this is only the default behavior.
Kernel code can request that execution of a work queue function be delayed for an explicit interval.
The key difference is that tasklets execute quickly, for a short period of time, and atomically, while work queue functions may have higher latency and need not be atomic.
A work queue has the type struct workqueue_struct, defined in <linux/workqueue.h>.
Before using a work queue, you must explicitly create it:
struct workqueue_struct *create_workqueue (const char *name);
struct workqueue_struct *create_singlethread_workqueue (const char *name);
Each work queue has one or more dedicated processes ("kernel threads") that run the functions submitted to it.
If you use create_workqueue, the kernel creates a dedicated thread for the queue on each processor in the system; create_singlethread_workqueue creates a single thread instead.
To submit a task to a work queue, you need to fill in a work_struct structure, which can be done at compile time with the following macro:
DECLARE_WORK(name, void (*function)(void *), void *data);
If you need to set up the work_struct structure at runtime, use the following two macros:
INIT_WORK(struct work_struct *work, void (*function)(void *), void *data);
PREPARE_WORK(struct work_struct *work, void (*function)(void *), void *data);
To submit work to a work queue, use:
int queue_work (struct workqueue_struct *queue, struct work_struct *work);
int queue_delayed_work (struct workqueue_struct *queue, struct work_struct *work, unsigned long delay);
The return value is nonzero if the work was successfully added to the queue; a return value of 0 means that the given work_struct was already waiting in the queue and was not added again.
At some point in the future, the work function will be called with the given data value.
The work function runs in the context of a kernel thread, which has no user space of its own, so it cannot access user space.
To cancel a pending work queue entry, call:
int cancel_delayed_work (struct work_struct *work);
If cancel_delayed_work returns 0, the entry may have already started running; to be absolutely sure the work function is not running anywhere in the system after that, call the following function:
void flush_workqueue(struct workqueue_struct *queue);
After flush_workqueue returns, no work function submitted before the call is running anywhere in the system.
When you are done with a work queue, release its resources with:
void destroy_workqueue(struct workqueue_struct *queue);
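A minimal sketch of the full lifecycle of a private work queue (the queue name, the work function, and the setup/teardown functions are assumptions; the work-function prototype follows the void (*)(void *) form shown above):
#include <linux/workqueue.h>
#include <linux/errno.h>

static struct workqueue_struct *my_wq;
static struct work_struct my_work;

static void my_work_fn(void *data)
{
    /* runs in the context of a dedicated kernel thread: sleeping is allowed */
}

static int wq_setup(void)
{
    my_wq = create_singlethread_workqueue("my_wq");
    if (!my_wq)
        return -ENOMEM;
    INIT_WORK(&my_work, my_work_fn, NULL);
    queue_work(my_wq, &my_work);                   /* run as soon as possible */
    /* queue_delayed_work(my_wq, &my_work, HZ);       alternatively, about 1 s from now */
    return 0;
}

static void wq_teardown(void)
{
    cancel_delayed_work(&my_work);     /* remove it if still pending */
    flush_workqueue(my_wq);            /* wait for anything already running */
    destroy_workqueue(my_wq);
}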
Shared queues
The device driver can use the shared default work queue provided by the kernel.
Initialize the work_struct structure:
static struct work_struct jiq_work;
INIT_WORK(&jiq_work, jiq_print_wq, &jiq_data);
Then submit it to the shared queue with:
int schedule_work(struct work_struct *work);
If the user is reading the delayed device, the work function resubmits itself to the shared queue in delayed mode, using schedule_delayed_work:
int schedule_delayed_work (struct work_struct *work, unsigned long delay);
To cancel a work entry submitted to the shared queue, use cancel_delayed_work as before; flushing the shared work queue, however, requires a different function:
void flush_scheduled_work (void);
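A minimal sketch using the shared queue (the names follow the jiq example above; the one-second resubmission delay and the start/stop functions are assumptions):
#include <linux/workqueue.h>
#include <linux/jiffies.h>

static struct work_struct jiq_work;

static void jiq_print_wq(void *data)
{
    /* resubmit ourselves in delayed mode, roughly one second from now */
    schedule_delayed_work(&jiq_work, HZ);
}

static void jiq_start(void)
{
    INIT_WORK(&jiq_work, jiq_print_wq, NULL);
    schedule_work(&jiq_work);          /* submit to the kernel's shared queue */
}

static void jiq_stop(void)
{
    cancel_delayed_work(&jiq_work);    /* remove a pending delayed entry */
    flush_scheduled_work();            /* wait for anything still running */
}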
Time, delays, and deferred work (Linux Device Drivers)