Analysis of time programming and its implementation in Linux (III): the work of the Linux kernel


Introduction

A working time system requires the collaboration of hardware, the operating system, and application software. In the previous part we saw that most time functions rely on kernel system calls, with glibc acting only as a thin forwarding layer. So we have to drill down into the kernel code for more detail.

The kernel itself also depends on the clock system for its normal operation. Linux is a typical time-sharing system: CPU time is divided into time slices, which is the basis of multitasking. The Linux kernel relies on the tick, a periodic clock interrupt, to implement time-sharing.

To meet the requirements of both applications and the kernel itself, the kernel time system must provide three basic functions:

Provide the system tick interrupt (to drive the scheduler and implement time-sharing)

Maintain the system time

Maintain software timers

At the time of writing, the current Linux kernel version is 3.8. Its time system is fairly complex, and the complexity comes from several directions:

First, Linux must support different hardware architectures and clock circuits. Linux is a general-purpose operating system, and supporting this diversity of platforms means the time system must contain a variety of hardware-specific handling and driver code.

Second, early Linux clock implementations used a low-resolution clock framework (millisecond level). As hardware improved and software demands grew, there were more and more calls to raise clock resolution to the nanosecond level. After several years of effort, it became clear that high-resolution clocks could not be grafted gracefully onto the early low-resolution architecture. In the end, the kernel uses two separate code paths for high-resolution and low-resolution clocks, which increases the complexity of the code.

Finally, power-management requirements further increase the complexity of the time system. Linux is increasingly used in embedded devices, where power consumption matters more and more. When the system is idle, the CPU enters a power-saving mode, but a constant periodic tick would repeatedly interrupt the CPU's sleep state. The time system had to change to meet this need: when the system has no task to run, the tick is stopped, and it is restarted only when a task needs to execute.

These points together account for the complexity of the kernel's time system. But the Linux kernel did not start out this complicated, so let us begin at the beginning.

The early Linux time system

Before Linux 2.6.16, the kernel supported only low-resolution clocks, and implemented all time-related functionality around the tick. The tick is a periodically triggered interrupt, generally provided by the PIT (Programmable Interval Timer), firing roughly every 10 ms (HZ=100), so its resolution is very low. How does the kernel implement the three basic functions on this simple architecture?

The first major function: providing the tick interrupt.

Taking x86 as an example, at boot the system initializes a device that can generate periodic interrupts (such as the PIT), and configures the corresponding IRQ and its interrupt handling routine. Once the hardware device is initialized, it begins to interrupt periodically: that is the tick. Simple and clear. It is worth emphasizing that here the tick is a real interrupt generated directly by hardware; this changes in the current kernel implementation, as we will describe in part four.
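As an illustration of the setup above: the classic PIT divides its fixed 1.193182 MHz input clock by a programmable 16-bit divisor to produce the tick rate. The snippet below is a minimal sketch, not kernel code; the `outb` sequence for programming channel 0 is shown only as comments, since port I/O cannot run in ordinary userspace.

```c
#include <assert.h>

#define PIT_FREQ 1193182u  /* PIT input clock, in Hz */

/* The kernel picks the divisor that yields HZ interrupts per second. */
static inline unsigned int pit_divisor(unsigned int hz)
{
    return PIT_FREQ / hz;
}

/* On real hardware the kernel would then program PIT channel 0 in
 * rate-generator mode, roughly:
 *
 *   outb(0x34, 0x43);            // channel 0, lobyte/hibyte, mode 2
 *   outb(div & 0xff, 0x40);      // divisor low byte
 *   outb(div >> 8,   0x40);      // divisor high byte
 *
 * After this, the PIT raises IRQ0 every 1/HZ seconds: the tick.
 */
```

For HZ=100 the divisor is 1193182/100 = 11931, giving a tick period of about 10 ms.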

The second major function: maintaining the system time.

The RTC (real-time clock) has its own battery, so it keeps time even when the machine is powered off. When the Linux system initializes, it reads the RTC to obtain the current time.

Reading the RTC is an architecture-specific operation; for x86 machines it is defined in arch/x86/kernel/time.c. You can see that the function that finally does the work is mach_get_cmos_time(), which reads the RTC CMOS chip directly to get the current time. As mentioned earlier, the date and time stored in the RTC chip can generally be read directly through I/O operations. After obtaining the value stored in the RTC, the kernel calls mktime() to convert it to a time value relative to the Epoch (January 1, 1970). Linux will not read the hardware RTC again until the next reboot.
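The conversion performed by mktime() is a compact day-counting formula. The sketch below follows the historical kernel implementation in kernel/time.c, which treats February as the last month of the year so that the leap day falls at the end:

```c
#include <assert.h>

/* Convert a broken-down UTC date/time (as read from the RTC) into
 * seconds since the Epoch (1970-01-01 00:00:00 UTC). */
unsigned long mktime(unsigned int year, unsigned int mon,
                     unsigned int day, unsigned int hour,
                     unsigned int min, unsigned int sec)
{
    /* Shift Jan/Feb to months 11/12 of the previous year, so the
     * leap day (Feb 29) is the last day of the counting year. */
    if (0 >= (int)(mon -= 2)) {
        mon += 12;
        year -= 1;
    }
    return ((((unsigned long)
              (year / 4 - year / 100 + year / 400 +
               367 * mon / 12 + day) +
              year * 365 - 719499   /* days from year 0 to 1970 */
             ) * 24 + hour
            ) * 60 + min
           ) * 60 + sec;
}
```

For example, mktime(1970, 1, 1, 0, 0, 0) yields 0, and mktime(2000, 1, 1, 0, 0, 0) yields 946684800.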

Although the kernel could read the RTC every time the current time is requested, that would be an I/O operation with poor performance. Instead, right after obtaining the boot time, the Linux system starts the tick interrupt. From then on, in each clock interrupt handler, Linux updates the current time value and stores it in the global variable xtime. For example, with a clock interrupt period of 10 ms, each interrupt adds 10 ms to xtime.

When an application needs the current time through a time-related system call, the kernel only has to read xtime from memory and return it. This is how the Linux kernel provides the second major function, maintaining the system time.
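The two halves of this scheme — the tick-driven update and the cheap read path — can be sketched in a few lines. This is a simplified userspace model, not the kernel's actual code; the struct and function names here are illustrative.

```c
#include <assert.h>

/* Simplified model of the pre-2.6.16 scheme: a global wall-clock
 * value, advanced by a fixed amount on every tick. */
struct timeval_sim { long tv_sec; long tv_usec; };

static struct timeval_sim xtime;

#define TICK_USEC 10000L  /* 10 ms per tick at HZ=100 */

/* Called from the tick interrupt handler: advance xtime. */
void do_timer_tick(void)
{
    xtime.tv_usec += TICK_USEC;
    if (xtime.tv_usec >= 1000000L) {
        xtime.tv_usec -= 1000000L;
        xtime.tv_sec++;
    }
}

/* gettimeofday() reduces to a memory read of xtime. */
struct timeval_sim sys_gettimeofday(void)
{
    return xtime;
}
```

Note the trade-off: reads are nearly free, but the returned time only moves forward in 10 ms steps, which is exactly the low resolution discussed above.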

The third major function: software timers.

Hardware circuits that provide programmable timer interrupts have a drawback: the number of timers that can be armed at the same time is limited. But a modern Linux system needs a great many timers. The kernel itself uses them, for example in driver operations that must wait a given amount of time, and the TCP protocol stack requires a large number of timers; the kernel must also provide system calls such as setitimer and the POSIX timers. The number of timers needed far exceeds what the hardware can provide, so the kernel must rely on software timers.

A simple software timer can be implemented with a linked list. To add a new timer, simply append a new element to a global list. On each tick interrupt, traverse the list and fire every expired timer. But this approach does not scale: as the number of timers grows, the cost of traversing the list grows linearly. If the list is kept sorted, the tick handler no longer needs to traverse it; checking the head of the list is O(1). But then creating a new timer becomes O(n), because inserting an element into a sorted list is O(n). These schemes are workable but not scalable. Before Linux was widely deployed on servers, systems did not have many timers, so the list-based implementation was adequate.
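The sorted-list variant looks roughly like this. It is a minimal sketch (the names `ltimer`, `list_add_timer`, and `list_run_timers` are invented for illustration, not kernel APIs); insertion walks the list in O(n), while the per-tick expiry check only ever looks at the head.

```c
#include <assert.h>
#include <stddef.h>

/* A timer keyed by its absolute expiry tick. */
struct ltimer {
    unsigned long expires;        /* tick count at which to fire */
    void (*fn)(void *);           /* callback */
    void *arg;
    struct ltimer *next;
};

static struct ltimer *timer_list; /* kept sorted by expires */

/* O(n): walk to the insertion point that keeps the list sorted. */
void list_add_timer(struct ltimer *t)
{
    struct ltimer **p = &timer_list;
    while (*p && (*p)->expires <= t->expires)
        p = &(*p)->next;
    t->next = *p;
    *p = t;
}

/* Called on each tick. O(1) per expired timer: since the list is
 * sorted, only the head can be due. */
void list_run_timers(unsigned long now)
{
    while (timer_list && timer_list->expires <= now) {
        struct ltimer *t = timer_list;
        timer_list = t->next;
        t->fn(t->arg);
    }
}

/* Demo callback that just counts how many timers have fired. */
static int list_fired;
static void list_demo_cb(void *arg) { (void)arg; list_fired++; }
```

The O(n) insertion cost is precisely what becomes unacceptable once a server hosts tens of thousands of connection timers, which motivates the timing wheel described next.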

But as Linux began to serve as a server operating system supporting network applications, the number of timers needed grew dramatically. Some TCP implementations require two timers per connection, and multimedia applications need timers as well; at that point, the timer count reaches a level where scalability must be considered.

The three timer operations — adding (add_timer()), deleting (del_timer()), and expiry processing (in the tick interrupt) — have a significant impact on timer precision and latency, and timer precision and latency in turn matter greatly to applications. For example, if add_timer() latency is too high, a high-speed TCP protocol stack cannot be implemented. To this end, starting with Linux 2.4, the kernel uses an algorithm called the timing wheel to ensure that add_timer(), del_timer(), and expiry processing all have O(1) time complexity.

A brief introduction to the time wheel algorithm

The time wheel is a software timer algorithm proposed by the computer scientist George Varghese. It was first implemented in NetBSD (an operating system), replacing that kernel's earlier callout timer implementation.

The original time wheel is shown in the figure below.

Figure 1. The original time wheel

The wheel in the figure above has 8 buckets, each representing a point in time in the future. If we define each bucket to represent one second, then bucket[1] represents the point in time "one second from now" and bucket[8] represents "eight seconds from now". Each bucket holds a list of timers, and all timers on that list fire at the point in time the bucket represents. The pointer in the middle is called the cursor. Such a time wheel works as follows:
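The mechanism just described can be sketched in a few lines of C. This is a toy model of the original single-level wheel, with invented names (`wtimer`, `wheel_add`, `wheel_tick`); both inserting a timer and advancing the cursor are O(1), since the bucket index is computed directly from the delay.

```c
#include <assert.h>
#include <stddef.h>

#define WHEEL_SIZE 8

struct wtimer {
    void (*fn)(void *);     /* callback to run on expiry */
    void *arg;
    struct wtimer *next;
};

static struct wtimer *bucket[WHEEL_SIZE]; /* one timer list per slot */
static int cursor;                        /* current position */

/* O(1) insert: the slot is computed directly from the delay
 * (delay must be less than WHEEL_SIZE ticks in this toy model). */
void wheel_add(struct wtimer *t, int ticks)
{
    int idx = (cursor + ticks) % WHEEL_SIZE;
    t->next = bucket[idx];
    bucket[idx] = t;
}

/* Called once per tick: advance the cursor one slot and fire every
 * timer waiting in the bucket it lands on. */
void wheel_tick(void)
{
    struct wtimer *t;
    cursor = (cursor + 1) % WHEEL_SIZE;
    t = bucket[cursor];
    bucket[cursor] = NULL;
    while (t) {
        struct wtimer *next = t->next;
        t->fn(t->arg);
        t = next;
    }
}

/* Demo callback counting expirations. */
static int wheel_fired;
static void wheel_demo_cb(void *arg) { (void)arg; wheel_fired++; }
```

A timer added with a delay of 3 ticks sits untouched for two ticks and fires on the third, when the cursor reaches its bucket. The obvious limitation — delays longer than the wheel can hold — is what the kernel's hierarchical (cascading) variant addresses.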
