The Linux 2.6 Kernel in Developers' Eyes

The release of the Linux 2.6 kernel will play an immeasurable role both in consolidating Linux's mainstream position in the server field and in advancing its adoption as a desktop operating system. Developers should examine, from a deeper perspective, the changes the Linux 2.6 kernel brings to application development.

The kernel is the core of an operating system. It provides the most basic functions of the system, including process scheduling, disk management, device management, and network management. The quality of the algorithms and methods used to implement these functions directly affects the performance of the entire operating system, and ultimately the performance of the applications developed on top of it.

Which features of the new kernel need special attention from developers?

Memory

1. Preemptible kernel

Like most other operating systems, Linux kernels earlier than 2.6 do not allow a task that is executing a system call to be preempted. Once a task enters a system call, it keeps the processor until the call completes, no matter how long that takes. This design is simple, but in many cases it leads to noticeable delays while other tasks wait for the system call to finish.

The 2.6 kernel, by contrast, is preemptible. This significantly reduces latency for applications such as interactive user programs and multimedia applications, and it is particularly valuable for real-time and embedded systems.

Kernel preemption gives the Linux 2.6 kernel better responsiveness than the 2.4 kernel for time-sensitive events. To implement it, preemption points are placed in the kernel code; at a preemption point, the scheduler may stop the running process and run a higher-priority process instead. While executing system calls, the 2.6 kernel regularly checks these preemption points to avoid unreasonable delays, and during such a check the scheduler may suspend the current process to run another one.

Not all kernel code paths can be preempted. Critical sections of kernel code can be locked against preemption; such locking ensures that per-CPU data structures and state are always protected from preemption while they are being manipulated.
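As a rough illustration (not taken from the kernel source), the 2.6 primitives preempt_disable() and preempt_enable() can bracket a short critical section that touches per-CPU data. The per-CPU counter below is purely illustrative:

#include <linux/preempt.h>
#include <linux/percpu.h>

/* Illustrative per-CPU counter, not an existing kernel variable. */
static DEFINE_PER_CPU(unsigned long, my_counter);

static void bump_local_counter(void)
{
        preempt_disable();              /* no preemption point may fire in here */
        __get_cpu_var(my_counter)++;    /* safe: the task cannot migrate to another CPU */
        preempt_enable();               /* preemption (and rescheduling) is allowed again */
}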

2. The memory pool mechanism

During its development, the 2.6 kernel introduced memory pools to guarantee that certain memory allocations never fail. The idea is to allocate a pool of memory in advance and hold it in reserve until it is needed.
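A hedged sketch of the 2.6-era mempool interface follows; the slab cache, object type, and pool size are illustrative, and the kmem_cache_t typedef and six-argument kmem_cache_create() reflect the early 2.6 API:

#include <linux/init.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/mempool.h>

struct my_request { int data; };           /* illustrative object type */

static kmem_cache_t *req_cache;            /* 2.6-era typedef for a slab cache */
static mempool_t *req_pool;

static int __init setup_pool(void)
{
        req_cache = kmem_cache_create("my_request", sizeof(struct my_request),
                                      0, 0, NULL, NULL);
        if (!req_cache)
                return -ENOMEM;

        /* Reserve at least 16 objects up front; mempool_alloc() can then
         * fall back to this reserve when a normal allocation would fail. */
        req_pool = mempool_create(16, mempool_alloc_slab, mempool_free_slab,
                                  req_cache);
        if (!req_pool)
                return -ENOMEM;
        return 0;
}

static void use_pool(void)
{
        struct my_request *req = mempool_alloc(req_pool, GFP_KERNEL);
        /* ... use req ... */
        mempool_free(req, req_pool);
}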

3. Improved virtual memory

The 2.6 kernel integrates Rik van Riel's rmap (reverse mapping) technology, which significantly improves virtual memory performance under certain kinds of load.

When the Linux kernel runs with virtual memory, each virtual page corresponds to a physical page in system memory, and address translation between the two is performed through the hardware page tables. This virtual-to-physical mapping is not always one-to-one, however: several virtual pages may point to the same physical page. To release a particular physical page, the kernel previously had to walk the page tables of every process to find all references to that page; only when the reference count reached zero could the page be freed. Under heavy load, this made virtual memory very slow.

The reverse mapping patch solves this problem by adding a data structure called pte_chain to struct page.
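A purely conceptual sketch of the idea is given below; the type and field names are illustrative and do not match the exact kernel definitions:

typedef unsigned long pte_t;      /* stand-in for the architecture's PTE type */

struct pte_chain {
        struct pte_chain *next;   /* next mapping of the same physical page */
        pte_t *ptep;              /* one page-table entry that maps this page */
};

struct page_frame {               /* simplified stand-in for struct page */
        unsigned long flags;
        int count;                /* reference count */
        struct pte_chain *pte_chain;  /* head of the reverse-mapping chain */
};

With such a chain, the kernel can find every page-table entry that maps a given physical page directly, instead of scanning every process's page tables.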

This method has a pointer overhead, however: every struct page in the system must carry an additional pte_chain structure. With 4 KB pages, a system with 256 MB of memory has 64K physical pages, so 64K * sizeof(struct pte_chain) bytes must be allocated just for pte_chain structures, which is a considerable amount.

Even so, rmap delivers a significant performance improvement over the 2.4 kernel's virtual memory system, especially on high-end systems under heavy load.

4. Improved shared memory

For embedded developers, an embedded system is often a device with multiple processors, for example in telecommunications networks or large storage systems. Whether the processors are symmetric or loosely coupled, they generally share memory. In a symmetric multiprocessing design, all processors have equal access to memory, and the decisive factor limiting memory usage is the efficiency of the processes.

The Linux 2.6 kernel offers a different approach for multiprocessor systems: NUMA (Non-Uniform Memory Access). In this model, memory and processors are still interconnected, but from the point of view of each processor some memory is "closer" and some is "farther away". When memory contention arises, the "closer" processor has priority in using the nearest memory. The 2.6 kernel provides a set of functions that describe the topological relationship between memory and processors, and the scheduler can use this information to allocate local memory for a task. This reduces the bottleneck caused by memory contention and increases throughput.
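For instance, kernel code can ask for memory on the node of the CPU it is currently running on. numa_node_id() and alloc_pages_node() are the relevant 2.6 helpers; the wrapper function itself is illustrative:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/topology.h>

static struct page *grab_local_page(void)
{
        int node = numa_node_id();      /* NUMA node of the CPU we are running on */

        /* Prefer memory that is "closer" to this processor; the allocator
         * may still fall back to other nodes if the local node is full. */
        return alloc_pages_node(node, GFP_KERNEL, 0);   /* order 0: one page */
}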

Drivers

1. Improved interrupt handling

The interrupt handling code in the 2.6 kernel has undergone many internal changes, most of which do not affect ordinary driver developers. Some important changes do, however. In the 2.6 kernel, an interrupt handler returns IRQ_HANDLED if it actually handled an interrupt from its device, and IRQ_NONE otherwise. This helps the kernel's IRQ layer identify clearly which driver is handling a particular interrupt.

If interrupt requests keep arriving and no registered handler claims them, the kernel will ignore further interrupts from that device.
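A minimal sketch of a 2.6-era handler is shown below. The three-argument signature (including struct pt_regs *) matches early 2.6 kernels, while the device structure and status-register layout are assumptions made for the example:

#include <linux/types.h>
#include <linux/interrupt.h>
#include <asm/io.h>

/* Hypothetical per-device structure, used only for this sketch. */
struct mydev {
        void __iomem *regs;             /* memory-mapped device registers */
};

static irqreturn_t mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
        struct mydev *dev = dev_id;
        u32 status = readl(dev->regs);  /* assumed status register at offset 0 */

        if (!(status & 0x1))            /* assumed "interrupt pending" bit */
                return IRQ_NONE;        /* not ours: tell the IRQ layer */

        writel(status, dev->regs);      /* assumed write-to-clear acknowledge */
        return IRQ_HANDLED;             /* this driver handled the interrupt */
}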

2. Driver porting

Compared with the 2.4 kernel, the 2.6 kernel has an improved build system that compiles faster. It also adds improved graphical configuration tools: make xconfig (which requires the Qt library) and make gconfig (which requires the GTK library).

The kernel module loader was also reimplemented for 2.6, so the module build mechanism differs considerably from the 2.4 kernel. The 2.4 module utilities cannot load or unload modules built for the 2.6 kernel; a new set of module tools is required. The new tools try to avoid unloading a module while a device is still using it, removing a module only after confirming that nothing is using it. One reason such conflicts used to occur is that a module's use count was maintained by the module code itself; in the 2.6 kernel, a module no longer increments or decrements its own reference count, because this is handled outside the module code.
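A hedged sketch of the difference: in 2.4 a driver bumped its own count with MOD_INC_USE_COUNT / MOD_DEC_USE_COUNT, while in 2.6 it simply exposes an owner field and the kernel takes and drops the reference (via try_module_get()/module_put()) from outside the module. The names below are illustrative and the character-device registration is omitted:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/fs.h>

static int example_open(struct inode *inode, struct file *file)
{
        return 0;       /* no MOD_INC_USE_COUNT here: 2.6 counts references for us */
}

static struct file_operations example_fops = {
        .owner = THIS_MODULE,   /* lets the VFS pin this module while files are open */
        .open  = example_open,
};

static int __init example_init(void)
{
        return 0;       /* device registration omitted for brevity */
}

static void __exit example_exit(void)
{
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");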

Process Management

1. New scheduler algorithm

The Linux 2.6 kernel uses a new scheduling algorithm known as the O(1) scheduler. It performs very well under heavy load and scales well on systems with many processors.

In the 2.4 scheduler, the time-slice recalculation algorithm requires all processes to exhaust their time slices before new ones are computed. On a multiprocessor system, processes that have used up their time slices must wait for the recalculation before they can get new ones, which leaves most processors idle and hurts SMP efficiency. Worse, when an idle processor starts running waiting processes whose time slices are not yet exhausted, processes begin to "bounce" between processors. When a high-priority or interactive process bounces like this, overall system performance suffers.

The new scheduler solves these problems by maintaining time slices per CPU and eliminating the global synchronization and recalculation cycle. It uses two priority arrays, the active array and the expired array, which are accessed through pointers. The active array contains all tasks mapped to a CPU whose time slices have not yet been exhausted; the expired array contains an ordered list of all tasks whose time slices have been used up. When every active task has used up its time slice, the two pointers are simply swapped: the expired array, whose tasks are ready to run, becomes the active array, and the now-empty active array becomes the new array for expired tasks. Each array is indexed by a priority bitmap, so the task with the highest priority is easy to find.
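The following is a simplified, illustrative model of this mechanism, not the actual kernel data structures (the real kernel keeps a full run list per priority level, per-CPU locking, and more):

#include <stddef.h>
#include <strings.h>          /* ffs() */

#define MAX_PRIO 140          /* number of priority levels in 2.6; 0 is highest */
#define BITMAP_WORDS ((MAX_PRIO + 31) / 32)

struct task;                  /* stand-in for the kernel's task_struct */

struct prio_array {
        unsigned int bitmap[BITMAP_WORDS];   /* bit set => that priority level has a runnable task */
        struct task *queue[MAX_PRIO];        /* simplified: one task per priority level */
};

struct runqueue {
        struct prio_array arrays[2];
        struct prio_array *active;           /* tasks whose time slices are not exhausted */
        struct prio_array *expired;          /* tasks waiting for the next round */
};

/* Find the highest-priority runnable task in (bounded) constant time via the bitmap. */
static struct task *pick_next(struct runqueue *rq)
{
        for (int w = 0; w < BITMAP_WORDS; w++) {
                if (rq->active->bitmap[w]) {
                        int prio = w * 32 + ffs(rq->active->bitmap[w]) - 1;
                        return rq->active->queue[prio];
                }
        }

        /* Active array is empty: swap the two arrays; the caller would retry. */
        struct prio_array *tmp = rq->active;
        rq->active = rq->expired;
        rq->expired = tmp;
        return NULL;
}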

The new scheduler has the following advantages:

◆ SMP efficiency: every processor has work to do whenever there is work to be done.

◆ Fairness: no process has to wait an unreasonably long time for a processor, and no process occupies a large amount of CPU time without reason.

◆ SMP affinity: a process is mapped to only one CPU and does not bounce between CPUs.

◆ Priority: unimportant tasks receive low priority, and important tasks receive high priority.

◆ Load balancing: the scheduler lowers the priority of processes that would exceed a processor's load capacity.

◆ Interactive performance: even under high load, the system responds to mouse clicks and keyboard input without noticeable delay.

2. A more efficient scheduling program

In the 2.6 kernel, the process scheduler was rewritten so that it no longer scans every task on each scheduling decision. Instead, when a task becomes ready it is placed on a queue named "current". When the scheduler runs, it simply picks the most favorable task from that queue, so scheduling completes in constant time. A running task is given a time slice, a period during which it owns the CPU, before control passes to another thread. When its time slice is used up, the task is moved to another queue named "expired", where tasks are kept sorted by priority.

In a sense, every task in the "current" queue will eventually run and move to the "expired" queue. When that happens, the queues switch: the old "expired" queue becomes the "current" queue, and the empty "current" queue becomes the "expired" queue. Because tasks are already ordered in the new "current" queue, the scheduler uses a simple queue algorithm, always taking the first task of the current queue for execution. This new process is much faster than the old one.

3. New synchronization measures

Multi-process applications sometimes need to share resources, such as shared memory or a device. To avoid races, programmers use a mutual-exclusion primitive, a mutex, to guarantee that only one task uses the resource at a time. Until now, Linux implemented mutexes through a system call into the kernel, which decided whether a thread should wait or continue executing.

The Linux 2.6 kernel supports futexes (fast user-space mutexes). A futex checks for contention in user space and enters the kernel only when a thread actually has to wait. When there is no need to wait, the unnecessary system call is avoided, saving time. Futexes also use priority-based scheduling to determine which thread runs when there is contention.
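A minimal user-space sketch of the idea, assuming Linux and glibc, is shown below. This is a simplified lock, not the full algorithm a production mutex uses, and it issues a wake-up system call on every unlock even when nobody is waiting:

#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

static long futex(atomic_int *uaddr, int op, int val)
{
        /* Raw futex syscall; glibc provides no wrapper for it. */
        return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

static atomic_int lock_word;    /* 0 = unlocked, 1 = locked */

static void lock(void)
{
        int expected = 0;
        /* Fast path: an uncontended acquire never enters the kernel. */
        while (!atomic_compare_exchange_strong(&lock_word, &expected, 1)) {
                /* Slow path: sleep in the kernel until the word may have changed.
                 * FUTEX_WAIT only blocks if lock_word still equals 1. */
                futex(&lock_word, FUTEX_WAIT, 1);
                expected = 0;
        }
}

static void unlock(void)
{
        atomic_store(&lock_word, 0);
        futex(&lock_word, FUTEX_WAKE, 1);       /* wake at most one waiter */
}

Real implementations, such as glibc's NPTL mutexes, track a third "locked with waiters" state so that the wake-up syscall in unlock() can be skipped entirely when no thread is waiting.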

To sum up, the 2.6 kernel gives developers faster and more convenient interfaces for building faster and more efficient applications. It also removes some of the kernel bottlenecks that caused inconvenience in the 2.4 kernel, such as real-time responsiveness. Linux will continue to evolve, and that will continue to drive the development and progress of application development.