Kernel comparison: improvements in kernel development from 2.4 to 2.6

Summary: the long-awaited 2.6 kernel has finally arrived. Paul Larson of the IBM Linux Technology Center takes a close look at the tools, tests, and techniques that helped make 2.6 the best kernel yet, from revision control and regression testing to defect tracking and list keeping.
After three years of active development, the new 2.6 Linux kernel was recently released. Over that period, some interesting changes took place in how the Linux kernel is developed and tested. In many respects, the way the kernel is developed is the same as it was three years ago, but a few key changes have improved overall stability and quality.

Source code management
Historically, there was never a formal revision control or source code management system for the Linux kernel. Many developers implemented their own revision controllers, but there was no official Linux CVS archive into which Linus Torvalds checked code and from which others could obtain it. The lack of revision control often left "generation gaps" between release versions: no one really knew which changes had gone in, whether they integrated well together, or what new content to look forward to in the upcoming release. Often, problems could have been avoided if more developers had been as aware of those changes as they were of their own.

The lack of formal revision control and source code management tools led many people to propose using a product called BitKeeper. BitKeeper is a source code control and management system that many kernel developers had already applied successfully to their own kernel work. Shortly after the initial 2.5 kernel release, Linus Torvalds began trying out BitKeeper to see whether it would meet his needs; now the main 2.4 and 2.5 Linux kernel source trees are both managed with it. This may seem irrelevant to most users, who may have little or no interest in kernel development. However, in some cases, users benefit from the changes BitKeeper has brought to the way the Linux kernel is developed.

One of the biggest advantages of using BitKeeper is patch merging. When multiple patches are applied against the same base code and some of them touch the same areas, merge problems can arise. A good source code management system automates some of that more complex work, so patches can be merged more quickly and more patches can make their way into the kernel. As the community of Linux kernel developers expands, a revision controller is badly needed to help keep track of all the changes. Since anyone may submit changes for integration into the main Linux kernel, tools like BitKeeper are essential to ensure that patches are not forgotten and can be easily merged and managed.

BitKeeper made it possible to keep a real-time, centralized repository containing the latest updates to the Linux kernel. Each change or patch accepted into the kernel is tracked as a changeset. End users and developers can keep their own copies of the source repository and, with a simple command, update them with the latest changesets whenever they wish. For developers, this means always working with the latest copy of the code. Testers can use these logical changesets to determine which change introduced a problem, shortening the time needed for debugging. Even users who want to run the very latest kernel benefit from the real-time, centralized repository, because they can update the moment a component or defect fix they need is added to the kernel. And as soon as code is merged into the kernel, any user can provide immediate feedback and defect reports on it.

Parallel Development
As the Linux kernel grew and became more complex, and as it attracted more developers focused on specific aspects of the kernel, another interesting change emerged in the Linux development method. During the development of the 2.3 kernel, other kernel trees appeared alongside the main tree released by Linus Torvalds.

During 2.5 development, the number of kernel trees grew explosively. Because source code management tools could keep the trees synchronized, much of this development could proceed in parallel; indeed, some development had to be parallel so that others could test changes before they were accepted. Kernel maintainers kept their own trees dedicated to specific components and goals, such as memory management, NUMA components, improved scalability, and code for specific architectures, while other trees collected and tracked fixes for many small defects.

Figure 1. Linux 2.5 development tree


The advantage of this parallel development model is that it lets developers who need to make sweeping changes, or who are making many similar changes toward a specific goal, work freely in a controlled environment without affecting the stability of the kernel everyone else uses. When developers finish their work, they can release patches against the current version of the Linux kernel that implement the changes they have made so far. Testers in the community can then easily test these changes and provide feedback. Once each piece has proven stable, the pieces can be merged into the main Linux kernel individually, or even all at once.

Testing in the real world
In the past, Linux kernel testing methods revolved around the open source development model. Because code is released for review by other developers as soon as it is written, there was never a formal verification cycle of the kind found in other forms of software development. The theory behind this approach is the so-called "Linus's law" from The Cathedral and the Bazaar (see Resources): "Given enough eyeballs, all bugs are shallow." In other words, intense review will uncover most of the really big problems.

In practice, however, the kernel has many complex interrelationships, and even with ample review many serious defects slip through. Moreover, as soon as the latest kernel is released, end users can (and often do) download and use it. Around the time of the 2.4.0 release, many in the community called for more organized testing to complement the strength of code review and ad hoc testing. Organized testing involves the use of test plans, repeatability in testing, and so on. Using all three methods leads to higher code quality than the original two methods did alone.

The Linux Test Project
The first contributor to organized Linux testing was the Linux Test Project (LTP). This project aims to improve the quality of Linux through more organized testing methods. Part of the project is the development of automated test suites; the main suite the project develops is also called the Linux Test Project. When the 2.4.0 kernel was released, the LTP suite contained only about 100 tests. As Linux 2.4 and 2.5 developed and matured, the LTP test suite developed and matured with them. Today the Linux Test Project contains more than 2,000 tests, and the number is still growing!

Code coverage analysis
New tools brought code coverage analysis to the kernel. Coverage analysis tells us which lines of kernel code are executed during a given test run. More important, it shows which parts of the kernel have not been tested at all. This data is valuable because it indicates which new tests need to be written to exercise the untested parts, so that the kernel can be tested more completely.
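The idea is easy to see in miniature. The sketch below is purely an illustration of the concept using Python's standard-library trace module on a made-up function; the kernel's own coverage tooling instruments C code directly and works quite differently. Exercising only one branch leaves lines uncovered, and comparing the executed-line sets exposes exactly the code no test has touched:

```python
import trace

def classify(n):
    if n < 0:
        return "negative"       # this line runs only for negative input
    return "non-negative"

def covered_lines(func, *args):
    """Run func under the tracer and return the set of executed source lines."""
    tracer = trace.Trace(count=1, trace=0)   # count lines, don't print them
    tracer.runfunc(func, *args)
    # counts maps (filename, lineno) -> execution count; its keys are the
    # lines that actually ran
    return set(tracer.results().counts)

one_path = covered_lines(classify, 5)                    # exercises one branch
both_paths = one_path | covered_lines(classify, -3)      # exercises both
untested = both_paths - one_path                         # lines the first run missed
```

In kernel terms, `untested` corresponds to the lines a coverage report flags as never executed, which is exactly what tells testers where new tests are needed.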

Multiday kernel regression testing
During the 2.5 development cycle, another project run by the Linux Test Project used the LTP test suite to perform multiday regression testing on the Linux kernel. BitKeeper made it possible to create a real-time, centralized repository from which snapshots of the Linux kernel could be taken at any time. Before BitKeeper and snapshots were available, testers had to wait for a kernel release before testing could begin; now, testers can test whenever the kernel changes.

Another advantage of running automated regression tests over multiple days is that the delta from the previous test run is small. If a new regression is found, it is usually easy to spot which change is likely to have caused it.
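That narrowing process is, at heart, a binary search over an ordered changeset history. The sketch below is a simplified illustration of the idea, not LTP or BitKeeper code; `is_bad` stands in for "build a snapshot at this changeset and run the regression test against it":

```python
def first_bad(changesets, is_bad):
    """Binary-search an ordered changeset history for the first changeset at
    which a regression test starts failing. Assumes history is 'good' up to
    some point and 'bad' from that point on."""
    lo, hi = 0, len(changesets) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(changesets[mid]):
            hi = mid            # culprit is here or earlier
        else:
            lo = mid + 1        # culprit is later
    return changesets[lo]

# Example: 100 changesets, with the regression introduced at changeset 42
culprit = first_bad(list(range(1, 101)), lambda cs: cs >= 42)
```

With 100 changesets, about seven test runs are enough to isolate the culprit, which is why keeping the delta between daily snapshots small makes regressions so much easier to pin down.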

Likewise, because the changes are recent, they are still fresh in the developers' minds, which hopefully makes it easier for them to remember and rework the relevant code. Perhaps a corollary to Linus's law is that some bugs are shallower than others, and those are exactly the ones that multiday kernel regression testing finds and deals with. Running these tests daily throughout the development cycle, before an actual release, lets testers who focus only on full release versions concentrate on the more serious and time-consuming defects.

Scalable Test Platform
Another group, the Open Source Development Lab (OSDL), has also made a significant contribution to Linux testing. Shortly after the 2.4 kernel was released, OSDL created a system called the Scalable Test Platform (STP). STP is an automated test platform that lets developers and testers run the tests it provides on hardware hosted by OSDL. Developers can even use the system to test their own kernel patches. STP simplifies the test process because it handles building the kernel, setting up the test, running the test, and collecting the results, which can then be retrieved for deeper comparison. A further benefit is access to large systems: many people have no access to, say, an SMP machine with 8 processors, but with STP anyone can run tests on such a system.
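The stages STP automates can be modelled as a simple sequential pipeline. The sketch below is purely illustrative: the function names and the job dictionary are invented for this example, and the build and test stages are stubbed out, whereas the real platform builds and boots kernels on OSDL's hardware.

```python
def build_kernel(job):
    # Stand-in for patching a kernel tree and compiling it
    job["kernel"] = f"vmlinux({job['patch']})"
    return job

def run_tests(job):
    # Stand-in for booting the built kernel and running the chosen suite
    job["results"] = {test: "pass" for test in job["suite"]}
    return job

def stp_style_job(patch, suite):
    """Build the kernel, set up and run the tests, then collect the results."""
    job = {"patch": patch, "suite": list(suite)}
    for stage in (build_kernel, run_tests):
        job = stage(job)
    return {"kernel": job["kernel"], "results": job["results"]}

report = stp_style_job("my-fix.patch", ["ltp-syscalls"])
```

The point of chaining the stages is that a developer submits only a patch and a suite name; everything in between is the platform's job.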

Tracking Defects
One of the biggest improvements in the organized testing of the Linux kernel since the 2.4 release is defect tracking. In the past, defects found in the Linux kernel were reported to the Linux kernel mailing list, to the mailing list of a specific component or architecture, or directly to the person maintaining the piece of code in which the defect was found. As the number of people developing and testing Linux grew, the shortcomings of this system were quickly exposed: defects were often missed, forgotten, or ignored unless someone was remarkably persistent about keeping their report alive.

Now, OSDL has installed a defect tracking system (see the link in Resources) for reporting and tracking Linux kernel defects. The system is configured so that when a defect is reported against a component, the maintainer of that component is notified. The maintainer can accept and fix the defect, reassign it (if it turns out to actually belong to another part of the kernel), or close it (if it turns out not to be a real defect, for example a misconfigured system). A defect reported to a mailing list also risks getting lost as more and more mail floods the list; in a defect tracking system, by contrast, there is always a record of each defect and its current status.
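The maintainer's options described above amount to a small state machine. The state and transition names below are illustrative inventions for this sketch, not the actual schema of the OSDL tracker:

```python
# Each state maps to the set of states a defect may legally move to next.
TRANSITIONS = {
    "reported": {"assigned"},        # filing notifies the component maintainer
    "assigned": {"fixed", "reassigned", "rejected"},
    "reassigned": {"assigned"},      # handed to the component actually at fault
    "fixed": set(),                  # terminal: defect resolved
    "rejected": set(),               # terminal: e.g. a misconfigured system
}

def advance(state, nxt):
    """Move a defect to its next state, refusing illegal transitions."""
    if nxt not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {nxt}")
    return nxt
```

Unlike a mailing-list report, a record in such a system always has exactly one current state, so nothing silently disappears.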

A wealth of information
Beyond these automated information-management methods, different members of the open source community also collected and tracked an astonishing amount of information during the development of the 2.6 Linux kernel.

For example, a status list was created on the Kernel Newbies site to keep track of new kernel components. The list contains entries sorted by status: if an item is done, it indicates which kernel it was included in; if not, it indicates how much longer it is expected to take. Many entries link to the web sites of large projects, or, for smaller items, to a copy of an e-mail message explaining the component.

Kernel version history
Most of us are now familiar with the Linux kernel's version numbering scheme, but Andries Brouwer reminds us how irregular it has actually been.

The first public version of Linux was 0.02, in October 1991. Two months later, in December 1991, Linus released 0.11, the first standalone kernel that could be used without Minix.

A month after the release of version 0.12, in March 1992, the version number jumped to 0.95, reflecting the system's growing maturity. Even so, the milestone 1.0.0 release was not reached until two years later, in March 1994.

Around that time, the two-"track" numbering convention began to be used to mark kernel development. Kernels with an even second number (such as 1.0, 2.2, 2.4, and 2.6) are the stable, "production" series, while kernels with an odd second number (such as 1.1 and 2.3) are the cutting-edge, "development" series. Until recently, work on a new development kernel began within months of a stable release; however, development of 2.5 did not begin until some ten months after 2.4 was completed.
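The even/odd rule is easy to state in code. The tiny helper below is purely an illustration of the historical convention, not anything the kernel project itself uses:

```python
def kernel_series(version):
    """Classify a kernel version string under the historical even/odd rule:
    an even second number (2.4, 2.6) marks a stable 'production' series,
    an odd one (2.3, 2.5) marks a cutting-edge 'development' series."""
    minor = int(version.split(".")[1])
    return "stable" if minor % 2 == 0 else "development"
```

So, for instance, `kernel_series("2.4.18")` is "stable" while `kernel_series("2.5.70")` is "development".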

So when can we expect 2.7? That is hard to say, but there is already a thread discussing it on KernelTrap.

Until then, you can read Ragib Hasan's article to learn more about the history of Linux.


Meanwhile, the "post-Halloween document" told users what to expect of the upcoming 2.6 kernel (see the link in Resources). Most of the post-Halloween document discusses the major changes users need to be aware of and the system tools that must be updated in order to use them. The people interested in this information are mainly Linux distributors who want to know in advance what is in the 2.6 kernel, as well as end users who need to determine whether any of their programs must be upgraded to take advantage of new components.

The Kernel Janitors project kept (and in fact still keeps) a list of minor defects and solutions that need fixing. Most of these defects arise because a large patch to the kernel forces changes in many places in the code; device drivers, for example, can be affected. Newcomers to kernel development can start by picking entries from this list, learning to write kernel code through small projects while getting a chance to contribute to the community.

Also, in another pre-release effort, John Cherry tracked the errors and warnings produced when compiling each released kernel version. These compile statistics declined steadily over time, and publishing the results in a systematic form made the progress clear. In many cases, these warning and error messages can be used much like the Kernel Janitors list, because compile errors are usually caused by small defects that need fixing.

Finally, there is Andrew Morton's "must-fix" list. Having been chosen as the maintainer of the 2.6 kernel after its release, he used his privilege to list the issues he considered most urgent to resolve before the final 2.6 release. The must-fix list contained defects from the kernel Bugzilla system, components needing completion, and other known problems that, if unresolved, would block the 2.6 release. This information helped lay out the steps required before a new kernel could be released, and provided valuable information to those wondering when the 2.6 kernel would arrive.

Since the 2.6 kernel was released at the end of last year, some of these resources have understandably stopped being maintained, while others carry work that was unfinished at the major release and continue to be updated. It will be interesting to see which of these practices are revived, and what new ones appear, the next time we approach a major release.

Conclusion
When most people think about a new stable kernel release, the first question is usually "What's new in this version?" In truth, beyond the new features and fixes themselves, there is a process behind the scenes that has been steadily improving over time.

Open source development is thriving in the Linux community. The loose connections among the coders working on the Linux kernel have allowed the group to adapt to change. In many ways, the improvements to Linux development and testing methods, especially those made over time, have had a more far-reaching effect on the reliability of the new kernel than the many individual improvements and defect fixes themselves.