How has the Linux 2.6.x kernel been improved?

After three years of active development, the new 2.6 Linux kernel was recently released. During that time, some interesting changes took place in how the Linux kernel is developed and tested. In many respects, the way the kernel is developed today is no different from three years ago, but a few key changes have improved its overall stability and quality.

Source code management

Historically, there was never a formal source code management or revision control system for the Linux kernel. Many developers ran their own revision control, but there was no official Linux CVS archive into which Linus Torvalds checked code and from which others could obtain it. The absence of revision control often led to "generation gaps" between releases: no one really knew which changes had gone in, whether they would integrate well together, or what new features to look forward to in the upcoming release. Often, problems could have been avoided if more developers had been as aware of those changes as they were of their own.

Because of the lack of formal revision control and source management, many people suggested using a product called BitKeeper. BitKeeper is a source code control and management system that many kernel developers had already used successfully in their own work. Shortly after the initial 2.5 kernel release, Linus Torvalds began trying out BitKeeper to see whether it met his needs. Today, the main Linux kernel source for both the 2.4 and 2.5 kernels is managed with BitKeeper. To most users, who may have little or no interest in kernel development, this may seem irrelevant. In several ways, however, users benefit from the changes BitKeeper has brought to how the Linux kernel is developed.

One of the biggest benefits of using BitKeeper is the merging of patches. When multiple patches are applied against the same base code and some of them touch the same areas, merge problems can occur. A good source management system automates some of the more mundane parts of this work, so patches can be merged more quickly and more patches can make their way into the kernel. As the community of Linux kernel developers expands, revision control is essential for keeping track of all the changes. Since anyone may submit changes for inclusion in the mainline Linux kernel, tools like BitKeeper are essential to ensure that patches are not forgotten and can be easily merged and managed.
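To illustrate the kind of work such a tool automates, here is a minimal sketch in Python. It is illustrative only: the data format and function name are invented, and this is not how BitKeeper works internally. Two patches are expressed as line-level edits against the same base file, and they merge automatically unless they touch the same line, in which case a human has to resolve the conflict.

    # Minimal sketch: automatically merging two independent patches.
    # A "patch" here is a dict mapping 0-based line numbers to new text.
    # Illustration of the idea only, not BitKeeper's internal behaviour.

    def merge_patches(base_lines, patch_a, patch_b):
        """Apply two patches to the same base; flag lines both patches touch."""
        conflicts = sorted(set(patch_a) & set(patch_b))
        if conflicts:
            # A real tool would present these hunks for manual resolution.
            raise ValueError(f"manual merge needed on lines: {conflicts}")
        merged = list(base_lines)
        for line_no, new_text in {**patch_a, **patch_b}.items():
            merged[line_no] = new_text
        return merged

    base = [
        "int shared_counter;",
        "void bump(void) {",
        "    shared_counter++;",
        "}",
    ]
    patch_mm  = {0: "static int shared_counter;"}        # touches line 0 only
    patch_smp = {2: "    atomic_inc(&shared_counter);"}  # touches line 2 only

    print("\n".join(merge_patches(base, patch_mm, patch_smp)))

Because the two patches edit different lines, the merge succeeds without any manual work; if both had edited line 2, the conflict would be reported instead.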

It is also important to have a live, centralized repository that holds the latest Linux kernel source. Every change or patch accepted into the kernel is tracked as a change set. End users and developers can keep their own copies of the source repository and, with a single command, update them with the latest change sets whenever they wish. For developers, this means always working against the latest copy of the code. Testers can use these logical change sets to determine which change introduced a problem, shortening the time needed to debug it. Even users who want to run the very latest kernel benefit from the live, centralized repository: as soon as a feature or bug fix they need is accepted into the kernel, they can pull it immediately. And as code is merged into the kernel, any user can provide immediate feedback and defect reports about it.
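Because every accepted change is an ordered change set, a tester can narrow a new failure down to the change set that introduced it by binary search. The sketch below is a hypothetical Python illustration of that idea; build_and_test is a stand-in for whatever check-out, build, and run procedure a tester would actually use, not a real tool or API.

    # Hypothetical sketch: bisecting an ordered list of change sets to find
    # the first one that makes a test fail. The callback is a stand-in for a
    # real "check out this change set, build the kernel, run the test" step.

    def bisect_changesets(changesets, build_and_test):
        """Return the first change set for which build_and_test() fails.

        Assumes the oldest change set passes, the newest fails, and the
        regression stays broken once it appears (no flapping).
        """
        low, high = 0, len(changesets) - 1   # low: known good, high: known bad
        while high - low > 1:
            mid = (low + high) // 2
            if build_and_test(changesets[mid]):
                low = mid                    # still good; defect is later
            else:
                high = mid                   # already bad; defect is here or earlier
        return changesets[high]

    # Toy usage: pretend the regression appeared in change set "cset-07".
    csets = [f"cset-{n:02d}" for n in range(1, 13)]
    first_bad = bisect_changesets(csets, lambda cs: cs < "cset-07")
    print("regression introduced by", first_bad)

With twelve change sets, the search needs only about four builds instead of twelve, which is where the debugging time is saved.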

Parallel development

As the Linux kernel has grown larger and more complex and has attracted more developers focusing on specific areas of the kernel, another interesting change in the development process has emerged. During the development of the 2.3 kernel, a handful of kernel trees existed alongside the main tree released by Linus Torvalds.

During 2.5 development, the number of kernel trees grew explosively. Because source management tools can keep parallel lines of development in sync, it became possible to do much of the work in parallel; some of it had to be done in parallel so that others could test changes before they were accepted. Some kernel maintainers keep their own trees devoted to a particular component or goal, such as memory management, NUMA features, improved scalability, or code for a specific architecture, while other trees collect and track fixes for many small defects.

The advantage of this parallel development model is that developers who need to make sweeping changes, or who are making many similar changes toward a single goal, can work freely in a controlled environment without affecting the stability of the kernel everyone else uses. When their work is done, they can release patches against the current Linux kernel that implement the changes made so far. Testers in the community can then easily test those changes and provide feedback. Once each piece has proven stable, it can be merged into the mainline Linux kernel individually, or even all at once.

Testing in real-world use

In the past, Linux kernel testing revolved around the open source development model. Since the code is released for review by other developers as soon as it is published, there has never been a formal verification cycle of the kind used in other forms of software development. The theory behind this approach is the so-called "Linus's Law" from The Cathedral and the Bazaar (see Resources), which holds that "given enough eyeballs, all bugs are shallow." In other words, intensive review will turn up most of the really big problems.

In practice, however, the kernel has so many complex interrelationships that even thorough review can miss many serious defects. Moreover, once the latest kernel is released, end users can (and often do) download and run it. Around the time 2.4.0 was released, many people in the community were calling for more organized testing to complement the strengths of existing testing and code review. Organized testing means applying test plans, repeatability during testing, and so on. Using all three methods together yields higher code quality than using only the first two did.

The Linux Test Project

One of the first contributors to more organized testing for Linux was the Linux Test Project (LTP). The project's goal is to improve the quality of Linux through more organized testing; part of that effort is the development of automated test suites, and the main suite developed by the project is also called the Linux Test Project. When the 2.4.0 kernel was released, the LTP suite contained only about 100 tests. As Linux 2.4 and 2.5 developed and matured, so did the LTP test suite. Today the Linux Test Project includes more than 2,000 tests, and the number keeps growing!
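Real LTP test cases are C programs built on the project's own test harness. The Python sketch below only mirrors the shape of such a test (set up, exercise one kernel interface, report PASS or FAIL, clean up) so readers unfamiliar with the suite can see what an individual test does; the file prefix and messages are invented for illustration.

    # Illustrative only: the shape of a single kernel regression test, loosely
    # modelled on what an LTP test case does (real LTP tests are C programs
    # using the project's own harness, not Python).
    import os
    import tempfile

    def test_write_read_roundtrip():
        """Write a small buffer through the kernel's file API and read it back."""
        payload = b"linux test project style check\n"
        fd, path = tempfile.mkstemp(prefix="ltp_sketch_")
        try:
            os.write(fd, payload)               # exercise write(2)
            os.lseek(fd, 0, os.SEEK_SET)        # exercise lseek(2)
            result = os.read(fd, len(payload))  # exercise read(2)
            return result == payload
        finally:
            os.close(fd)
            os.unlink(path)                     # clean up, as a well-behaved test should

    if __name__ == "__main__":
        print("PASS" if test_write_read_roundtrip() else "FAIL")

An automated suite is simply thousands of small, self-contained checks like this, each reporting a clear pass or fail result that can be collected and compared across kernel versions.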

Code coverage analysis

New tools are now used to provide code coverage analysis for the kernel. Coverage analysis tells us which lines of kernel code are executed during a given test run. Just as important, it shows which parts of the kernel have not been exercised at all. This data matters because it points out which new tests need to be written so that the kernel can be tested more completely.
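The principle can be shown in a few lines of Python: record which lines run while a test executes, then compare that set against all executable lines of the code under test. The actual kernel coverage work relies on compiler-based instrumentation, so treat this purely as a conceptual sketch.

    # Conceptual sketch of line-coverage analysis in Python. Kernel coverage
    # work uses compiler instrumentation; this only demonstrates the principle.
    import dis
    import sys

    def classify(n):
        if n < 0:
            return "negative"      # never reached by the test run below
        return "non-negative"

    executed = set()

    def tracer(frame, event, arg):
        # Record every line executed inside classify() while tracing is active.
        if event == "line" and frame.f_code is classify.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    classify(5)                    # the "test run": exercises only one branch
    sys.settrace(None)

    # All executable lines of classify(), ignoring the def line itself.
    all_lines = {ln for _, ln in dis.findlinestarts(classify.__code__) if ln is not None}
    all_lines.discard(classify.__code__.co_firstlineno)

    covered = executed & all_lines
    print(f"covered {len(covered)} of {len(all_lines)} lines")
    print("lines still needing a test:", sorted(all_lines - covered))

The report shows that the negative branch was never executed, which is exactly the kind of gap that tells testers where a new test is needed.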

Multi-day kernel regression testing

During the 2.5 development cycle, another project undertaken by the Linux Test Project was running multi-day regression tests against the Linux kernel with the LTP test suite. BitKeeper's live, centralized repository made it possible to take snapshots of the Linux kernel at any time. Before BitKeeper and snapshots were used, testers had to wait for a kernel release before they could begin testing; now they can test whenever the kernel changes.

Another advantage of running regression tests continuously over many days with automated tools is that little has changed since the previous test run. If a new regression appears, it is usually easy to work out which change is likely to have introduced it.
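Here is a minimal sketch of the comparison step such a multi-day run depends on, assuming each snapshot's results are stored as a simple name-to-pass/fail mapping (an invented format, with made-up test names): report only the tests that passed on the previous snapshot but fail on the current one, since those point directly at the latest batch of change sets.

    # Illustrative sketch of the nightly comparison step in a multi-day
    # regression run: flag tests that passed on the previous kernel snapshot
    # but fail on the current one. Test names and results are made up.

    def new_regressions(previous, current):
        """Return tests that went from pass (True) to fail (False)."""
        return sorted(
            name
            for name, passed in current.items()
            if not passed and previous.get(name, False)
        )

    results_snapshot_a = {"ltp_fork01": True, "ltp_mmap03": True,  "ltp_sched02": True}
    results_snapshot_b = {"ltp_fork01": True, "ltp_mmap03": False, "ltp_sched02": True}

    regressions = new_regressions(results_snapshot_a, results_snapshot_b)
    if regressions:
        # Only change sets pulled between the two snapshots need examining.
        print("new regressions since last snapshot:", regressions)
    else:
        print("no new regressions")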

Likewise, because the changes are recent, they are still fresh in the developers' minds, which hopefully makes it easier for them to remember and rework the relevant code. Perhaps a corollary to Linus's Law is that some defects are easier to find than others, and those are exactly the ones that multi-day kernel regression testing finds and deals with. Because these tests can be run daily throughout the development cycle and before actual releases, testers who focus only on full release versions can concentrate on the more serious and time-consuming defects.

Scalable Test Platform

Another group, the Open Source Development Labs (OSDL), has also made a significant contribution to Linux testing. Shortly after the 2.4 kernel was released, OSDL created a system called the Scalable Test Platform (STP). STP is an automated test platform that lets developers and testers run the tests it provides on hardware hosted by OSDL; developers can even use the system to test their own kernel patches. The Scalable Test Platform simplifies the testing process because STP builds the kernel, sets up the test, runs it, and collects the results, which are then available for in-depth comparison. Another benefit is access to large machines: many people never get near a system such as an 8-processor SMP box, but with STP anyone can run tests on hardware of that size.
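The pipeline STP automates boils down to four steps: build the kernel, set up the test, run it, and collect the results. The Python sketch below strings those steps together; the directory layout, test command, results format, and use of plain make targets are assumptions made for illustration and are not STP's actual interface, and a real harness would also boot the freshly built kernel before testing it.

    # Hypothetical sketch of an STP-like job: build a kernel tree, run a test
    # command, and collect the output. Paths, the test command, and the
    # results format are invented; STP's real interface differs, and a real
    # harness would boot the newly built kernel before running tests.
    import json
    import subprocess
    from pathlib import Path

    def run_job(kernel_tree, test_cmd, results_file, jobs=4):
        tree = Path(kernel_tree)

        # 1. Build: a default configuration followed by a parallel build.
        subprocess.run(["make", "defconfig"], cwd=tree, check=True)
        subprocess.run(["make", f"-j{jobs}"], cwd=tree, check=True)

        # 2-3. Set up and run the test suite (placeholder command).
        test = subprocess.run(test_cmd, capture_output=True, text=True)

        # 4. Collect the results so they can be compared with other runs.
        Path(results_file).write_text(json.dumps({
            "kernel_tree": str(tree),
            "test_cmd": test_cmd,
            "returncode": test.returncode,
            "output_tail": test.stdout[-2000:],
        }, indent=2))
        return test.returncode == 0

    # Example invocation (paths and command are placeholders):
    # run_job("/usr/src/linux", ["./runalltests.sh"], "results.json")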

Tracking defects

One of the most significant improvements in organized testing of the Linux kernel since the 2.4 release is defect tracking. In the past, defects found in the Linux kernel were reported to the Linux kernel mailing list, to the mailing list for a specific component or architecture, or directly to the person maintaining the piece of code in which the defect was found. As the number of people developing and testing Linux grew, the shortcomings of this arrangement quickly became apparent: defects were often missed, forgotten, or ignored unless whoever reported them was remarkably persistent in following up.

Now OSDL has installed a defect tracking system (see Resources) for reporting and tracking Linux kernel defects. The system is configured so that when a defect is reported against a component, the maintainer of that component is notified. The maintainer can accept and fix the defect, reassign it (if it turns out to really belong to another part of the kernel), or close it (if it turns out not to be a genuine defect, for example a misconfigured system). Defects reported to a mailing list also risk being lost in the ever-growing volume of mail sent to that list, whereas the defect tracking system always keeps a record of every defect and its current status.

A wealth of information

Beyond these more automated ways of managing information, various members of the open source community have also been collecting and tracking an astonishing amount of information about the upcoming 2.6 Linux kernel.

For example, a status list was created on the Kernel Newbies site to keep track of new kernel features. Its entries are sorted by status: if a feature is finished, the list notes which kernel it was included in; if not, it notes how much longer it is expected to take. Many entries link to the web site of a larger project; for smaller items, the link points to a copy of an e-mail message explaining the feature.

At the same time, the "post-halloween document" tells users what to expect from the upcoming 2.6 kernel (see the link in references ). Most of the discussions in the post-halloween document are the major changes that users need to pay attention to and the system tools to be updated (to use them ). Users who care about this information are mainly Linux publishers who want to know in advance what content is in the 2.6 kernel, as well as end users, this allows them to determine whether there are programs to be upgraded to take advantage of the new components.

The Kernel Janitors project maintains a list of small defects, together with their solutions, that still need to be fixed. Most of these arise because a large patch elsewhere in the kernel requires many small follow-up changes; device drivers, for example, may be affected. People new to kernel development can start by picking entries from this list, learning how to write kernel code through small projects while getting the chance to contribute to the community.

In another pre-release effort, John Cherry has tracked the errors and warnings produced when compiling each released kernel version. The number of compile warnings and errors has steadily fallen over time, and publishing the results systematically makes the progress easy to see. In many cases these warning and error messages can be used much like the Kernel Janitors list, because compile errors are usually caused by small defects that take some effort to fix.

Finally, there is AndrewMorton's "must-fix" list. As he has been selected as the maintainer after the 2.6 kernel release, he uses his privileges to list issues that he sees as the most urgent solution before the final 2.6 kernel release. The must-fix list contains defects in the Kernel Bugzilla system, components to be completed, and other known problems. failure to resolve these problems will impede the 2.6 release. This information can help specify the steps required before the new kernel is released. for those who are concerned about when the 10 thousand kernel is expected to be released, it also provides valuable information.

Since the 2.6 kernel was released at the end of last year, some of this material has understandably gone unmaintained, while other efforts did not end with the major release and have been updated since. It will be interesting to see which of these practices reappear, and what new ones emerge, the next time a major release approaches.

Conclusion

When most people think about a new stable kernel release, the first question is usually "What's new in this version?" The truth is that behind the new features and fixes there is also a development process that has steadily improved over time.

Open source development is thriving in the Linux community. The loose association between the kernel developers and everyone else working on Linux lets the team adapt to change. In many ways, the improvements in how Linux is developed and tested, especially those refined over time, have done even more for the reliability of the new kernel than the many individual features and defect fixes themselves.
