Linux Kernel Compilation Practice: Linux Configuration and Kernel Compilation Utilities

Many tools are used to configure and compile the kernel. Here we introduce only a few key ones; for more information, see the relevant manual pages.

Make

Make is a build automation tool that helps compile large software projects. Used correctly, Make can greatly reduce compilation time because it eliminates unnecessary recompilation. Its basic idea is simple: if a target file was built after the last modification of its source file, the target is "up to date" and does not need to be rebuilt; if the target was not rebuilt after the source was last modified, the target is "stale" and must be recompiled. To understand how Make carries out a task, a few terms are needed:

◆ Target: the task to be carried out. In most cases it is the name of the file to be generated, but it can also be just the name of an action.

◆ Dependency: a relationship between two targets. If target A must be rebuilt whenever target B changes, then A depends on B, and B is a prerequisite of A.

◆ Variable: a holder for temporary information. Variables used in Make are referenced inside parentheses, for example $(TEMP).

◆ Command: the commands used to carry out a task; a rule may contain one command, several, or even none.

◆ A complete rule has the following format:

target: prerequisites
        command
        ......

Only the target is required; the other parts are optional. A complete rule describes the dependencies of a target and the method for building it, and rules are the most important part of a Makefile. A concrete example follows this list.

◆ Makefile: describes how to generate one or more targets. It lists the files each target depends on and provides the rules needed to build them correctly.
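
As a minimal sketch of the rule format above (the file names prog, main.c, util.c and util.h are made up for illustration), a small Makefile might look like this:

CC = gcc

prog: main.o util.o
	$(CC) -o prog main.o util.o

main.o: main.c util.h
	$(CC) -c main.c

util.o: util.c util.h
	$(CC) -c util.c

clean:
	rm -f prog *.o

Here prog is rebuilt only when main.o or util.o is newer than it, which is exactly the "up to date"/"stale" timestamp comparison described above; clean is a target that is only the name of an action, not a file. (In a real Makefile, command lines must be indented with a tab character.)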

Next we take kbuild from kernel 2.4.23 as an example and briefly walk through the kernel build process. The complete build is encapsulated by the following five kinds of Makefiles.

1. Makefile in the root directory

This is the most important Makefile. It defines all the architecture-independent variables and targets, reads the .config file, and based on that information builds vmlinux and the modules. Make recursively invokes the Makefiles in the subdirectories to build these two targets.

2. The configuration file .config

Execute "make" to generate the configuration file in the root directory. Its content records the specific configuration options. You can also put the configuration file of the old kernel here.

3. arch/*/Makefile

These are the architecture-specific Makefiles. Each one is included by the Makefile in the root directory and supplies architecture-specific information to kbuild.
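
For reference, the inclusion in the 2.4 top-level Makefile looks roughly like the following (quoted from memory, so treat it as an approximation rather than the exact source):

ARCH := $(shell uname -m | sed -e s/i.86/i386/ -e s/sun4u/sparc64/ -e s/arm.*/arm/ -e s/sa110/arm/)
...
include arch/$(ARCH)/Makefile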

4. subdirectory Makefiles

They exist in every subdirectory, and there are hundreds of them. Each one takes the information passed down by the upper-level Make, uses it to build a list of files to compile, and hands that list to Rules.make for processing.

5. Rules.make

Almost every subdirectory Makefile includes this file. Using the file lists built by the subdirectory Makefiles, Make applies the generic rules defined in Rules.make to compile all the source files on those lists.
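
A typical 2.4-era subdirectory Makefile follows the pattern sketched below; the CONFIG_FOO option and object names are hypothetical, but the obj-$(...) lists and the final include of Rules.make are the conventions described above:

O_TARGET := foo.o                       # all obj-y objects are linked into this file
obj-$(CONFIG_FOO)       += foo_core.o   # lands in obj-y or obj-m depending on .config
obj-$(CONFIG_FOO_EXTRA) += foo_extra.o

include $(TOPDIR)/Rules.make            # the generic compile and link rules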

The kbuild execution flow is as follows. Make starts from the root-directory Makefile and obtains the architecture-independent variables and dependencies; the architecture-specific variables and other information come from arch/*/Makefile, which extends the variables provided by the root Makefile. At that point kbuild has all the variables and targets needed to build the kernel. Make then descends into the subdirectories, passing some variables to each subdirectory Makefile. Based on the configuration information, each subdirectory Makefile decides which source files to compile and builds the list of files. Finally, Rules.make determines how those files are compiled, according to the rules it defines.

Note that because Make recurses into subdirectories and does not process rules in the order they appear, execution is not a simple line-by-line affair. However complicated a Make run becomes, though, it has only two phases. In the first phase Make reads in all variables and analyzes the dependencies among all targets, building a dependency tree; all immediate variables (assigned with ":=") are expanded during this phase, much like ordinary assignments in C. Deferred variables (assigned with "=") are not expanded until they are actually used, which generally happens later; this point deserves special attention. In the second phase Make executes commands according to the dependency tree.

Therefore the order in which a target and the rules for its prerequisites are defined does not matter; the rule for a prerequisite may well appear hundreds of lines later. Make patiently reads all the Makefiles before analyzing the dependency tree.
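
Returning to the two assignment styles mentioned above, here is a minimal sketch of the difference (the variable names are made up):

NOW   := $(wildcard *.c)   # immediate: expanded once, when Make reads this line
LATER  = $(wildcard *.c)   # deferred: re-expanded each time it is used

show:
	@echo "immediate: $(NOW)"
	@echo "deferred:  $(LATER)"

If another rule creates a new .c file during the same run, LATER picks it up when the echo command executes in the second phase, while NOW keeps the list that existed during the read-in phase.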

GCC

GCC is the free GNU compiler and the only compiler the kernel officially supports. A complete compilation with GCC goes through the following steps:

◆ Preprocessing: GCC invokes the preprocessor cpp to handle directives such as #define, #if and #include.

◆ Compilation: this stage turns the preprocessed input into assembly language. Because the assembler as is usually invoked immediately afterwards, the assembly output is normally not written to a file; the -S option forces GCC to stop here and save the assembly version of the source program.

◆ Assembly: the assembly-language source is taken as input and assembled into a .o object file.

◆ Linking: the final phase, in which the .o modules are linked together into an executable file.
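
The four stages can also be driven by hand; a short sketch using a hypothetical hello.c:

gcc -E hello.c -o hello.i   # preprocessing only
gcc -S hello.i -o hello.s   # compile to assembly
gcc -c hello.s -o hello.o   # assemble into an object file (calls as)
gcc hello.o -o hello        # link into an executable (calls ld)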

As

Users can invoke as explicitly to process assembly files directly. The object files produced by as are divided into a text section (.text), a data section (.data), and an uninitialized-data section (.bss).
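
For example, assuming a hypothetical assembly file hello.s, one way to assemble it and inspect those sections:

as hello.s -o hello.o   # assemble into an object file
size hello.o            # print the sizes of the .text, .data and .bss sections
objdump -h hello.o      # list all section headers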

Ld

Like as, ld can be invoked explicitly to combine several modules into a single executable. The link process is usually described by an ld linker script, written in the Linker Command Language; the command "ld --verbose" shows the linker script used by default.
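
A brief sketch of driving the linker directly (the object names are hypothetical; linking a complete program also needs the C runtime objects, so in practice gcc is usually left to invoke ld):

ld --verbose | less               # show the default linker script
ld -r main.o util.o -o whole.o    # merge two objects into one relocatable file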

Ar

Ar is a GNU binary utility used to create and modify archive files and to extract members from them. The .a archives it produces are static library files containing many compiled object-code routines.
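
A brief sketch with hypothetical object names:

ar rcs libfoo.a foo.o bar.o     # create (or update) the archive and write an index
ar t libfoo.a                   # list the members of the archive
gcc main.o -L. -lfoo -o prog    # link a program against the static library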

RPMBuild

Use "make rpm" to make the kernel source code into an RPM package. Before that, kbuild will execute "make spec" to generate the spec file used by the rpmbuild program. For details, see "man rpmbuild ".

Middleware

The various scripts and C source files under the scripts/ directory in the kernel root can be regarded as middleware: they are not kernel components themselves, only auxiliary programs used while kbuild runs. Here we take split-include as an example to describe how the configuration file is processed.

. Config is composed of key/value pairs. Its content is similar:

CONFIG_MPENTIUMIII=y
# CONFIG_MPENTIUM4 is not set
CONFIG_REISERFS_FS=m



This information is generated automatically during configuration. At the same time, include/linux/autoconf.h is generated from the .config content; its format looks like this:

#define CONFIG_MPENTIUMIII 1
#undef  CONFIG_MPENTIUM4
#undef  CONFIG_REISERFS_FS
#define CONFIG_REISERFS_FS_MODULE 1



Comparing the two, it is easy to see that include/linux/autoconf.h captures the intent of .config: which components are not compiled at all, which are compiled into the kernel, and which are compiled as modules. split-include then creates the corresponding directories and .h files under include/config/ according to include/linux/autoconf.h, and each of those .h files carries just one line from include/linux/autoconf.h. For example, if NTFS file system support is selected during configuration and compiled into the kernel, "CONFIG_NTFS_FS=y" is written to .config, the corresponding "#define CONFIG_NTFS_FS 1" is generated in include/linux/autoconf.h, and all C source files related to the NTFS file system end up depending on the include/config/ntfs/fs.h header file.
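
Following the description above, include/config/ntfs/fs.h would then contain roughly this single line carried over from autoconf.h (a sketch based on the text, not a verbatim copy of the generated file):

#define CONFIG_NTFS_FS 1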

If the kernel has been compiled before and "make mrproper" has not been run, .config, include/linux/autoconf.h and include/config/ are not deleted, which raises the question of combining old and new configurations. A brand-new kernel tree is unconfigured, but if all you want is to add NTFS support on top of the original kernel's features, configuring everything from scratch is a waste of time: you can keep using the original kernel's .config, leave all the existing configuration information unchanged, and add the new feature on top of the original configuration.
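
One common way to do this in practice (the path to the old tree is hypothetical):

cp /usr/src/linux-old/.config .config   # reuse the previous kernel's configuration
make oldconfig                          # prompt only for options that are new or changed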

In more complicated cases the retained old configuration information has to be compared with the new configuration information: which old values should be overwritten and which should be kept? Several possible situations are shown below:




[Figure: the possible combinations of old and new configuration values; original image: http://unix-cd.com/unixcd12/eWebEditor/UploadFile/200611391835349.jpg]

The old values are kept in the .h files under include/config/, while the new values are in the freshly generated include/linux/autoconf.h. The split-include code not only handles these five situations but also shows how the files and subdirectories under include/config/ are generated.