Earlier, because the teacher suggested working on Linux, I had installed Ubuntu in Parallels Desktop, but my understanding never went deep; I only used simple commands such as git, wget, and vim. Below is a brief comparison of Ubuntu and CentOS gathered from online sources.
1. Ubuntu
The graphical interface is polished, and Ubuntu's strongest area is the desktop, not the server. I had only ever installed it on my own computer, never on a server, and Ubuntu was not among the operating system choices offered for the VPS.
2. CentOS
Many commercial companies use CentOS in their production deployments. CentOS is a community re-release compiled from the RHEL source code. It is simple, does the command line particularly well, is stable, and has strong English documentation and community support. It shares its origins with Red Hat: although no separate commercial support is provided, clues can often be found in Red Hat material. Compared with Debian, CentOS is slightly larger. It is a very mature Linux distribution.
Machine configuration for CentOS 7
MacBook Pro (13")
Processor: 2.7 GHz Intel Core i5
Memory: 8 GB 1867 MHz DDR3
Storage: 128 GB
Virtual machine: Parallels Desktop 11, CentOS 7
1. Obtain the CentOS-7.0 64-bit ISO image (about 2 GB compressed, about 5.7 GB unpacked) and install it successfully.
uname can also be used to check the kernel version.
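For example, a quick check right after installation (the version string shown here is only my assumption of what a stock CentOS 7 install reports):
# uname -r
3.10.0-123.el7.x86_64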
2. Get the stable version 4.6.4 (released 2016-07-11) from www.kernel.org
1. Download the kernel source (over HTTPS):
wget https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.6.4.tar.xz
After running wget I got 404 Not Found: the URL was wrong because I had typed 4.6 where v4.x should be.
The address was later sorted out by copying it from the browser.
A small extra step: set up root privileges properly to make the later operations easier.
Here's how:
(1) sudo passwd root: follow the prompts to set a new root password. (2) su root: switch to root, entering the new password. (3) Edit the file with vi /etc/sudoers, find the line reading '## Allows people in group wheel ...' and remove the # comment in front of the wheel entry.
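Put together as commands, the three steps above might look like this (a sketch; the exact wording of the wheel line in /etc/sudoers can differ between versions):
sudo passwd root      # set a new root password when prompted
su root               # switch to root with the new password
vi /etc/sudoers       # uncomment the line similar to: %wheel ALL=(ALL) ALL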
Some common vim commands:
i starts insert mode, x deletes a character, :wq saves and quits; for a read-only file, :wq! forces the save and quit.
2. Unpack the file and move it to /usr/src/
For a .tar.xz archive the corresponding extraction flag is -Jxvf:
# tar -Jxvf linux-4.6.4.tar.xz
tar xvf: unpack a tar archive into the target file or directory
tar cvf: pack files or directories into a tar archive
tar zxvf: gunzip and unpack a gzip-compressed archive
tar zcvf: pack the directory or files into a tar archive and gzip-compress it
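A minimal sequence for this whole step, assuming the tarball was downloaded to the current directory (the paths are my own assumption), could be:
# tar -Jxvf linux-4.6.4.tar.xz
# mv linux-4.6.4 /usr/src/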
# cd /usr/src/linux-4.6.4
Enter the source directory.
!! Clean up traces of any previous compilation.
If the kernel source code has just been unpacked, you can skip this step; otherwise execute the following two commands:
make mrproper
make clean
3. Set Kernel compilation options
make menuconfig
Running this command requires the ncurses library (a character-terminal handling library that provides panels and menus); if an error occurs, install it, along with gcc:
yum install ncurses-devel gcc
Error: a message that OpenSSL could not be found.
sudo yum install openssl-devel
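Starting from a minimal CentOS 7 install, the dependencies mentioned above can also be pulled in with a single command (a sketch; further packages such as bc or perl may still be requested as the build proceeds):
# yum install gcc make ncurses-devel openssl-devel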
Compile the compressed kernel image
make bzImage
The compilation generates a bzImage file, which corresponds to the vmlinuz file in the /boot directory; it is a compressed kernel image. At boot time the file is decompressed into memory before the operating system can run.
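Once make bzImage finishes, you can confirm where the image ended up, for example:
# ls -lh arch/x86/boot/bzImage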
4. Compile the kernel modules
make modules
Besides the kernel image itself, the running kernel needs to load some peripheral modules (such as drivers).
Install the kernel modules
make modules_install
The modules are typically installed under the /lib/modules directory.
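You can check that a directory for the new kernel appeared there; the listing below is only an assumption about what it might contain:
# ls /lib/modules/
3.10.0-123.el7.x86_64  4.6.4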
5. Install the kernel
make install
This command is simple: it puts the vmlinuz and System.map files into the /boot directory and generates the menu.lst/grub.conf entries in the /boot/grub directory.
Note: if you encounter an error that the kernel files cannot be found, execute the following two commands:
cp arch/x86/boot/bzImage /boot/vmlinuz-4.6.4
cp System.map /boot/System.map-4.6.4
To time the build:
# time make
6. Modify the boot order: in grub.conf set
default=0
Reboot
Earlier I had hit the problem that the kernel would not change: grub.conf was blank and had to be written out in full.
No valid domains in package 0
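For reference, CentOS 7 actually ships GRUB2, so instead of hand-editing a legacy grub.conf, an alternative (my own addition, not part of the original notes) is to regenerate the GRUB2 configuration and select the new entry with the grub2 tools:
# grub2-mkconfig -o /boot/grub2/grub.cfg
# grub2-set-default 0
# reboot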
1. How to accelerate Linux compilation in a multi-core environment [reprint]
1. tmpfs
Some people say that using a RAM disk on Windows cut a project's compile time from 4.5 hours to 5 minutes. That number may be exaggerated, but compiling with the files in memory should certainly be faster than on disk, especially when the compiler generates a lot of temporary files.
This approach has the lowest implementation cost: on Linux, simply mount a tmpfs. It places no requirements on the project being compiled and needs no change to the compilation environment.
mount -t tmpfs tmpfs ~/build -o size=1g
Test the compile speed with 2.6.32.2 Linux kernel:
Physical Disk: 40 minutes 16 seconds
With tmpfs: 39 minutes 56 seconds
Uh... nothing changed. It seems the bottleneck in slow compilation is, to a large extent, not IO. For a real project, though, the build may also include packaging and other IO-intensive steps, so using tmpfs wherever possible is harmless and may help. Of course, for a large project you need enough memory to afford the tmpfs.
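One way to actually place the build output on the tmpfs without touching the source tree is an out-of-tree kernel build with O= (a sketch combining the mount command above with make's O= option; the directory and -j value are assumptions):
mkdir -p ~/build
sudo mount -t tmpfs tmpfs ~/build -o size=1g
make O=$HOME/build defconfig      # or supply your own .config inside ~/build
make O=$HOME/build -j4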
2. make -j
Since IO is not the bottleneck, the CPU should be an important factor affecting compilation speed.
With make -j plus a number, the project can be compiled in parallel. For example, on a dual-core machine you can use make -j4, letting make run up to 4 compile commands at the same time, which uses CPU resources more effectively.
Testing with the kernel again:
With make: 40 minutes 16 seconds
With make -j4: 23 minutes 16 seconds
With make -j8: 22 minutes 59 seconds
So with appropriate parallelism, compilation speed on multi-core CPUs improves significantly. The number of parallel tasks should not be too large, though; roughly twice the number of CPU cores is generally appropriate.
This scheme is not entirely cost-free either: if the project's makefiles do not set dependencies up correctly, parallel compilation can produce incorrect results. And if the dependency settings are too conservative, the parallelism of the build drops and the full benefit is not achieved.
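A convenient way to follow the rule of roughly twice the number of cores, without hard-coding the count (a small sketch using coreutils' nproc), is:
make -j$(( $(nproc) * 2 ))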
3. ccache
ccache caches intermediate compilation results so that time can be saved when compiling again. This is very useful when hacking on the kernel, because you often modify a bit of kernel code and then recompile, and between those two builds most things have not changed. The same goes for everyday development projects. Why not just use the incremental compilation that make already supports? Because in reality, thanks to makefile problems, that "smart" scheme may simply not work, and the only reliable option is make clean followed by make every time.
After installing ccache, you can create gcc, g++, c++, and cc symbolic links under /usr/local/bin that point to /usr/bin/ccache. In short, make sure that when the system invokes gcc and the other compiler commands it actually calls ccache (typically /usr/local/bin comes before /usr/bin in PATH).
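A minimal way to set up those links, assuming ccache was installed at /usr/bin/ccache as on most distributions:
sudo ln -s /usr/bin/ccache /usr/local/bin/gcc
sudo ln -s /usr/bin/ccache /usr/local/bin/g++
sudo ln -s /usr/bin/ccache /usr/local/bin/cc
sudo ln -s /usr/bin/ccache /usr/local/bin/c++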
To continue testing:
First compilation with ccache (make -j4): 23 minutes 38 seconds
Second compilation with ccache (make -j4): 8 minutes 48 seconds
Third compilation with ccache (several configuration options changed, make -j4): 23 minutes 48 seconds
It seems that changing the configuration (I changed the CPU type...) has a big impact on ccache, because basic header files change, which invalidates all of the cached data and everything has to be done over. But if you only change the code in a few .c files, the effect of ccache is quite noticeable. ccache has no particular dependence on the project and its deployment cost is very low, which makes it useful in everyday work.
You can use ccache -s to view cache usage and hit statistics:
cache directory     /home/lifanxi/.ccache
cache hit           7165
cache miss          14283
called for link     71
not a C/C++ file    120
no input file       3045
files in cache      28566
cache size          81.7 Mbytes
max cache size      976.6 Mbytes
As you can see, the cache hits clearly come from the second compilation, while the cache misses come from the first and third. The two builds used 81.7 MB of disk for the cache, which is entirely acceptable.
4. distcc
A single machine has limited capability, so multiple computers can be combined to compile together. This is quite feasible in everyday company development: each developer has their own development and build environment, the compiler versions are generally consistent, and the company network usually performs well. This is where distcc comes into its own.
Using distcc does not require every computer to have a fully identical environment, as one might imagine. It only requires that the source code can be compiled in parallel with make -j and that the computers taking part in the distributed compilation have the same compiler version. Its principle is to distribute the preprocessed source files to multiple computers; preprocessing, linking of the compiled object files, and the other work besides compiling are still done on the host that starts the build, so only that machine needs a complete compilation environment.
After distcc is installed, you can start its service:
/usr/bin/distccd --daemon --allow 10.64.0.0/16
By default it listens on port 3632 and, with the option above, allows distcc connections from machines on that network.
Then set the DISTCC_HOSTS environment variable to list the machines that can take part in the compilation. localhost usually participates as well, but if many machines are available you can remove localhost from the list so that the local machine only does preprocessing, distribution, and linking while the compiling happens on the other machines. With many machines involved, localhost's processing load is already heavy, so it no longer "moonlights" as a compile node.
export DISTCC_HOSTS="localhost 10.64.25.1 10.64.25.2 10.64.25.3"
Then, as with ccache, link the commonly used g++, gcc and other commands to /usr/bin/distcc.
When running make you still use the -j parameter; generally the number of parallel tasks can be about twice the total number of CPU cores of all the participating computers.
Also test:
One dual-core computer, make -j4: 23 minutes 16 seconds
Two dual-core computers, make -j4: 16 minutes 40 seconds
Two dual-core computers, make -j8: 15 minutes 49 seconds
That is much faster than the 23 minutes at the start with a single dual-core machine. If more computers join, even better results can be obtained.
During compilation you can use distccmon-text to watch how the compilation tasks are being distributed. distcc can also be used together with ccache, which is conveniently enabled by setting an environment variable.
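The variable in question is CCACHE_PREFIX (this is from ccache's documentation, not the reprinted article; treat the setup below as a sketch). With the compiler commands linked to ccache as described earlier, ccache will then hand each real compilation off to distcc:
export CCACHE_PREFIX=distcc
make -j8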
To summarize:
tmpfs: removes the IO bottleneck by making full use of local memory
make -j: makes full use of local CPU resources
distcc: leverages the resources of multiple computers
ccache: reduces the time spent repeatedly compiling the same code
The value of these tools lies in their relatively low deployment cost; together they can easily save a considerable amount of time. Only their most basic usage has been described above; for more, see their respective man pages.
Common kernel compilers
gcc
/usr/src/
Abs