Reducing I/O and CPU bottlenecks: several methods to accelerate compilation on Linux


The project keeps getting bigger, and recompiling the entire project every time wastes a lot of time. After some research, I found the following methods to speed up builds and summarize them here.

Tmpfs

Some people say that using a ramdisk on Windows reduced a project's compilation time from 4.5 hours to 5 minutes. That number may be exaggerated, but compiling files held in memory should be much faster than compiling on disk, especially if the compiler generates many temporary files.

This approach has the lowest implementation cost: on Linux, simply mount a tmpfs. It places no requirements on the project being compiled and needs no changes to the build environment.

mount -t tmpfs -o size=1g tmpfs ~/build
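To make such a mount survive reboots, an /etc/fstab entry can be used instead. This is a sketch; the mount point and size are illustrative and should be adapted to your setup:

```shell
# /etc/fstab (illustrative entry): a 1 GB tmpfs mounted at /home/user/build
tmpfs  /home/user/build  tmpfs  size=1g  0  0
```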

Testing compilation speed with the Linux 2.6.32.2 kernel:

Physical Disk: 40 minutes and 16 seconds

Tmpfs: 39 minutes 56 seconds

Er... almost no change. It seems the bottleneck of slow compilation is, to a large extent, not I/O. However, a real project may include I/O-intensive operations such as packaging during the build process, so using tmpfs wherever possible is beneficial and harmless. Of course, for large projects, you need enough memory to afford the tmpfs.

make -j

Since I/O is not the bottleneck, the CPU should be an important factor affecting compilation speed.

By running make with the -j parameter, you can build the project in parallel. For example, on a dual-core machine you can use make -j4: make then allows up to four compilation commands to run at the same time, making more effective use of CPU resources.

Testing again with the kernel build:

make: 40 minutes 16 seconds

make -j4: 23 minutes 16 seconds

make -j8: 22 minutes 59 seconds

From this, appropriate parallelism on multi-core CPUs can significantly increase compilation speed. However, the number of parallel jobs should not be too high; a common rule of thumb is twice the number of CPU cores.
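The "twice the number of cores" rule of thumb can be computed on the fly with nproc. The printed job count depends on the machine, so this is just a sketch of the idiom:

```shell
# Derive a parallel job count of twice the available CPU cores
jobs=$(( $(nproc) * 2 ))
echo "will run: make -j$jobs"
# a real build would then invoke: make -j"$jobs"
```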

However, this solution is not completely free. If the project's makefiles are not well written and dependencies are not declared correctly, a parallel build may produce broken results. And if dependencies are declared too conservatively, the achievable parallelism drops and the full benefit cannot be realized.

Ccache

ccache caches intermediate compilation results so that time can be saved on recompilation. This is really useful when working on the kernel, because you often modify a bit of kernel code and recompile, while most of the code is unchanged between the two builds. The same holds for ordinary development projects. Why not just use the incremental builds that make already supports? In reality, because makefiles are often not well maintained, this "smart" mechanism may not work correctly at all, leaving make clean followed by a full make as the only safe option.

After ccache is installed, create symbolic links named gcc, g++, cc, and c++ under /usr/local/bin, each pointing to /usr/bin/ccache. Make sure ccache is the one invoked when the system runs commands such as gcc (usually /usr/local/bin comes before /usr/bin in PATH).
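The symlink setup can be scripted. The sketch below uses a per-user bin directory instead of /usr/local/bin so that no root access is needed; it works as long as that directory comes first in PATH. The /usr/bin/ccache path is the usual install location, but verify yours with `which ccache`:

```shell
# Create compiler-name symlinks that all point at ccache
mkdir -p "$HOME/bin"
for tool in gcc g++ cc c++; do
  ln -sf /usr/bin/ccache "$HOME/bin/$tool"
done
# Put the wrapper directory ahead of the real compilers
export PATH="$HOME/bin:$PATH"
```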

Continuing the test:

First compilation with ccache (make -j4): 23 minutes 38 seconds

Second compilation with ccache (make -j4): 8 minutes 48 seconds

Third compilation with ccache (after modifying some configuration, make -j4): 23 minutes 48 seconds

It seems that a configuration change (I changed the CPU type...) has a large impact on ccache: once the basic header files change, all cached results become invalid and must be regenerated. However, if you only modify the code in some .c files, the effect of ccache is quite obvious. ccache has no particular dependency on the project and its deployment cost is very low, which makes it very practical in daily work.

You can use ccache -s to view cache usage and hit statistics:

cache directory    /home/lifanxi/.ccache
cache hit          7165
cache miss         14283
called for link    71
not a C/C++ file   120
no input file      3045
files in cache     28566
cache size         81.7 Mbytes
max cache size     976.6 Mbytes

As can be seen, only the second compilation hit the cache; the misses came from the first and third compilations. The cache occupies 81.7 MB of disk, which is completely acceptable.

DistCC

One machine has limited capability, so why not compile with several computers together? This is feasible in day-to-day company development: every developer has their own development and build environment, compiler versions are generally consistent, and the office network performs well. This is where DistCC gets to show its strength.

Using DistCC does not, as one might imagine, require every computer to have a completely identical environment. It only requires that the source tree can be built in parallel with make -j, and that the systems taking part in distributed compilation have the same compiler. The principle is that preprocessed source files are distributed to multiple computers, while preprocessing, linking of the resulting object files, and all work other than compilation itself remain on the master machine that initiates the build. Therefore, only the machine that initiates the compilation needs a complete build environment.

After DistCC is installed, you can start its service:

/usr/bin/distccd --daemon --allow 10.64.0.0/16

This listens on the default port 3632 and allows DistCC connections from the given network.

Set the DISTCC_HOSTS environment variable to the list of servers that can take part in compilation. Usually localhost is included as well. However, if many machines are available for compilation, you can remove localhost from the list so that the local machine only does preprocessing, distribution, and linking, while compilation happens on the other machines. With many machines, localhost's coordination burden is already heavy, so it no longer compiles "part-time".

export DISTCC_HOSTS="localhost 10.64.25.1 10.64.25.2 10.64.25.3"

Then, as with ccache, link common commands such as gcc and g++ to /usr/bin/distcc.

When running make, the -j parameter must also be used. A reasonable job count is twice the total number of CPU cores across all machines participating in the build.
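DistCC's host-list syntax also supports an explicit per-host job limit via a /LIMIT suffix, which can encode the "twice the cores" rule per machine. The addresses below are the ones from the example above:

```shell
# Cap jobs per host: 2 on localhost, 4 on each dual-core remote machine
export DISTCC_HOSTS="localhost/2 10.64.25.1/4 10.64.25.2/4 10.64.25.3/4"
echo "$DISTCC_HOSTS"
```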

Perform the same test:

One dual-core computer, make -j4: 23 minutes 16 seconds

Two dual-core computers, make -j4: 16 minutes 40 seconds

Two dual-core computers, make -j8: 15 minutes 49 seconds

Compared with the 23 minutes on a single dual-core machine, this is much faster. Adding more computers would give even better results.

During compilation, you can use distccmon-text to watch how compilation tasks are being distributed. DistCC can also be used together with ccache at the same time; setting one environment variable is enough.
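The environment variable in question is CCACHE_PREFIX: when set, ccache prefixes every real compiler invocation with the named command, so cache misses get compiled through distcc. A sketch:

```shell
# On a cache miss, ccache will run "distcc <compiler> ..." instead of
# invoking the compiler directly
export CCACHE_PREFIX=distcc
export DISTCC_HOSTS="localhost 10.64.25.1 10.64.25.2 10.64.25.3"
# then build as usual, e.g.: make -j8
echo "CCACHE_PREFIX=$CCACHE_PREFIX"
```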

Summary:

tmpfs: addresses I/O bottlenecks by making full use of local memory

make -j: makes full use of local CPU resources

distcc: uses the resources of multiple computers

ccache: reduces the time spent repeatedly compiling the same code

The advantage of these tools is that their deployment cost is relatively low. Used in combination, they can easily save considerable time. The above covers only their most basic usage; for more, refer to their respective man pages.
