Reference for improving Android source code compilation speed


The project keeps getting bigger, and re-compiling the whole thing every time wastes a lot of time. After some research, I found the following methods that help speed up the build and summarize them here.

1. Use tmpfs to replace part of the disk I/O.

2. ccache. The ccache cache directory can even be placed on tmpfs, but note that the cache is then lost after every reboot.

3. distcc: multi-machine distributed compilation.

4. Redirect screen output to a memory file or /dev/null, to avoid being slowed down by the terminal device (a slow device).

 

  Tmpfs

Some people say that using a ramdisk on Windows cut a project's compilation time from 4.5 hours to 5 minutes. That number may be exaggerated, but compiling files held in memory should be much faster than compiling from disk, especially when the compiler generates many temporary files.

This approach has the lowest implementation cost: on Linux, simply mount a tmpfs. It imposes no requirements on the project being compiled and needs no changes to the build environment.

mount -t tmpfs -o size=1G tmpfs ~/build
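
For example, the build can then be run entirely inside the tmpfs mount; a minimal sketch, assuming the kernel source sits in ~/linux-src (a hypothetical path) and that 1G is enough for the tree plus build output:

mkdir -p ~/build
sudo mount -t tmpfs -o size=1G tmpfs ~/build
cp -a ~/linux-src ~/build/            # copy the source tree into memory
cd ~/build/linux-src && make          # object and temporary files now live in RAM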

Compilation speed was tested by building Linux kernel 2.6.32.2:

Physical disk: 40 minutes 16 seconds

tmpfs: 39 minutes 56 seconds

Er... almost no change. It seems the bottleneck of this build is largely not I/O. Still, a real project may involve I/O-intensive steps such as packaging during the build, so using tmpfs does no harm whenever it is feasible. Of course, for large projects you need enough memory to cover the tmpfs overhead.

  Make -j

Since I/O is not a bottleneck, the CPU should be an important factor affecting the compilation speed.

With the -j parameter, make can build the project in parallel. For example, on a dual-core machine you can run make -j4, which lets make execute up to four compilation commands at the same time, so CPU resources are used more effectively.

Testing with the kernel build:

make: 40 minutes 16 seconds

make -j4: 23 minutes 16 seconds

make -j8: 22 minutes 59 seconds

So on a multi-core CPU, a suitable degree of parallel compilation can significantly speed up the build. The number of parallel jobs should not be too high, though; roughly twice the number of CPU cores is a common choice.
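
For instance, the job count can be derived from the core count at invocation time; a small sketch using the standard nproc command:

make -j$(( $(nproc) * 2 ))            # e.g. -j8 on a 4-core machine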

However, this approach is not entirely free of cost. If the project's makefiles are not well written and dependencies are not declared correctly, parallel builds may fail or produce incorrect results. Conversely, if dependencies are declared too conservatively, the achievable parallelism drops and the full benefit cannot be reached.

  Ccache

How ccache works:
ccache is also a compiler driver. During the first compilation, ccache caches the GCC "-E" preprocessor output, the compilation options, and the resulting .o files under $HOME/.ccache. On the second compilation it uses the cache whenever possible and updates it when necessary, so even "make clean; make" benefits from it. ccache is carefully written to guarantee that its output is exactly the same as calling gcc directly.

ccache caches intermediate compilation results so that time is saved when re-compiling. This is great when playing with the kernel, because you often modify a little kernel code and then rebuild, and most of the code is unchanged between the two builds. The same holds for ordinary development projects. Why not simply use the incremental builds that make already supports? In practice, because makefiles are often not written rigorously, that "smart" mechanism may not work reliably at all, and you end up running make clean and make again every time.

After ccache is installed, create symbolic links named gcc, g++, c++ and cc under /usr/local/bin that point to /usr/bin/ccache. This ensures ccache is invoked whenever the system calls commands such as gcc (in PATH, /usr/local/bin is usually searched before /usr/bin).
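
A minimal sketch of those links, assuming ccache itself is installed at /usr/bin/ccache:

sudo ln -s /usr/bin/ccache /usr/local/bin/gcc
sudo ln -s /usr/bin/ccache /usr/local/bin/g++
sudo ln -s /usr/bin/ccache /usr/local/bin/cc
sudo ln -s /usr/bin/ccache /usr/local/bin/c++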

Another way to enable it:

vi ~/.bash_profile

Add /usr/lib/ccache/bin to the front of PATH:

PATH=/usr/lib/ccache/bin:$PATH:$HOME/bin

This way, /usr/lib/ccache/bin/g++ runs every time you invoke g++, instead of /usr/bin/g++.

The effect is the same as running ccache g++ on the command line. :)

With this in place, ccache is used automatically whenever g++ is invoked after the user logs in.

Continuing the tests:

First compilation with ccache, make -j4: 23 minutes 38 seconds

Second compilation with ccache, make -j4: 8 minutes 48 seconds

Third compilation with ccache (after changing a few configuration options), make -j4: 23 minutes 48 seconds

It seems that changing the configuration (I changed the CPU type...) hurts ccache badly: once the basic header files change, all cached results become invalid and everything has to be compiled again. But if only some .c files are modified, the benefit of ccache is quite obvious. ccache has no particular dependency on the project and its deployment cost is very low, which makes it very practical in daily work.

You can use ccache -s to view cache usage and hit statistics:

cache directory      /home/lifanxi/.ccache
cache hit            7165
cache miss           14283
called for link      71
not a C/C++ file     120
no input file        3045
files in cache       28566
cache size           81.7 Mbytes
max cache size       976.6 Mbytes

You can see that cache hits only occurred during the second compilation; the misses came from the first and third compilations. The cache occupies 81.7 MB of disk, which is entirely acceptable.

  DistCC

A single machine has limited power, so why not let several computers compile together? This is quite feasible in a company's day-to-day development, because every developer has a development and build environment, the compiler versions are generally consistent, and the office network performs well. This is where distcc gets to show its strength.

Using distcc does not require every computer to have an identical environment, as one might imagine. It only requires that the source can be built in parallel with make -j and that the machines taking part in the distributed compilation have the same compiler. The principle is that preprocessed source files are distributed to the other machines, while preprocessing, linking of the compiled object files, and everything else apart from compilation still happens on the master machine that starts the build. Therefore only the machine initiating the build needs a complete build environment.

After distcc is installed, start its service on the machines that will help with compilation:

/usr/bin/distccd --daemon --allow 10.64.0.0/16

By default it listens on port 3632 and, with the --allow option above, accepts distcc connections from the local network.

Set the DISTCC_HOSTS environment variable to the list of servers that may take part in the compilation. Normally localhost compiles as well, but if many machines are available you can remove localhost from the list so that the local machine only does preprocessing, distribution, and linking while compilation runs on the other machines. With many helpers, localhost already has a heavy processing load, so it no longer compiles "part-time".

export DISTCC_HOSTS="localhost 10.64.25.1 10.64.25.2 10.64.25.3"
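
If localhost is dropped as described above, the list might look like this (a sketch; the /N suffix is distcc's standard way to cap the number of jobs sent to each host):

export DISTCC_HOSTS="10.64.25.1/4 10.64.25.2/4 10.64.25.3/4"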

Then, as with ccache, link common commands such as g++ and gcc to /usr/bin/distcc.
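
A minimal sketch of that setup, assuming distcc is installed at /usr/bin/distcc (the ~/distcc-bin directory name is just a placeholder):

mkdir -p ~/distcc-bin
ln -s /usr/bin/distcc ~/distcc-bin/gcc
ln -s /usr/bin/distcc ~/distcc-bin/g++
export PATH=~/distcc-bin:$PATH        # must come before the real gcc/g++ in PATH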

The -j parameter must still be passed to make; generally the number of parallel jobs can be set to twice the total number of CPU cores across all machines taking part in the compilation.
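
For example, with one dual-core master plus one dual-core helper (four cores in total, as in the test below), that rule of thumb gives 2 × 4 = 8 jobs:

make -j8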

Perform the same test:

One dual-core computer, make -j4: 23 minutes 16 seconds

Two dual-core computers, make -j4: 16 minutes 40 seconds

Two dual-core computers, make -j8: 15 minutes 49 seconds

Compared with the 23 minutes on a single dual-core machine, this is much faster, and adding more computers should give even better results.

During compilation, you can run distccmon-text to watch how the compilation tasks are distributed. distcc can also be used together with ccache; enabling that combination only takes one environment variable.
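
A sketch of that combination: CCACHE_PREFIX is a standard ccache setting that makes ccache hand the real compilation off to distcc whenever there is a cache miss:

export CCACHE_PREFIX=distcc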

  Summary:

tmpfs: relieves I/O bottlenecks by making full use of local memory

make -j: makes full use of local CPU resources

distcc: harnesses the resources of multiple computers

ccache: reduces the time spent repeatedly compiling the same code

The advantage of these tools is that their deployment cost is relatively low, and using them together can easily save a considerable amount of time. Only the most basic usage is described above; for more, refer to their respective man pages.

5. Another way to speed things up is to redirect screen output to a memory file or /dev/null, because blocking writes to the terminal device (a slow device) also slow the build down. A memory file is recommended so that the output can still be inspected when an error occurs.
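
A sketch of such a redirect, using /dev/shm (a tmpfs available on most Linux systems) as the "memory file" location:

make -j4 > /dev/shm/build.log 2>&1
tail -n 50 /dev/shm/build.log         # inspect the log if the build fails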

------------------------------------------------------------

After reading Embedded Android, I found that ccache can also be used to accelerate the compilation of Android's C and C++ code. The principle is the same: cache the intermediate .o files so that re-compilation is faster.

C and C++ files make up roughly half of the Android source, so this can save a lot of time.

By its nature, it has no effect on the first build; it only becomes effective after ccache has been enabled and one build has populated the cache.
Usage:

Add environment variables:

1. $ export USE_CCACHE=1

# Set the cache directory

2. $ export CCACHE_DIR=~/.ccache

Set the cache size:

3. $ cd android/

4. $ prebuilt/linux-x86/ccache/ccache -M 20G

You can watch ccache being used by running:

$ watch -n1 -d prebuilt/linux-x86/ccache/ccache -s



From the official website (http://source.android.com/source/initializing.html#ccache):

Setting up ccache

You can optionally tell the build to use the ccache compilation tool. ccache acts as a compiler cache that can be used to speed up rebuilds. This works very well if you do "make clean" often, or if you frequently switch between different build products.

Put the following in your .bashrc or equivalent.

export USE_CCACHE=1

By default the cache will be stored in ~/.ccache. If your home directory is on NFS or some other non-local filesystem, you will want to specify the directory in your .bashrc as well.

export CCACHE_DIR=<path-to-your-cache-directory>

The suggested cache size is 50-100GB. You will need to run the following command once you have downloaded the source code.

prebuilt/linux-x86/ccache/ccache -M 50G

This setting is stored in the CCACHE_DIR and is persistent.
