How to speed up the compilation of C++ code


C++ has always faced the world with its high performance held high, but when it comes to compilation speed it can only keep a low profile. Take the codebase I work on now: even using IncrediBuild across nearly a hundred machines, a complete build still takes four hours. Terrifying! Day-to-day development rarely needs a full local build, but compiling a few related projects is still enough to keep you waiting for quite a while (foreigners call this "monkeying around", which is quite vivid). It reminds me of working on a single-core 2.8 GHz machine years ago: put a book on the desk, click the Build button, and read for a while... memories.

As you can imagine, if it is not taken seriously, compilation speed can easily become a bottleneck in the development process. So why does C++ compile so slowly?

I think one of the most important reasons is C++'s basic "header file / source file" compilation model:

Each source file is a compilation unit that may include hundreds or even thousands of header files, and in every compilation unit those headers are read from disk and parsed all over again.

Each compilation unit produces an .obj file; all the .obj files are then linked together, and the link step is hard to parallelize.

The core problem is the repeated loading and parsing of countless header files, together with intensive disk operations.

Here are some ways to speed up compilation from various angles, mainly targeting the key problems above.

I. The code angle

Use forward declarations in header files instead of directly including other headers.

Don't think of it as "just one more header": because headers include other headers, the effect is magnified indefinitely. So do everything you can to slim your headers down. Yes, forward-declaring a class inside its namespace is more painful than a direct #include, which is very convenient, but you must resist the temptation. For class members, function parameters, and so on, use references and pointers to make forward declarations possible.

Use the Pimpl idiom

Pimpl is short for "private implementation". A traditional C++ class mixes its interface and its implementation together; the Pimpl approach separates them completely. As long as the class's public interface stays unchanged, a modification to the implementation only requires recompiling the .cpp, and the class's header becomes leaner as well.

A high degree of modularity

Modularity means low coupling: reducing interdependence as much as possible. There are two levels to this. Between files, when a header changes it should force as few other files as possible to recompile. Between projects, a modification to one project should not force too many other projects to recompile. This requires that the contents of a header, or of a project, stay single-purpose; don't stuff everything into it and create unnecessary dependencies. You could also call this cohesion.

Take headers as an example: don't put two unrelated classes, or macro definitions with no connection to them, into the same header. Keep each file single-purpose so that the files including it don't pull in content they don't need. I remember we once identified the "hottest" headers in the codebase and split them into several small independent files; the effect was considerable.

In fact, the refactoring we did last year, splitting many DLLs into a UI part and a core part, had the same effect: it improved development efficiency.

Remove redundant header files

Code that has been developed and maintained for a decade by countless hands is very likely to contain useless or duplicated #includes, and removing these redundant includes is well worth it. Of course, this mainly applies to .cpp files, because for a header it is hard to decide whether an include is redundant: it may be needed in one final compilation unit but not in another.

We wrote a Perl script to remove these redundant headers automatically; in one project it removed more than 5,000 includes.

Pay special attention to inline and templates.

These are two of C++'s more "advanced" mechanisms, but they also force us to put implementations in header files, which contributes greatly to header bloat and slow compilation. Weigh the cost before using them.

II. General techniques

Precompiled headers (PCH)

Put commonly used but rarely modified headers into a precompiled header. Then, at least within a single project, the same headers no longer need to be loaded and parsed over and over in every compilation unit.
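A typical precompiled header is just an ordinary header gathering the stable includes; the `pch.h` name and the project header below are illustrative. In Visual Studio the file is compiled once with /Yc and consumed with /Yu; GCC and Clang precompile it with `-x c++-header`. The key rule is that only headers that almost never change belong here, or every edit invalidates the whole PCH:

```cpp
// pch.h (hypothetical name): stable, widely used headers only.
#pragma once

// Standard library headers: large, used everywhere, never change.
#include <algorithm>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Project headers may also qualify if they are stable and widely included,
// e.g. a logging facade (illustrative path):
// #include "common/log.h"
```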

Unity Build

The unity build approach is simple: #include all the .cpp files into one .cpp (say, all.cpp) and compile only all.cpp. There is then only one compilation unit, which means no header is loaded and parsed more than once; and since there is only one .obj file, the link step no longer needs such intensive disk operations. Reportedly a 10x improvement is achievable.
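The whole technique fits in one file; the .cpp names below are placeholders. The main caveat, noted in the comments, is that everything now shares one translation unit:

```cpp
// all.cpp (hypothetical): the only file handed to the compiler.
// Each #include pulls an entire source file into this translation unit.
#include "widget.cpp"
#include "renderer.cpp"
#include "parser.cpp"

// Caveat: file-static variables, anonymous-namespace names, and local
// helper functions from different .cpp files now live in ONE translation
// unit and can collide; macros defined in one .cpp leak into the next.
```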

ccache

A compiler cache: by caching the results of previous compilations, it can reuse them when the same compilation is repeated, greatly speeding up rebuilds. With an ordinary incremental build, the system compares the timestamps of the source and object files to decide whether to recompile, a method that is not entirely reliable (for example, after checking out an older revision from SVN). ccache instead decides based on file contents, which is more trustworthy. Unfortunately, Visual Studio does not support this yet; it could add a new command between Build and Rebuild, say "Cache Build", after which Rebuild would rarely be needed.

Don't have too many additional include directories

The compiler locates the headers you #include by searching the include directories you provide. As you can imagine, if you provide 100 include directories and a header sits in the 100th, locating it is very painful. Organize your include directories and keep the list lean.

III. Compilation resources

To go faster, either reduce the work or add more workers. The first two sections were about reducing the work; adding workers also plays a very important role in speeding up compilation.

Parallel compilation

Buy a 4-core or 8-core CPU, and every build compiles 8 files in parallel; the speed is a joy to watch. If your boss disagrees, have him read this article: Hardware is Cheap, Programmers are Expensive.

A faster disk

We know that part of what makes compilation slow is disk operations, so besides minimizing them, we can also make the disk itself faster. With the 8 cores above all working, the disk very likely becomes the biggest bottleneck. Buy a 15,000 RPM disk, or an SSD, or a RAID 0 array; in short, the faster the better.

Distributed compilation

One machine's performance is always limited. Using idle CPU resources on the network, or dedicated build servers that compile on your behalf, can fundamentally solve the compilation-speed problem. Imagine a build that used to take 1 hour finishing in 2 minutes: once you have had it, you cannot do without it. IncrediBuild does exactly this.

Here is a more extreme situation: what if, even with IncrediBuild, the final compilation speed is still unsatisfactory? Just step outside the usual frame of mind, and compilation speed can take another qualitative leap, provided you have enough machines:

Suppose you have solutions A and B, where B depends on A, so B must normally be built after A. Each build takes 1 hour, 2 hours in total. But must B really be built after A? Step outside that frame and you get the following scheme:

Start build A and B at the same time.

A's build succeeds; B's build fails, but only at the final link step.

Re-link the projects in B.

So, by running A's build in parallel with B's and re-linking project B at the end, the entire compilation should be controllable within 1 hour 15 minutes.
