Efficiency
I harbor the suspicion that someone has performed secret Pavlovian experiments on C++ software developers. How else to explain the fact that so many programmers start to drool the moment the word "efficiency" is mentioned? (Scott Meyers is truly humorous. Translator's note.)
In fact, efficiency is no laughing matter. Programs that are too big or too slow fail to win acceptance, no matter how compelling their merits. That is as it should be. Software is supposed to help us work better; to claim that slower is better, that a program needing 32MB of memory is superior to one needing only 16MB, or that a program occupying 100MB of disk space beats one occupying 50MB, is simply nonsense. And although some programs take more time and space because they perform genuinely more complex operations, the bloat and sluggishness of many others can be attributed only to poor design and sloppy programming.
Before you set out to write an efficient C++ program, it is important to recognize that C++ itself is in no way responsible for any performance problems you encounter.
If you want to write an efficient C++ program, you must first be able to write efficient algorithms. Too many developers ignore this simple truth.
Yes, loops can be unrolled by hand, and shift operations can replace multiplications, but such tweaks accomplish nothing if the higher-level algorithms you use are inherently inefficient. Do you use a quadratic algorithm when a linear one is available? Do you compute the same value over and over? If so, it is no exaggeration to compare your program to a second-rate tourist attraction: worth a look if you have time to spare.
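As a minimal illustration (my own sketch, not code from the book): the micro-optimization in the first function is dwarfed by the algorithmic difference between the two duplicate detectors that follow. Replacing a multiplication with a shift saves at most a cycle or two, while replacing the quadratic scan with a single linear pass changes how the running time grows with the input.

    #include <cstddef>
    #include <unordered_set>
    #include <vector>

    // Micro-tweak: a shift instead of a multiplication by 2.
    // Modern compilers do this for you; the gain is marginal at best.
    inline unsigned doubled(unsigned x) { return x << 1; }   // same as x * 2

    // O(n^2): compares every pair of elements.
    bool hasDuplicateQuadratic(const std::vector<int>& v)
    {
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = i + 1; j < v.size(); ++j)
                if (v[i] == v[j]) return true;
        return false;
    }

    // O(n): a single pass over the data with a hash set.
    bool hasDuplicateLinear(const std::vector<int>& v)
    {
        std::unordered_set<int> seen;
        for (int x : v)
            if (!seen.insert(x).second) return true;  // failed insert: duplicate
        return false;
    }

For a million elements, no amount of shifting or loop unrolling will make the quadratic version competitive with the linear one.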
This chapter approaches the question of efficiency from two angles. The first is language-independent: it focuses on things you can do in any programming language. C++ offers a particularly appealing way to put them into practice, because its strong support for encapsulation makes it possible to replace inefficient implementations with better algorithms and data structures while the interface clients see stays the same.
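Here is a small sketch of that idea (the class and its members are hypothetical, not taken from the book). Because the lookup structure is a private detail of PhoneBook, it could start life as a linearly searched vector and later be swapped for the hash table shown here, and client code would not change at all.

    #include <string>
    #include <unordered_map>

    class PhoneBook {
    public:
        void add(const std::string& name, const std::string& number)
        { entries_[name] = number; }

        // The interface is fixed; what sits behind it can evolve freely.
        const std::string* find(const std::string& name) const
        {
            auto it = entries_.find(name);
            return it == entries_.end() ? nullptr : &it->second;
        }

    private:
        // Encapsulated detail: replaceable without touching clients.
        std::unordered_map<std::string, std::string> entries_;
    };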
The second angle focuses on the C++ language itself. High-performance algorithms and data structures are all very well, but sloppiness in the code you actually write can drag efficiency down considerably. The most potentially damaging mistakes are those that are easy to make and hard to detect, and constructing and destroying large numbers of objects is exactly such a mistake. Excess object constructions and destructions are like a massive hemorrhage in your program's performance: every time an unneeded object is created and destroyed, precious time drains away. This problem is so common in C++ programs that I devote four separate Items to explaining where these objects come from and how to eliminate them without compromising the correctness of your code.
Constructing large numbers of objects does not make a program bigger, only slower. There are other factors that affect performance as well, including the choice of libraries and the implementations of language features; I take these up in the Items that follow, too.
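A common source of such unneeded objects is passing by value where a reference-to-const would do (my own minimal example, not from the book):

    #include <string>

    // Pass by value: every call constructs, and later destroys,
    // a brand-new std::string copy of the argument.
    bool startsWithHash(std::string s)
    { return !s.empty() && s[0] == '#'; }

    // Pass by reference-to-const: no copy, no extra object, and the
    // callers' code does not change.
    bool startsWithHashFast(const std::string& s)
    { return !s.empty() && s[0] == '#'; }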
After studying the material in this chapter, you will be familiar with several principles that can improve the performance of nearly any program you write. You will know exactly how to prevent unneeded objects from creeping into your software, and you will have a sharper sense of how the compiler behaves when it generates executable code.
As the saying goes, forewarned is forearmed. So think of what follows as preparation before the battle.
Item M16: Remember the 80-20 rule
The 80-20 rule states that about 20% of a program's code accounts for about 80% of its resource usage: roughly 20% of the code consumes 80% of the running time, 20% of the code uses 80% of the memory, 20% of the code performs 80% of the disk accesses, and 80% of the maintenance effort goes into about 20% of the code. The rule has been validated again and again through experiments on countless machines, operating systems, and applications. It is more than a catchy phrase; it is a guideline about system performance with broad applicability and a solid experimental foundation.
When you think about the 80-20 rule, don't get hung up on the exact numbers. Some people prefer the stricter 90-10 rule, and there is experimental evidence for that, too. Whatever the precise figures, the fundamental point is the same:
The overall performance of your software is almost always determined by a small portion of its code.
For programmers striving to maximize their software's performance, the 80-20 rule both simplifies and complicates the job. On the one hand, it means that most of the time you can write code whose performance is merely adequate, because for 80% of your code, efficiency has no real effect on the performance of the system as a whole; that relieves some of the pressure. On the other hand, the rule also means that when your software does have a performance problem, you face a difficult task: you must not only locate the small pieces of code that cause the problem, you must also find ways to make them faster. Of these tasks, the hardest is usually finding the bottlenecks. There are essentially two ways to look for them: the way most people use, and the right way.
The way most people look for bottlenecks is to guess. Guided by experience, intuition, tarot cards, Ouija boards, rumors, or worse, programmer after programmer solemnly proclaims that a program's performance problem has been traced to network delays, improper memory allocation, a compiler that doesn't optimize aggressively enough, or some pig-headed manager's refusal to permit assembly language in critical loops. Such pronouncements are invariably delivered with a condescending sneer, and usually both the sneerers and their predictions are dead wrong.
Most programmers have lousy intuition about the performance characteristics of their programs, because program performance characteristics tend not to be determinable by intuition. The result is tremendous effort poured into improving the efficiency of parts of the program that have no significant effect on its overall behavior. For example, you may choose algorithms and data structures that minimize the amount of computation, but it's all for naught if the program is I/O-bound. Likewise, switching to an I/O library with better performance than the one shipped with your compiler (see Item M23) accomplishes nothing if the program is CPU-bound.
Given this state of affairs, what do you do if you face a program that runs too slowly or uses too much memory? The 80-20 rule implies that improving randomly chosen parts of the program won't help much. The fact that performance characteristics cannot be determined by intuition means that guessing at bottlenecks is unlikely to work any better than improving random parts of the program. What follows, then?
What follows is that guessing your way to the troublesome 20% of the program will lead only to heartache.
The right way is to identify the troublesome 20% of your program with a profiler. Not just any profiler will do: you want one that directly measures the resource you are interested in. For example, if your program is too slow, you want a profiler that tells you how much time is being spent in the different parts of the program. That way you can focus on the places where a significant improvement in local efficiency will yield a significant improvement in overall efficiency.
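A real profiler takes these measurements automatically and at a fine granularity; the hand-rolled timing below (a sketch of my own, with hypothetical phase functions) only illustrates the underlying idea of measuring where the time goes instead of guessing:

    #include <chrono>
    #include <cstdio>

    // Hypothetical stand-ins for two phases of a real program.
    void parseInput()  { /* ... */ }
    void processData() { /* ... */ }

    int main()
    {
        using Clock = std::chrono::steady_clock;
        auto ms = [](Clock::duration d) {
            return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
        };

        auto t0 = Clock::now();
        parseInput();
        auto t1 = Clock::now();
        processData();
        auto t2 = Clock::now();

        // Whichever phase dominates is where optimization effort belongs.
        std::printf("parseInput:  %lld ms\n", static_cast<long long>(ms(t1 - t0)));
        std::printf("processData: %lld ms\n", static_cast<long long>(ms(t2 - t1)));
    }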
A profiler that tells you only how many times each statement was executed or each function was called is a tool of limited utility. From the standpoint of improving performance, you do not care how many times a statement is executed or a function is called. After all, users and library clients rarely complain that too many statements are being executed or too many functions are being called. If the software is fast enough, nobody cares how many statements it executes, and if it is too slow, nobody cares how few. All people care about is that they hate to wait, and if your program keeps them waiting, they will hate you.
Still, knowing the frequency of statement executions or function calls can sometimes give you insight into what your software is actually doing. For example, if you create only a hundred objects of a certain type but discover thousands of calls to that class's constructors, that information is undoubtedly valuable. Furthermore, statement and function call counts can indirectly answer questions you cannot measure directly. If you have no direct way to measure dynamic memory usage, for instance, it helps to know how often the memory allocation and deallocation functions (i.e., operators new, new[], delete, and delete[]; see Item M8) are called.
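One way to obtain such counts (a crude sketch of my own, not the book's technique) is to replace the global allocation functions with versions that keep tallies:

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    static unsigned long newCalls    = 0;
    static unsigned long deleteCalls = 0;

    // Replaced global operator new: counts, then allocates as usual.
    void* operator new(std::size_t size)
    {
        ++newCalls;
        if (void* p = std::malloc(size ? size : 1)) return p;
        throw std::bad_alloc();
    }

    // Replaced global operator delete: counts, then frees.
    void operator delete(void* p) noexcept
    {
        ++deleteCalls;
        std::free(p);
    }

    int main()
    {
        int* p = new int(42);
        delete p;
        std::printf("operator new:    %lu calls\n", newCalls);
        std::printf("operator delete: %lu calls\n", deleteCalls);
    }

The array forms and the sized delete fall back on these two by default, so the tallies capture most dynamic memory traffic.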
Of course, even the best profiler is hostage to the data it is fed. If you profile your program while it processes unrepresentative input data, you are in no position to complain when the profiler leads you to fine-tune parts of your software (the other 80%) that have no effect on its usual performance.
Remember that a profiler can only tell you how a program behaved on a particular run (or set of runs). If you profile a program using unrepresentative input data, the resulting profile is unrepresentative too. Worse, it may well lead you to optimize your software's uncommon behavior and thereby pessimize it (that is, make it less efficient) in the areas where it is commonly used.
The best way to guard against such pathological results is to profile your software using as many data sets as possible. In addition, you must make sure each data set is representative of how the software is used by its customers (or at least its most important customers). It is usually easy to acquire representative data sets, because many customers are happy to let you use their data when profiling. After all, you are then optimizing the software to meet their needs.