Java performance has a reputation for being something of a dark art. Part of the reason is that the Java platform is complex, and performance problems on it are often hard to pin down. Yet there has also been a historical tendency to study Java performance through folk wisdom and anecdote rather than through statistics and empirical reasoning. In this article, I would like to debunk some of the most egregious of these performance myths.
1. Java is slow.
Of all the fallacies about Java performance, this one is the most outdated, and probably the most glaring.
Indeed, in the late 1990s and at the beginning of this century, Java could sometimes be very slow.
Since then, however, virtual machine and JIT technology has improved steadily for more than a decade, and Java's overall performance is now very good.
In 22 of the 24 tests that make up six independent web performance benchmarks, Java frameworks placed in the top four.
Although the JVM's profile-guided optimization targets only the common code paths, that optimization is highly effective. In many cases, JIT-compiled Java code is as fast as C++, and this is true in more and more cases.
Still, some people believe the Java platform is slow, a perception that probably lingers from the experience of developers who worked with early versions of the platform.
Before drawing conclusions, we recommend keeping an open mind and evaluating up-to-date performance results.
2. A single line of Java code can be analyzed in isolation
Consider the following short line of code:
MyObject obj = new MyObject();
To a Java developer, it seems obvious that this line must allocate an object and invoke the appropriate constructor.
From that assumption, we might try to derive performance bounds: the line must entail a certain fixed amount of work, so we could attempt to calculate its performance impact from that presumption. This reasoning is flawed, because it assumes that the work will be carried out no matter what, under all circumstances.
In fact, both javac and the JIT compiler can optimize away dead code. In the case of the JIT compiler, code can even be optimized away speculatively, based on profiling data. In such cases the line of code never runs at all, so it has no performance impact.
In addition, in some JVMs, such as JRockit, the JIT compiler can even decompose operations on objects, so that the allocation can be avoided even when the code path is not dead.
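To make this concrete, here is a minimal sketch (the class and method names are ours, not from any particular codebase) of a dead allocation that a JIT compiler may eliminate entirely once the method becomes hot:

// A minimal sketch of an allocation the JIT may optimize away.
public class DeadAllocation {

    static class MyObject {
        int value;
        MyObject() { value = 42; }
    }

    static int compute() {
        MyObject obj = new MyObject(); // obj never escapes and is never read
        return 7;                      // after inlining and escape analysis,
                                       // the JIT may remove the allocation
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += compute(); // a hot loop gives the JIT a reason to optimize
        }
        System.out.println(sum);
    }
}

Timing this loop would tell you nothing reliable about the cost of the allocation, because after warmup the allocation may no longer exist at all.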
The implication is that context matters when dealing with Java performance problems, and premature optimization can produce counterintuitive results. So do not optimize prematurely. Instead, build your code first, then use performance-tuning techniques to locate the actual hotspots, and improve those.
3. A micro-benchmark measures what you think it measures
As we saw above, inspecting a small piece of code in isolation is far less accurate than analyzing the overall performance of the application.
Even so, developers love writing micro-benchmarks; tinkering with the low-level aspects of the platform seems to be irresistibly fun.
Richard Feynman once said: "The first principle is that you must not fool yourself, and you are the easiest person to fool." Nothing illustrates this better than writing Java micro-benchmarks.
Writing good micro-benchmarks is extremely difficult. The Java platform is complex, and many micro-benchmarks end up measuring only transient effects or other unintended aspects of the platform.
For example, an inexperienced author's micro-benchmark often ends up measuring timing jitter or garbage collection rather than the effect it was intended to capture.
Only developers and teams with a genuine need should write micro-benchmarks. Such benchmarks should be published in full (including source code), should be reproducible, and should be subjected to peer review and further scrutiny.
The Java platform's many optimizations mean that a single run of a benchmark tells you little, because individual runs vary significantly. To get a true and reliable answer, run an individual benchmark many times and aggregate the results.
If readers feel the need to write micro-benchmarks, the paper "Statistically Rigorous Java Performance Evaluation" by Georges, Buytaert, and Eeckhout is a good starting point. Without proper statistical analysis, we are easily misled.
There are well-developed tools and communities around them (such as Google's Caliper). If it is really necessary to write a micro-benchmark, do not write one from scratch; draw on peer opinion and experience instead.
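As an illustration, here is a minimal sketch of what a harness-based micro-benchmark might look like, written with JMH, a benchmark harness comparable to the Caliper tool mentioned above (the class name and measured operation are ours). The harness, not hand-rolled timing code, takes care of warmup, forking into fresh JVMs, and statistical aggregation:

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5)       // let the JIT compile the hot path before measuring
@Measurement(iterations = 10) // measure only after warmup
@Fork(3)                      // repeat in fresh JVMs and aggregate the results
@State(Scope.Thread)
public class ConcatBenchmark {

    private String a = "hello";
    private String b = "world";

    @Benchmark
    public String measureConcat() {
        return a + b; // returning the result prevents dead-code elimination
    }
}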
4. A slow algorithm is the most common cause of performance problems
Developers (like people in general) are prone to a common cognitive bias: assuming that the part of the system they control is the part that matters.
This bias shows up in discussions of Java performance as well: Java developers tend to believe that algorithm quality is the primary cause of performance problems. Developers think about code, so they naturally gravitate toward thinking about their algorithms.
In practice, when dealing with a range of real-world performance problems, algorithm design turns out to be the underlying issue less than 10% of the time.
Conversely, garbage collection, database access, and misconfiguration are all far more likely to make an application slow than its algorithms are.
Most applications process relatively small amounts of data, so even an inefficient core algorithm rarely causes a serious performance problem. Our algorithms may well be suboptimal, but the performance cost they add is small; far more of the performance problems come from other parts of the application stack.
So our best advice is to use real production data to uncover the true causes of performance problems. Measure; don't guess!
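As a small illustration of measuring rather than guessing, here is a minimal sketch that reads the JVM's own cumulative garbage-collection statistics through the standard management API (in production you would feed such numbers into a monitoring system rather than print them):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints how many collections each garbage collector has run and how long
// they took in total. Numbers like these, gathered from a real workload,
// point at the true cost of GC far more reliably than intuition does.
public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}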