For many developers, Java performance has acquired a reputation as a kind of dark art. Part of this comes from the complexity of the platform itself, which can make the root cause of a performance problem hard to locate. Historically, Java performance tuning has also tended to rely on folk wisdom and individual experience rather than on statistical and empirical reasoning. In this article, I want to debunk some of the most persistent technical myths.
1. Java is slow
Of all the outdated Java performance fallacies, this is probably the most glaring.
Yes, Java was somewhat slow in the 1990s and early 2000s.
However, after more than a decade of subsequent improvements to virtual machine and JIT technology, the performance of the Java platform as a whole is now surprisingly fast.
In independent web performance benchmarks, Java frameworks consistently occupy most of the top positions.
The JVM's profiling component not only optimizes the common code paths but also targets the performance-critical ones. In most cases, JIT-compiled code is as fast as C++.
In spite of this, talk of Java being slow persists, presumably due to a historical prejudice held over by people who worked with early versions of Java.
2. A single line of Java code can be analyzed in isolation
Consider the following small piece of code:
MyObject obj = new MyObject();
To a Java developer, it seems obvious that this line must allocate an object and run the corresponding constructor.
From there, we might start to reason about performance bounds: we know some exact amount of work must be done, so based on that assumption we can calculate the performance impact of this line.
This is a cognitive bias: based on past experience, we assume the work must always be performed.
In fact, both javac and the JIT compiler can optimize dead code away. In the case of the JIT compiler, code can even be optimized speculatively based on profiling data, in which case the line may never run at all and thus have no performance impact.
Moreover, in some JVMs, such as JRockit, the JIT compiler can decompose object operations (escape analysis and scalar replacement), so that the allocation can be avoided even when the code path is not eliminated entirely.
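To make this concrete, here is a minimal sketch (the class and method names are invented for illustration) of an allocation that a modern JIT compiler may remove entirely: the Point object never escapes the method, so escape analysis can replace it with its two fields and allocate nothing on the heap.

```java
public class EscapeDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // The Point never escapes this method, so the JIT's escape analysis
    // may scalar-replace it: the compiled code can skip the allocation.
    static int squaredLength(int x, int y) {
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    public static void main(String[] args) {
        System.out.println(squaredLength(3, 4)); // prints 25
    }
}
```

Whether the allocation is actually elided depends on the JVM and its settings (in HotSpot, escape analysis is controlled by the `-XX:+DoEscapeAnalysis` flag, on by default in modern releases), which is precisely the point: the cost of a single line cannot be judged in isolation.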
The point here is that context matters when dealing with Java performance, and premature optimization can produce counterintuitive results. So don't try to optimize prematurely. Instead of contorting your code up front, build it cleanly and then use performance tuning techniques to locate and correct the areas of code that actually hurt performance.
3. A microbenchmark tells you everything you need to know
As we have just seen, measuring the performance of a small piece of code in isolation is much cruder than analyzing the overall performance of an application.
Even so, developers love writing microbenchmarks. Some take real pleasure in tinkering with the low-level corners of the platform, and there is nothing wrong with enjoying that.
Richard Feynman once said: "The first principle is that you must not fool yourself, and you are the easiest person to fool." Keep that in mind whenever you write a Java microbenchmark.
Writing good microbenchmarks is hard. The Java platform is sophisticated and complex, and many microbenchmarks only succeed in measuring transient effects or other unintended aspects of the platform.
Write a microbenchmark only when you or your team have a genuine need for one. Such benchmarks should be packaged with the project (including source code), published alongside it, and be reproducible and open to peer review and further scrutiny.
Many optimizations of the Java platform mean that statistics gathered from a single run of a single benchmark can mislead. A benchmark has to be run many times to get a result that approaches the true answer.
If you think you must write a microbenchmark, first read "Statistically Rigorous Java Performance Evaluation" by Georges, Buytaert, and Eeckhout. Without a grounding in statistics, you will easily be led astray.
There are good tools and communities on the web to help with benchmarking, such as Google's Caliper. If you must write a benchmark, don't reinvent the wheel; draw on the advice and experience of others.
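If you do roll your own, at a minimum you must account for JVM warm-up and dead-code elimination. The sketch below (a deliberately naive harness, not a substitute for a real tool like Caliper) shows these two most common corrections; the workload and iteration counts are arbitrary for illustration:

```java
public class NaiveBench {
    // Workload under test: sum of the first n squares.
    static long sumOfSquares(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += (long) i * i;
        return s;
    }

    public static void main(String[] args) {
        // Pitfall 1: without a warm-up phase, you time the interpreter,
        // not the JIT-compiled code your application will actually run.
        for (int i = 0; i < 10_000; i++) sumOfSquares(1_000);

        // Pitfall 2: if the result is never used, the JIT may eliminate
        // the whole loop as dead code. Consuming the value prevents that.
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < 1_000; i++) sink += sumOfSquares(1_000);
        long elapsed = System.nanoTime() - start;

        System.out.println("checksum=" + sink);
        System.out.println("elapsedNanos>=0: " + (elapsed >= 0));
    }
}
```

Even with both corrections, a single run of a harness like this says little; as noted above, statistically rigorous evaluation requires many runs.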
4. A slow algorithm is the most common cause of performance problems
A common cognitive mistake among programmers (and people in general) is to assume that the part of the system they are responsible for is the most important.
In the case of Java performance, Java developers tend to believe that algorithmic quality is the dominant cause of performance problems. Developers think about code, so they naturally gravitate toward thinking about their algorithms.
In practice, when dealing with real-world performance problems, algorithm design turns out to be the fundamental issue less than 10% of the time.
Conversely, garbage collection, database access, and misconfiguration are all more likely than algorithms to make an application slow.
Most applications handle relatively small amounts of data, so even a defective main algorithm rarely causes serious performance problems. We have to concede that algorithms are a secondary cause of performance problems: the inefficiency they introduce is small compared with the effects of other parts of the application stack.
So our best advice is to rely on experience and production data to find the real cause of a performance problem. Collect data rather than guessing in a vacuum.
5. Caching solves everything
"Every question about computer science can be solved indirectly by attaching another level."
This programmer's motto, attributed to David Wheeler (and, thanks to the Internet, to at least two other computer scientists), is surprisingly common, especially among web developers.
This fallacy often arises from analysis paralysis when confronting an existing, poorly understood architecture.
Rather than deal with an intimidating existing system, developers frequently choose to sidestep it by adding a cache in front and hoping for the best. Of course, this only complicates the overall architecture and makes things worse for the next developer who tries to understand the state of the production architecture.
Large, sprawling architectures are written one line, and one subsystem, at a time. In many cases, however, a simpler, refactored architecture performs better, and it is almost always easier to understand.
So when you evaluate whether caching is really needed, plan to collect basic usage statistics (miss rate, hit rate, and so on) to prove that the caching layer is actually adding value.
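As a sketch of what instrumenting a cache layer might look like (a toy LRU cache built on LinkedHashMap; the class name and API here are invented for this example), the cache can count its own hits and misses so that the decision to keep it is backed by data:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class CountingCache<K, V> {
    private final Map<K, V> map;
    private long hits, misses;

    public CountingCache(int capacity) {
        // Access-ordered LinkedHashMap gives a simple LRU eviction policy.
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> e) {
                return size() > capacity;
            }
        };
    }

    // Returns the cached value, loading (and counting a miss) on absence.
    public V get(K key, Function<K, V> loader) {
        V v = map.get(key);
        if (v != null) { hits++; return v; }
        misses++;
        v = loader.apply(key);
        map.put(key, v);
        return v;
    }

    public long hits()   { return hits; }
    public long misses() { return misses; }

    public static void main(String[] args) {
        CountingCache<Integer, String> cache = new CountingCache<>(2);
        Function<Integer, String> loader = k -> "value-" + k;
        cache.get(1, loader); // miss
        cache.get(1, loader); // hit
        cache.get(2, loader); // miss
        cache.get(3, loader); // miss; evicts key 1 (capacity 2)
        cache.get(1, loader); // miss again: the eviction cost us a reload
        System.out.println("hits=" + cache.hits() + " misses=" + cache.misses());
    }
}
```

In a real system, these counters would feed a metrics pipeline rather than stdout; the point is that a cache whose hit rate you never measure is architecture by hope.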
6. All applications need to worry about stop-the-world pauses
the "Stop-the-world" mechanism is referred to as STW, that is, when the garbage collection algorithm is executed, all other threads in the Java application except the garbage collection Helper thread are suspended)
It is a fact of life on the Java platform that all application threads must periodically pause so that the garbage collector can run. This is sometimes played up as a serious weakness, even in the absence of real evidence.
Empirical studies have shown that humans typically cannot perceive changes in digital data (such as price movements) that occur more often than once every 200 milliseconds.
Consequently, a useful rule of thumb for applications whose primary users are humans is that stop-the-world (STW) pauses of 200 milliseconds or less are usually of no concern. Some applications, such as video streaming, need lower GC jitter, but many GUI applications do not.
A minority of applications (such as low-latency trading or mechanical control systems) cannot accept a 200-millisecond pause. Unless your application belongs to that minority, your users are unlikely to notice any impact from the garbage collector.
It is also worth noting that in any system with more application threads than physical cores, the operating system scheduler has to intervene to time-slice access to the CPUs. Stop-the-world sounds scary, but in reality every application, JVM-based or not, has to deal with contention for scarce compute resources.
Without measurement, it is impossible to see whether the JVM's approach adds any meaningful extra impact on application performance.
In summary, turn on GC logging to determine whether pauses are actually affecting your application. Analyze the log, either by hand or with a script or tool, to determine the pause times. Then decide whether these really pose a problem for your application domain. Most importantly, ask yourself the most pointed question of all: are users actually complaining?
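As an illustration, modern JDKs (9+) emit GC events via unified logging when started with -Xlog:gc, and pause durations can be pulled out of such a log mechanically. The sketch below scans log lines for pause times in milliseconds; the sample lines imitate G1's unified-logging format, so treat the exact format as an assumption and check your own JVM's output:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcPauseScanner {
    // Matches the trailing "...ms" duration on unified-logging pause lines.
    private static final Pattern PAUSE =
        Pattern.compile("Pause.*?(\\d+(?:\\.\\d+)?)ms");

    static List<Double> pauseMillis(List<String> logLines) {
        List<Double> pauses = new ArrayList<>();
        for (String line : logLines) {
            Matcher m = PAUSE.matcher(line);
            if (m.find()) pauses.add(Double.parseDouble(m.group(1)));
        }
        return pauses;
    }

    public static void main(String[] args) {
        // Illustrative lines in the JDK 9+ unified-logging style (-Xlog:gc).
        List<String> sample = List.of(
            "[0.512s][info][gc] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 24M->4M(256M) 3.120ms",
            "[1.044s][info][gc] GC(1) Pause Young (Normal) (G1 Evacuation Pause) 30M->5M(256M) 2.480ms");
        List<Double> pauses = pauseMillis(sample);
        double max = pauses.stream().mapToDouble(Double::doubleValue).max().orElse(0);
        System.out.println("pauses=" + pauses.size() + " maxMs=" + max);
    }
}
```

With the pause distribution in hand, compare the worst pauses against the 200 ms rule of thumb above before concluding that GC is a problem.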
7. A hand-rolled object pool is appropriate for a wide range of applications
A common response to a dislike of stop-the-world pauses is for application teams to invent their own memory-management techniques within the Java heap. Often this boils down to implementing an object pool (or even full reference counting) and requiring all code that uses the domain objects to participate in it.
This technique is almost always misguided. It has its roots in the distant past, when object allocation was expensive and mutability was deemed unimportant. The world is very different now.
Modern hardware is unimaginably efficient at allocation, and recent desktop and server hardware comes with at least 2 to 3 GB of memory. That is a large amount; outside of specialized use cases, it is not easy for a real application to fill that much capacity.
Object pools are generally difficult to implement correctly (especially when multiple threads are involved) and have several drawbacks that make them a poor general-purpose choice.
In short, use object pooling only when GC pauses are unacceptable and intelligent attempts at tuning and refactoring have failed to reduce the pauses to an acceptable level.
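For reference, even a "simple" pool carries subtle obligations. The minimal sketch below (class and method names invented for this example; bounded by an ArrayBlockingQueue) already glosses over the hardest parts, such as resetting object state before reuse and preventing callers from touching an object after releasing it:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.function.Supplier;

public class SimplePool<T> {
    private final ArrayBlockingQueue<T> idle;
    private final Supplier<T> factory;

    public SimplePool(int capacity, Supplier<T> factory) {
        this.idle = new ArrayBlockingQueue<>(capacity);
        this.factory = factory;
    }

    // Reuses an idle instance if one exists, otherwise creates a new one.
    public T borrow() {
        T obj = idle.poll();
        return obj != null ? obj : factory.get();
    }

    // Returns an object to the pool; silently drops it if the pool is full.
    // NOTE: a correct pool must also reset the object's state here.
    public void release(T obj) {
        idle.offer(obj);
    }

    public static void main(String[] args) {
        SimplePool<StringBuilder> pool = new SimplePool<>(4, StringBuilder::new);
        StringBuilder a = pool.borrow();
        pool.release(a);
        StringBuilder b = pool.borrow();
        System.out.println("reused=" + (a == b)); // the same instance came back
    }
}
```

Notice what is missing: nothing stops the caller from keeping and mutating `a` after releasing it, which is exactly the class of aliasing bug that makes hand-rolled pools so dangerous in practice.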
8. CMS is always a better GC than Parallel Old
By default, the Oracle JDK uses a parallel, stop-the-world (STW) collector to collect the old generation.
The alternative is the Concurrent Mark Sweep (CMS) collector. It allows application threads to keep running through most of the GC cycle, but it comes at a price and with some caveats.
Allowing application threads to run alongside GC threads inevitably means the application mutates the object graph in ways that can affect object liveness. This has to be cleaned up after the fact, so CMS actually has two (usually very short) STW phases.
This has a number of consequences. Depending on your application, these costs may or may not be worth paying. But there is no free lunch: the CMS collector is an excellent piece of engineering, yet it is not a panacea.
So before concluding that CMS is the right GC strategy for you, first establish that the STW pauses of Parallel Old really are unacceptable and cannot be tuned away. And finally (I cannot stress this enough), make sure all metrics are obtained from a system comparable to production.
9. Increasing the heap will solve your out-of-memory problem
When an application is in trouble and garbage collection is suspected, many application teams respond by simply increasing the heap size. In many cases, this brings quick relief and buys time to consider a deeper solution. However, without a real understanding of the root cause, this strategy can actually make things worse.
Consider a badly coded application that produces very many domain objects with a lifespan of roughly two to three seconds. If the allocation rate is high enough, garbage collections will occur so rapidly that these domain objects are promoted to the old generation. Once in the old generation, the objects die almost immediately, but they will not be reclaimed until the next full collection.
If this application's heap is enlarged, all we really do is add space in which these relatively short-lived objects linger before dying. This makes stop-the-world pauses longer while providing no benefit to the application.
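The premature-promotion pattern described above can be simulated with a toy sliding window of allocations (the class name, buffer sizes, and counts here are arbitrary, chosen only for illustration): each buffer survives just long enough to be promoted if collections are frequent, then dies shortly afterwards.

```java
import java.util.ArrayDeque;

public class MidLivedObjects {
    // Simulates "middle-aged" objects: they survive long enough to be
    // promoted to the old generation, then die shortly after promotion.
    public static void main(String[] args) {
        ArrayDeque<byte[]> window = new ArrayDeque<>();
        final int windowSize = 1_000; // objects stay alive "for a while"
        long allocated = 0;
        for (int i = 0; i < 10_000; i++) {
            window.addLast(new byte[1024]);   // fresh allocation
            allocated++;
            if (window.size() > windowSize) {
                window.removeFirst();         // object dies after surviving a while
            }
        }
        System.out.println("allocated=" + allocated + " live=" + window.size());
    }
}
```

Run with a tool that reports old-generation occupancy (or with GC logging enabled) and you would see the window's contents churning through the old generation; a bigger heap only lets more of these doomed objects pile up before each full collection.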
It is essential to understand the allocation dynamics and object lifetimes of an application before changing the heap size or other parameters. Acting without measuring only makes things worse. The garbage collector's old-generation allocation statistics are particularly important here.
Summary
When it comes to Java performance tuning, intuition tends to mislead. We need empirical data, and tools that help us visualize and understand the platform's behavior.
This article is translated from a post on the "yuguotianqing" blog; please retain the source attribution: http://yuguotianqing.blog.51cto.com/9292883/1547979
Nine Myths about Java