"Reprint" Why <=3G is not recommended for use with CMS GC


It is often said that with a heap size <= 3 GB you should not consider CMS GC at all, and that even with a heap size > 3 GB ParallelOldGC should still be the first choice; CMS GC only comes into play when the pause times are unacceptable (although, generally speaking, once the heap size goes past 8 GB you basically have to pick CMS GC, otherwise the pause times become frightening, unless the application is completely indifferent to response time). This is in fact the official advice (repeated in the JavaOne GC-tuning session every year).
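For reference, switching between the two collectors is just a launcher flag. A minimal sketch; the heap sizes and app.jar are placeholders of mine, not values from the article:

    # ParallelOldGC: the throughput collector suggested for heaps around 3 GB or less
    java -Xms3g -Xmx3g -XX:+UseParallelOldGC -jar app.jar

    # CMS GC: reach for it only when ParallelOldGC pause times are no longer acceptable
    java -Xms8g -Xmx8g -XX:+UseConcMarkSweepGC -jar app.jar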

Why give a so "arbitrary" advice, not what I do with the CMS GC, but the CMS GC has always been I love a GC implementation, the reason is recommended in the case of <=3g completely do not consider the CMS GC, mainly for the following considerations:

1. The trigger ratio is hard to set well

In JDK 1.6, the CMS GC trigger ratio defaults to starting a collection when the old generation is 92% used. With a 3 GB heap the old generation is somewhere around 1.5 GB to 2.5 GB, so triggering at 92% means only about 120 MB to 200 MB of old-generation space is still free at that point. That is often not enough to hold the objects being promoted from the young generation, so the trigger ratio has to be lowered; but with such a small heap it is very hard to decide by how much. For example, I have seen a setup with a 1.5 GB heap and an 800 MB old generation that also used CMS GC with the trigger ratio at 80%. That case is sad: it means a CMS GC is triggered as soon as the old generation reaches 640 MB, so the moment the application caches a bit of data it causes frequent CMS GCs.
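For what it is worth, the trigger ratio is controlled by two launcher flags. A hedged sketch; the value 75 and the heap sizes are purely illustrative, not a recommendation for any particular application:

    java -Xms3g -Xmx3g -Xmn1g \
         -XX:+UseConcMarkSweepGC \
         -XX:CMSInitiatingOccupancyFraction=75 \
         -XX:+UseCMSInitiatingOccupancyOnly \
         -jar app.jar

Without -XX:+UseCMSInitiatingOccupancyOnly, HotSpot honors the fraction only for the first cycle and afterwards falls back to its own heuristics.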

CMS GC is a collector that avoids pausing the application for most of its phases, and that is exactly why a certain amount of old-generation head-room has to be reserved for it (running concurrently with the application also means a whole CMS cycle takes longer to complete than a full GC under ParallelOldGC). In other words, CMS has to start collecting before the old generation is actually full. With a small heap this easily leads to frequent CMS GCs, and if the reserved space runs out while a CMS cycle is still in progress, CMS degenerates into a serial full GC to finish the job, which is very slow.
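That degenerate case is worth watching for in the GC log, where it is reported as "(concurrent mode failure)". A sketch of the logging flags, assuming a JDK 6/7/8 HotSpot (app.jar is a placeholder; in JDK 9+ these flags were replaced by unified -Xlog:gc* logging):

    java -Xmx3g -XX:+UseConcMarkSweepGC \
         -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/path/to/gc.log \
         -jar app.jar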

2. It competes with the application for CPU

Since CMS GC runs concurrently with the application most of the time, it competes with the application for CPU. When CMS GCs become frequent, you can clearly see the CPU being eaten up.
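If that contention matters, the number of concurrent GC threads can be capped. A sketch assuming a reasonably recent JDK 6/7/8 HotSpot (on older JDK 6 builds the equivalent flag is -XX:ParallelCMSThreads); fewer concurrent threads means less CPU stolen from the application, at the cost of longer concurrent phases:

    java -Xmx3g -XX:+UseConcMarkSweepGC -XX:ConcGCThreads=2 -jar app.jar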

3. Slower YGC

Because of how CMS GC manages the old generation (free lists rather than a simple bump-the-pointer allocator), promoting an object from the young generation to the old generation requires searching for a spot to put it, which is slower than under ParallelOldGC, so YGC speed drops somewhat.
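Since the extra cost is paid per promoted object, a common mitigation (my addition, not from the original article) is to size the young generation and tenuring threshold so that fewer short-lived objects get promoted in the first place; the concrete values below are only illustrative:

    java -Xmx3g -Xmn1g -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=15 \
         -XX:+UseConcMarkSweepGC -jar app.jar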

4. The serious consequences of fragmentation

The most troublesome problem with CMS GC is fragmentation, and it too comes from the implementation: in order to pause the application as little as possible, CMS does not compact the memory it reclaims, so after a collection the old generation is left as a patchwork of non-contiguous free chunks, which naturally produces a lot of fragmentation. What are the consequences? For example, the old generation may clearly show 4 GB of free space, enough in total to absorb the young generation even if all 1.5 GB of its objects were alive, yet a "promotion failed" can still occur because no single free chunk is large enough; and when that happens CMS GC mostly resorts to a serial full GC to sort things out.
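On JDK 6/7 HotSpot the stop-the-world full collections can at least be made to compact the old generation explicitly; a sketch for completeness (both flags already default to these values on those JDKs, and they do nothing for the concurrent cycles themselves):

    java -Xmx3g -XX:+UseConcMarkSweepGC \
         -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 \
         -jar app.jar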

The most troubling thing about fragmentation is that you never know when it will strike, so the application may suddenly hit a long pause right at the busiest time of day. That is sad, and for the many distributed scenarios that rely on heartbeat-like mechanisms to maintain long-lived connections or state it is a disaster. This is also the biggest advantage Azul's Zing JVM has here: it can compact without pausing, which solves the fragmentation problem.

At the moment the only workaround is to proactively trigger a full GC during off-peak hours (for example by running jmap -histo:live <pid>) so as to head off the fragmentation problem, but this is obviously a rather helpless approach (it still has the same impact on heartbeat- or state-maintenance-style distributed scenarios).
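A sketch of what that might look like in practice; the schedule and the assumption of a single java process on the machine are mine, not the article's:

    # crontab entry: force a full GC at 04:30 every day, when traffic is lowest
    30 4 * * *  jmap -histo:live $(pidof java) > /dev/null 2>&1

jmap -histo:live triggers a full GC before printing the histogram, and under default settings that full GC compacts the old generation.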

5. "Unstable" of CMS GC

If you have followed the various Java problems I recorded in earlier blog posts (you can browse them here), you will find quite a few strange CMS GC issues. Most of the bugs I ran into have been fixed in newer JVM versions, but nobody knows whether more are still lurking; after all, the CMS GC implementation is very complex (precisely because it tries to scan object references while pausing the application as little as possible), whereas the ParallelOldGC implementation is much simpler and therefore relatively more stable. The other piece of bad news is that the JVM team's energy has shifted to G1 GC and elsewhere, and little is being invested in CMS GC any more (which is understandable; G1 GC really is the direction forward).

With large heaps, however, CMS GC is still clearly the better of the two choices, and as Java faces ever larger heaps it has to go down the road of not pausing the application most of the time, otherwise Java will be in a sorry state. G1 GC builds on the ideas of CMS GC and makes a lot of progress, in particular it performs partial compaction, but the fragmentation problem has still not disappeared entirely, alas...
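For completeness, trying G1 is again a single launcher flag on JDK 7u4 and later; the heap size and pause-time target below are only illustrative:

    java -Xms8g -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -jar app.jar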

With large heaps Java now faces two other big challenges. 1. Analyzing heap memory is too cumbersome: an OOM on a large heap is a nightmare, just think how long it takes to dump a file of several tens of GB and then analyze it; I really hope the JDK gets better tooling in this area. 2. The object layout is not compact enough, so Java's use of memory space is quite inefficient; this, however, is an area newer JDK versions are expected to focus on optimizing. As for things like CPU cache misses, where languages such as C offer finer control, there is not much to be done; weighed against the gain in development efficiency it simply has to be accepted, since most scenarios today are engineering-oriented, large-team scenarios where development efficiency and maintainability matter far more.
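On the first point, the least one can do is make sure the dump is captured automatically when the OOM happens; the dump path, heap size and <pid> below are placeholders:

    java -Xmx8g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/dumps \
         -jar app.jar

    # or take a dump from a live process for offline analysis
    jmap -dump:live,format=b,file=heap.bin <pid>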

A few related articles are recommended:

1. A Generational Mostly-Concurrent Garbage Collector (the theory paper behind CMS GC)

2. The Pauseless GC Algorithm (a glimpse of how Zing compacts without pausing)

3. Understanding CMS GC Logs

Original link: http://gao-xianglong.iteye.com/blog/2179252

Also for large-scale distributed performance optimizations, see: http://gao-xianglong.iteye.com/blog/2223170

"Reprint" Why <=3G is not recommended for use with CMS GC

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.