Java Garbage Collection Distilled - Part 4


Java Garbage Collection Distilled is presented in four parts; this is the fourth. It covers the G1 collector, other concurrent collectors, and garbage collection monitoring and tuning.

Garbage First (G1) collector

The G1 (-XX:+UseG1GC) collector is the newest collector. G1 was introduced with Java 6 and became officially supported as of Java 7u4. It is a partially concurrent collection algorithm that attempts to compact the old generation with small incremental stop-the-world pauses, minimising the chance of a full GC. Full GCs caused by fragmentation are a major headache for CMS. G1 is also a generational collector, but it organises the heap differently from the other collectors: instead of dedicating contiguous areas of the heap to each generation, it divides the heap into a large number (about 2000) of fixed-size regions and assigns each region a role as needed.
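The region-count figure above can be made concrete. The sketch below estimates the region size G1 would pick for a given heap, following the documented heuristic (aim for roughly 2048 regions, region size a power of two clamped between 1 MB and 32 MB); it is an illustration of that heuristic, not HotSpot's exact code, and the 8 GB heap size is an assumption.

```java
public class RegionSizeEstimate {
    public static void main(String[] args) {
        // G1 aims for ~2048 regions; the region size is roughly heap / 2048,
        // rounded down to a power of two and clamped to [1 MB, 32 MB].
        long heapBytes = 8L * 1024 * 1024 * 1024; // assumed 8 GB heap
        long target = heapBytes / 2048;
        long regionSize = Long.highestOneBit(target);   // round down to power of 2
        regionSize = Math.max(1L << 20, Math.min(32L << 20, regionSize));
        System.out.println("regionSize = " + (regionSize >> 20) + " MB, regions = "
            + heapBytes / regionSize);
    }
}
```

For an 8 GB heap this yields 4 MB regions and exactly 2048 of them; the actual size can also be forced with -XX:G1HeapRegionSize.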

G1 concurrently marks the regions to track references between them, and focuses collection on the regions with the most free space. These regions are collected during incremental stop-the-world pauses by evacuating the live objects to an empty region, so the process is compacting. The set of regions collected within a given cycle is known as the Collection Set.

G1 tracks, for each region, how much garbage has accumulated, how much space reclaiming it would recover, and how long that would take, and maintains a prioritised list in the background. Within the allowed collection time, it collects the most valuable regions first, hence the name Garbage First.

Objects larger than 50% of a region's size are allocated in humongous regions, which are multiples of the normal region size. Allocating and collecting humongous objects can be very costly in G1 and, to date, little optimisation effort has been applied to them.
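A minimal sketch of the kind of allocation that triggers the humongous path, assuming a region size in the default 1-32 MB range (the 16 MB array size is an illustrative assumption; whether it is humongous depends on the actual region size):

```java
public class HumongousAllocation {
    public static void main(String[] args) {
        // With, say, -XX:G1HeapRegionSize=8m, any object larger than 4 MB
        // (half a region) is allocated as a humongous object in contiguous
        // humongous regions rather than in a normal young region.
        byte[] humongous = new byte[16 * 1024 * 1024]; // 16 MB array
        System.out.println("allocated " + humongous.length + " bytes");
    }
}
```

Large arrays and buffers are the usual culprits; pooling or slicing them is one way applications avoid the humongous allocation path.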

The challenge for any compacting collector is not moving the objects but updating the references to those objects. If an object is referenced from many regions, updating those references can take far longer than moving the object itself. G1 tracks which objects in a region are referenced from other regions via "remembered sets". A remembered set is the set of cards containing pointers into a region. If remembered sets become large, G1 slows down significantly. When objects are evacuated from one region to another, the length of the resulting stop-the-world pause is proportional to the number of regions with references that must be scanned and potentially patched.

Maintaining the remembered sets increases the cost of minor collections, resulting in minor-collection pauses that are typically longer than those of the parallel old-generation collector and the CMS collector.

G1 is target-driven on latency: the target can be set with -XX:MaxGCPauseMillis=<n>, and the default is 200 ms. The target influences the amount of work done in each cycle on a best-effort basis only; meeting it is not guaranteed. Setting the target to tens of milliseconds is mostly futile, because tens of milliseconds is not a range G1 is designed for.

If your application can tolerate pauses of 0.5-1 seconds for incremental compactions, and its heap would gradually fragment over time, G1 is likely a good general-purpose choice. It tends to avoid the worst-case fragmentation-induced pauses we saw with CMS, at the cost of longer minor collections and the extra work of compacting the old generation incrementally. Most pauses end up constrained to compacting a subset of regions rather than the whole heap.

As with CMS, G1 can fail to keep up with promotion rates and fall back to a stop-the-world full GC. Just as CMS has its "concurrent mode failure", G1 can suffer an evacuation failure, seen in the logs as "to-space overflow": no free regions are available to evacuate objects into, which is similar to a promotion failure. If this happens, try a larger heap and more marking threads; in some cases the application itself must be changed to reduce its allocation rate.

A challenging problem for G1 is dealing with popular objects and regions. Incremental stop-the-world compaction works well when the live objects in a region are not heavily referenced from elsewhere. If an object or region is popular, its remembered set grows correspondingly large, and G1 will avoid collecting it. Ultimately, parts of the heap end up that cannot be compacted frequently with only moderate-length pauses.

Other concurrent collectors

CMS and G1 are often referred to as mostly concurrent collectors. Looking at the whole lifecycle, however, it is clear that young-generation collection, promotion, and even much of the old-generation work is not concurrent at all. CMS is mostly concurrent for the old generation; G1 is better described as a stop-the-world incremental collector. Both CMS and G1 have clearly observable, regular stop-the-world pauses, and in the worst case they are unsuitable for strict low-latency applications such as financial trading or interactive user interfaces.

Other available concurrent collectors include Oracle JRockit Real Time, IBM WebSphere Real Time, and Azul Zing. The JRockit and WebSphere collectors generally control latency better than CMS and G1, but in most cases at a cost to throughput, and they still exhibit noticeable stop-the-world pauses. Zing is the only Java collector known to the author that truly concurrently collects and compacts all generations while maintaining high throughput. Zing does have sub-millisecond stop-the-world pauses, but these occur once per collection cycle and are unrelated to the size of the live object set.

JRockit Real Time can keep pause times down to tens of milliseconds when the heap is appropriately sized, though it occasionally falls back to a full-compaction pause. WebSphere Real Time can keep pause times down to single-digit milliseconds by constraining the allocation rate and live-set size. Zing can achieve sub-millisecond pauses, including during minor collections, by being concurrent in all phases, even at high allocation rates. Zing maintains consistent behaviour regardless of heap size: if an application needs throughput headroom or must hold more object-model state, users can deploy a larger heap without worrying about increased pause times.

For all concurrent collectors, reduced latency is bought with throughput and footprint. Depending on the efficiency of the collector, the throughput sacrifice may be small, but the footprint increase is always significant. If real concurrency is achieved, with few stop-the-world events, more CPU cores are needed to support the concurrent GC work while sustaining throughput.

Note: all concurrent collectors tend to run more efficiently when given plenty of space. As a first rule of thumb, budget a heap of at least two to three times the live-set size for efficient operation. However, the space required for sustained concurrent operation grows with application throughput and the associated allocation and promotion rates, so higher-throughput applications warrant an even larger heap-to-live-set ratio. Given the enormous memory available on today's servers, footprint is rarely an issue.
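The rule of thumb above is simple arithmetic; a minimal worked example, assuming a measured live set of 2 GB:

```java
public class HeapBudget {
    public static void main(String[] args) {
        // Rule of thumb from the text: budget 2-3x the live (survival) set.
        long liveSetMb = 2048; // assumed measured live set: 2 GB
        System.out.println("heap budget: " + (2 * liveSetMb)
            + " to " + (3 * liveSetMb) + " MB");
    }
}
```

So a 2 GB live set suggests a 4-6 GB heap as a starting point, to be revised upward for high allocation and promotion rates.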

Garbage collection monitoring and tuning

To understand how your application and the garbage collector are behaving, start the JVM with at least the following parameters:

-verbose:gc
-Xloggc:<filename>
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintTenuringDistribution
-XX:+PrintGCApplicationConcurrentTime
-XX:+PrintGCApplicationStoppedTime

Then load the logs into a tool such as Chewiebug's GCViewer for analysis.
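Basic GC statistics can also be read programmatically from inside the process via the standard management beans, as a complement to log analysis. A minimal sketch (collector names in the output depend on which collector the JVM is running):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // Allocate some garbage so at least one collection is likely to run.
        for (int i = 0; i < 100_000; i++) {
            byte[] garbage = new byte[1024];
        }
        // Print each collector's invocation count and cumulative pause time.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": count=" + gc.getCollectionCount()
                + ", time=" + gc.getCollectionTime() + "ms");
        }
    }
}
```

getCollectionTime() reports accumulated collection time in milliseconds, which makes it handy for coarse before/after comparisons around a load test.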

To watch the GC in action at runtime, launch JVisualVM and install the Visual GC plugin. You will then be able to observe your application's GC behaviour, as shown below:


To understand your application's heap and GC requirements, you need a representative load test that can be executed repeatedly. As you build an understanding of how each collector works, run the load test under different configurations until you reach the throughput and latency you expect. Measuring latency from the end-user perspective is what matters most. Much can be learned by capturing the response time of every test request and recording it in a histogram, for example with HdrHistogram or the Disruptor histogram. If latency spikes exceed the acceptable range, try to correlate them with the GC logs to determine whether GC is responsible; other problems can also cause latency spikes. Another tool worth considering is jHiccup, which can be used to track JVM pauses and pauses across the system as a whole. Measuring an idle system with jHiccup for a few hours usually yields surprising results.
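The histogram approach mentioned above can be sketched without a library. The following is a simple power-of-two bucketed histogram, a crude stand-in for HdrHistogram (which records full-range values with configurable precision); the sample values are synthetic:

```java
import java.util.concurrent.ThreadLocalRandom;

public class LatencyHistogram {
    // Buckets by power of two of the microsecond value: <2us, <4us, <8us, ...
    private final long[] buckets = new long[32];

    void record(long micros) {
        int bucket = 64 - Long.numberOfLeadingZeros(Math.max(micros, 1));
        buckets[Math.min(bucket, buckets.length - 1)]++;
    }

    public static void main(String[] args) {
        LatencyHistogram h = new LatencyHistogram();
        // Record synthetic "response times" between 1us and 2ms.
        for (int i = 0; i < 10_000; i++) {
            h.record(ThreadLocalRandom.current().nextLong(1, 2_000));
        }
        long total = 0;
        for (long c : h.buckets) total += c;
        System.out.println("recorded " + total + " samples");
    }
}
```

In a real test harness you would record one value per request and then inspect the high percentiles, since it is the outliers, not the mean, that correlate with GC pauses.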

If the latency spikes are caused by GC, investigate whether CMS or G1 can meet your latency target. Sometimes they cannot, because high allocation and promotion rates conflict with low-latency requirements. GC tuning is a highly skilled exercise that often requires changing the application to reduce object allocation rates or object lifetimes. When weighing up time spent on GC tuning and application changes, it may be more economical to buy a commercial concurrent-compacting JVM such as JRockit Real Time or Azul Zing.

Original article: Mechanical Sympathy; translation: importnew.com - Humin
Link: http://www.importnew.com/8352.html

