GC overhead limit exceeded: a pitfall experience

Source: Internet
Author: User
Tags: gc overhead limit exceeded

I ran into this problem when a local deployment threw java.lang.OutOfMemoryError: GC overhead limit exceeded and the service failed to come up. The logs showed that too many resources were being loaded into memory; the local machine's performance was poor, so GC was eating up too much time. There are two ways to work around the problem: add the parameter -XX:-UseGCOverheadLimit to turn the check off, or increase the heap size with -Xmx1024m. The pit is filled, but why did it happen?
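For reference, the two workarounds look like this on the command line (app.jar is a placeholder for your own application, and 1024m is just the value I used; tune it to your workload):

```shell
# Option 1: disable the GC overhead limit check entirely
java -XX:-UseGCOverheadLimit -jar app.jar

# Option 2: give the JVM a larger heap so each GC can make real progress
java -Xmx1024m -jar app.jar
```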

Everyone knows what an OOM is: the JVM has run out of memory. But what exactly is GC overhead limit exceeded?

The GC overhead limit check is a policy defined by the HotSpot VM since 1.6 that predicts an impending OOM by tracking GC time, and throws the exception early rather than letting the real OOM happen. Sun's official definition: "The parallel / concurrent collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small."
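Those 98% / 2% thresholds are themselves tunable HotSpot flags, which makes the definition above concrete (defaults shown; app.jar is a placeholder, and changing these values is rarely a good idea):

```shell
# GCTimeLimit:     percentage of total time spent in GC above which the check trips (default 98)
# GCHeapFreeLimit: minimum percentage of heap that a GC must free to satisfy the check (default 2)
java -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=2 -jar app.jar
```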

Sounds useless... what is the point of predicting an OOM? At first glance, all this buys you is a chance to catch the error and release memory so the application does not die outright. In practice the policy generally will not save your application, but it does let you make a final struggle before the application goes down, such as persisting data or saving the scene with a heap dump.
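That "final struggle" is just catching the error at a point where a little heap is still free and doing minimal cleanup. A sketch of the pattern, with hypothetical names (OomGuard, riskyLoad, saveSnapshot), and the error thrown by hand here rather than by a genuinely exhausted heap:

```java
public class OomGuard {
    static boolean snapshotSaved = false;

    // Placeholder for last-ditch work: flush buffers, write a marker file,
    // trigger a heap dump, etc. Keep it small: little memory is available.
    static void saveSnapshot() {
        snapshotSaved = true;
    }

    // Stand-in for a load that would exhaust the heap; thrown manually
    // so the sketch runs instantly.
    static void riskyLoad() {
        throw new OutOfMemoryError("GC overhead limit exceeded");
    }

    public static void main(String[] args) {
        try {
            riskyLoad();
        } catch (OutOfMemoryError e) {
            // Because this error is thrown *before* the heap is completely
            // full, a small amount of cleanup work is usually still possible.
            saveSnapshot();
        }
        System.out.println(snapshotSaved); // prints true
    }
}
```

Note that catching OutOfMemoryError is normally a bad idea; it is only defensible for exactly this kind of save-and-exit code path.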

And sometimes the policy itself causes problems, such as frequent OOMs when loading large amounts of data into memory.

If you hit this problem in production, do not just guess and work around it without knowing the cause. Run with -verbose:gc -XX:+PrintGCDetails to see what is triggering the error. The usual cause is that the old generation is so full that full GCs run constantly, which eventually produces GC overhead limit exceeded. If the GC log is not enough, use a tool such as JProfiler to inspect memory usage and check for a memory leak in the old generation. Another way to analyze a leak is -XX:+HeapDumpOnOutOfMemoryError, which automatically takes a heap dump when the OOM occurs so you can troubleshoot it with MAT. Also watch the young generation: allocating too many short-lived objects can throw this exception as well.
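Putting those diagnostic flags together (app.jar and the /tmp dump path are placeholders):

```shell
# Log one entry per collection, with per-region sizes and pause times
java -verbose:gc -XX:+PrintGCDetails -jar app.jar

# Additionally dump the heap to an .hprof file when an OOM is thrown,
# for offline analysis in MAT or JProfiler
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -jar app.jar
```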

The log format is not hard to read: each entry records the GC type, the region sizes before and after collection, and the time taken. Two examples:

33.125: [GC [DefNew: 16000K->16000K(16192K), 0.0000574 secs][Tenured: 2973K->2704K(16384K), 0.1012650 secs] 18973K->2704K(32576K), 0.1015066 secs]

100.667: [Full GC [Tenured: 0K->210K(10240K), 0.0149142 secs] 4603K->210K(19456K), [Perm: 2999K->2999K(21248K)], 0.0150007 secs]

The GC / Full GC label tells you the collection type: GC is a minor (young-generation) collection, while Full GC collects the entire heap and is a full stop-the-world pause. The numbers on either side of each arrow are the region size before and after collection, for the young, tenured, and perm regions respectively, with the region's total size in parentheses. The number before the colon is the time the GC occurred, in seconds since JVM startup. DefNew is short for "default new generation" and indicates the serial collector; similarly, PSYoungGen indicates the Parallel Scavenge collector. By analyzing the log you can find the cause of GC overhead limit exceeded and fix it by adjusting the corresponding parameters.
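To make the arithmetic concrete, here is a tiny parser for one region record of such a log line (GcLogParse and sizes are hypothetical names; the only format assumed is the "before->after(total)" pattern shown above):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogParse {
    // Matches one "2973K->2704K(16384K)" region record and returns
    // { before, after, total } in kilobytes.
    static long[] sizes(String record) {
        Matcher m = Pattern
            .compile("(\\d+)[Kk]->(\\d+)[Kk]\\s*\\((\\d+)[Kk]\\)")
            .matcher(record);
        if (!m.find()) {
            throw new IllegalArgumentException("no region record in: " + record);
        }
        return new long[] { Long.parseLong(m.group(1)),
                            Long.parseLong(m.group(2)),
                            Long.parseLong(m.group(3)) };
    }

    public static void main(String[] args) {
        // The tenured record from the first example line above
        long[] s = sizes("Tenured: 2973K->2704K(16384K), 0.1012650 secs");
        System.out.printf("before=%dK after=%dK total=%dK freed=%dK%n",
                s[0], s[1], s[2], s[0] - s[1]);
        // prints: before=2973K after=2704K total=16384K freed=269K
    }
}
```

A collection that frees only 269K out of a 16384K region, repeated over and over, is exactly the "less than 2% recovered" pattern that trips the overhead limit.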

Definitions of the terms used in this article:

Eden Space: the heap memory pool from which most objects are initially allocated.

Survivor Space: a heap memory pool holding objects that survived a GC of the eden space.

Tenured Generation: a heap memory pool holding objects that have survived several GCs in survivor space.

Permanent Generation: a non-heap space storing class and method objects.

Code Cache: a non-heap space the JVM uses to compile and store native code.

Finally, the HotSpot implementation of the GC overhead limit check is attached:

bool print_gc_overhead_limit_would_be_exceeded = false;
if (is_full_gc) {
  if (gc_cost() > gc_cost_limit &&
      free_in_old_gen < (size_t) mem_free_old_limit &&
      free_in_eden < (size_t) mem_free_eden_limit) {
    // Collections, on average, are taking too much time, and
    //      gc_cost() > gc_cost_limit
    // we have too little space available after a full gc.
    //      total_free_limit < mem_free_limit
    // where
    //   total_free_limit is the free space available in
    //     both generations
    //   total_mem is the total space available for allocation
    //     in both generations (survivor spaces are not included
    //     just as they are not included in eden_limit).
    //   mem_free_limit is a fraction of total_mem judged to be an
    //     acceptable amount that is still unused.
    // The heap can ask for the value of this variable when deciding
    // whether to throw an OutOfMemory error.
    // Note that the gc time limit test only works for the collections
    // of the young gen + tenured gen and not for collections of the
    // permanent gen.  That is because the calculation of the space
    // freed by the collection is the free space in the young gen +
    // tenured gen.
    // At this point the GC overhead limit is being exceeded.
    inc_gc_overhead_limit_count();
    if (UseGCOverheadLimit) {
      if (gc_overhead_limit_count() >=
          AdaptiveSizePolicyGCTimeLimitThreshold) {
        // All conditions have been met for throwing an out-of-memory
        set_gc_overhead_limit_exceeded(true);
        // Avoid consecutive OOM due to the gc time limit by resetting
        // the counter.
        reset_gc_overhead_limit_count();
      } else {
        // The required consecutive collections which exceed the
        // GC time limit may or may not have been reached. We
        // are approaching that condition and so as not to
        // throw an out-of-memory before all SoftRefs have been
        // cleared, set _should_clear_all_soft_refs in CollectorPolicy.
        // The clearing will be done on the next GC.
        bool near_limit = gc_overhead_limit_near();
        if (near_limit) {
          collector_policy->set_should_clear_all_soft_refs(true);
          if (PrintGCDetails && Verbose) {
            gclog_or_tty->print_cr("  Nearing GC overhead limit, "
              "will be clearing all SoftReference");
          }
        }
      }
    }
    // Set this even when the overhead limit will not
    // cause an out-of-memory.  Diagnostic message indicating
    // that the overhead limit is being exceeded is sometimes
    // printed.
    print_gc_overhead_limit_would_be_exceeded = true;
  } else {
    // Did not exceed overhead limits
    reset_gc_overhead_limit_count();
  }
}
