Source code analysis: HotSpot GC Process (1)


For the garbage collection process of the HotSpot virtual machine, this series analyzes the collection performed by MarkSweepPolicy and TenuredGeneration under the default configuration, and also introduces other GC policies and the GC logic implemented by the individual generations. The GC process can be split into a part that is independent of the memory generation implementation (the framework code in GenCollectedHeap) and a part implemented by each memory generation itself.
This article analyzes the generation-independent part of the GC process; the GC process implemented by the memory generations will be analyzed in later articles.

Start with GenCollectedHeap::do_collection():
1. Before the GC proper there is a fair amount of necessary checking and bookkeeping, such as recording the GC cause and the heap usage; note that this article does not analyze the performance-statistics code. If you are interested, you can analyze it on your own.
(1). Check whether the GC locker is active and, if so, set the needs-gc flag to true and return; is_active_and_needs_gc() can then be used to tell whether some thread has requested a GC while the locker was held.

  if (GC_locker::check_active_before_gc()) {
    return; // GC is disabled (e.g. JNI GetXXXCritical operation)
  }
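
The GC locker becomes active when some thread is inside a JNI critical region. As a rough illustration (not HotSpot code; the class and method names are made up), a native method that accesses an array through the JNI critical API holds off GC until it releases the array, and a GC requested in that window only sets the needs-gc flag:

  #include <jni.h>

  // Hypothetical native method Demo.sumCritical(int[]): while the critical
  // region is held, GC_locker is active and a requested GC is deferred.
  extern "C" JNIEXPORT jlong JNICALL
  Java_Demo_sumCritical(JNIEnv* env, jclass, jintArray arr) {
    jsize len = env->GetArrayLength(arr);
    jint* elems = (jint*) env->GetPrimitiveArrayCritical(arr, NULL);
    if (elems == NULL) {
      return 0;  // array could not be accessed
    }
    jlong sum = 0;
    for (jsize i = 0; i < len; i++) {
      sum += elems[i];
    }
    // Leaving the critical region: if a GC was requested in the meantime,
    // GC_locker triggers it once the last critical section has exited.
    env->ReleasePrimitiveArrayCritical(arr, elems, JNI_ABORT);
    return sum;
  }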

(2). Check whether all soft references should be cleared.

  const bool do_clear_all_soft_refs = clear_all_soft_refs ||
                          collector_policy()->should_clear_all_soft_refs();

(3). Record the space currently used by the permanent generation.

const size_t perm_prev_used = perm_gen()->used();

(4). Determine whether this collection is a full GC and build the GC cause string ("GC" / "Full GC (System)" / "Full GC") used for log output.

    bool complete = full && (max_level == (n_gens()-1));
    const char* gc_cause_str = "GC ";
    if (complete) {
      GCCause::Cause cause = gc_cause();
      if (cause == GCCause::_java_lang_system_gc) {
        gc_cause_str = "Full GC (System) ";
      } else {
        gc_cause_str = "Full GC ";
      }
    }

(5). Increment the GC counters (the total collection count and, for a full GC, the full collection count).

increment_total_collections(complete);

(6). Record the heap space used before the GC.

size_t gch_prev_used = used();

(7). If this is a full GC, search from the oldest memory generation down to the youngest: the first generation found whose collection also covers all younger generations becomes the starting generation of the GC. What does "collects younger generations" mean here? The young generation implementation DefNewGeneration uses a copying algorithm that collects only the young generation itself, while the old generation TenuredGeneration uses a mark-sweep-compact algorithm that also processes the younger generations. So a DefNewGeneration collection is a collection of the young generation alone, whereas a TenuredGeneration collection covers both the old generation and the generations below it.

    int starting_level = 0;
    if (full) {
      // Search for the oldest generation which will collect all younger
      // generations, and start collection loop there.
      for (int i = max_level; i >= 0; i--) {
        if (_gens[i]->full_collects_younger_generations()) {
          starting_level = i;
          break;
        }
      }
    }

2. Collect each memory generation in turn, from the starting generation up to the oldest generation (max_level).
(1). should_collect() decides, based on the generation's own GC conditions, whether that generation should be collected. If the generation currently being collected is the oldest one but the request was not a full GC, increment_total_full_collections() is called to correct the full GC count that was skipped earlier.

    int max_level_collected = starting_level;
    for (int i = starting_level; i <= max_level; i++) {
      if (_gens[i]->should_collect(full, size, is_tlab)) {
        if (i == n_gens() - 1) {  // a major collection is to happen
          if (!complete) {
            // The full_collections increment was missed above.
            increment_total_full_collections();
          }

(2). Record the generation's memory usage and other statistics before the GC.
(3). Pre-GC verification.

Call prepare_for_verify() so that each memory generation can prepare for verification (normally nothing needs to be done), and then call Universe::verify() to perform the pre-GC verification.

        if (VerifyBeforeGC && i >= VerifyGCLevel &&
            total_collections() >= VerifyGCStartAt) {
          HandleMark hm;  // Discard invalid handles created during verification
          if (!prepared_for_verification) {
            prepare_for_verify();
            prepared_for_verification = true;
          }
          gclog_or_tty->print(" VerifyBeforeGC:");
          Universe::verify(true);
        }

Universe::verify() verifies the threads, the heap (each memory generation), the symbol table, the string table, the code cache, the system dictionary, and so on. For example, heap verification checks the klass of every oop in the heap: whether the object really is an oop, whether its klass lives in the permanent generation, and whether the klass field of the oop is indeed a klass. So why perform GC verification here, and what are pre-GC and post-GC verification for? VerifyBeforeGC and VerifyAfterGC both have to be combined with UnlockDiagnosticVMOptions and are meant for diagnosing JVM problems; because the verification is very time-consuming, it produces no output in normal product builds.
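For example, assuming a JDK build in which these verification flags are available as diagnostic options (as described above), they could be enabled like this (MyApp is a placeholder application class):

  java -XX:+UnlockDiagnosticVMOptions -XX:+VerifyBeforeGC -XX:+VerifyAfterGC -XX:+PrintGCDetails MyApp
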
(4). Call save_marks() to record, for each space of the memory generation, the current allocation position (the "bump the pointer" top) into the space's _saved_mark_word.

save_marks();

(5) initialize the reference processor.

          ReferenceProcessor* rp = _gens[i]->ref_processor();
          if (rp->discovery_is_atomic()) {
            rp->verify_no_references_recorded();
            rp->enable_discovery();
            rp->setup_policy(do_clear_all_soft_refs);
          } else {
            // collect() below will enable discovery as appropriate
          }

(6). The collection itself is carried out by the memory generation.

_gens[i]->collect(full, do_clear_all_soft_refs, size, is_tlab);

(7). Add the discovered Reference objects whose referents are no longer reachable to the pending list of the Reference class.

          if (!rp->enqueuing_is_done()) {
            rp->enqueue_discovered_references();
          } else {
            rp->set_enqueuing_is_done(false);
          }
          rp->verify_no_references_recorded();
        }

enqueue_discovered_references() selects one of two instantiations of the enqueue_discovered_ref_helper() template function, depending on whether compressed oops are in use. enqueue_discovered_ref_helper() is implemented as follows:
  

template <class T>
bool enqueue_discovered_ref_helper(ReferenceProcessor* ref,
                                   AbstractRefProcTaskExecutor* task_executor) {
  T* pending_list_addr = (T*)java_lang_ref_Reference::pending_list_addr();
  T old_pending_list_value = *pending_list_addr;
  ref->enqueue_discovered_reflists((HeapWord*)pending_list_addr, task_executor);
  oop_store(pending_list_addr, oopDesc::load_decode_heap_oop(pending_list_addr));
  ref->disable_discovery();
  return old_pending_list_value != *pending_list_addr;
}
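
The caller's choice between the two instantiations, based on the UseCompressedOops setting, presumably looks roughly like the following sketch (based on the description above; not a verbatim quote of the HotSpot source):

  // Sketch: pick the template instantiation according to UseCompressedOops.
  if (UseCompressedOops) {
    return enqueue_discovered_ref_helper<narrowOop>(this, task_executor);
  } else {
    return enqueue_discovered_ref_helper<oop>(this, task_executor);
  }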

pending_list_addr is the address of the head of the pending list, a private static field of java.lang.ref.Reference. When the reachability of a referent changes during GC, the Reference is added to the pending list; the ReferenceHandler thread (a private static member class of Reference) then continuously takes References off the pending list and adds them to their ReferenceQueue.
enqueue_discovered_reflists() handles the multi-threaded and single-threaded cases differently: with multiple threads a RefProcEnqueueTask is created and handed to the AbstractRefProcTaskExecutor; here we analyze the serial, single-threaded path.
The DiscoveredList array _discoveredSoftRefs holds at most _max_num_q * subclasses_of_ref discovered reference lists (despite its name it covers all Reference subclasses, not only soft references). After each list has been processed, its head is set to the sentinel reference and its length to 0, marking the list as empty.

void ReferenceProcessor::enqueue_discovered_reflists(HeapWord* pending_list_addr,
  AbstractRefProcTaskExecutor* task_executor) {
  if (_processing_is_mt && task_executor != NULL) {
    // Parallel code
    RefProcEnqueueTask tsk(*this, _discoveredSoftRefs,
                           pending_list_addr, sentinel_ref(), _max_num_q);
    task_executor->execute(tsk);
  } else {
    // Serial code: call the parent class's implementation
    for (int i = 0; i < _max_num_q * subclasses_of_ref; i++) {
      enqueue_discovered_reflist(_discoveredSoftRefs[i], pending_list_addr);
      _discoveredSoftRefs[i].set_head(sentinel_ref());
      _discoveredSoftRefs[i].set_length(0);
    }
  }
}

enqueue_discovered_reflist() works as follows:

First take the head of refs_list; next is the following element of the list, linked through the discovered field.

  oop obj = refs_list.head();
  while (obj != sentinel_ref()) {
    assert(obj->is_instanceRef(), "should be reference object");
    oop next = java_lang_ref_Reference::discovered(obj);

If next is the sentinel reference, obj is the last element of the list. In that case the head of refs_list is atomically exchanged with the value at pending_list_addr, splicing the whole list onto the head of the pending list. Following the usual head-insertion scheme, if the pending list was previously empty, obj's next field is made to point to obj itself; otherwise obj's next field is pointed at the old head of the pending list, so that the spliced list sits in front of it:

    if (next == sentinel_ref()) {  // obj is last
      // Swap refs_list into pendling_list_addr and
      // set obj's next to what we read from pending_list_addr.
      oop old = oopDesc::atomic_exchange_oop(refs_list.head(), pending_list_addr);
      // Need oop_check on pending_list_addr above;
      // see special oop-check code at the end of
      // enqueue_discovered_reflists() further below.
      if (old == NULL) {
        // obj should be made to point to itself, since
        // pending list was empty.
        java_lang_ref_Reference::set_next(obj, obj);
      } else {
        java_lang_ref_Reference::set_next(obj, old);
      }

Otherwise (next is not the sentinel), obj's next field is simply set to next. In this way, starting from the head of the list, the chain that the virtual machine built through the discovered field is converted into a chain linked through the next field, which the Java layer can traverse as the pending list.

    } else {
      java_lang_ref_Reference::set_next(obj, next);
    }

Finally, the discovered field of obj is set to NULL, cutting the current Reference out of the discovered chain, and the traversal moves on to the next Reference.

    java_lang_ref_Reference::set_discovered(obj, (oop) NULL);
    obj = next;
  }

To sum up, enqueueing a discovered list means traversing the chain built through the discovered field, relinking it through the next field, severing the discovered links, and splicing the resulting list onto the head of the pending list.
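
The following is a small standalone sketch (plain C++, not HotSpot code; all names are made up) of this relinking pattern: a chain threaded through a discovered field is rewired through a next field and spliced onto the head of a pending list:

  #include <cstdio>

  struct Ref {
    int  id;
    Ref* discovered;  // VM-internal link built during discovery
    Ref* next;        // link visible to the "Java layer"
  };

  // Splice the discovered chain starting at 'head' onto '*pending_list_addr'.
  static void enqueue_discovered_list(Ref* head, Ref** pending_list_addr, Ref* sentinel) {
    Ref* obj = head;
    while (obj != sentinel) {
      Ref* next = obj->discovered;
      if (next == sentinel) {              // obj is the last element
        Ref* old = *pending_list_addr;     // (an atomic exchange in the real code)
        *pending_list_addr = head;
        obj->next = (old == NULL) ? obj    // pending list was empty: point to self
                                  : old;   // otherwise: old head of the pending list
      } else {
        obj->next = next;                  // rewire through 'next'
      }
      obj->discovered = NULL;              // cut the discovered link
      obj = next;
    }
  }

  int main() {
    Ref sentinel = { -1, NULL, NULL };
    Ref c = { 3, &sentinel, NULL };
    Ref b = { 2, &c, NULL };
    Ref a = { 1, &b, NULL };
    Ref* pending = NULL;

    enqueue_discovered_list(&a, &pending, &sentinel);

    // Walk the pending list; the last element points to itself (empty-list case).
    for (Ref* r = pending; r != NULL; r = (r->next == r) ? NULL : r->next) {
      std::printf("pending ref %d\n", r->id);
    }
    return 0;
  }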

(9). Back in do_collection() after the generation's collect() returns: update statistics and perform the post-GC verification.

3. Output some GC log information

 

    complete = complete || (max_level_collected == n_gens() - 1);
    if (complete) { // We did a "major" collection
      post_full_gc_dump();   // do any post full gc dumps
    }
    if (PrintGCDetails) {
      print_heap_change(gch_prev_used);
      // Print perm gen info for full GC with PrintGCDetails flag.
      if (complete) {
        print_perm_heap_change(perm_prev_used);
      }
    }

 

4. Update the size of each memory generation

    for (int j = max_level_collected; j >= 0; j -= 1) {
      // Adjust generation sizes.
      _gens[j]->compute_new_size();
    }

5. Update and adjust the permanent generation memory size after FullGC

    if (complete) {
      // Ask the permanent generation to adjust size for full collections
      perm()->compute_new_size();
      update_full_collections_completed();
    }

6. If ExitAfterGCNum is configured, the VM exits once the total collection count reaches the configured value.

  if (ExitAfterGCNum > 0 && total_collections() == ExitAfterGCNum) {
    tty->print_cr("Stopping after GC #%d", ExitAfterGCNum);
    vm_exit(-1);
  }
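
Assuming a debug (non-product) HotSpot build in which this develop-level flag can actually be set, stopping the VM after, say, the fifth collection would look like this (MyApp is a placeholder application class):

  java -XX:ExitAfterGCNum=5 MyApp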

The flow chart of the GC process that is independent of the memory generation implementation is as follows:

[Flowchart: the generation-independent GC process of GenCollectedHeap::do_collection()]
