Analysis of GrowForUtilization in the Android ART GC

There are a variety of scenarios in the Android runtime that trigger garbage collection (GC). Taking Android 5.0 as an example, the most common way a GC is triggered while an application is running is shown in the following illustration:

This diagram is a live graph of an application's memory footprint, captured by Android Studio while the application is running: the blue portion is the memory the application is currently using, and the gray portion is the memory that is currently free. As you can see in the white circle, when the application's free memory drops to a threshold, the Android system decides that memory is running low and wakes up the GC thread to perform garbage collection. Through Logcat you can see the effect of that collection, which prints a log like the following.
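The original log capture is not reproduced here, but an ART GC message in Logcat generally has the following shape (the GC cause, collector name, and all numbers below are illustrative, not taken from the capture above):

I/art: Background sticky concurrent mark sweep GC freed 104710(7MB) AllocSpace objects, 21(416KB) LOS objects, 33% free, 25MB/38MB, paused 1.230ms total 67.216ms

The line reports what triggered the GC, which collector ran, how many objects and bytes were freed, the percentage of the heap that is now free, the used/total heap size, and the pause and total durations.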

In the Android 5.0 ART GC, every time a collection finishes, the system resizes the heap so that the free memory remaining in the heap satisfies constraints such as the preset target heap utilization. (In fact, the Java heap's address space is reserved when the application starts, so "resizing the heap" here really means adjusting how much memory in the heap is available to the application.) The purpose is that by dynamically adjusting the amount of available memory on the heap, Android keeps the objects in the heap more compact, which somewhat mitigates the heap fragmentation caused by the mark-sweep garbage collection algorithm. As shown in the following illustration:
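To make the utilization constraint concrete: assuming a target heap utilization of 0.75 (a typical value, configurable through the dalvik.vm.heaptargetutilization property), if 30 MB of objects survive a collection, the heap footprint is grown to roughly 30 MB / 0.75 = 40 MB, leaving about 10 MB of free space for the application to allocate into before the next GC (before the min_free_/max_free_ clamping described below).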

After a GC is triggered, the garbage objects that the application no longer uses are reclaimed, so the available memory grows, as shown in the "Collect garbage" step on the right side of the illustration. However, the Android system does not let the application keep such a large block of available memory. It trims the amount of available memory according to the system's preset heap-utilization parameters; for now, call this the reserved free memory. In code this is done by calling GrowForUtilization. The next GC is triggered when this reserved free memory has been almost entirely consumed by the application. The analysis below shows that the size of the reserved free memory is, in a sense, almost a fixed value.
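To see why the reserved free memory tends toward a fixed value, here is a minimal, self-contained sketch of the sizing rule (not the actual ART code, which follows below). The constants used are assumptions corresponding to common defaults; real devices override them through the dalvik.vm.heaptargetutilization, dalvik.vm.heapminfree, and dalvik.vm.heapmaxfree properties:

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <initializer_list>

// Simplified sketch of the heap-sizing rule applied after a non-sticky GC.
// The constants are assumed defaults, not values read from a real device.
static constexpr double kTargetUtilization = 0.75;           // GetTargetHeapUtilization()
static constexpr uint64_t kMinFree = 512 * 1024;              // min_free_
static constexpr uint64_t kMaxFree = 2 * 1024 * 1024;         // max_free_

// Returns the new heap footprint for a given number of live bytes after GC.
uint64_t TargetFootprint(uint64_t bytes_allocated, double multiplier) {
  // Free space implied by the target utilization.
  uint64_t delta =
      static_cast<uint64_t>(bytes_allocated / kTargetUtilization) - bytes_allocated;
  uint64_t target = bytes_allocated + static_cast<uint64_t>(delta * multiplier);
  // Clamp the reserved free memory between min_free_ and max_free_.
  target = std::min(target, bytes_allocated + static_cast<uint64_t>(kMaxFree * multiplier));
  target = std::max(target, bytes_allocated + static_cast<uint64_t>(kMinFree * multiplier));
  return target;
}

int main() {
  // Once the live size exceeds about 6 MB (at utilization 0.75), the implied
  // delta exceeds max_free_, so the reserved free memory saturates at
  // max_free_ * multiplier, i.e. effectively a fixed value.
  for (uint64_t live_mb : {4, 8, 64, 256}) {
    uint64_t live = live_mb * 1024 * 1024;
    uint64_t target = TargetFootprint(live, /*multiplier=*/1.0);
    std::printf("live %3llu MB -> footprint %llu KB, reserved free %llu KB\n",
                static_cast<unsigned long long>(live_mb),
                static_cast<unsigned long long>(target / 1024),
                static_cast<unsigned long long>((target - live) / 1024));
  }
  return 0;
}

With these assumed values, any live set larger than a few megabytes ends up with the same reserved free memory (max_free_ * multiplier), which is why the free gap in the Android Studio graph looks roughly constant between collections.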

Below is a detailed analysis of the implementation of GrowForUtilization.

GrowForUtilization is implemented in art/runtime/gc/heap.cc. The code is as follows:

void Heap::GrowForUtilization(collector::GarbageCollector* collector_ran) {
  // We know what our utilization is at this moment.
  // This doesn't actually resize any memory. It just lets the heap grow more when necessary.
  const uint64_t bytes_allocated = GetBytesAllocated();
  last_gc_size_ = bytes_allocated;
  last_gc_time_ns_ = NanoTime();
  uint64_t target_size;
  collector::GcType gc_type = collector_ran->GetGcType();
  if (gc_type != collector::kGcTypeSticky) {
    // Grow the heap for non sticky GC.
    const float multiplier = HeapGrowthMultiplier();  // Use the multiplier to grow more for
                                                      // foreground.
    intptr_t delta = bytes_allocated / GetTargetHeapUtilization() - bytes_allocated;
    CHECK_GE(delta, 0);
    target_size = bytes_allocated + delta * multiplier;
    target_size = std::min(target_size,
                           bytes_allocated + static_cast<uint64_t>(max_free_ * multiplier));
    target_size = std::max(target_size,
                           bytes_allocated + static_cast<uint64_t>(min_free_ * multiplier));
    native_need_to_run_finalization_ = true;
    next_gc_type_ = collector::kGcTypeSticky;
  } else {
    collector::GcType non_sticky_gc_type =
        have_zygote_space_ ? collector::kGcTypePartial : collector::kGcTypeFull;
    // Find what the next non sticky collector will be.
    collector::GarbageCollector* non_sticky_collector = FindCollectorByGcType(non_sticky_gc_type);
    // If the throughput of the current sticky GC >= throughput of the non sticky collector, then
    // do another sticky collection next.
    // We also check that the bytes allocated aren't over the footprint limit in order to prevent
    // a pathological case where dead objects which aren't reclaimed by sticky could get
    // accumulated if the sticky GC throughput always remained >= the full/partial throughput.
    if (current_gc_iteration_.GetEstimatedThroughput() * kStickyGcThroughputAdjustment >=
            non_sticky_collector->GetEstimatedMeanThroughput() &&
        non_sticky_collector->NumberOfIterations() > 0 &&
        bytes_allocated <= max_allowed_footprint_) {
      next_gc_type_ = collector::kGcTypeSticky;
    } else {
      next_gc_type_ = non_sticky_gc_type;
    }
    // If we have freed enough memory, shrink the heap back down.
    if (bytes_allocated + max_free_ < max_allowed_footprint_) {
      target_size = bytes_allocated + max_free_;
    } else {
      target_size = std::max(bytes_allocated, static_cast<uint64_t>(max_allowed_footprint_));
    }
  }
  if (!ignore_max_footprint_) {
    SetIdealFootprint(target_size);
    if (IsGcConcurrent()) {
      // Calculate when to perform the next ConcurrentGC.
      // Calculate the estimated GC duration.
      const double gc_duration_seconds = NsToMs(current_gc_iteration_.GetDurationNs()) / 1000.0;
      // Estimate how many remaining bytes we will have when we need to start the next GC.
      size_t remaining_bytes = allocation_rate_ * gc_duration_seconds;
      remaining_bytes = std::min(remaining_bytes, kMaxConcurrentRemainingBytes);
      remaining_bytes = std::max(remaining_bytes, kMinConcurrentRemainingBytes);
      if (UNLIKELY(remaining_bytes > max_allowed_footprint_)) {
        // A never going to happen situation where from the estimated allocation rate we will
        // exceed the application's entire footprint with the given estimated allocation rate.
        // Schedule another GC nearly straight away.
        remaining_bytes = kMinConcurrentRemainingBytes;
      }
      DCHECK_LE(remaining_bytes, max_allowed_footprint_);
      DCHECK_LE(max_allowed_footprint_, GetMaxMemory());
      // Start a concurrent GC when we get close to the estimated remaining bytes. When the
      // allocation rate is very high, remaining_bytes could tell us that we should start a GC
      // right away.
      concurrent_start_bytes_ = std::max(max_allowed_footprint_ - remaining_bytes,
                                         static_cast<size_t>(bytes_allocated));
    }
  }
}
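A few things are worth noting in this code. For a non-sticky GC, the reserved free memory is delta * multiplier, clamped between min_free_ * multiplier and max_free_ * multiplier. For a sticky GC, the footprint is shrunk back to bytes_allocated + max_free_ when the current footprint is larger than that, and otherwise kept at max_allowed_footprint_. Finally, when the collector is concurrent, the function also computes concurrent_start_bytes_, the allocation level at which the next concurrent GC is kicked off: max_allowed_footprint_ minus an estimate of how much the application will allocate while that GC is running (allocation_rate_ times the estimated GC duration), clamped between kMinConcurrentRemainingBytes and kMaxConcurrentRemainingBytes. This is exactly the behavior described above: the next GC starts when the reserved free memory is almost used up. As a purely illustrative calculation (the numbers are assumptions, not measurements): if max_allowed_footprint_ is 40 MB, the last GC took about 0.1 s, and the application allocates at about 5 MB/s, then remaining_bytes is roughly 512 KB; assuming that falls inside the clamp range, concurrent_start_bytes_ ends up at about 40 MB - 512 KB, so the concurrent GC is requested just before the heap footprint limit is reached.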