An Analysis of GrowForUtilization in the Android ART GC


Several situations trigger garbage collection (GC) while Android is running. Taking Android 5.0 as an example, the most common way GC is triggered during an application's lifetime is shown in the figure below:

The figure is a dynamic memory-usage trace of a running application, captured with Android Studio: the blue region is the memory occupied by the application, and the gray region is the memory that is currently free. At the point inside the white circle, the application's free memory has dropped to a certain threshold; the system decides that memory is running low and wakes the GC thread to collect garbage. The effect of the collection then shows up in logcat as a GC log line.
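The original log screenshot is not reproduced here. An ART GC line in logcat has roughly the following shape; the GC reason, collector name and all numbers below are illustrative, not values taken from the original capture:

    I/art: Background partial concurrent mark sweep GC freed 74626(3MB) AllocSpace objects, 16(446KB) LOS objects, 33% free, 25MB/38MB, paused 1.621ms total 90.420ms

It reports which collector ran and why, how many objects and bytes it freed (including large objects), the resulting heap occupancy, and the pause and total GC times.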

In the Android 5.0 ART GC, after every collection finishes sweeping garbage, the system readjusts the heap size so that the amount of free heap memory satisfies preconfigured constraints such as the target heap utilization. (Strictly speaking, the Java heap is already initialized and mapped into the address space when the application starts; "adjusting the heap size" here only means adjusting the bookkeeping value that records how much of the heap may be used. By tuning this value dynamically, Android keeps objects in the heap packed more tightly, which somewhat mitigates the fragmentation caused by the mark-sweep collection algorithm.) The process is shown in the figure below:
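A minimal sketch of that idea, using a toy class rather than the real ART Heap, is shown below: "growing" or "shrinking" the heap only moves a soft limit that the allocator checks, while the mapping backing the heap keeps its fixed capacity.

// Toy model (not the real ART classes): "growing the heap" after a GC is pure
// bookkeeping. The mapping backing the heap keeps its fixed capacity; only the
// soft limit consulted by the allocator moves.
#include <algorithm>
#include <cstddef>

class ToyHeap {
 public:
  explicit ToyHeap(size_t capacity)
      : capacity_(capacity), bytes_allocated_(0), max_allowed_footprint_(capacity) {}

  // Analogous in spirit to Heap::SetIdealFootprint: clamp to the real capacity and
  // record the new soft limit; nothing is mapped or unmapped here.
  void SetIdealFootprint(size_t target) {
    max_allowed_footprint_ = std::min(target, capacity_);
  }

  // The allocator checks the soft limit; hitting it is what leads to the next GC,
  // not exhausting the underlying mapping.
  bool WouldExceedFootprint(size_t alloc_size) const {
    return bytes_allocated_ + alloc_size > max_allowed_footprint_;
  }

 private:
  const size_t capacity_;         // fixed size of the mapping (e.g. dalvik.vm.heapsize)
  size_t bytes_allocated_;        // bytes currently allocated in the heap
  size_t max_allowed_footprint_;  // the "heap size" that GrowForUtilization adjusts
};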

Once the GC has run and reclaimed the garbage objects the application no longer uses, the amount of free memory grows, as shown by the "collect garbage" step on the right of the figure above. Android, however, does not hand this whole block of free memory back to the application: it adjusts the amount of usable free memory according to preconfigured parameters such as the target heap utilization. For now, call this adjusted block of free memory the reserved free memory; in the code, the adjustment is made by calling GrowForUtilization. Once the application has used up most of this reserved free memory, the next GC is triggered. The analysis below shows that, in a sense, the size of this reserved free memory is almost a constant.
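To see why the reserve is nearly constant, here is a standalone sketch of the arithmetic performed by the non-sticky branch of GrowForUtilization. The target utilization, min_free, max_free, growth multiplier and allocation size below are assumed, illustrative values, not the real device configuration (the real ones come from properties such as dalvik.vm.heaptargetutilization, dalvik.vm.heapminfree and dalvik.vm.heapmaxfree):

// Sketch of the non-sticky footprint computation with assumed parameters.
#include <algorithm>
#include <cstdint>
#include <cstdio>

int main() {
  const double target_utilization = 0.75;              // assumed GetTargetHeapUtilization()
  const uint64_t min_free = 512 * 1024;                // assumed min_free_ (512 KB)
  const uint64_t max_free = 2 * 1024 * 1024;           // assumed max_free_ (2 MB)
  const double multiplier = 1.0;                       // HeapGrowthMultiplier() for a background app
  const uint64_t bytes_allocated = 30 * 1024 * 1024;   // assumed live bytes after the GC

  // delta is the free space implied by the target utilization.
  uint64_t delta = static_cast<uint64_t>(bytes_allocated / target_utilization) - bytes_allocated;
  uint64_t target_size = bytes_allocated + static_cast<uint64_t>(delta * multiplier);

  // Clamp the reserve into [min_free * multiplier, max_free * multiplier].
  target_size = std::min(target_size,
                         bytes_allocated + static_cast<uint64_t>(max_free * multiplier));
  target_size = std::max(target_size,
                         bytes_allocated + static_cast<uint64_t>(min_free * multiplier));

  printf("reserved free memory: %llu bytes\n",
         static_cast<unsigned long long>(target_size - bytes_allocated));
  return 0;
}

With 30 MB allocated and a 0.75 target utilization the implied reserve would be 10 MB, but it is clamped to max_free, so only 2 MB survives. Whenever the allocated size is large enough that delta exceeds max_free, the clamp decides the reserve, which is why it behaves almost like a constant.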

Below, I analyze the implementation of GrowForUtilization in detail.

GrowForUtilization is implemented in art/runtime/gc/heap.cc. The code is as follows:

void Heap::GrowForUtilization(collector::GarbageCollector* collector_ran) {
  // We know what our utilization is at this moment.
  // This doesn't actually resize any memory. It just lets the heap grow more when necessary.
  const uint64_t bytes_allocated = GetBytesAllocated();
  last_gc_size_ = bytes_allocated;
  last_gc_time_ns_ = NanoTime();
  uint64_t target_size;
  collector::GcType gc_type = collector_ran->GetGcType();
  if (gc_type != collector::kGcTypeSticky) {
    // Grow the heap for non sticky GC.
    const float multiplier = HeapGrowthMultiplier();  // Use the multiplier to grow more for
    // foreground.
    intptr_t delta = bytes_allocated / GetTargetHeapUtilization() - bytes_allocated;
    CHECK_GE(delta, 0);
    target_size = bytes_allocated + delta * multiplier;
    target_size = std::min(target_size,
                           bytes_allocated + static_cast<uint64_t>(max_free_ * multiplier));
    target_size = std::max(target_size,
                           bytes_allocated + static_cast<uint64_t>(min_free_ * multiplier));
    native_need_to_run_finalization_ = true;
    next_gc_type_ = collector::kGcTypeSticky;
  } else {
    collector::GcType non_sticky_gc_type =
        have_zygote_space_ ? collector::kGcTypePartial : collector::kGcTypeFull;
    // Find what the next non sticky collector will be.
    collector::GarbageCollector* non_sticky_collector = FindCollectorByGcType(non_sticky_gc_type);
    // If the throughput of the current sticky GC >= throughput of the non sticky collector, then
    // do another sticky collection next.
    // We also check that the bytes allocated aren't over the footprint limit in order to prevent a
    // pathological case where dead objects which aren't reclaimed by sticky could get accumulated
    // if the sticky GC throughput always remained >= the full/partial throughput.
    if (current_gc_iteration_.GetEstimatedThroughput() * kStickyGcThroughputAdjustment >=
        non_sticky_collector->GetEstimatedMeanThroughput() &&
        non_sticky_collector->NumberOfIterations() > 0 &&
        bytes_allocated <= max_allowed_footprint_) {
      next_gc_type_ = collector::kGcTypeSticky;
    } else {
      next_gc_type_ = non_sticky_gc_type;
    }
    // If we have freed enough memory, shrink the heap back down.
    if (bytes_allocated + max_free_ < max_allowed_footprint_) {
      target_size = bytes_allocated + max_free_;
    } else {
      target_size = std::max(bytes_allocated, static_cast<uint64_t>(max_allowed_footprint_));
    }
  }
  if (!ignore_max_footprint_) {
    SetIdealFootprint(target_size);
    if (IsGcConcurrent()) {
      // Calculate when to perform the next ConcurrentGC.
      // Calculate the estimated GC duration.
      const double gc_duration_seconds = NsToMs(current_gc_iteration_.GetDurationNs()) / 1000.0;
      // Estimate how many remaining bytes we will have when we need to start the next GC.
      size_t remaining_bytes = allocation_rate_ * gc_duration_seconds;
      remaining_bytes = std::min(remaining_bytes, kMaxConcurrentRemainingBytes);
      remaining_bytes = std::max(remaining_bytes, kMinConcurrentRemainingBytes);
      if (UNLIKELY(remaining_bytes > max_allowed_footprint_)) {
        // A never going to happen situation that from the estimated allocation rate we will exceed
        // the applications entire footprint with the given estimated allocation rate. Schedule
        // another GC nearly straight away.
        remaining_bytes = kMinConcurrentRemainingBytes;
      }
      DCHECK_LE(remaining_bytes, max_allowed_footprint_);
      DCHECK_LE(max_allowed_footprint_, GetMaxMemory());
      // Start a concurrent GC when we get close to the estimated remaining bytes. When the
      // allocation rate is very high, remaining_bytes could tell us that we should start a GC
      // right away.
      concurrent_start_bytes_ = std::max(max_allowed_footprint_ - remaining_bytes,
                                         static_cast<size_t>(bytes_allocated));
    }
  }
}
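The tail of the function, reached only for concurrent collectors, decides how early the next concurrent GC has to start: it estimates how many bytes the application will allocate while a GC of the same duration runs (allocation_rate_ * gc_duration_seconds), clamps that estimate between kMinConcurrentRemainingBytes and kMaxConcurrentRemainingBytes, and then places concurrent_start_bytes_ that far below the new footprint so the collection can finish before the footprint is actually reached. The standalone sketch below reproduces that arithmetic with assumed numbers; the allocation rate, GC duration and clamp constants are illustrative, not values read from a device:

// Sketch of how concurrent_start_bytes_ is derived; all inputs are assumed values.
#include <algorithm>
#include <cstddef>
#include <cstdio>

int main() {
  const size_t kMinConcurrentRemainingBytes = 128 * 1024;   // assumed clamp constants
  const size_t kMaxConcurrentRemainingBytes = 512 * 1024;
  const size_t max_allowed_footprint = 32 * 1024 * 1024;    // footprint just set by SetIdealFootprint
  const size_t bytes_allocated = 30 * 1024 * 1024;          // bytes live after this GC
  const double allocation_rate = 8 * 1024 * 1024;           // assumed bytes allocated per second
  const double gc_duration_seconds = 0.05;                  // assumed duration of this GC (50 ms)

  // Bytes the application is expected to allocate while the next concurrent GC runs.
  size_t remaining_bytes = static_cast<size_t>(allocation_rate * gc_duration_seconds);
  remaining_bytes = std::min(remaining_bytes, kMaxConcurrentRemainingBytes);
  remaining_bytes = std::max(remaining_bytes, kMinConcurrentRemainingBytes);

  // Start the next concurrent GC this many bytes before the footprint would be reached,
  // but never below what is already allocated.
  const size_t concurrent_start_bytes =
      std::max(max_allowed_footprint - remaining_bytes, bytes_allocated);

  printf("start next concurrent GC once bytes_allocated reaches %zu\n", concurrent_start_bytes);
  return 0;
}

In this example, the next concurrent GC is scheduled to start roughly 0.4 MB before the 32 MB footprint is reached, so it can complete before the application would otherwise block on allocation.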