A Chat About Android Memory Optimization

Source: Internet
Author: User
Tags: garbage collection, memory usage, try/catch

Android memory optimization is one of the most important parts of performance tuning, and the work falls into two main areas:
1. Optimizing RAM, i.e. reducing run-time memory. The goal is to prevent the app from throwing OOM exceptions and to reduce the probability that the process is killed by the low-memory killer (LMK) because its footprint is too large. Unreasonable memory usage also greatly increases GC activity, which makes the app stutter.
2. Optimizing ROM, i.e. reducing the storage the app occupies. This is mainly about shrinking the install footprint, so the app does not fail to install because of insufficient storage space.
This article focuses on the first point: an overview of techniques for reducing an application's run-time memory. We will not go into concepts such as PSS and USS, or the details of Android application memory management; if you are interested in that material, you can read the reference articles at the end.

Detecting and fixing memory leaks
A memory leak, simply put, happens when an object that is no longer needed is still directly or indirectly referenced, because of a coding error or a system issue, so the runtime cannot reclaim it. Memory leaks easily hide logic problems, and they also raise the application's peak memory usage and the probability of an OOM. They are bugs, and we must fix them.

The common causes of memory leaks are listed below, but the real focus of our work is building a closed loop that both discovers memory leaks and gets them fixed.

I. A monitoring scheme for memory leaks

Square's open-source library LeakCanary is a very good choice. It tracks the life cycle of activities (or arbitrary objects) through weak references; if it detects a memory leak, it automatically dumps an hprof file, computes the shortest leak path with the HAHA library, and finally shows the result in a notification.

The leak-detection and handling flow is shown in the diagram below, with each step running in its own process space (the main process uses IdleHandler; the HAHA analysis runs in a separate process):

Before LeakCanary was released, WeChat already had its own memory leak monitoring system. It differs from LeakCanary roughly as follows:

    • For devices on Android 4.0 and above, WeChat likewise registers an ActivityLifecycleCallbacks interface; for devices below 4.0, it tries to reflect into the mInstrumentation object in ActivityThread. (WeChat has since moved to supporting only API 15 and above, which simplifies this.)
    • Although LeakCanary uses IdleHandler and a separate process, dumping the hprof still causes visible jank in the app (it suspends all threads). And on Samsung and some other phones, the system caches the most recent activity. So WeChat adopts a stricter detection rule: a leak must be confirmed three times, and five new activities must have been created since, to make sure it is not merely the system cache.
    • When WeChat detects a suspected memory leak, it pops up a dialog; the hprof dump and upload only happen when the user actively taps it, and the work of ruling out false positives and analyzing the leak chain is done on the server side.

In fact, with a few simple customizations to LeakCanary, we can build a memory leak monitoring closed loop like this one.
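The core detection idea — keeping only a weak reference to a destroyed activity and checking whether it is cleared after a GC — can be sketched in plain Java. The class and method names below are illustrative, not LeakCanary's actual API:

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of weak-reference leak detection (illustrative names,
// not LeakCanary's real API). After an object is "destroyed", we keep
// only a WeakReference to it; if the reference still resolves after a
// GC, something is strongly holding the object: a leak suspect.
public class LeakWatcher {
    private final List<WeakReference<Object>> watched = new ArrayList<>();

    // Call when the object should become unreachable (e.g. Activity.onDestroy).
    public void watch(Object obj) {
        watched.add(new WeakReference<>(obj));
    }

    // Returns objects that survived a GC and are therefore leak suspects.
    public List<Object> findLeakSuspects() {
        System.gc();                  // request a collection (best effort)
        List<Object> suspects = new ArrayList<>();
        for (WeakReference<Object> ref : watched) {
            Object o = ref.get();     // non-null => still strongly reachable
            if (o != null) {
                suspects.add(o);
            }
        }
        return suspects;
    }

    public static void main(String[] args) {
        LeakWatcher watcher = new LeakWatcher();
        Object leaked = new Object(); // this strong reference simulates a leak
        watcher.watch(leaked);
        System.out.println("suspects: " + watcher.findLeakSuspects().size());
    }
}
```

A real implementation, like LeakCanary's, additionally retries after forcing a GC and confirms with a heap dump, since a single `System.gc()` call is only a hint to the VM.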

II. Hack fixes for system memory leaks

AndroidExcludedRefs lists a number of cases where references cannot be released for system-level reasons, and for most of them it suggests workarounds via hacks. WeChat uses similar hacks for TextLine, InputMethodManager, AudioManager, and android.os.Message.

III. Reclaiming memory as a fallback

An activity leak keeps the bitmaps, drawing caches, and so on that the activity references from being released, which puts great pressure on memory. Fallback recycling means that for an activity known to be leaked, we try to reclaim the resources it holds, so that what leaks is only an empty activity shell, reducing the pressure on memory.

The approach is simple: in the activity's onDestroy, walk down from the root view and recursively release the images, backgrounds, drawing caches, listeners, and other resources held by every child view, leaving the activity as a resource-free shell. Even if it leaks, no image resources will be pinned.

 
  ...
  Drawable d = iv.getDrawable();
  if (d != null) {
      d.setCallback(null);      // break the Drawable -> View back-reference
  }
  iv.setImageDrawable(null);
  ...

In general, it is not enough to know a few memory leak fixes; what matters more is building, through day-to-day testing and monitoring, a complete closed-loop system for detecting and fixing memory leaks.

Some ways to reduce run-time memory
Once we can ensure the application has no memory leaks, we need other means of reducing run-time memory. Most of the time, what we really want is to lower the probability that the application hits an OOM.

When Android throws an OOM:

    • On Android 2.x, an OOM occurs when dalvik allocated + external allocated + newly allocated size >= the Dalvik heap maximum. Bitmaps are counted in external.
    • On Android 4.x, the external counter is abolished and bitmap allocations move onto the Dalvik Java heap; an OOM occurs when allocated + newly allocated >= the Dalvik maximum. (The accounting rules under the ART runtime are consistent with Dalvik.)

I. Reducing the memory consumed by bitmaps

When it comes to memory, bitmaps are invariably the biggest consumer. Regarding bitmap memory usage, several points are worth making:

1. Prevent bitmaps from causing an OOM
On Android 2.x, after enabling the hidden inNativeAlloc field of BitmapFactory.Options via reflection, the bitmaps the app allocates are no longer counted in external. On Android 4.x, you can use Facebook's Fresco library to put image data in native memory.

2. Load images at the size they are needed
That is, the decoded image should not be larger than the view that displays it. Before loading an image into memory, we compute a suitable inSampleSize scaling ratio to avoid needlessly decoding a large image. We can also subclass Drawable and ImageView to verify this: for example, check image size against view size in the activity's onDestroy, and report or warn when the image is larger.
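A common way to compute a power-of-two inSampleSize from the source and target dimensions is sketched below (the helper class and method names are ours; the result would be assigned to `BitmapFactory.Options.inSampleSize`):

```java
// Computes a power-of-two sample size so the decoded bitmap stays at
// least as large as the requested view size, but no larger than needed.
public class SampleSizeCalculator {
    public static int calculateInSampleSize(int srcWidth, int srcHeight,
                                            int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (srcHeight > reqHeight || srcWidth > reqWidth) {
            final int halfWidth = srcWidth / 2;
            final int halfHeight = srcHeight / 2;
            // Keep doubling while the halved size still covers the request.
            while ((halfWidth / inSampleSize) >= reqWidth
                    && (halfHeight / inSampleSize) >= reqHeight) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // A 2048x1536 photo decoded for a 512x384 view: every 4th pixel suffices.
        System.out.println(calculateInSampleSize(2048, 1536, 512, 384)); // prints 4
    }
}
```

Decoding the example image at inSampleSize = 4 reduces its ARGB_8888 footprint from 12 MB to about 0.75 MB.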

3. Use a unified bitmap loader
Picasso and Fresco are well-known image loading libraries, and WeChat likewise has its own, ImageLoader. The benefit of a loading library is that version differences and size handling are invisible to the caller. With a unified bitmap loader, when an OOM occurs while decoding (caught with try/catch), we can clear the cache, downgrade the bitmap format (ARGB_8888 / RGB_565 / ARGB_4444 / ALPHA_8), and so on, then retry.

4. Watch for wasted pixels in images
In a .9 patch image, the artist may leave large runs of duplicated pixels in both the stretchable and non-stretchable regions. A custom algorithm can read the image's ARGB pixel values, compute contiguous identical areas, and decide whether those regions could be shrunk. The key, again, is to systematize this work, so problems are found and fixed promptly.
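The duplicate-pixel check can be modeled on a plain ARGB array: rows that repeat the previous row verbatim carry no information inside a stretch region and could be collapsed. This is a simplified sketch under our own naming; a real tool would also check columns and tolerate small color deltas:

```java
import java.util.Arrays;

// Counts rows in an ARGB pixel grid that exactly duplicate the row above
// them. In a .9 image's stretch region such rows are wasted pixels: the
// region could shrink to one row and be stretched at draw time instead.
public class PixelWasteDetector {
    public static int countDuplicateRows(int[][] argbRows) {
        int duplicates = 0;
        for (int y = 1; y < argbRows.length; y++) {
            if (Arrays.equals(argbRows[y], argbRows[y - 1])) {
                duplicates++;
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        int[] solid = {0xFF112233, 0xFF112233, 0xFF112233};
        int[] edge  = {0xFF000000, 0xFF112233, 0xFF000000};
        // Three identical middle rows: two of them are pure waste.
        int[][] image = {edge, solid, solid, solid, edge};
        System.out.println(countDuplicateRows(image)); // prints 2
    }
}
```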

A good ImageLoader hides the differences in image handling between 2.x, 4.x, and 5.x from its users, and adaptive sizing, quality selection, and so on can also live inside the framework.

II. Monitoring the app's own memory footprint

System callbacks such as onLowMemory are about the whole system; they do not reflect how close this process's Dalvik heap is to its OOM limit, and there is no callback that tells us to release memory in time. If we have a mechanism that monitors the process's heap utilization in real time, and notifies the relevant modules to release memory when a set threshold is reached, OOMs will be greatly reduced.

    • Implementation principle

This is actually quite simple: Runtime gives us maxMemory, and totalMemory - freeMemory is the Dalvik memory actually in use:

 Runtime.getRuntime().maxMemory();
 Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
    • Mode of operation

We can sample this value periodically (for example, every 3 minutes while in the foreground). When it reaches a danger level (say 80%), we release our various cache resources (the bitmap cache being the bulk of it), and at the same time ask the window manager to trim the application's memory, to speed up collection:

 WindowManagerGlobal.getInstance().startTrimMemory(TRIM_MEMORY_COMPLETE);
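A minimal version of such a monitor, using only Runtime, might look like this (the 80% threshold and the listener interface are our example choices, not WeChat's actual code):

```java
// Periodically comparing the used Dalvik/ART heap against the hard
// maximum lets the app release caches before an OOM, since the system's
// onLowMemory callback says nothing about this process's own heap.
public class HeapMonitor {
    public interface LowHeapListener { void onDangerLevel(float usedRatio); }

    private final float dangerRatio;     // e.g. 0.8f = 80% of max heap
    private final LowHeapListener listener;

    public HeapMonitor(float dangerRatio, LowHeapListener listener) {
        this.dangerRatio = dangerRatio;
        this.listener = listener;
    }

    // Fraction of the maximum heap currently in use, in (0, 1].
    public static float usedRatio() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return (float) used / rt.maxMemory();
    }

    // Call this on a timer (e.g. every 3 minutes while in the foreground).
    public void check() {
        float ratio = usedRatio();
        if (ratio >= dangerRatio) {
            listener.onDangerLevel(ratio); // release bitmap caches etc.
        }
    }

    public static void main(String[] args) {
        HeapMonitor monitor = new HeapMonitor(0.8f,
                ratio -> System.out.println("release caches, heap at " + ratio));
        monitor.check();
        System.out.println("current ratio: " + usedRatio());
    }
}
```

On Android the listener callback would clear LruCaches and then call the trim API shown above.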

III. Using multiple processes

Components such as WebView and the gallery suffer from system memory leaks or excessive memory consumption, so we can move them into a separate process. WeChat currently puts them in a separate tools process.

IV. Reporting OOM details

When an OOM crash occurs, we should upload detailed memory-related information, to make it easier to pin down the exact memory situation.

Other techniques such as large heap, inBitmap, SparseArray, and protobuf will not be detailed one by one. We do not recommend the cycle of optimize, dig a pit, optimize again, dig another pit. The focus should be on building a reasonable framework and monitoring system that can promptly detect oversized bitmaps, wasted pixels, excessive memory footprint, application OOMs, and so on.

GC optimization
Java has a GC mechanism, and GC implementations may differ greatly across system versions. But on any version, a large number of GC operations will noticeably eat into the frame interval (16 ms). If too much GC work happens within one frame interval, there is naturally less time left for computation, rendering, and everything else.

I. Types of GC

There are several GC types; among them, GC_FOR_ALLOC is synchronous and has the greatest impact on the app's frame rate.

    • GC_FOR_ALLOC

Triggered when heap memory is insufficient, especially easily when allocating new objects. Note that it runs synchronously; if there is still not enough space after the GC, the heap is grown. If you want to speed up startup, raising the value of dalvik.vm.heapstartsize reduces the number of GC_FOR_ALLOC collections during launch.

    • GC_EXPLICIT

This GC can be requested, for example via System.gc(). The GC thread usually has a fairly low priority, so the collection does not necessarily run immediately; do not assume that calling System.gc() will improve memory right away.

    • GC_CONCURRENT

Triggered when the allocated objects exceed 384 KB; note that it collects asynchronously. If you see a large number of repeated concurrent GCs, the system is probably allocating many objects larger than 384 KB, and these are often temporary objects that are created over and over. The message for us: objects are not being reused enough.

    • GC_EXTERNAL_ALLOC (abolished after Android 3.0)

Triggered when a native-layer memory allocation fails. If GPU textures, bitmaps, or java.nio.ByteBuffers are not released, this type of GC is often triggered frequently.

II. Memory churn

Memory churn (memory jitter) occurs when a large number of objects are created and immediately released in a short period of time. The burst of new objects quickly fills the memory region; when the threshold is reached and the remaining space is insufficient, a GC is triggered so the freshly created objects can be reclaimed. Even if each object is small, together they raise heap pressure and trigger more GCs of other types. This can affect frame rate and make users perceive performance problems.

With Memory Monitor, we can track memory changes across the app. Multiple sharp rises and falls in a short time mean memory churn is probably occurring.

III. GC optimization

With Heap Viewer, we can inspect the current memory snapshot and compare to see which objects may have leaked. A more important tool is Allocation Tracker, which records the type, stack, size, and so on of allocated objects. Mobile QQ has a statistics tool over the raw Allocation Tracker data: it aggregates allocation size and count by (type, stack) combination (top 5 stack frames), ranks the combinations by count and size, and then analyzes the code from the most frequent/largest downward, optimizing round by round.

In this way, when memory churn occurs, we can quickly find out which allocations are causing the frequent GCs. In general, the following areas deserve attention:

    • String concatenation optimization

Reduce string concatenation with the plus operator; use StringBuilder instead, and set an initial capacity to avoid StringBuilder's internal enlarge. Note that if you enable the Looper printer callback, it adds a fair amount of string concatenation of its own:

 Printer logging = me.mLogging;
 if (logging != null) {
     logging.println(">>>>> Dispatching to " + msg.target + " " +
             msg.callback + ": " + msg.what);
 }
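The difference between '+' concatenation in a loop and a pre-sized StringBuilder can be sketched as follows (the helper names and capacity value are our examples):

```java
// Building a string in a loop: '+' creates a fresh StringBuilder and an
// intermediate String on every iteration, churning the young generation.
// One pre-sized StringBuilder allocates its backing array once and never
// needs its internal enlarge step.
public class StringConcatDemo {
    static String withPlus(String[] parts) {
        String s = "";
        for (String p : parts) {
            s = s + p;          // new StringBuilder + new String each pass
        }
        return s;
    }

    static String withBuilder(String[] parts, int expectedLength) {
        StringBuilder sb = new StringBuilder(expectedLength); // one allocation
        for (String p : parts) {
            sb.append(p);       // no intermediate String objects
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] parts = {"a", "b", "c"};
        // Same result, far fewer temporary objects.
        System.out.println(withPlus(parts).equals(withBuilder(parts, 3))); // prints true
    }
}
```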
    • File-reading optimization

Read files with a ByteArrayPool, setting the initial capacity to reduce buffer growth.

    • Resource reuse

Establish a global cache pool for object types that are frequently requested and released, and reuse them.
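A minimal object pool might look like the sketch below (an illustrative design, not WeChat's code; androidx.core.util.Pools.SimplePool is a library equivalent):

```java
import java.util.ArrayDeque;

// Minimal object pool: frequently allocated-and-released objects are
// parked for reuse instead of becoming garbage, reducing GC pressure.
public class ObjectPool<T> {
    public interface Factory<T> { T create(); }

    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Factory<T> factory;
    private final int maxSize;

    public ObjectPool(Factory<T> factory, int maxSize) {
        this.factory = factory;
        this.maxSize = maxSize;
    }

    public T acquire() {
        T obj = free.poll();
        return obj != null ? obj : factory.create(); // reuse if possible
    }

    public void release(T obj) {
        if (free.size() < maxSize) {
            free.push(obj);   // keep for the next acquire
        }                     // else drop it and let the GC take it
    }

    public static void main(String[] args) {
        ObjectPool<StringBuilder> pool = new ObjectPool<>(StringBuilder::new, 8);
        StringBuilder sb = pool.acquire();
        sb.setLength(0);      // reset state before use
        pool.release(sb);
        System.out.println(pool.acquire() == sb); // prints true: reused instance
    }
}
```

A pooled object must be reset before reuse, otherwise stale state leaks between users.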

    • Reduce unnecessary or unreasonable objects

For example, in onDraw or getView, minimize object allocation and reuse as much as possible. Much of this is logic-level, such as repeatedly allocating local variables inside loops.

    • Choose sensible data structures: use SparseArray, SparseBooleanArray, and LongSparseArray in place of HashMap where the keys allow it.
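These classes save memory because they keep primitive keys in a sorted int[] and look them up with binary search, avoiding HashMap's Integer boxing and per-entry node objects. A stripped-down model (not the real Android implementation) shows the idea:

```java
import java.util.Arrays;

// Stripped-down model of SparseIntArray: parallel primitive arrays plus
// binary search. No Integer boxing and no per-entry node objects, at
// the cost of O(log n) lookup and O(n) insertion.
public class SimpleSparseIntArray {
    private int[] keys = new int[4];
    private int[] values = new int[4];
    private int size = 0;

    public void put(int key, int value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) {               // key exists: overwrite in place
            values[i] = value;
            return;
        }
        i = ~i;                     // decode insertion point
        if (size == keys.length) {  // grow both arrays together
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key;
        values[i] = value;
        size++;
    }

    public int get(int key, int valueIfAbsent) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return i >= 0 ? values[i] : valueIfAbsent;
    }

    public static void main(String[] args) {
        SimpleSparseIntArray a = new SimpleSparseIntArray();
        a.put(42, 7);
        a.put(3, 1);
        System.out.println(a.get(42, -1)); // prints 7
        System.out.println(a.get(99, -1)); // prints -1
    }
}
```

The trade-off is why the Android docs recommend SparseArray only for hundreds of entries, not thousands.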

Summary
We cannot cover every technique used in memory optimization here, and as Android versions change, many methods may become obsolete. What matters more, I think, is being able to keep finding problems through meticulous monitoring, rather than forever being stuck patching whichever hole appears. Some suggestions:

1. Prefer existing tools first. Rather than reinventing the wheel, we would rather see effort spent improving the existing tools for the benefit of all developers.
2. Do not get stuck on individual tricks; what matters more is building a reasonable framework that avoids problems, or at least detects them in time.
WeChat's current memory monitoring system still has some unsatisfactory areas, and we will keep working to improve it.

That is the whole of this article; I hope it helps you with Android memory optimization.
