[Repost] How to Avoid OOM: A Summary

Source: Internet
Author: User
Tags: try catch

Original address: http://www.csdn.net/article/2015-09-18/2825737/3

Reduce the memory footprint of an object

The first step in avoiding OOM is to minimize the memory allocated for new objects and to use lighter-weight objects wherever possible.

1) Use more lightweight data structures

For example, consider using ArrayMap/SparseArray instead of the traditional HashMap. Figure 8 illustrates how HashMap works; in most cases it is less efficient in time and memory than ArrayMap, a container written by the Android team specifically for the mobile operating system. The usual HashMap implementation consumes more memory because it needs an additional entry object for every mapping. In addition, SparseArray is more efficient because it avoids autoboxing the keys (and the corresponding unboxing).
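As a minimal sketch of the substitution (the keys and values here are illustrative), replacing a HashMap&lt;Integer, String&gt; with a SparseArray&lt;String&gt; removes both the boxed Integer keys and the per-entry node objects:

```java
import android.util.SparseArray;

public class TitleCache {
    // SparseArray maps primitive int keys to objects directly, avoiding both
    // Integer autoboxing and the per-entry node objects a HashMap allocates.
    private final SparseArray<String> titles = new SparseArray<>();

    public void put(int position, String title) {
        titles.put(position, title);
    }

    public String get(int position) {
        // The second argument is the value returned when the key is absent.
        return titles.get(position, "unknown");
    }
}
```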


Figure 8: How HashMap works (simplified)

For more discussion of ArrayMap/SparseArray, refer to the first three sections of "Android Performance Optimization Model (Part 3)".

2) Avoid using enums in Android

The official Android training course notes: "Enums often require more than twice as much memory as static constants. You should strictly avoid using enums on Android." For the underlying reasons, refer to "Android Performance Optimization Model (Part 3)"; in short, avoid using enums in Android.

3) Reduce memory consumption of bitmap objects

Bitmaps are notorious memory hogs, so reducing the memory allocated for them is critical. There are usually two measures:

    • inSampleSize: scaling. Before loading an image into memory, compute an appropriate scaling ratio to avoid loading an unnecessarily large image.
    • Decode format: the choice among ARGB_8888 / RGB_565 / ARGB_4444 / ALPHA_8 makes a big difference in memory use.
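A hedged sketch of both measures together, following the common two-pass decode pattern (reqWidth/reqHeight and the power-of-two sampling policy are illustrative choices, not the only ones):

```java
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class ScaledDecoder {
    // Decode a resource at roughly the requested size instead of full size.
    // reqWidth/reqHeight are the dimensions the UI actually needs.
    public static Bitmap decodeSampled(Resources res, int resId,
                                       int reqWidth, int reqHeight) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inJustDecodeBounds = true;        // pass 1: read dimensions only
        BitmapFactory.decodeResource(res, resId, opts);

        opts.inSampleSize = computeSampleSize(opts, reqWidth, reqHeight);
        // RGB_565 halves memory versus ARGB_8888 when no alpha is needed.
        opts.inPreferredConfig = Bitmap.Config.RGB_565;
        opts.inJustDecodeBounds = false;       // pass 2: decode the pixels
        return BitmapFactory.decodeResource(res, resId, opts);
    }

    static int computeSampleSize(BitmapFactory.Options opts,
                                 int reqWidth, int reqHeight) {
        int sample = 1;
        // Powers of two are handled most efficiently by the decoder.
        while (opts.outWidth / (sample * 2) >= reqWidth
                && opts.outHeight / (sample * 2) >= reqHeight) {
            sample *= 2;
        }
        return sample;
    }
}
```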
4) Use smaller images

When deciding which image resource to use, check whether the image can be compressed further or replaced by a smaller one. Using smaller images wherever possible not only reduces memory use but also avoids many InflateExceptions: when a large image is referenced directly from an XML layout, an OOM during view inflation caused by insufficient memory can surface as an InflateException whose root cause is actually the OOM.

Reuse memory objects

Most object-reuse schemes ultimately rely on object pooling: either write explicit code in the program to create an object pool and handle the reuse logic yourself, or take advantage of reuse features the system framework already provides. Both reduce repeated object creation and hence memory allocation and collection (Figure 9).


Figure 9: Object pooling

One of the most commonly used caching algorithms on Android is LRU (Least Recently Used); Figure 10 shows its basic operation.
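The LRU idea can be sketched in plain Java with a LinkedHashMap in access order; this is a simplified stand-in for Android's android.util.LruCache, with an entry-count capacity instead of a byte budget:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public SimpleLruCache(int maxEntries) {
        // accessOrder = true: each get() moves the entry to the "young" end.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once capacity is exceeded.
        return size() > maxEntries;
    }
}
```

With capacity 2, putting "a" and "b", touching "a", then putting "c" evicts "b" — the least recently used entry.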


Figure 10: How LRU works (simplified)

1) Reuse of the system's own resources

The Android system ships with many built-in resources, such as strings, colors, images, animations, styles, and even simple layouts, all of which can be referenced directly from an application. Doing so reduces the application's own payload and APK size, and to some extent reduces memory cost with good reusability. However, pay attention to differences between Android system versions: if a resource behaves very differently across versions and does not meet your needs, the application still has to bundle its own copy.

2) In views with many repeated subcomponents, such as ListView/GridView, pay attention to reusing the convertView, as shown in Figure 11.
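A minimal getView() sketch of the pattern (the row view here is a bare TextView for brevity; a real adapter would inflate a layout and typically add a ViewHolder):

```java
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.TextView;

public abstract class ReusingAdapter extends BaseAdapter {
    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        if (convertView == null) {
            // Only build a new row when the ListView has no scrapped row
            // to hand back for reuse.
            convertView = new TextView(parent.getContext());
        }
        // Reuse the recycled row: just rebind its data.
        ((TextView) convertView).setText(String.valueOf(getItem(position)));
        return convertView;
    }
}
```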


Figure 11

3) Reuse of bitmap objects

In controls that display large numbers of images, such as ListView and GridView, use the LRU mechanism to cache the decoded bitmaps, as shown in Figure 12.


Figure 12

    • Use the advanced inBitmap feature to improve the efficiency of bitmap allocation and release on Android (note: its usage restrictions differ between 3.0+ and 4.4+). With the inBitmap property you tell the bitmap decoder to try to reuse an existing memory region: the newly decoded bitmap attempts to use the pixel memory already occupied by an old bitmap on the heap, instead of requesting a new region. With this feature, even thousands of images need only as much memory as the number of images the screen can display at once, as shown in Figure 13.
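A hedged sketch of the decoder-side usage (the fall-back-on-IllegalArgumentException policy is one reasonable choice, not the only one):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class InBitmapDecoder {
    // Decode a new image into the pixel memory of an old, no-longer-displayed
    // bitmap instead of allocating a fresh buffer (API 11+; the size/format
    // restrictions are listed below in the text).
    public static Bitmap decodeReusing(byte[] data, Bitmap reusable) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inMutable = true;          // required for inBitmap reuse
        if (reusable != null && reusable.isMutable()) {
            opts.inBitmap = reusable;   // ask the decoder to reuse this memory
        }
        try {
            return BitmapFactory.decodeByteArray(data, 0, data.length, opts);
        } catch (IllegalArgumentException e) {
            // The reusable bitmap did not satisfy the size/format constraints;
            // fall back to a normal allocation.
            opts.inBitmap = null;
            return BitmapFactory.decodeByteArray(data, 0, data.length, opts);
        }
    }
}
```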


Figure 13: Improving bitmap allocation and release efficiency with the inBitmap feature

There are several restrictions to be aware of when using inBitmap:

    • Between SDK 11 and 18, the reused bitmap must be exactly the same size. For example, if the bitmap assigned to inBitmap is 100x100, the newly decoded bitmap must also be 100x100 to be reusable. Starting with SDK 19, the newly decoded bitmap only needs to be smaller than or equal to the already-allocated one.
    • The newly decoded bitmap must have the same decoding format as the old bitmap, for example both ARGB_8888. If the previous bitmap is ARGB_8888, it cannot be reused by ARGB_4444 or RGB_565 bitmaps. We can create an object pool containing several typical reusable bitmaps, so that later bitmap creation can find a suitable "template" to reuse, as shown in Figure 14.


Figure 14

Also, on 2.x systems, although bitmap pixel data is allocated in the native layer, it still counts toward the OOM reference counter. A side note: many applications use reflection to set inNativeAlloc on BitmapFactory.Options in order to expand usable memory, but doing so has a negative impact on the system as a whole; adopt it with caution.

4) Avoid creating objects in onDraw()

In frequently called methods such as onDraw(), avoid creating objects: allocations there rapidly increase memory use, easily trigger frequent GC, and can even cause memory churn.

5) StringBuilder

In code that performs heavy string concatenation, consider using StringBuilder in place of frequent "+" operations.
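A small illustration (the join() helper, separator handling, and strings are all illustrative):

```java
public class Concat {
    // Concatenating in a loop with "+" allocates a fresh StringBuilder and
    // String per iteration; one explicit StringBuilder reuses a single buffer.
    public static String join(String[] parts, String sep) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) sb.append(sep);
            sb.append(parts[i]);
        }
        return sb.toString();
    }
}
```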

Avoid object memory leaks

A memory leak means objects no longer in use cannot be released in time; they occupy valuable memory and easily lead to OOM when there is not enough free space to satisfy an allocation. Leaks also shrink the effective memory region of each generation, so GC triggers more easily, causing memory churn and performance problems (Figure 15).


Figure 15

The recently released LeakCanary open-source library can help us detect memory leaks; for more on LeakCanary, see here (Chinese usage instructions). You can also use the traditional MAT tool to find memory leaks; refer here (convenient Chinese material).

1) Be aware of Activity leaks

In general, Activity leaks are the most serious kind of memory leak: an Activity occupies a lot of memory and affects a wide range of objects. Pay special attention to the following two scenarios that cause Activity leaks:

    • Inner-class references cause Activity leaks

The most typical scenario is an Activity leak caused by a Handler. If the Handler has delayed tasks, or its queue of pending tasks is too long, the Activity may leak because the Handler keeps running. The reference chain is Activity -> Handler -> Message -> MessageQueue -> Looper. To solve this, remove the messages and Runnable objects from the Handler's message queue before the UI exits, or use static + WeakReference to break the reference from the Handler to the Activity.
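A sketch of the static + WeakReference pattern combined with clearing the queue on exit (the Activity and message handling are placeholders):

```java
import android.app.Activity;
import android.os.Handler;
import android.os.Message;
import java.lang.ref.WeakReference;

public class SafeHandlerActivity extends Activity {
    // Static: holds no implicit reference to the enclosing Activity.
    private static class SafeHandler extends Handler {
        private final WeakReference<SafeHandlerActivity> ref;

        SafeHandler(SafeHandlerActivity activity) {
            ref = new WeakReference<>(activity);
        }

        @Override
        public void handleMessage(Message msg) {
            SafeHandlerActivity activity = ref.get();
            if (activity == null) return;  // Activity already collected
            // ... update the Activity's UI here ...
        }
    }

    private final SafeHandler handler = new SafeHandler(this);

    @Override
    protected void onDestroy() {
        // Also drop any pending messages/Runnables before the UI goes away.
        handler.removeCallbacksAndMessages(null);
        super.onDestroy();
    }
}
```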

    • Passing the Activity context to other instances may cause the Activity itself to be referenced and leaked.

Leaks caused by inner classes occur not only in Activities but in any class with inner classes, so special attention is needed everywhere. Prefer static inner classes, combined with the WeakReference mechanism, to avoid leaks caused by mutual references.

2) Consider using application context instead of activity context

In most situations where an Activity context is not strictly required (a Dialog's context, however, must be an Activity context), consider using the Application context instead of the Activity context; this avoids inadvertent Activity leaks.

3) Reclaim temporary Bitmap objects promptly

Although in most cases we add a caching mechanism for bitmaps, some bitmaps should be reclaimed promptly. For example, after a temporarily created, relatively large bitmap has been transformed into a new bitmap, the original should be recycled as soon as possible so that its space is freed sooner.

Pay special attention to the createBitmap() method provided by the Bitmap class, shown in Figure 16:


Figure 16: The createBitmap() method

The bitmap returned by this function may be the same object as the source bitmap. When recycling, check whether the source bitmap and the returned bitmap are the same reference, and call the source bitmap's recycle() method only when they are not equal.
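A sketch of that check (rotation is just an example transform):

```java
import android.graphics.Bitmap;
import android.graphics.Matrix;

public class BitmapTransform {
    // createBitmap() may return the source itself (e.g. for an identity
    // transform), so only recycle the source when a genuinely new bitmap
    // came back.
    public static Bitmap rotated(Bitmap source, float degrees) {
        Matrix m = new Matrix();
        m.postRotate(degrees);
        Bitmap result = Bitmap.createBitmap(
                source, 0, 0, source.getWidth(), source.getHeight(), m, true);
        if (result != source) {
            source.recycle();  // safe: result owns its own pixel memory
        }
        return result;
    }
}
```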

4) Note the unregistration of listeners

Many listeners in an Android program need to be registered and unregistered; make sure each listener is unregistered at the right time. Whenever you add a listener manually, remember to remove it in time.

5) Watch for object leaks in cache containers

Sometimes we put objects into a cache container to improve reusability, but if those objects are not purged from the container in time, a memory leak can result. For example, on 2.3 systems, adding a drawable to a cache container easily leaks the Activity, because the drawable holds a strong reference to its View. From 4.0 onward this problem no longer exists. To solve it on 2.3 systems, wrap the cached drawables specially and unbind the references to avoid the leak.

6) Watch for WebView leaks

Android's WebView has many compatibility problems: not only does WebView behave very differently across Android versions, but WebViews in ROMs shipped by different manufacturers also differ greatly. More seriously, the standard WebView has a memory-leak problem (see here). The usual workaround is to run the WebView in a separate process, communicating with the main process via AIDL; the WebView process can then be destroyed at an appropriate time according to business needs, fully releasing its memory.

7) Check whether Cursor objects are closed in time

Programs often query the database, but it is easy to forget to close the Cursor afterward. Such Cursor leaks, if they happen repeatedly, have a strongly negative impact on memory management; remember to close Cursor objects promptly.
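A minimal sketch of the close-in-finally pattern (the table and query are hypothetical; on API 16+ try-with-resources also works, since Cursor implements Closeable there):

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public class UserDao {
    // try/finally guarantees the Cursor is closed even if reading throws.
    public static int countUsers(SQLiteDatabase db) {
        Cursor cursor = db.rawQuery("SELECT COUNT(*) FROM users", null);
        try {
            return cursor.moveToFirst() ? cursor.getInt(0) : 0;
        } finally {
            cursor.close();
        }
    }
}
```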

Memory usage policy optimization

1) Use a large heap sparingly

As mentioned earlier, Android devices have different amounts of memory depending on hardware and software configuration, and they set different heap-limit thresholds for applications. You can get your app's available heap size by calling getMemoryClass(). In some special scenarios you can request a larger heap by adding the largeHeap="true" attribute to the application tag in the manifest, and then query the larger threshold with getLargeMemoryClass(). However, a larger heap is intended for the small number of applications that legitimately consume large amounts of RAM (such as an editor for very large images). Do not request a large heap lightly just because you need more memory; use it only when you know exactly where the memory goes and why it must be retained. Extra memory use hurts the overall user experience of the system, makes each GC run longer, and degrades performance during task switching. In addition, a large heap does not guarantee a larger heap: on some severely constrained devices the large-heap size equals the normal heap size. So even after requesting a large heap, check the actual heap size by calling getMemoryClass().
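A small sketch of querying both thresholds at runtime (the log tag is arbitrary):

```java
import android.app.ActivityManager;
import android.content.Context;
import android.util.Log;

public class HeapInfo {
    // Query the per-app heap limits instead of assuming them; on constrained
    // devices the "large" value may equal the normal one.
    public static void logHeapLimits(Context context) {
        ActivityManager am = (ActivityManager)
                context.getSystemService(Context.ACTIVITY_SERVICE);
        int normalMb = am.getMemoryClass();       // default per-app heap, in MB
        int largeMb = am.getLargeMemoryClass();   // heap when largeHeap="true"
        Log.d("HeapInfo", "heap=" + normalMb + "MB largeHeap=" + largeMb + "MB");
    }
}
```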

2) Design appropriate cache sizes based on the device memory threshold and other factors

For example, when designing the bitmap LRU cache for a ListView or GridView, the points to consider are:

    • How much memory space does the application have left?
    • How many images will be rendered to the screen at once? How many images need to be cached in advance so that they can be instantly displayed to the screen when quickly sliding?
    • What is the screen size and density of the device? An xhdpi device needs a larger cache than an hdpi one to hold the same number of images.
    • What sizes and configurations of bitmaps do the different pages use, and roughly how much memory will they take?
    • How often are the page's images accessed? Are some accessed far more frequently than others? If so, you may want to keep the most frequently accessed ones in memory, or set up multiple LruCache containers for different groups of bitmaps (grouped by access frequency).
3) onLowMemory() and onTrimMemory()

Android users switch quickly between different apps. To let background applications return to the foreground quickly, each background application keeps consuming a certain amount of memory. Depending on current memory pressure, the Android system decides to reclaim some background applications' memory. A background app restored directly from the paused state returns faster; one restored after being killed is noticeably slower, as shown in Figure 17.


Figure 17: Restoring from the killed state is slower

    • onLowMemory(): a callback the Android system provides to report current memory pressure. Generally, by the time all background applications have been killed, the foreground application receives the onLowMemory() callback. At that point the application should release non-essential memory resources as soon as possible to keep the system running stably.
    • onTrimMemory(int): starting from 4.0 the Android system also provides the onTrimMemory() callback. When system memory reaches certain conditions, all running applications receive it, with a parameter indicating the current memory state. On receiving onTrimMemory(), judge by the parameter value and choose sensibly which of your own memory to release; this both improves overall system smoothness and helps avoid being classified as a process to kill first.
    • TRIM_MEMORY_UI_HIDDEN: all of your application's UI is hidden, i.e. the user pressed the Home or Back key to exit the app, making its UI completely invisible. At this point you should release resources that are unnecessary while the app is not visible.

While the program is running in the foreground, onTrimMemory() may deliver one of the following values:

    • TRIM_MEMORY_RUNNING_MODERATE: your app is running and not listed to be killed. However, the device is running low on memory, and the system is starting to trigger the mechanism that kills processes in the LRU cache.
    • TRIM_MEMORY_RUNNING_LOW: your app is running and not listed to be killed, but the device is running even lower on memory; you should free unused resources to improve system performance.
    • TRIM_MEMORY_RUNNING_CRITICAL: your app is still running, but the system has already killed most of the processes in the LRU cache, so you should release all non-essential resources immediately. If the system cannot reclaim enough RAM, it will clear all processes in the LRU cache and begin killing processes previously considered safe, such as one hosting a running Service.

When the application process has retreated to the background and is being cached, onTrimMemory() may deliver one of the following values:

    • TRIM_MEMORY_BACKGROUND: the system is low on memory and your process sits near the beginning of the LRU cache list. Although your process is not at high risk of being killed, the system may already be killing other processes in the LRU cache. You should release resources that are easy to recover, so your process can be kept and restored quickly when the user returns to your app.
    • TRIM_MEMORY_MODERATE: the system is low on memory and your process is near the middle of the LRU list. If memory pressure grows, your process may be killed.
    • TRIM_MEMORY_COMPLETE: the system is low on memory and your process is in the position most likely to be killed on the LRU list. You should release any resources that do not affect your app's ability to restore its state.


Because the onTrimMemory() callback was added in API 14, you can use the onLowMemory() callback for compatibility with older versions; onLowMemory() is equivalent to TRIM_MEMORY_COMPLETE.

Note: when the system starts purging processes from the LRU cache, it primarily works in LRU order, but it also considers each process's memory usage and other factors. Processes that occupy less memory are more likely to be kept.
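A hedged sketch of acting on these levels from the Application class (the imageCache calls are commented placeholders for whatever caches the app actually owns):

```java
import android.app.Application;
import android.content.ComponentCallbacks2;

public class MyApp extends Application {
    @Override
    public void onTrimMemory(int level) {
        super.onTrimMemory(level);
        if (level >= ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN) {
            // UI no longer visible: drop resources only needed while visible,
            // e.g. trim an image cache to half its budget.
            // imageCache.trimToSize(imageCache.maxSize() / 2);
        }
        if (level >= ComponentCallbacks2.TRIM_MEMORY_MODERATE) {
            // Process is deep in the LRU list: release everything that can
            // be rebuilt, to stay cached and restore quickly.
            // imageCache.evictAll();
        }
    }

    @Override
    public void onLowMemory() {
        super.onLowMemory();
        // Pre-API-14 equivalent of TRIM_MEMORY_COMPLETE.
        // imageCache.evictAll();
    }
}
```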

4) Resource files need to select the appropriate folder for storage

We know about hdpi/xhdpi/xxhdpi and so on; images in the different DPI folders are scaled on different devices. For example, if we place a 100x100 image only in the hdpi directory, then by the conversion ratio an xxhdpi phone referencing that image will stretch it to 200x200. Note that in this case memory consumption rises significantly. Images that should not be stretched need to be placed in the assets or nodpi directory.

5) Try-catch some large memory allocations

In some cases we should evaluate in advance which code may OOM, wrap that code in a catch mechanism, and consider attempting a degraded memory allocation inside the catch. For example, when decoding a bitmap, if you catch an OOM you can double the sampling ratio and try to decode again.
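A sketch of that degraded retry (the cap of 8 on inSampleSize is an arbitrary give-up point):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class TolerantDecoder {
    // Degraded-allocation sketch: on OOM, double inSampleSize and retry,
    // trading resolution for a survivable allocation.
    public static Bitmap decode(byte[] data) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = 1;
        while (opts.inSampleSize <= 8) {   // give up after a few degradations
            try {
                return BitmapFactory.decodeByteArray(data, 0, data.length, opts);
            } catch (OutOfMemoryError e) {
                opts.inSampleSize *= 2;    // quarter the pixel count and retry
            }
        }
        return null;
    }
}
```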

6) Use static objects with caution

Because a static object's lifecycle is as long as the application process, improper use easily leads to object leaks; static objects should be used with caution in Android (Figure 19).


Figure 19

7) Pay special attention to singleton objects holding references unreasonably

Although the singleton pattern is simple, practical, and convenient, the lifecycle of a singleton matches the application's, so improper use very easily causes the objects it holds to leak.

8) Use Service resources sparingly

If your app needs a background service, the service should be stopped except when it is triggered and performing a task. Also watch for memory leaks caused by failing to stop the service after its task completes. When you start a service, the system tends to keep its process alive, which makes the process expensive: the system cannot give the RAM the service occupies to other components, and the service cannot be paged out. This reduces the number of processes the system can keep in the LRU cache, hurts app-switching efficiency, and may even destabilize system memory usage so that not all currently running services can be maintained. It is recommended to use IntentService, which ends itself as soon as it finishes the task handed to it. For more information, read Running in a Background Service.

9) Optimize layout level, reduce memory consumption

The flatter a view layout is, the less memory it uses and the more efficient it is. Keep layouts as flat as possible, and consider a custom View when the system-provided views cannot achieve the goal with a flat enough layout.

10) Careful use of "abstract" programming

Developers often use abstract classes as "good programming practice," because abstraction improves the flexibility and maintainability of code. However, abstraction carries significant additional memory overhead: it requires more code to execute, and that code is mapped into memory. If your abstraction does not bring a significant benefit, you should avoid it.

11) Use nano protobufs to serialize data

Protocol Buffers, designed by Google for serializing structured data, is language-neutral, platform-neutral, and extensible. It is like XML, but lighter, faster, and simpler. If you need to serialize your data with a protocol, use the nano protobufs. For details, refer to the "Nano version" section of the protobuf readme.

12) Use the Dependency injection framework sparingly

Using a framework such as Guice or RoboGuice to inject code simplifies your code to some extent. Figure 20 compares the code before and after using RoboGuice:


Figure 20: Before and after using RoboGuice

After RoboGuice the code is much simpler. However, these injection frameworks perform many initializations by scanning your code, which causes a lot of your code to be mapped into memory and keeps those mapped pages resident for a long time. Unless it is truly necessary, use this technique with caution.

13) Use multiple processes with caution

Multiple processes let you run some components of an application in a separate process, expanding the application's available memory. But this technique must be used with caution: most applications should not use multiple processes. On one hand, multiple processes complicate the code logic; on the other, improper use can significantly increase memory consumption instead. Consider this technique when your application must run a long-lived background task that is not lightweight.

A typical example is a music player that plays in the background for a long time. If the entire application runs in one process, the foreground UI resources cannot be released while the background is playing. Such an application can be split into two processes: one for the UI, the other for the background service.

14) Use Proguard to remove unwanted code

ProGuard compresses, optimizes, and obfuscates code by removing unused code and renaming classes, fields, and methods. Using ProGuard makes your code more compact, which reduces the memory needed for mapping the code.

15) Use third-party libraries with caution

Much open-source library code was not written for the mobile environment and is not necessarily suitable on mobile devices. Even a library designed for Android needs special care, especially when you do not know exactly what it does internally. For example, one library may use nano protobufs while another uses micro protobufs, leaving your application with two protobuf implementations. Similar conflicts can arise in logging, image loading, caching, and so on. Also, do not import a whole library for one or two features; if no library fits your needs well, consider implementing it yourself rather than importing a bloated solution.

16) Consider different implementations to optimize memory consumption

In some cases one design can meet the requirements quickly, but it may not be efficient in memory consumption. For example:


Figure 21

The simplest implementation uses many dial images, each containing the pointer, and rotates the pointer with a frame animation. But if you separate the pointer out and rotate it alone, that obviously uses far less memory than loading N images. Of course, this increases code complexity; there is a tradeoff between optimizing memory footprint and ease of implementation.

Summary
    • Design style affects a program's memory use and performance to a large extent. Relatively speaking, a style like Material Design can not only shrink the installation package but also reduce the memory footprint, with some gains in rendering and loading performance.
    • Memory optimization does not mean the less memory the better. If keeping a low footprint causes GC to trigger frequently, overall application performance can degrade; a comprehensive balance is needed.
    • Android memory optimization involves a lot of knowledge: memory-management details, how garbage collection works, how to find memory leaks, and so on could each be expanded at length. OOM is a prominent point in memory optimization, and minimizing its probability is of great significance.

Finally, I am honored to have received CSDN's invitation to attend the MDCC 2015 China Mobile Developer Conference. I have followed this top event for domestic mobile-internet developers since the first MDCC, not only sensing the wave of mobile internet from the speeches of executives and founders at the conference, but also learning a great deal of solid material from the many technical talks. I hope to meet at this MDCC the technical experts I know online, and to exchange and learn with more mobile developers. I wish MDCC 2015 complete success!

Other references
      • Google I/O 2011: Memory Management for Android Apps (may require a proxy to access)
      • Managing Your App's Memory
      • Avoiding Memory Leaks
      • Android Performance Optimization Model (Part 3)
      • Android Performance Optimization Model (Part 2)
      • Android Performance Optimization Model
      • Android Performance Optimization: Memory
