Android Memory Optimization: Avoiding OOM

Source: Internet
Author: User
Tags: try catch

Memory optimization is an important part of Android performance work, and avoiding OOM is central to it. This article summarizes practical techniques for avoiding OOM, drawn mostly from hands-on experience. If you spot any errors or misunderstandings, corrections are welcome.

I. Android's memory management mechanism

Google has an article on the Android developer site that introduces how Android manages processes and memory allocation for apps: http://developer.android.com/training/articles/memory.html. The Dalvik virtual machine handles routine garbage collection automatically, and the Android system provides no swap area for memory; instead it manages memory through paging and memory-mapping (mmapping). Below is a brief overview of some important memory-management fundamentals in Android.

1) Shared memory

The Android system implements shared memory in several ways:

    • Every Android app process is forked from a process called zygote. The zygote process starts when the system boots and preloads the code and resources of the common framework. To start a new app, the system forks zygote and then loads and runs the app's code in the new process. This lets most of the RAM pages holding framework code and resources be shared across all app processes.
    • Most static data is mmapped into a process. This not only lets the same data be shared between processes but also allows it to be paged out when needed. Typical static data includes Dalvik code, app resources, and native .so libraries.
    • In many places, Android shares dynamic RAM between processes through explicitly allocated shared memory regions (such as ashmem or gralloc). For example, window surfaces use memory shared between the app and the screen compositor, and cursor buffers use memory shared between a content provider and its clients.
2) Allocating and reclaiming memory
    • The Dalvik heap of each process reflects its memory footprint. This is the commonly cited Dalvik heap size; it can grow as needed, but its growth is capped by a limit set by the system.
    • The logical heap size and the physical memory actually used are not equal. The Proportional Set Size (PSS) accounts for the pages used by the app itself plus a proportional share of the pages it shares with other processes.
    • The Android system does not defragment the free space in the heap. Before a new allocation, it only checks whether the free space at the end of the heap is sufficient; if not, a GC is triggered to free up more space. On newer Android versions, the heap uses a generational model: newly allocated objects go into the young generation; objects that survive there long enough are promoted to the old generation, and eventually to the permanent generation. Each generation is collected differently: objects in the young generation tend to die soon and are cheap to collect, so young-generation GCs are faster than old-generation GCs.

Each generation has a fixed size. As new objects are allocated into a generation and its total size approaches that generation's threshold, a GC is triggered to make room for new objects.

Typically, all threads are paused while a GC runs. How long a GC takes depends on which generation it runs in: young-generation GCs are the fastest, old-generation GCs slower, and permanent-generation GCs the slowest. Duration also depends on how many live objects the generation holds: traversing a tree of 20,000 objects is much slower than traversing 50.

3) Limits on your app's memory
    • To keep the system as a whole under control, Android sets a hard upper limit on the Dalvik heap size of every application. The exact threshold varies with how much RAM the device has. If your app's memory usage approaches this threshold, any further allocation can easily trigger an OutOfMemoryError.
    • ActivityManager.getMemoryClass() can be used to query the current app's heap limit; it returns an integer indicating the limit in megabytes.
4) Switching between apps
    • Android does not swap memory out when the user switches apps. Instead, it keeps processes that contain no foreground components in an LRU cache. For example, when the user first launches an app, the system creates a process for it; when the user leaves, the process is not destroyed immediately but kept in the cache, so that if the user switches back later the process can be restored quickly and completely. This is what makes app switching fast.
    • A cached process still occupies some memory, which affects overall system performance. When the system starts running low on memory, it decides which cached processes to kill based on LRU order combined with other factors such as each process's priority and memory consumption.
    • For how the system decides which non-foreground processes to kill, see the official Processes and Threads documentation.
II. OOM (OutOfMemoryError)

As mentioned earlier, the Dalvik heap threshold can be obtained with getMemoryClass(). For a quick look at an app's memory footprint, see the examples below (for more on inspecting memory, see the official tutorial Investigating Your RAM Usage).

1) View Memory usage
    • To view memory detail usage from the command line:

    • To watch the Dalvik heap change in real time, use Android Studio's Memory Monitor.
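
The command-line inspection mentioned above is typically done with dumpsys; the package name below is a placeholder for your own app's package:

```shell
# Detailed memory breakdown (PSS, Dalvik/native heap, etc.) for one app
adb shell dumpsys meminfo com.example.app

# Device-wide memory summary across all processes
adb shell dumpsys meminfo
```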

2) Conditions under which OOM occurs

The details of the native heap, Dalvik heap, PSS, and other memory accounting are complex and not expanded here. Briefly: different allocations (malloc/mmap/JNI/etc.) for different objects (bitmaps, etc.) behave differently across Android versions, and the state of the native heap and Dalvik heap affects when OOM occurs. On 2.x systems, you can often see the total heap size clearly exceed the getMemoryClass() threshold without an OOM. So how do you know whether an OOM is about to happen on the 2.x and 4.x Android systems?

    • On Android 2.x, OOM occurs when dalvik allocated + external allocated + newly requested size >= the getMemoryClass() value. For example, given this Dalvik GC log output: GC_FOR_MALLOC freed ..., 13% free 32586K/37455K, external 8989K/10356K, paused 20ms — then 32586 + 8989 + (a new request of 23975) = 65550K, which exceeds the 64 MB (65536K) limit, and OOM occurs.

    • Android 4.x removed the external counter; allocations such as bitmaps are made on the Dalvik Java heap instead. OOM occurs as soon as allocated + newly allocated memory >= the getMemoryClass() value. (The same accounting rules apply under the ART runtime as under Dalvik.)
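
The 2.x bookkeeping above is simple arithmetic; this plain-Java sketch (not real framework code — just an illustration using the values from the GC log example) makes the check explicit:

```java
public class OomCheck {
    // Android 2.x rule of thumb: dalvik allocated + external allocated
    // + the new request must stay below the getMemoryClass() limit.
    static boolean wouldOom(int dalvikKb, int externalKb, int requestKb, int limitMb) {
        return dalvikKb + externalKb + requestKb >= limitMb * 1024;
    }

    public static void main(String[] args) {
        // From the GC log: 32586K dalvik + 8989K external + 23975K new request
        // = 65550K, which exceeds the 64 MB (65536K) limit, so OOM occurs.
        System.out.println(wouldOom(32586, 8989, 23975, 64));
    }
}
```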

III. How to avoid OOM: a summary

With the memory-management mechanisms and OOM basics covered, what guiding rules can you follow in practice? They fall into four areas: first, reduce the memory consumed by objects; second, reuse memory objects; third, avoid leaking objects; and finally, optimize your overall memory usage strategy.

Reduce the memory footprint of an object

The first step in avoiding OOM is to minimize the memory allocated for new objects and to use lighter-weight objects wherever possible.

1) Use more lightweight data structures

For example, consider using ArrayMap/SparseArray instead of a traditional data structure such as HashMap. ArrayMap was written specifically for mobile by the Android team, and in most mobile scenarios it is more memory-efficient than HashMap: a typical HashMap implementation consumes more memory because it needs an extra entry object for every mapping. SparseArray goes further by keying on primitive ints, avoiding the automatic boxing of keys (and, in variants such as SparseBooleanArray, of values as well).

For more discussion of ArrayMap/SparseArray, see the first three sections of http://hukai.me/android-performance-patterns-season-3/

2) Avoid using enums in Android

The official Android training course notes: "Enums often require more than twice as much memory as static constants. You should strictly avoid using enums on Android." For the underlying reasons see http://hukai.me/android-performance-patterns-season-3/. In short, avoid enums in Android code.

3) Reduce memory consumption of bitmap objects

Bitmaps are extremely memory-hungry, so it is important to minimize the footprint of every bitmap you create. Two measures are usually applied:

    • inSampleSize: scale down while decoding. Before loading an image into memory, compute an appropriate sampling ratio to avoid loading a needlessly large bitmap.
    • Decode format: the choice among ARGB_8888, RGB_565, ARGB_4444, and ALPHA_8 makes a big difference in bytes per pixel.
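
The inSampleSize calculation can be sketched as below. This is modeled on the pattern from the official bitmap-loading training material, but written as plain Java so it stands alone (in a real app you would assign the result to BitmapFactory.Options.inSampleSize before decoding):

```java
public class SampleSize {
    // Returns a power-of-two sample size so the decoded image is
    // at least as large as reqWidth x reqHeight, but no larger than needed.
    static int calculateInSampleSize(int rawWidth, int rawHeight,
                                     int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (rawHeight > reqHeight || rawWidth > reqWidth) {
            final int halfHeight = rawHeight / 2;
            final int halfWidth = rawWidth / 2;
            // Keep doubling while both dimensions stay above the request.
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // A 2048x1536 photo decoded for a 640x480 slot only needs 1/4 the pixels.
        System.out.println(calculateInSampleSize(2048, 1536, 640, 480)); // 2
    }
}
```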
4) Use smaller images

When designing resource images, pay attention to whether an image can be compressed further or replaced with a smaller one. Using smaller images not only reduces memory usage but also avoids many InflateExceptions: if a large image is referenced directly from an XML layout, an OOM during view inflation can surface as an InflateException whose root cause is insufficient memory.

Re-use of memory objects

Most object reuse ultimately comes down to object-pool techniques: either explicitly create a pool in your code and implement the reuse logic yourself, or take advantage of the reuse facilities the framework already provides. Either way, the goal is to avoid creating duplicate objects and thereby reduce allocation and collection work.

One of the most commonly used caching algorithms on Android is LRU (Least Recently Used): the cache holds a bounded set of entries, and when it is full, the entry that has gone unused the longest is evicted first.
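
A minimal LRU cache can be sketched in plain Java using LinkedHashMap in access order; Android's android.util.LruCache works on the same principle, with the addition that it usually measures capacity in bytes rather than entry count:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        // accessOrder = true: iteration runs from least- to most-recently used.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once capacity is exceeded.
        return size() > maxEntries;
    }
}
```

A get() counts as a "use", so a recently read entry survives eviction longer than one that was merely inserted.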

1) Reuse of the system's own resources

The Android system itself ships with many resources, such as strings, colors, images, animations, styles, and simple layouts, which can be referenced directly from your application. Doing so reduces your app's own load and APK size, and to some extent reduces memory usage, with good reusability. Keep an eye on version differences, however: where a system resource varies too much across system versions or does not meet your needs, bundle the resource in the app itself.

2) In views with many repeated subviews, such as ListView/GridView, make sure to reuse the convertView passed to the adapter

3) Reuse of bitmap objects
    • In controls that display many images, such as ListView and GridView, use an LRU mechanism to cache decoded bitmaps.

    • Use the advanced inBitmap feature to improve the efficiency of bitmap allocation and release (its usage restrictions differ between Android 3.0+ and 4.4+). With inBitmap you tell the bitmap decoder to try to reuse an existing bitmap's memory: the newly decoded bitmap reuses the pixel-data region occupied by the old bitmap on the heap instead of requesting a fresh allocation. With this feature, even thousands of images only need as much memory as the number of images that fit on screen at once.

There are several restrictions to be aware of when using inBitmap:

    • From Android 3.0 (SDK 11) through SDK 18, the reused bitmap must be exactly the same size: if inBitmap points at a 100x100 bitmap, the newly decoded bitmap must also be 100x100 to reuse it. Starting with SDK 19, the new bitmap only needs to be smaller than or equal to the already-allocated bitmap.
    • The new bitmap must have the same decode format as the old one, e.g. both ARGB_8888; a bitmap previously decoded as ARGB_8888 cannot be reused for an ARGB_4444 or RGB_565 decode. We can maintain an object pool containing several typical reusable bitmaps, so that later bitmap creations can find a suitable "template" to reuse.
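
The size-matching rule can be mimicked with a plain-Java pool sketch. Bitmap is replaced here by a stand-in byte buffer, and the `strictSizeMatch` flag models the pre-SDK-19 exact-size behavior described above (this is an analogy, not the real BitmapFactory API):

```java
import java.util.ArrayList;
import java.util.List;

public class ReusePool {
    private final List<byte[]> free = new ArrayList<>();
    private final boolean strictSizeMatch; // true models SDK 11-18 behavior

    public ReusePool(boolean strictSizeMatch) {
        this.strictSizeMatch = strictSizeMatch;
    }

    public void release(byte[] buffer) { free.add(buffer); }

    // Returns a reusable buffer for the requested size, or null if none fits.
    public byte[] acquire(int size) {
        for (byte[] b : free) {
            boolean fits = strictSizeMatch ? b.length == size : b.length >= size;
            if (fits) {
                free.remove(b);
                return b;   // reuse: no fresh allocation needed
            }
        }
        return null;        // caller must allocate a fresh buffer
    }
}
```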

One more note: on 2.x systems, although bitmap pixel data lives in the native layer, it is still counted against the OOM limit. Many applications use reflection to set the inNativeAlloc field on BitmapFactory.Options to stretch their memory usage, but doing so has a negative impact on the system as a whole; adopt it with caution.

4) Avoid creating objects in onDraw()

In frequently called methods such as onDraw(), it is important to avoid creating objects, because doing so quickly inflates memory usage, triggers frequent GCs, and can even cause memory churn.
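
The fix is simply to hoist allocations out of the hot method into fields. A plain-Java stand-in for a View (the class and method names here are hypothetical; in a real View the reused objects would typically be Paint, Rect, or Path instances):

```java
public class ChartView {
    // Allocated once, reused on every draw call instead of inside onDraw().
    private final float[] points = new float[64];
    private final StringBuilder label = new StringBuilder();

    private long drawCalls;

    // Stand-in for View.onDraw(Canvas): no "new" on this hot path.
    public void onDraw() {
        label.setLength(0);                       // reset, don't reallocate
        label.append("frame ").append(drawCalls++);
        // ... fill `points` and draw them ...
    }

    public String currentLabel() { return label.toString(); }
}
```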

5) StringBuilder

Where code performs a large amount of string concatenation, consider using StringBuilder in place of repeated "+" operations.
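
For example, building a string in a loop:

```java
public class Concat {
    // Each "+=" copies the whole accumulated string, creating a fresh
    // String (plus a temporary builder) on every iteration.
    static String slowJoin(String[] parts) {
        String s = "";
        for (String p : parts) s += p;
        return s;
    }

    // One builder, one backing buffer that grows with amortized cost.
    static String fastJoin(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) sb.append(p);
        return sb.toString();
    }
}
```

Both produce the same result; the difference is in how many intermediate objects get allocated along the way.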

Avoid memory leaks for objects

A leaked object is one that is no longer used but cannot be released in time; it occupies valuable memory and easily leads to OOM when no free space remains for new allocations. Leaks also shrink the effective space of each generation, so GCs trigger more easily, memory churn becomes more likely, and performance suffers.

The LeakCanary open-source library can help detect memory leaks; see https://github.com/square/leakcanary (Chinese usage notes: http://www.liaohuqiu.net/cn/posts/leak-canary-read-me/). You can also find leaks with the traditional MAT tool; see http://android-developers.blogspot.pt/2011/03/memory-analysis-for-android.html (Chinese material: http://androidperformance.com/2015/04/11/AndroidMemory-Usage-Of-MAT/).

1) Be aware of Activity leaks

In general, Activity leaks are the most serious kind of memory leak: an Activity holds a lot of memory and has a wide impact. Pay special attention to the following two scenarios:

    • Inner-class references causing Activity leaks

The most typical case is an Activity leaked through a Handler: if the Handler has delayed tasks, or its queue of pending messages is long, the Activity can be kept alive for as long as messages reference the Handler. The reference chain is Activity -> Handler -> Message -> MessageQueue -> Looper. To fix this, remove the pending messages and Runnables from the Handler's message queue before the UI exits (e.g. handler.removeCallbacksAndMessages(null)), or use a static inner class plus a WeakReference to break the reference from the Handler back to the Activity.
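
The static-plus-WeakReference idea looks like this in plain Java. Handler and Activity are replaced by stand-in names so the sketch compiles without the Android SDK; in a real app, SafeHandler would extend android.os.Handler and handleMessage would take a Message:

```java
import java.lang.ref.WeakReference;

public class SafeHandlerDemo {
    // Stand-in for an Activity with some UI work to do.
    static class FakeActivity {
        boolean updated;
        void update() { updated = true; }
    }

    // A *static* nested class holds no implicit reference to an outer instance,
    // so the handler alone can never keep the activity alive.
    static class SafeHandler {
        private final WeakReference<FakeActivity> ref;

        SafeHandler(FakeActivity activity) {
            this.ref = new WeakReference<>(activity);
        }

        void handleMessage() {
            FakeActivity a = ref.get();
            if (a == null) return;  // activity already collected: drop the work
            a.update();
        }
    }
}
```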

    • Passing the Activity context to other instances, causing the Activity itself to be referenced and leaked.

Inner-class leaks are not limited to Activities; any inner class deserves the same attention. Prefer static inner classes, combined with the WeakReference mechanism, to avoid leaks caused by mutual references.

2) Consider using the Application context instead of the Activity context

In most situations that do not require an Activity context (a Dialog's context must be an Activity context), consider using the Application context instead of the Activity context, so that an inadvertent Activity leak becomes impossible.

3) Recycle temporary Bitmap objects promptly

While we usually add caching for bitmaps, some bitmaps should instead be reclaimed promptly. For example, when a relatively large temporary bitmap is transformed into a new bitmap, the original should be recycled as soon as possible so its memory is released quickly.

Pay special attention to the createBitmap() method of the Bitmap class:

The bitmap returned by this method may be the very same object as the source bitmap. When recycling, check whether the source and the returned bitmap are the same reference, and call the source's recycle() only when they are not equal.

4) Unregister listeners

Many listeners in an Android program require register and unregister calls; make sure each listener is unregistered at the right time. Whenever you add a listener manually, remember to remove it promptly.
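
The pattern is to pair register/unregister with a lifecycle, keeping the listener in a field so the same instance can be removed later. A plain-Java event source stands in for a real Android broadcast or sensor API here (the BatteryMonitor name is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class BatteryMonitor {
    public interface Listener { void onLevel(int percent); }

    private final List<Listener> listeners = new ArrayList<>();

    // Call from onResume(); pair every register with an unregister.
    public void register(Listener l)   { listeners.add(l); }
    // Call from onPause(); removing the listener drops the reference
    // chain back to the Activity, so it can be collected.
    public void unregister(Listener l) { listeners.remove(l); }

    public int listenerCount() { return listeners.size(); }

    public void publish(int percent) {
        // Copy so listeners may unregister themselves during dispatch.
        for (Listener l : new ArrayList<>(listeners)) l.onLevel(percent);
    }
}
```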

5) Watch for object leaks in cache containers

Sometimes we put objects into a cache container to improve reusability, but if those objects are never purged from the container, they can cause a memory leak. For example, on 2.3 systems, adding a Drawable to a cache can easily leak the Activity, because the Drawable holds a strong reference to its callback View. From 4.0 onward this problem no longer exists. On 2.3 systems, cached Drawables need special wrapping to unbind these references and avoid the leak.

6) Watch for WebView leaks

Android's WebView has many compatibility problems: not only does WebView differ greatly across Android versions, but the WebViews shipped in different manufacturers' ROMs also differ greatly. More seriously, the standard WebView has a memory-leak problem (see "WebView causes memory leak - leaks the parent Activity"). A common solution is to run the WebView in a separate process, communicating with the main process via AIDL; the WebView process can then be destroyed at a point appropriate to the business logic, fully releasing its memory.

7) Close Cursor objects promptly

Programs frequently query databases, but it is common to forget to close the Cursor afterward. Cursor leaks, if they happen repeatedly, have a large negative impact on memory management; remember to close every Cursor promptly.
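
The reliable shape for this is try/finally (or try-with-resources, since Cursor implements Closeable on modern API levels). A plain-Java stand-in for Cursor keeps this sketch self-contained:

```java
public class CursorDemo {
    // Stand-in for android.database.Cursor.
    static class FakeCursor implements AutoCloseable {
        boolean closed;
        int rowCount() { return 3; }
        @Override public void close() { closed = true; }
    }

    static int countRows(FakeCursor cursor) {
        try {
            return cursor.rowCount();   // the "query" work
        } finally {
            cursor.close();             // runs even if the work above throws
        }
    }
}
```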

Memory usage strategy optimization

1) Use largeHeap sparingly

As mentioned earlier, Android devices reserve differently sized heaps depending on their hardware and software configuration, and set a corresponding heap-limit threshold per app. You can call getMemoryClass() to get your app's available heap size. In special scenarios, you can request a larger heap by adding android:largeHeap="true" to the <application> tag in the manifest, and then query the larger threshold with getLargeMemoryClass(). However, the larger heap is intended only for the small number of apps that legitimately consume large amounts of RAM (such as an editor for very large photos). Do not request a large heap merely because you are running out of memory; use it only when you know exactly where the memory goes and why it must be retained. Use the largeHeap attribute with caution: the extra memory use hurts the overall user experience, makes each GC run longer, and degrades performance during task switching. Moreover, largeHeap does not guarantee a larger heap: on some severely constrained devices the large heap is the same size as the normal one. So even after requesting it, you should still call getMemoryClass() to check the actual heap size you got.

2) Consider the device memory threshold and other factors to design the appropriate cache size

For example, when designing a bitmap LRU cache for a ListView or GridView, points to consider include:

    • How much memory space does the application have left?
    • How many images will be rendered to the screen at once? How many images need to be cached in advance so that they can be instantly displayed to the screen when quickly sliding?
    • What are the screen size and density of the device? An xhdpi device needs a larger cache than an hdpi device to hold the same number of images.
    • What sizes and configurations will the bitmaps have on different pages, and roughly how much memory will they cost?
    • How often are the images accessed? Are some accessed much more frequently than others? If so, you may want to keep the most frequently accessed ones in memory, or set up multiple LruCache containers for different groups of bitmaps (grouped by access frequency).
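
A common starting heuristic, once those questions are answered, is to budget the cache as a fixed fraction of the per-process heap limit. In an Android app you would usually derive the limit from getMemoryClass(); this standalone sketch uses Runtime.maxMemory() as a stand-in, and the 1/8 fraction is just a conventional default, not a rule:

```java
public class CacheSizing {
    // Heuristic: give the image cache 1/8 of the heap limit, in kilobytes.
    static int cacheSizeKb(long maxHeapBytes) {
        return (int) (maxHeapBytes / 1024 / 8);
    }

    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.println("cache budget (KB): " + cacheSizeKb(maxHeap));
    }
}
```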
3) onLowMemory() and onTrimMemory()

Android users switch quickly between apps. To let background apps return quickly to the foreground, each background app keeps consuming some memory. Depending on current memory pressure, the system decides to reclaim memory from some background apps. A background app restored directly from the paused state returns quickly; one being recreated after its process was killed comes back noticeably slower.

    • onLowMemory(): a callback the system uses to report memory pressure, typically invoked after all background applications have been killed. When the foreground application receives onLowMemory(), it should release non-essential memory resources as soon as possible to keep the system running stably.
    • onTrimMemory(int): starting with Android 4.0, the system also provides the onTrimMemory() callback; when system memory reaches certain conditions, all running applications receive it. The callback passes a parameter describing the current memory situation; on receiving it, check the level and release an appropriate amount of your own memory. This improves overall system smoothness and also makes your app less likely to be chosen by the system as a kill candidate. The callback levels are:

    • TRIM_MEMORY_UI_HIDDEN: all of your application's UI is hidden, i.e. the user pressed Home or Back and left the app, making its UI completely invisible. At this point you should release resources that are only needed while the app is visible.

When the program is running in the foreground, you may receive one of the following values in onTrimMemory():

    • TRIM_MEMORY_RUNNING_MODERATE: your app is running and not a kill candidate, but the device is running low on memory and the system is starting to kill processes in the LRU cache.
    • TRIM_MEMORY_RUNNING_LOW: your app is running and not a kill candidate, but the device is even lower on memory; release unused resources to improve system performance.
    • TRIM_MEMORY_RUNNING_CRITICAL: your app is still running, but the system has already killed most processes in the LRU cache; release all non-essential resources immediately. If the system cannot reclaim enough RAM, it will clear all processes from the LRU cache and start killing processes it would normally prefer to keep, such as one hosting a running service.

When the application process has retreated to the background and is cached, you may receive one of the following values in onTrimMemory():

    • TRIM_MEMORY_BACKGROUND: the system is low on memory and your process is near the beginning of the LRU list. Your process is not yet at high risk of being killed, but the system may already be killing other cached processes. Release resources that are easy to rebuild, so your process stays cached and resumes quickly when the user returns.
    • TRIM_MEMORY_MODERATE: the system is low on memory and your process is near the middle of the LRU list. If memory pressure grows, your process may be killed.
    • TRIM_MEMORY_COMPLETE: the system is low on memory and your process is among the first to be killed on the LRU list. Release everything that does not affect your app's ability to restore its state.

    • Because onTrimMemory() was added in API 14, you can fall back to the onLowMemory() callback on older versions; onLowMemory() is roughly equivalent to TRIM_MEMORY_COMPLETE.
    • Note: while the system purges processes from the LRU cache, it primarily works in LRU order, but it also considers each process's memory usage and other factors. Processes that consume less are more likely to be kept.
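
A trim handler can be sketched as a dispatch on the level. The constants are redeclared locally with the values documented for ComponentCallbacks2 so this compiles standalone, and the action strings are placeholders for whatever cache-shrinking your app actually does:

```java
public class TrimPolicy {
    // Values mirror ComponentCallbacks2 (redeclared here to stand alone).
    static final int TRIM_MEMORY_RUNNING_MODERATE = 5;
    static final int TRIM_MEMORY_RUNNING_LOW      = 10;
    static final int TRIM_MEMORY_RUNNING_CRITICAL = 15;
    static final int TRIM_MEMORY_UI_HIDDEN        = 20;
    static final int TRIM_MEMORY_BACKGROUND       = 40;
    static final int TRIM_MEMORY_MODERATE         = 60;
    static final int TRIM_MEMORY_COMPLETE         = 80;

    // Higher level = more pressure = release more aggressively.
    static String onTrimMemory(int level) {
        if (level >= TRIM_MEMORY_COMPLETE)    return "release everything non-essential";
        if (level >= TRIM_MEMORY_MODERATE)    return "shrink caches hard";
        if (level >= TRIM_MEMORY_BACKGROUND)  return "release easy-to-rebuild resources";
        if (level >= TRIM_MEMORY_UI_HIDDEN)   return "drop UI-only resources";
        if (level >= TRIM_MEMORY_RUNNING_LOW) return "trim caches";
        return "no action";
    }
}
```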
4) Store resource files in the appropriate density folder

We know that images under the different DPI folders (hdpi/xhdpi/xxhdpi, etc.) are scaled on devices of other densities. For example, if we place a 100x100 image only in the hdpi directory, an xxhdpi phone will scale it up to 200x200 according to the density ratio. Note that this noticeably increases memory consumption. Images that should not be scaled belong in the assets directory or in res/drawable-nodpi.

5) Try-catch around certain large memory allocations

In some cases we should assess in advance which code might OOM, wrap it in a catch, and attempt a degraded allocation inside the catch. For example, when decoding a bitmap, if an OOM is caught, try doubling the sampling ratio and decoding again.
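
The fallback loop can be sketched in plain Java. The decode function is a placeholder interface (a real app would call BitmapFactory.decode* with the given inSampleSize), and the bound of 16 is an arbitrary cutoff so the loop always terminates:

```java
public class DegradedDecode {
    interface Decoder {
        byte[] decode(int inSampleSize); // may throw OutOfMemoryError
    }

    // Retry the decode, doubling the sample size after each OOM,
    // up to a fixed bound so we never loop forever.
    static byte[] decodeWithFallback(Decoder decoder) {
        for (int sample = 1; sample <= 16; sample *= 2) {
            try {
                return decoder.decode(sample);
            } catch (OutOfMemoryError oom) {
                // fall through and retry at the next (coarser) sample size
            }
        }
        return null; // give up: caller shows a placeholder instead
    }
}
```

Catching OutOfMemoryError is normally discouraged, but a bitmap decode is one of the few places where a well-defined degraded retry exists.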

6) Use static objects with caution

Because a static object's lifetime matches the application process, careless use easily leaks the objects it references; use static objects with caution in Android.

7) Watch for singletons holding objects unreasonably

The singleton pattern is simple and convenient, but because a singleton's lifetime matches the application's, it is very easy for a singleton to leak the objects it holds.

8) Use Services sparingly

If your app needs a background service, keep it stopped except while it is actually performing a task, and watch for leaks caused by failing to stop the service after its task completes. When you start a service, the system prefers to keep its process alive, which makes the process expensive: the RAM the service occupies cannot be given to other components, and it cannot be paged out. This reduces the number of processes the system can keep in the LRU cache, hurts app-switching efficiency, and can even destabilize system memory so that not all running services can be maintained. It is recommended to use IntentService, which stops itself as soon as it finishes the work handed to it. For more information, read Running in a Background Service.

9) Flatten layout hierarchies to reduce memory consumption

The flatter a view hierarchy, the less memory it uses and the more efficient it is. Keep layouts as flat as possible, and consider a custom view when the system-provided views cannot achieve the goal with a sufficiently flat hierarchy.

10) Use "abstract" programming carefully

Developers often treat abstraction as simply "good programming practice," since it improves flexibility and maintainability. However, abstraction carries a real memory cost: it requires more code to execute, and that code must be mapped into memory. If an abstraction brings no significant benefit, avoid it.

11) Use nano protobufs to serialize data

Protocol Buffers is a language-neutral, platform-neutral, extensible mechanism designed by Google for serializing structured data, similar to XML but lighter, faster, and simpler. If you serialize data on Android, use nano protobufs. For details, see the "Nano version" section of the protobuf readme.

12) Use dependency-injection frameworks sparingly

Dependency-injection frameworks such as Guice or RoboGuice can simplify your code to some extent. However, these frameworks perform many initialization operations by scanning your code, which maps a large amount of code into memory, and those mapped pages remain resident for a long time. Unless it is truly necessary, use this technique with caution.

13) Use multiple processes with caution

Multiple processes let you run some components of your app in separate processes, expanding the total memory available to the app, but the technique must be used with caution. Most apps should not use multiple processes: it complicates the code logic, and when used improperly can actually increase memory usage significantly. Consider it when your app must run a long-lived, non-trivial background task.

A typical example is a music player that keeps playing in the background for a long time. If the whole app runs in one process, the foreground UI resources cannot be released while the background keeps playing. Such an app can be split into two processes: one for the UI, and one for the background playback service.

14) Use ProGuard to strip unneeded code

ProGuard shrinks, optimizes, and obfuscates code by removing unused code and renaming classes, fields, and methods. This makes your code more compact and reduces the memory needed to map it.

15) Use third-party libraries cautiously

Much open-source library code was not written for the mobile environment and is not necessarily suitable for mobile devices. Even a library designed for Android needs care, especially if you don't know exactly what it does internally. For example, one library may use nano protobufs while another uses micro protobufs, giving your app two protobuf implementations; similar duplication can happen with logging, image loading, caching, and so on. Also, don't import an entire library for one or two features; if no library closely fits your needs, consider implementing the functionality yourself rather than pulling in a heavyweight dependency.

16) Consider different implementations to optimize memory consumption

In some cases, one design may meet the requirements quickly, yet be inefficient in memory. For example:

Consider an animated dial with a rotating pointer. The simplest approach is frame animation over many full dial images, each with the pointer at a different angle. But separating the pointer into its own image and rotating just that image clearly uses far less memory than loading N full-dial frames. The cost is added code complexity; there is always a trade-off between optimizing memory and ease of implementation.

In closing:

    • Design style affects a program's memory and performance to a large extent. Relatively speaking, a style like Material Design can make not only the installation package smaller but also the memory footprint lower, while improving rendering and loading performance.
    • Memory optimization does not mean the less memory the better. Holding the footprint artificially low and thereby triggering frequent GCs can degrade overall application performance; a balanced trade-off is needed.
    • Android memory optimization covers a lot of ground: memory-management details, how garbage collection works, how to find leaks, and more, each of which could fill its own article. OOM is the most prominent point in memory work, and minimizing the probability of OOM matters greatly for memory optimization.
