https://developer.android.com/training/articles/memory.html#Android
RAM is a valuable resource for any software environment, but it matters even more on a mobile operating system, where physical memory is tightly constrained. Although Android's Dalvik virtual machine performs garbage collection, that does not free you from caring about when and where your app allocates and releases memory. For the garbage collector to reclaim memory from your app, you need to avoid memory leaks (usually caused by holding object references in global members) and release references at the appropriate time (such as in the lifecycle callbacks discussed below). For most apps, the Dalvik garbage collector reclaims allocated memory once the corresponding objects leave the scope of the app's active threads.

This article explains how Android manages app processes and memory allocation, and how you can proactively reduce memory usage during development. For more general practices on cleaning up resources in Java, consult other books or online documentation about managing resource references. If you already have an app and want to analyze its memory usage, see [Investigating Your RAM Usage](https://developer.android.com/studio/profile/investigate-ram.html).
How does Android manage memory?
Android does not offer swap space for memory, but it does use paging and memory mapping to manage memory. This means that any memory your app modifies, whether by allocating new objects or touching mapped pages, stays resident in RAM and cannot be paged out. The only way to fully release memory from your app is to release object references you may be holding, making the memory available to the garbage collector. There is one exception: any files mapped in without modification, such as code, can be paged out of RAM if the system wants to use that memory elsewhere.
Shared memory
Because Android needs to make the most of limited memory, it shares RAM pages across processes. It does so in the following ways:
Each app process is forked from an existing process called zygote. The zygote process starts when the system boots and loads the common framework code and resources (such as activity themes). To start a new app process, the system forks the zygote process and then loads and runs the app's code in the new process. This allows most of the RAM pages allocated for framework code and resources to be shared across all app processes.
Most static data is memory-mapped into a process. This not only allows the same data to be shared between processes, but also allows it to be paged out when needed. Examples of static data include: Dalvik code (mapped directly from pre-linked .odex files), app resources (by designing the resource table to be directly mappable and by aligning the zip entries of the APK), and traditional project elements such as native code in .so files.
In many places, Android shares dynamic RAM across processes using explicitly allocated shared memory regions (with ashmem or gralloc). For example, window surfaces use memory shared between the app and the screen compositor, and cursor buffers use memory shared between a content provider and its clients.
Because of this extensive use of shared memory, determining how much memory your app is actually using requires care. Techniques for properly determining your app's memory use are discussed in Investigating Your RAM Usage.
Allocate and reclaim app memory
Here are some facts about how Android allocates and reclaims memory for apps:
The Dalvik heap for each process is constrained to a single virtual memory range. This defines the logical heap size, which can grow as needed (but only up to a limit that the system defines for each app).
The logical heap size is not the same as the amount of physical memory used by the heap. When inspecting your app's heap, the system computes a value called the Proportional Set Size (PSS), which accounts for both dirty and clean pages that are shared with other processes, prorated by the number of apps sharing that RAM. The total PSS is what the system considers your app's physical memory footprint. For more about PSS, see Investigating Your RAM Usage.
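As a simplified illustration of how that proration works (the real accounting in the kernel is more involved), each private page counts fully against a process, while each shared page is split evenly among the processes that map it. The class and method names below are hypothetical:

```java
// Hypothetical sketch of the PSS proration idea: private pages count
// fully; shared pages are divided among the processes sharing them.
class PssExample {
    static long pssKb(long privateKb, long sharedKb, int numSharingProcesses) {
        return privateKb + sharedKb / numSharingProcesses;
    }

    public static void main(String[] args) {
        // A process with 2048 KB private memory and 6000 KB of pages
        // shared with two other processes (3 sharers total):
        long pss = pssKb(2048, 6000, 3);
        System.out.println(pss + " KB"); // 2048 + 6000/3 = 4048 KB
    }
}
```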
Limit the memory of your app
To maintain a functional multitasking environment, Android sets a hard limit on the heap size of each app. The exact limit varies between devices based on how much RAM is available overall. If your app has reached the heap limit and tries to allocate more memory, it receives an OutOfMemoryError.
In some cases, you may want to query the system to determine exactly how much heap space is available on the current device, for example to decide how much data is safe to cache. You can query this by calling getMemoryClass(), which returns an integer indicating the number of megabytes available to your app's heap. This is discussed further below, under "Check how much memory you should use".
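On Android the call is ActivityManager.getMemoryClass(), which needs a Context, so as a hedged sketch the snippet below uses Runtime.maxMemory() as the plain-JVM analogue (so it runs anywhere); the Android call is shown in a comment. The idea in both cases is to size caches relative to the budget rather than hard-coding a number:

```java
// Sketch: derive a cache budget from the heap limit instead of guessing.
// On Android you would instead query the per-app budget:
//   ActivityManager am = (ActivityManager)
//       context.getSystemService(Context.ACTIVITY_SERVICE);
//   int budgetMb = am.getMemoryClass();
class HeapBudget {
    static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        // e.g. use at most 1/8 of the heap for an in-memory cache
        long cacheBudgetMb = maxHeapMb() / 8;
        System.out.println("cache budget: " + cacheBudgetMb + " MB");
    }
}
```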
Switch Apps
When users switch between apps, Android does not swap memory. Instead, it keeps processes that are not hosting a foreground component in an LRU cache. For example, when a user first launches an app, a process is created for it; but when the user leaves the app, that process does not quit. The system keeps the process cached, so the process can be reused quickly if the user later returns to the app.
If a cached process holds on to memory it no longer needs, it affects overall system performance even while the user is not using it. When system memory runs low, the system kills processes in the LRU cache beginning with the least recently used, but it also gives some weight to killing the most memory-intensive processes. To keep your process cached as long as possible while in the background, follow the advice below about when to release references.
For more about how processes are cached while not in the foreground and how Android decides which ones can be killed, see Processes and Threads.
How your app should manage memory
You should consider RAM constraints throughout all phases of development, including during app design before you begin writing code. There are many ways to design and write code that lead to more efficient results, and the benefits compound as you apply the same techniques over and over.
As you design and implement your app, try to follow the recommendations below to make it more memory-efficient.
Use services sparingly
If your app needs a service to perform work in the background, do not keep it running unless it is actively performing a job. Also be careful to stop the service when its work is done; otherwise you leak the service.
When you start a service, the system prefers to keep its process running. This makes the process very expensive, because the RAM used by the service cannot be used by anything else or paged out. It also reduces the number of processes the system can keep in its LRU cache, making app switching less efficient. It can even lead to thrashing when memory is tight and the system cannot maintain enough processes to host all of the services currently running.
The best way to limit the lifespan of a service is to use an IntentService, which finishes itself as soon as it is done handling the intent that started it. For more information, read Running in a Background Service.
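The essence of that pattern can be shown off-device. The hedged, plain-Java sketch below (class and method names are hypothetical, not an Android API) mimics what IntentService does internally: jobs are handled one at a time on a background worker thread, and the worker stops itself the moment the queue is empty, so nothing keeps running and holding memory:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical miniature of the IntentService pattern: a single worker
// thread drains the queue and exits as soon as there is no more work.
class MiniWorkQueue {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private Thread worker; // null whenever the "service" is stopped

    public synchronized void enqueue(Runnable job) {
        queue.add(job);
        if (worker == null) {               // like startService(): spin up on demand
            worker = new Thread(this::drain);
            worker.start();
        }
    }

    private void drain() {
        while (true) {
            Runnable job;
            synchronized (this) {
                job = queue.poll();
                if (job == null) {          // like stopSelf(): exit when idle
                    worker = null;
                    return;
                }
            }
            job.run();
        }
    }
}
```

The key point is the stop-when-idle step: an app-managed service should do the same, calling stopSelf() once its last intent has been handled.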
Leaving a service running when it is no longer needed is one of the worst memory-management mistakes an Android app can make. So do not keep a service running just for your app's convenience: given the memory limits, this not only increases the risk that your app performs poorly, it also leads users to notice such misbehaving apps and uninstall them.
Free memory when your user interface is no longer in the foreground
When the user switches to another app and your UI is no longer visible, you should release any resources that are used only by your UI. Releasing UI resources at this point significantly increases the system's capacity for cached processes, which has a direct impact on the quality of the user experience.
To be notified when the user exits your UI, implement the onTrimMemory() callback in your Activity classes, and use it to listen for the TRIM_MEMORY_UI_HIDDEN level, which indicates that your UI is now hidden and you should free resources that only your UI uses.
Notice that your app receives onTrimMemory() with TRIM_MEMORY_UI_HIDDEN only when all of your app's UI becomes hidden from the user. This is distinct from the onStop() callback, which is called whenever an activity instance becomes hidden, including when the user switches to another activity within your app. So although you should implement onStop() to release activity resources such as a network connection or to unregister broadcast receivers, you usually should not release your UI resources until you receive onTrimMemory(TRIM_MEMORY_UI_HIDDEN). This ensures that if the user navigates back from another activity in your app, your UI resources are still available to resume the activity quickly.
Free memory when memory is tight
Throughout your app's lifecycle, the onTrimMemory() callback also tells you when overall device memory is getting low. You should respond by further releasing resources based on the memory level delivered by onTrimMemory(), including:
- TRIM_MEMORY_RUNNING_MODERATE
Your app is running and not considered killable, but the device is running low on memory and the system is actively killing processes in the LRU cache.
- TRIM_MEMORY_RUNNING_LOW
Your app is running and not considered killable, but the device is running much lower on memory, so you should release unused resources to improve system performance (which directly affects your app's performance).
- TRIM_MEMORY_RUNNING_CRITICAL
Your app is running, but the system has already killed most of the processes in the LRU cache, so you should release all non-critical resources now. If the system cannot reclaim enough RAM, it will clear all of the LRU cache and begin killing processes that it prefers to keep alive, such as those hosting a running service.
Also, when your app process is currently cached, you may receive one of the following levels from onTrimMemory():
- TRIM_MEMORY_BACKGROUND
The system is running low on memory and your process is near the beginning of the LRU list. Although your app process is not at high risk of being killed, the system may already be killing processes in the LRU cache. You should release resources that are easy to recover, so your process stays in the list and resumes quickly when the user returns to your app.
- TRIM_MEMORY_MODERATE
The system is running low on memory and your process is near the middle of the LRU list. If the system becomes further constrained for memory, there is a chance your process will be killed.
- TRIM_MEMORY_COMPLETE
The system is running low on memory and your process is one of the first to be killed if the system does not recover memory now. You should release everything that is not critical to resuming your app's state.
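These ordered levels can be handled with a single cascading check in an onTrimMemory() override. The hedged sketch below defines the constants locally (with the values I believe ComponentCallbacks2 uses, assumed here so the sketch compiles off-device; in a real app you would use the framework constants and override Activity.onTrimMemory(int)):

```java
// Sketch of an onTrimMemory(level) policy. Constant values mirror
// android.content.ComponentCallbacks2 (an assumption made so this
// compiles outside Android); the strings stand in for real cleanup.
class TrimPolicy {
    static final int TRIM_MEMORY_RUNNING_MODERATE = 5;
    static final int TRIM_MEMORY_RUNNING_LOW = 10;
    static final int TRIM_MEMORY_RUNNING_CRITICAL = 15;
    static final int TRIM_MEMORY_UI_HIDDEN = 20;
    static final int TRIM_MEMORY_BACKGROUND = 40;
    static final int TRIM_MEMORY_MODERATE = 60;
    static final int TRIM_MEMORY_COMPLETE = 80;

    static String onTrimMemory(int level) {
        // Levels are ordered, so check from most to least severe.
        if (level >= TRIM_MEMORY_COMPLETE)   return "release everything non-essential";
        if (level >= TRIM_MEMORY_MODERATE)   return "release as much as possible";
        if (level >= TRIM_MEMORY_BACKGROUND) return "release easily-rebuilt caches";
        if (level >= TRIM_MEMORY_UI_HIDDEN)  return "release UI resources";
        if (level >= TRIM_MEMORY_RUNNING_CRITICAL) return "release all non-critical resources";
        if (level >= TRIM_MEMORY_RUNNING_LOW)      return "release unused resources";
        return "trim optional caches";
    }
}
```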
Because the onTrimMemory() callback was added in API level 14, you can use the onLowMemory() callback as a fallback on older versions; it is roughly equivalent to the TRIM_MEMORY_COMPLETE event.
Note: When the system begins killing processes in the LRU cache, although it primarily works from the least recently used upward, it also gives some consideration to which processes consume more memory and will thus yield more RAM if killed. So the less memory you consume while in the LRU list, the better your chances of staying in the list and resuming quickly.
Check how much memory you should use
As mentioned earlier, each Android device has a different total amount of RAM available to the system, which yields a different heap limit for each app. You can call getMemoryClass() to get an estimate of your app's available heap in megabytes. If your app tries to allocate more than that, it receives an OutOfMemoryError.
In very special situations, you can request a larger heap by setting the largeHeap attribute to "true" in the manifest's application tag. If you do so, you can call getLargeMemoryClass() to get an estimate of the large heap size.
However, the ability to request a large heap is intended only for apps that can justify the need for more RAM (such as a large photo-editing app). Never request a large heap simply because you have run out of memory; use it only when you know exactly where all your memory is being allocated and why it must be retained. Even when you are confident your app can justify the large heap, you should avoid it to whatever extent possible, because the extra memory harms the overall user experience: garbage collection takes longer, and system performance suffers during task switching and other common operations.
Additionally, the large heap size is not the same on all devices; on devices with constrained RAM, the large heap may be exactly the same as the regular heap size. So even if you request the large heap, you should call getMemoryClass() to check the regular heap size and strive to stay under that limit.
Avoid wasting memory with bitmaps
When you load a bitmap, keep it in RAM only at the resolution needed for the current device's screen, scaling it down if the original bitmap has a higher resolution. Keep in mind that an increase in bitmap resolution yields a corresponding squared increase in memory, because both the X and Y dimensions grow.
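The usual way to scale at decode time is BitmapFactory.Options.inSampleSize, where each power-of-two step cuts pixel memory by a factor of four. The sample-size computation is pure arithmetic, so it can be sketched without Android classes; on a device you would first read the source dimensions with inJustDecodeBounds = true, then decode with the computed value:

```java
// Sketch of the common inSampleSize helper: the largest power of two
// that keeps the decoded bitmap at least as large as the target size.
class BitmapSampling {
    static int calculateInSampleSize(int srcW, int srcH, int reqW, int reqH) {
        int inSampleSize = 1;
        if (srcH > reqH || srcW > reqW) {
            int halfW = srcW / 2;
            int halfH = srcH / 2;
            // Stop before the scaled size drops below the requested size.
            while ((halfH / inSampleSize) >= reqH && (halfW / inSampleSize) >= reqW) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // A 2048x1536 photo shown in a 512x384 view decodes at 1/4 size,
        // using 1/16 of the pixel memory.
        System.out.println(calculateInSampleSize(2048, 1536, 512, 384)); // 4
    }
}
```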
Note: On Android 2.3.x (API level 10) and below, bitmap objects always appear the same size in your app heap regardless of the image resolution (the actual pixel data is stored in native memory). This makes it difficult to debug bitmap memory allocations, because most heap analysis tools cannot see native allocations. However, beginning with Android 3.0 (API level 11), the bitmap pixel data is allocated in your app's Dalvik heap, which improves garbage collection and debuggability. So if your app uses bitmaps and you are having trouble discovering why it is using some memory on an older device, switch to a device running Android 3.0 or higher to debug it.
For more about working with bitmaps, read Managing Bitmap Memory.
Use optimized data containers
Take advantage of optimized containers in the Android framework, such as SparseArray, SparseBooleanArray, and LongSparseArray. The generic HashMap implementation can be quite memory-inefficient because it needs a separate entry object for every mapping. Additionally, the SparseArray classes are more efficient because they avoid the system's need to autobox the key (and sometimes the value, which creates yet another object or two per entry). And don't be afraid of dropping down to raw arrays when that makes sense for your data.
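To see why these classes are cheaper, here is a hedged, deliberately tiny miniature of the idea behind SparseArray (the class name IntMap and its methods are hypothetical, and real SparseArray also supports deletion, gc of removed slots, and more): a sorted int[] of keys with a parallel Object[] of values, looked up by binary search, so there is no Entry object and no boxing of int keys.

```java
import java.util.Arrays;

// Hypothetical miniature of the SparseArray idea: parallel arrays plus
// binary search instead of hash buckets with per-mapping Entry objects.
class IntMap<V> {
    private int[] keys = new int[4];
    private Object[] values = new Object[4];
    private int size = 0;

    public void put(int key, V value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) { values[i] = value; return; }   // overwrite existing key
        i = ~i;                                      // insertion point
        if (size == keys.length) {                   // grow both arrays
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key;
        values[i] = value;
        size++;
    }

    @SuppressWarnings("unchecked")
    public V get(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return i >= 0 ? (V) values[i] : null;
    }
}
```

The trade-off is O(log n) lookups and O(n) inserts instead of O(1), which is why these containers suit the hundreds-of-entries maps typical in apps rather than very large ones.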
Be aware of memory overhead
Be knowledgeable about the cost and overhead of your language and the libraries you link against, and keep this in mind as you design your app, from start to finish. Often, things that look harmless on the surface can in fact have a large cost, for example:
- Enums often require more than twice the memory of static constants. You should strictly avoid using enums on Android.
- Every class in Java (including anonymous inner classes) uses about 500 bytes of code.
- Every class instance costs 12-16 bytes of RAM.
- Putting a single entry into a HashMap requires allocating an additional entry object that takes 32 bytes (see the earlier section about optimized data containers).

A few bytes here and there quickly add up; app designs that are class- or object-heavy will suffer from this overhead. That can leave you in the difficult position of looking at a heap analysis and realizing your problem is a lot of small objects using up your RAM.
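The enum advice in the list above usually translates into plain static final int constants, which is also the pattern the framework itself uses for things like view visibility flags. A hedged sketch (the class and constant names are hypothetical):

```java
// Lightweight alternative to `enum State { IDLE, RUNNING, DONE }`:
// plain int constants avoid the per-value objects and extra class
// metadata that an enum carries.
class DownloadState {
    public static final int STATE_IDLE = 0;
    public static final int STATE_RUNNING = 1;
    public static final int STATE_DONE = 2;

    private int state = STATE_IDLE;

    public void start()  { state = STATE_RUNNING; }
    public void finish() { state = STATE_DONE; }
    public int state()   { return state; }
}
```

The downside is losing compile-time type safety; on Android, the androidx @IntDef annotation can restore most of that checking at lint time.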
Be careful with code abstractions
Often, developers use abstractions simply as good programming practice, because abstractions can improve code flexibility and maintainability. However, abstractions come at a significant cost: they generally require a fair amount more code to be executed, and more time and RAM for that code to be mapped into memory. So if your abstractions are not supplying a significant benefit, you should avoid them.
Use nano protobufs for serialized data
Protocol buffers are a language-neutral, platform-neutral, extensible mechanism designed by Google for serializing structured data, similar to XML but smaller, faster, and simpler. If you decide to use protocol buffers for your data, you should always use nano protobufs in your client-side code. Regular protobufs generate extremely verbose code, which causes many kinds of problems in your app: increased RAM use, significant APK size increase, slower execution, and quickly hitting the DEX symbol limit.
For more information, see the protobuf readme.
Avoid dependency injection frameworks
Using a dependency injection framework such as Guice or RoboGuice may be appealing because they can simplify the code you write and provide an adaptive environment that is useful for testing and other configuration changes. However, these frameworks tend to perform a lot of process initialization by scanning your code for annotations, which can require significant amounts of your code to be mapped into RAM even though you don't need it. These mapped pages are allocated into clean memory so Android can drop them, but that won't happen until the pages have been left in memory for a long period of time.
Use third-party libraries with caution
Much third-party library code is not written for mobile environments and can be inefficient when used on a client device. At the very least, when you decide to use a third-party library, you should assume you are taking on a significant porting and maintenance burden. Plan ahead and analyze a library's code size and RAM footprint before deciding to use it at all.
Even libraries designed specifically for Android are potentially dangerous, because each library may do things differently. For example, one library may use nano protobufs while another uses micro protobufs, leaving you with two different protobuf implementations in your app. The same can happen with divergent implementations of logging, analytics, image loading, caching, and all kinds of other things you don't expect. ProGuard won't save you here, because these are lower-level dependencies required by the features you want from the library. This becomes especially problematic when you use an Activity subclass from a library (which tends to pull in wide swaths of dependencies), when libraries use reflection (which is common, and means you will spend a lot of time tweaking ProGuard to make the library work), and so on.
Also be careful not to fall into the trap of using a shared library for one or two features out of dozens; you don't want to pull in a large amount of code and memory overhead that you never even use. In the end, if no existing implementation is a strong match for what you need, it may be best to write your own.
Use ProGuard to strip out unneeded code
The ProGuard tool shrinks, optimizes, and obfuscates your code by removing unused code and renaming classes, fields, and methods with semantically obscure names. Using ProGuard makes your code more compact, requiring fewer RAM pages to be mapped.
Use zipalign on your final APK
If you do any post-processing of an APK generated by a build system (including signing it with your release certificate), you must run zipalign on it afterwards to re-align it. Failing to do so can cause your app to require significantly more RAM, because things like resources can no longer be memory-mapped from the APK.
Note: Google Play Store does not accept APKs that are not zipaligned.
Analyze memory consumption
Once you achieve a relatively stable build, begin analyzing how much memory your app uses throughout all stages of its lifecycle. For information about how to analyze your app, read Investigating Your RAM Usage.
Use multiple processes
If it's appropriate for your app, an advanced technique that may help you manage memory is dividing components of your app into multiple processes. This technique must always be used carefully, and most apps should not run in multiple processes, because it can easily increase, rather than decrease, your RAM footprint if done incorrectly. It is mainly useful for apps that need to run significant work in the background as well as in the foreground, and can manage those operations separately.
An example of when multiple processes may be appropriate is a music player that plays music from a service for a long time. If the entire app runs in one process, then many of the allocations for its activity UI must be kept as long as the music is playing, even if the user is in another app and the service is controlling playback. An app like this may be best split into two processes: one for its UI, and the other for the ongoing background service.
You can specify a separate process for a component by declaring the android:process attribute on it in the manifest file. For example, you can declare that a service should run in a process separate from your app's main process by naming a new process, such as "background" (you can name the process anything you like):
<service android:name=".PlayService" android:process=":background" />
The process name should begin with a colon to ensure that the process remains private to your app.
Before you create a new process, you need to understand its memory implications. To illustrate the consequences of each process, consider that an empty process doing basically nothing already has a footprint of about 1.4MB, as a memory-info dump such as the one referenced below shows.
Note: More information about how to read this kind of memory dump is in Investigating Your RAM Usage. The key data here are the private dirty and private clean memory, which show that the process uses almost 1.4MB of non-pageable RAM (distributed across the Dalvik heap, native allocations, and bookkeeping and library-loading overhead), plus another 150KB of mapped code that has been executed.
This footprint of an empty process is fairly significant, and it can grow quickly once you start doing work in that process. For example, consider the memory footprint of a process that does nothing but display an activity containing some text:
That process has now grown to about 4MB, nearly three times the size, simply by showing some text in its UI. This leads to an important conclusion: if you are going to split your app into multiple processes, only one process should be responsible for the UI. Other processes should avoid any UI, as that quickly increases the RAM required by the process (especially once you start loading bitmap assets and other resources), and it may then be hard or impossible to reduce the memory usage once the UI is drawn.
Additionally, when running more than one process, it's more important than ever to keep your code lean, because any unnecessary RAM overhead from common implementations is replicated in each process. For example, if you use enums (though you should not use enums), all of the RAM needed to create and initialize those constants is duplicated in each process, and any abstractions you have with adapters, temporaries, or other overhead will likewise be replicated.
Another concern with multiple processes is the dependencies that exist between them. For example, if your app runs a content provider in the default process that also hosts your UI, then code in a background process that uses that content provider requires that your UI process remain in RAM. If your goal is to have a background process that can run independently of a heavyweight UI process, it can't depend on content providers or services that execute in the UI process.