Some simple causes of and solutions for OOM

Source: Internet
Author: User

Android OOM errors sometimes appear very often. This is generally not a problem with Android's design; it is generally a problem with our own code.

In my experience, OOM arises mainly for two reasons:

1. The objects being loaded are too large.
2. Too many resources are held at once and are not released in time.

There are several ways to address this:

1. Manage references carefully, using the available soft, strong, and weak reference types.
2. Process images directly as they are loaded into memory, for example with bounds-based compression.
3. Recycle memory dynamically.
4. Optimize the Dalvik virtual machine's heap memory allocation.
5. Customize the heap size.

Is it really that simple? Not necessarily. Let me explain.

Soft references (SoftReference), phantom references (PhantomReference), and weak references (WeakReference) are three classes for referring to Java objects on the heap; through these three classes we can interact with the GC in a simple way. Besides these three, there is also the most commonly used kind: the strong reference.

Strong references look like the following code:

```java
Object o = new Object();
Object o1 = o;
```

The first line above creates a new Object on the heap, referenced by o; the second line makes o1 a second reference to the same heap object. Both are strong references. As long as an object on the heap has a strong reference to it, the GC will not collect it. Now suppose the following code runs:

```java
o = null;
o1 = null;
```

Now the object has no strong references left and becomes eligible for collection.

Objects on the heap can be strongly reachable, softly reachable, weakly reachable, phantom reachable, or unreachable. An object's reachability is determined by the strongest reference that still reaches it. For example:

```java
String abc = new String("abc");                                     // 1
SoftReference<String> abcSoftRef = new SoftReference<String>(abc);  // 2
WeakReference<String> abcWeakRef = new WeakReference<String>(abc);  // 3
abc = null;                                                         // 4
abcSoftRef.clear();                                                 // 5
```

At each point, the object referred to by a Reference can be obtained through get(); if get() returns null, the object has already been cleared. Such techniques are often used in the design of programs such as optimizers and debuggers, because these programs need information about an object but must not influence the object's garbage collection.

Phantom references
A phantom reference is different: get() always returns null. If you look at the source, you will find that a PhantomReference does still store its referent; it simply never hands it back through get(). Let's look at how it interacts with the GC and what it is for.
Even without setting the referent to null, the new String("abc") object on the heap can become finalizable directly.
Unlike soft and weak references, the PhantomReference object is added to its ReferenceQueue before the referent's memory is actually released, so you get a chance to do something more before the new String("abc") object on the heap is collected.

Although these reference types let the GC reclaim objects, the GC is not all that intelligent, so OOM can still occur.
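To make the behavior of these reference types concrete, here is a small plain-Java sketch (no Android needed; the class and variable names are mine, not from the original text). It shows what get() returns for each kind while a strong reference is still held, and that a PhantomReference's get() is always null by design:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        String abc = new String("abc");  // strong reference

        SoftReference<String> softRef = new SoftReference<String>(abc);
        WeakReference<String> weakRef = new WeakReference<String>(abc);
        ReferenceQueue<String> queue = new ReferenceQueue<String>();
        PhantomReference<String> phantomRef = new PhantomReference<String>(abc, queue);

        // While the strong reference exists, soft and weak refs still see the object...
        System.out.println("soft: " + softRef.get());       // prints "soft: abc"
        System.out.println("weak: " + weakRef.get());       // prints "weak: abc"
        // ...but a phantom reference's get() is ALWAYS null, by design.
        System.out.println("phantom: " + phantomRef.get()); // prints "phantom: null"

        // clear() detaches a reference explicitly, without waiting for the GC.
        softRef.clear();
        System.out.println("soft after clear: " + softRef.get()); // prints "soft after clear: null"
    }
}
```

Whether a soft or weak get() returns null after a System.gc() call depends on timing and memory pressure, so the sketch sticks to the deterministic cases.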

Second: process images directly as they are loaded into memory.

Try not to use setImageBitmap, setImageResource, or BitmapFactory.decodeResource to set a large image, because these methods finish decoding via the Java-layer createBitmap, which consumes more memory.

Instead, first create the bitmap with the BitmapFactory.decodeStream method and then set it as the ImageView's source. decodeStream's biggest advantage is that it calls the JNI function nativeDecodeAsset() directly to complete the decode, without going through the Java-layer createBitmap, thereby saving Java heap space.

If a Config option is supplied while reading, the memory needed for the loaded image can be reduced effectively, which helps prevent OutOfMemory exceptions. Note also that decodeStream reads the image's bytes directly and does not adapt them automatically to the machine's various resolutions; after using decodeStream you need to configure the corresponding image resources in hdpi, mdpi, and ldpi, otherwise an image with the same pixel dimensions will display at the wrong size on machines with different resolutions.
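Picking the inSampleSize value is usually done from the image's measured bounds. The helper below is a plain-Java sketch of that calculation (the method name and the target sizes are my own choices, not an Android API); on Android you would first decode with inJustDecodeBounds = true to get outWidth and outHeight, then pass them in:

```java
public class SampleSizeUtil {
    /**
     * Pick the largest power-of-two sample size that keeps the decoded image
     * at least reqWidth x reqHeight. A hypothetical helper for illustration.
     */
    public static int calculateInSampleSize(int width, int height,
                                            int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        // Keep halving until the next halving would drop below the request.
        while ((width / (inSampleSize * 2)) >= reqWidth
                && (height / (inSampleSize * 2)) >= reqHeight) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // A 2048x1536 photo decoded for a 512x384 view: sample size 4,
        // so the bitmap uses roughly 1/16 of the full-size pixel memory.
        System.out.println(calculateInSampleSize(2048, 1536, 512, 384)); // prints 4
    }
}
```

Because the loop only ever doubles, the returned value is always a power of two, as the SDK recommends.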

In addition, the following approach can help:

```java
InputStream is = this.getResources().openRawResource(R.drawable.pic1);
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = false;
options.inSampleSize = 10;   // width and height become one tenth of the original
Bitmap bmp = BitmapFactory.decodeStream(is, null, options);
if (!bmp.isRecycled()) {
    bmp.recycle();   // reclaim the bitmap's memory
    System.gc();     // remind the system to collect promptly
}
```

Here is a way to read a local image resource with minimal memory use:

```java
/**
 * Read a local resource image in the most memory-saving way.
 * @param context
 * @param resId
 * @return
 */
public static Bitmap readBitmap(Context context, int resId) {
    BitmapFactory.Options opt = new BitmapFactory.Options();
    opt.inPreferredConfig = Bitmap.Config.RGB_565;
    opt.inPurgeable = true;
    opt.inInputShareable = true;
    // Get the resource image
    InputStream is = context.getResources().openRawResource(resId);
    return BitmapFactory.decodeStream(is, null, opt);
}
```

Yesterday, when I put images into a Gallery on the emulator, I got a java.lang.OutOfMemoryError: bitmap size exceeds VM budget exception, meaning the images exceeded the available RAM.

The emulator's RAM is relatively small, only 8 MB, and the error appeared when I loaded a large number of images (each a little over 100 KB).
Each image had been compressed beforehand, so when decoded into a Bitmap it grows larger, exceeding the available RAM. The solution is as follows:

"Java//fix loading Picture memory overflow problem                     //options only save picture size, do not save picture to memory                 bitmapfactory.options opts = new Bitmapfactory.options ( );                 Scaling, scaling is difficult to scale at the prepared scale, its value indicates the scale of the multiplier, the SDK recommended that its value is 2 of the exponential value, the more the value of the general cause the picture is not clear                 opts.insamplesize = 4;                 Bitmap bmp = null;                 BMP = Bitmapfactory.decoderesource (Getresources (), mimageids[position],opts);                                               ...                               Recycling                 bmp.recycle ();  



This solved the problem, but it is not the perfect solution.

Reading further, I learned the following:

Optimize heap memory allocation for the Dalvik virtual machine

For the Android platform, the Dalvik VM used by its hosting layer leaves much room for performance optimization; for example, in the development of large games or resource-hungry applications we may interfere with GC processing manually. The setTargetHeapUtilization method provided by the dalvik.system.VMRuntime class can improve the processing efficiency of the program's heap. For the underlying principle you can consult the open-source project; here we only describe the usage: declare

private final static float TARGET_HEAP_UTILIZATION = 0.75f;

and call VMRuntime.getRuntime().setTargetHeapUtilization(TARGET_HEAP_UTILIZATION) in the program's onCreate.

The Android heap size can also be defined by ourselves

For some Android projects, the main performance bottleneck is Android's own memory management mechanism. Handset manufacturers are currently stingy with RAM, and RAM strongly affects the smoothness of software. Besides optimizing the Dalvik virtual machine's heap allocation, we can also enforce a heap size for our own software. As an example, we use the dalvik.system.VMRuntime class provided by Dalvik to set the minimum heap size:

```java
private final static int CWJ_HEAP_SIZE = 6 * 1024 * 1024;

VMRuntime.getRuntime().setMinimumHeapSize(CWJ_HEAP_SIZE);  // set the minimum heap size to 6MB
```

Of course, under memory pressure we can also handle things by interfering with the GC manually.

Setting the Bitmap image size to avoid OutOfMemoryError: an optimization method


★ Android easily runs out of memory when using Bitmap, reporting the following error: java.lang.OutOfMemoryError: bitmap size exceeds VM budget

The key addition is this fragment:

```java
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 2;
```

Example 1 (loading a picture from a URI):

```java
private ImageView preview;

BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 2;  // width becomes half the original, so the picture is one quarter the size
Bitmap bitmap = BitmapFactory.decodeStream(cr.openInputStream(uri), null, options);
preview.setImageBitmap(bitmap);
```


The code above mitigates the memory overflow, but it only shrinks the image; it does not completely resolve the problem.
Example 2 (loading a picture from a file path):

```java
private ImageView preview;
private String fileName = "/sdcard/DCIM/Camera/2010-05-14 16.01.44.jpg";

BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 2;  // width becomes half the original, so the picture is one quarter the size
Bitmap b = BitmapFactory.decodeFile(fileName, options);
preview.setImageBitmap(b);
```

This compresses by a sufficient ratio, but it is still not enough for phones with little memory, especially those with a 16 MB heap.

Three. Dynamically allocating memory

Dynamic memory management (DMM) allocates memory from, and reclaims memory to, the heap directly.

There are two ways of implementing dynamic memory management.

One is explicit memory management (EMM).
Under EMM, memory is allocated from the heap and manually reclaimed when no longer needed. A C program, for example, allocates an array of integers with the malloc() function and frees the allocated memory with the free() function.
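Java has no malloc()/free(), but the EMM idea can be sketched in Java with a toy allocator that hands out offsets into a fixed-capacity heap and relies on the caller to free them explicitly (all class and method names here are illustrative, not a real API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

/** A toy explicit allocator: first-fit over a map of free holes (offset -> size). */
public class ToyHeap {
    private final TreeMap<Integer, Integer> holes = new TreeMap<>(); // free: offset -> size
    private final Map<Integer, Integer> used = new HashMap<>();      // allocated: offset -> size

    public ToyHeap(int capacity) {
        holes.put(0, capacity);
    }

    /** Like malloc(): first-fit search; returns an offset, or -1 if no hole is big enough. */
    public int alloc(int size) {
        Integer target = null;
        for (Map.Entry<Integer, Integer> e : holes.entrySet()) {
            if (e.getValue() >= size) { target = e.getKey(); break; }
        }
        if (target == null) return -1;  // exhaustion, or external fragmentation
        int holeSize = holes.remove(target);
        if (holeSize > size) holes.put(target + size, holeSize - size);
        used.put(target, size);
        return target;
    }

    /** Like free(): the caller must remember to call this, or the block leaks. */
    public void free(int offset) {
        Integer size = used.remove(offset);
        if (size != null) holes.put(offset, size);  // (this toy does not coalesce neighbors)
    }
}
```

Forgetting to call free() is exactly the memory-leak hazard EMM is blamed for, and alloc() returning -1 even though enough total memory remains is the external fragmentation discussed below.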

The second is automatic memory management (AMM).
AMM is also called garbage collection. The Java programming language implements AMM; unlike EMM, the run-time system keeps track of the allocated memory and, once a block is no longer used, reclaims it.

Whether EMM or AMM, every heap management scheme faces some common problems and classic flaws:
1) Internal fragmentation
Internal fragmentation appears when memory inside an allocated block is wasted, because a memory request can cause the allocated block to be too large. For example, the program requests 128 bytes of storage and the run-time system allocates 512.

2) External fragmentation
External fragmentation occurs when a series of memory requests and releases leaves several free memory blocks, none of which is individually large enough to serve a new request.

3) Location-based latency
Latency problems occur when two related data values are stored far apart from each other, increasing access times.

EMM is often faster than AMM. A comparison of EMM and AMM:
——————————————————————————————————————
           EMM                                     AMM
——————————————————————————————————————
Benefits   smaller, faster, easier to control      lets you stay focused on domain issues
Costs      complexity, bookkeeping, memory         performance can suffer
           leaks, dangling pointers
——————————————————————————————————————

Early garbage collectors were very slow, often consuming 50% of execution time.

Garbage collection theory originated in 1959, when Dan Edwards implemented the first garbage collector during the development of the Lisp programming language.

There are three basic, classic garbage collection algorithms:

1) Reference counting
The basic idea: when an object is created and assigned, its reference counter is set to 1; whenever the object is assigned to another variable, the count is incremented by 1; whenever a reference goes out of scope, the count is decremented by 1. Once the count reaches 0, the object can be garbage collected.
Reference counting has an advantage: the cost is spread out as a small amount of work on each operation, a natural fit for real-time systems that cannot tolerate long interruptions.
It also has shortcomings: it cannot detect cycles (two objects referencing each other), and every increment or decrement of the count takes time.
In modern garbage collection algorithms, plain reference counting is rarely used on its own.
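The counting rules, and the cycle flaw, can be sketched in a few lines of plain Java (all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

/** Toy reference-counted node to illustrate the counting rules and the cycle flaw. */
public class RcNode {
    private int refCount = 1;          // created and assigned: count starts at 1
    private final List<RcNode> fields = new ArrayList<>();
    boolean collected = false;

    void retain() { refCount++; }      // another variable or field now refers to us
    void release() {                   // a reference went out of scope
        if (--refCount == 0) {
            collected = true;          // "garbage collected"
            for (RcNode child : fields) child.release();
        }
    }
    void addField(RcNode child) { child.retain(); fields.add(child); }

    public static void main(String[] args) {
        // Simple case: collection happens as soon as the last reference dies.
        RcNode a = new RcNode();
        a.release();
        System.out.println("a collected: " + a.collected);   // prints "a collected: true"

        // Cycle: x and y reference each other, so releasing the outside
        // references leaves both counts at 1, and neither is ever collected.
        RcNode x = new RcNode();
        RcNode y = new RcNode();
        x.addField(y);
        y.addField(x);
        x.release();
        y.release();
        System.out.println("cycle collected: " + (x.collected || y.collected)); // prints "cycle collected: false"
    }
}
```

x and y each still hold a reference to the other, so both counters stay at 1 forever; this is exactly why plain reference counting leaks cycles.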

2) Mark-sweep
The basic idea is to start from the root set, trace every reachable object (the live objects), and mark each one; when tracing is complete, every unmarked object is garbage that can be reclaimed.
Also called a tracing algorithm, it is based on marking and sweeping. Collection proceeds in two phases: in the mark phase, the collector traverses the whole reference graph and marks every object it encounters; in the sweep phase, unmarked objects are freed and their memory made available again.
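The two phases can be sketched over a toy object graph in plain Java (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Toy mark-sweep over an explicit object graph. */
public class MarkSweep {
    static class Obj {
        final String name;
        final List<Obj> refs = new ArrayList<>();
        boolean marked;
        Obj(String name) { this.name = name; }
    }

    /** Mark phase: trace everything reachable from a root. */
    static void mark(Obj obj) {
        if (obj.marked) return;
        obj.marked = true;
        for (Obj r : obj.refs) mark(r);
    }

    /** Sweep phase: everything unmarked in the heap is garbage. */
    static Set<Obj> sweep(List<Obj> heap) {
        Set<Obj> garbage = new HashSet<>();
        for (Obj o : heap) {
            if (!o.marked) garbage.add(o);
            o.marked = false;   // reset for the next collection cycle
        }
        return garbage;
    }

    public static void main(String[] args) {
        Obj root = new Obj("root"), live = new Obj("live"), dead = new Obj("dead");
        root.refs.add(live);
        List<Obj> heap = List.of(root, live, dead);

        mark(root);
        for (Obj g : sweep(heap)) System.out.println("collected: " + g.name); // prints "collected: dead"
    }
}
```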

3) Copying collection
The basic idea: memory is divided into two areas, one currently in use and one currently unused. Allocation always happens in the in-use area; when no memory is left there, the live objects in it are marked and copied into the currently unused area, and then the two areas swap roles: the current in-use area becomes unused and vice versa, and the algorithm continues.
A copying collector must stop all program activity and then carry out lengthy, busy copying work. That is its disadvantage.
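A semispace copying collector can be sketched in the same toy style (illustrative only; real collectors also install forwarding pointers so that references to moved objects get fixed up, which is omitted here):

```java
import java.util.ArrayList;
import java.util.List;

/** Toy semispace copying collector: live objects are copied, garbage is simply abandoned. */
public class CopyingHeap {
    private List<String> fromSpace = new ArrayList<>();  // currently in use
    private List<String> toSpace = new ArrayList<>();    // currently unused

    /** Allocate in the currently active semispace; returns the object's slot index. */
    public int alloc(String obj) {
        fromSpace.add(obj);
        return fromSpace.size() - 1;
    }

    /**
     * Collect: copy every live object (here, the ones named by root indices)
     * into the unused semispace, then swap the roles of the two spaces.
     * Everything that was not copied is reclaimed in one stroke.
     */
    public void collect(List<Integer> roots) {
        toSpace.clear();
        for (int root : roots) {
            toSpace.add(fromSpace.get(root));
        }
        List<String> tmp = fromSpace;  // swap: the to-space becomes the active space
        fromSpace = toSpace;
        toSpace = tmp;
    }

    public int liveCount() { return fromSpace.size(); }
}
```

After collect(), the survivors sit compactly at the start of the new active space, which is why copying collectors avoid fragmentation; the price is stopping the program while copying.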

Two further algorithms have emerged in recent years:

1) Generational garbage collection
The idea is based on two observations:
(1) Most objects created by most programs have very short lifetimes.
(2) Some objects created by most programs have very long lifetimes.
The main drawback of the simple copying algorithm is that it spends time copying long-lived objects again and again.
The basic idea of the generational algorithm is to divide memory into two (or more) areas, one for the young generation and one for the old generation. Because of their different characteristics, the young generation is collected frequently and the old generation rarely. Each young-generation collection leaves some surviving live objects; each survival increases an object's maturity, and when its maturity reaches a threshold the object is moved into the old generation's memory block.
The generational algorithm realizes dynamic garbage collection very well while avoiding memory fragmentation; it is the algorithm used by many JVMs today.
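The promotion mechanism can be sketched in plain Java (the maturity threshold and all names are arbitrary choices for illustration):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

/** Toy generational heap: survivors of young-generation collections gain maturity and are promoted. */
public class GenHeap {
    static class Obj { int age = 0; }

    final List<Obj> young = new ArrayList<>();
    final List<Obj> old = new ArrayList<>();
    static final int PROMOTE_AGE = 2;  // maturity threshold, arbitrary for this sketch

    /** New objects are always allocated in the young generation. */
    Obj alloc() {
        Obj o = new Obj();
        young.add(o);
        return o;
    }

    /** Collect only the young generation: dead objects vanish, survivors age and may be promoted. */
    void minorGc(Set<Obj> live) {
        for (Iterator<Obj> it = young.iterator(); it.hasNext(); ) {
            Obj o = it.next();
            if (!live.contains(o)) {
                it.remove();                 // reclaimed cheaply, without touching the old generation
            } else if (++o.age >= PROMOTE_AGE) {
                it.remove();
                old.add(o);                  // mature object promoted to the old generation
            }
        }
    }
}
```

Because most objects die young, most collections only have to scan the small young area; long-lived objects stop being re-copied once promoted.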

2) Conservative garbage collection (Conservative)

Which algorithm is best? The answer is that there is no single best one.

EMM, which is also in very common use, has five basic strategies:
1) Table-driven algorithms
Table-driven algorithms divide memory into a collection of fixed-size blocks, indexed through an abstract data structure: for example, one bit per block, with 0 and 1 indicating free or allocated. Disadvantages: the bitmap's size depends on the block size, and searching for a run of free blocks may require scanning the entire bitmap, which hurts performance.
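The bitmap idea can be sketched with java.util.BitSet, one bit per fixed-size block (class and method names are illustrative); the linear scan in alloc() is exactly the performance weakness just mentioned:

```java
import java.util.BitSet;

/** Toy table-driven allocator: one bit per fixed-size block (set = allocated). */
public class BitmapAllocator {
    private final BitSet map;
    private final int numBlocks;

    public BitmapAllocator(int numBlocks) {
        this.numBlocks = numBlocks;
        this.map = new BitSet(numBlocks);
    }

    /** Find a run of n consecutive free blocks; may have to scan the whole table. */
    public int alloc(int n) {
        for (int start = 0; start + n <= numBlocks; start++) {
            int next = map.nextSetBit(start);
            if (next == -1 || next >= start + n) {   // blocks start..start+n-1 are all free
                map.set(start, start + n);
                return start;
            }
            start = next;   // skip past the allocated bit (the loop ++ moves to next+1)
        }
        return -1;
    }

    public void free(int start, int n) {
        map.clear(start, start + n);
    }
}
```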

2) Sequential fit
Sequential-fit algorithms allow memory to be divided into blocks of different sizes. The allocator tracks the allocated and free blocks in the heap, recording the start and end addresses of the free blocks. There are three sub-categories:
(1) First fit: allocate the first block found that satisfies the request.
(2) Best fit: allocate the block that best matches the request (the smallest sufficient one).
(3) Worst fit: allocate the largest block to the request.
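The three policies differ only in which free block they pick; here is a minimal plain-Java sketch (illustrative) over a list of free-block sizes:

```java
import java.util.List;

/** Compare first-fit, best-fit, and worst-fit block selection over free-block sizes. */
public class FitPolicies {
    /** First fit: the first block large enough. */
    public static int firstFit(List<Integer> freeSizes, int request) {
        for (int i = 0; i < freeSizes.size(); i++)
            if (freeSizes.get(i) >= request) return i;
        return -1;
    }

    /** Best fit: the smallest block that still satisfies the request. */
    public static int bestFit(List<Integer> freeSizes, int request) {
        int best = -1;
        for (int i = 0; i < freeSizes.size(); i++) {
            int s = freeSizes.get(i);
            if (s >= request && (best == -1 || s < freeSizes.get(best))) best = i;
        }
        return best;
    }

    /** Worst fit: the largest block overall (if it fits). */
    public static int worstFit(List<Integer> freeSizes, int request) {
        int worst = -1;
        for (int i = 0; i < freeSizes.size(); i++) {
            int s = freeSizes.get(i);
            if (s >= request && (worst == -1 || s > freeSizes.get(worst))) worst = i;
        }
        return worst;
    }

    public static void main(String[] args) {
        List<Integer> holes = List.of(100, 40, 500, 64);
        System.out.println(firstFit(holes, 60)); // prints 0: the 100-byte hole comes first
        System.out.println(bestFit(holes, 60));  // prints 3: the 64-byte hole wastes least
        System.out.println(worstFit(holes, 60)); // prints 2: the 500-byte hole is largest
    }
}
```

First fit is the cheapest to compute; best fit wastes the least per request but tends to leave many tiny unusable holes; worst fit leaves large usable remainders.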

3) Buddy systems
The main purpose of the buddy-system algorithm is to speed up the merging of freed blocks back into larger ones. Explicit memory management (EMM) using the buddy-system algorithm may result in internal fragmentation.

4) Segregated storage
Segregated storage divides the heap into multiple zones and adopts a different memory management plan for each zone. This is a very effective method.

5) Sub-allocators
Sub-allocation tries to solve allocation problems by taking a large chunk of memory from the run-time system and managing it separately. In other words, the program is fully responsible for allocating and reclaiming memory within its own private heap, without help from the run-time system. This may add complexity, but it can improve performance significantly. In his 1990 book "Compiler Design in C", Allen Holub made heavy use of sub-allocators to speed up his compiler.

Note that explicit memory management (EMM) must be flexible, able to respond to several different kinds of request.

Finally, EMM or AMM? It is almost a religious question of personal preference. EMM achieves speed and control at the cost of complexity; AMM sacrifices some performance in exchange for simplicity.

Whether memory is allocated by EMM or AMM, the OOM problem remains possible, because loading too much into memory cannot be prevented by either scheme.

Four. Optimize heap memory allocation for the Dalvik virtual machine

This is the same technique described earlier: use the dalvik.system.VMRuntime class to call setTargetHeapUtilization (for example with 0.75f, set in the program's onCreate) to improve heap processing efficiency, and setMinimumHeapSize (for example 6 * 1024 * 1024 for a 6 MB minimum heap) to enforce our software's own heap size. Under memory pressure we can also interfere with the GC manually.

Note that this Dalvik virtual machine configuration is no longer valid from Android 4.0 onward.

That is my view of Android OOM.

