Analyzing a Tomcat Memory Overflow with Eclipse Memory Analyzer
Original: http://tivan.iteye.com/blog/1487855
Objective
In everyday development, in testing, or even in production, we sometimes run into an OutOfMemoryError: Java heap space, which indicates a serious problem in the program. We need to find the cause of the OutOfMemoryError. There are generally two situations:
1. Memory leak: the objects are logically dead, but the garbage collector cannot reclaim them automatically because something still references them. Find the leaking code and the cause of the leak, then decide on a fix.
2. Memory overflow: the objects in memory are all still legitimately needed, which means the allocated Java heap is simply too small. Check the heap settings (-Xmx and -Xms), and check the code for objects whose lifecycle is too long or that hold state for too long.
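To make the error concrete, here is a minimal, deliberately broken sketch (not from the original post; the class name is made up) that keeps every allocation reachable until the heap runs out:

```java
import java.util.ArrayList;
import java.util.List;

// Deliberately exhausts the Java heap to demonstrate the error;
// run with a small heap, e.g. java -Xmx32m HeapExhaustionDemo
public class HeapExhaustionDemo {
    public static void main(String[] args) {
        List<byte[]> hoard = new ArrayList<>();
        while (true) {
            // Each allocation stays reachable via the list, so the GC
            // cannot reclaim it; eventually the JVM throws
            // java.lang.OutOfMemoryError: Java heap space
            hoard.add(new byte[1024 * 1024]);
        }
    }
}
```

Whether this counts as a leak (case 1) or as undersizing (case 2) depends on whether the program genuinely needs everything it keeps reachable.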
The above is the general approach to Java heap problems. As for how to analyze them, what follows is the process of analyzing one with the Eclipse Memory Analyzer Tool (MAT).
Generate a Dump File
The JVM parameter -XX:+HeapDumpOnOutOfMemoryError makes the JVM write a heap dump snapshot when a memory overflow occurs.
Alternatively, use jmap to produce the dump file: on Windows, find the Tomcat process PID in Task Manager; on Linux, find it with the ps command; then run jmap (Java 5: jmap -heap:format=b <pid>; Java 6: jmap -dump:format=b,file=heapdump.bin <pid>).
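Spelled out as commands (a sketch; jps, which ships with the JDK, is another way to find the PID):

```sh
# find the Tomcat JVM's process id (Task Manager and ps work too)
jps -l

# Java 6+: write a binary heap dump that MAT can open
jmap -dump:format=b,file=heapdump.bin <pid>
```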
Here is my case: the project is in production, and after running for about three weeks it reports an OutOfMemoryError. (PS: this project has been around for a long time, and our previous practice was to restart Tomcat periodically rather than analyze the cause.) It runs on a 64-bit JDK with the main parameters -Xmx3078m -Xms3078m -XX:PermSize=1024m -XX:MaxPermSize=1024m, so the memory is already quite large.
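For reference, those settings can be combined with automatic dump-on-OOM like this (a sketch; using Tomcat's setenv.sh and the dump path are my assumptions, not details from the post):

```sh
# $CATALINA_HOME/bin/setenv.sh (assumed location)
# HeapDumpPath controls where the .hprof file is written on OOM
export CATALINA_OPTS="-Xmx3078m -Xms3078m \
  -XX:PermSize=1024m -XX:MaxPermSize=1024m \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/tmp/tomcat-heapdump.hprof"
```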
MAT Installation and Introduction
Download: http://www.eclipse.org/mat/downloads.php
Open the heap dump file in MAT. The overview page that appears exposes most of MAT's main features:
1. Histogram lists the classes in memory, the number of instances of each, and their sizes.
2. Dominator tree lists how much space each thread retains and the objects kept alive under it.
3. Top consumers shows the largest objects as charts.
4. Leak suspects automatically analyzes the dump and reports the likely cause of the leak.

Histogram: Objects is the number of instances of the class. Shallow size is the size of the object itself, excluding the objects it references, i.e. the object header plus the member fields (the fields themselves, not the values they point to). Retained size is the object's own shallow size plus the shallow sizes of all objects reachable only through it, directly or indirectly; in other words, the retained size is the total memory that would be freed once the object is garbage collected. Here we find that instances of java.lang.ThreadLocal and of the bingo.persister.dao.Daos class take up a lot of space.
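A rough illustration of shallow versus retained size (a sketch with made-up classes, not taken from the dump):

```java
// Hypothetical classes for illustrating MAT's two size metrics.
class Session {
    long id;        // counted in Session's shallow size
    byte[] buffer;  // only the reference is shallow; the array itself is not
}

// Suppose buffer points at a 1 MB array that nothing else references:
//   shallow size of Session  = object header + long + one reference
//   retained size of Session = shallow size + ~1 MB,
// because collecting the Session would also free the buffer.
```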
Dominator tree: here we find that the Quartz scheduler's worker threads (10 of them) retain a large amount of memory.
Top consumers: this view shows which objects in memory are the largest, which classes they belong to, and which class loaders loaded them. Sometimes we can see where the code leaks from this view alone.
Leak suspects: in the pie chart, the dark area is the suspected memory leak. The whole heap is about 250 MB, and the dark area accounts for 34% of it. The description below tells us that the Quartz threads occupy a lot of memory, indicates that instances of "java.lang.ThreadLocal", loaded by the system class loader, are accumulating and consuming the space, and recommends investigating with the keyword "java.lang.ThreadLocal$ThreadLocalMap$Entry[]". So MAT lays the problem out in a simple report.
In Leak suspects, click "Details »" under Problem Suspect 1, then in the context menu choose List objects > with outgoing references to see which objects the ThreadLocal references.
Now we can see the objects referenced from the ThreadLocal, such as the DAO objects.
PS: the DAO object contains a lightweight ORM mapping, so its retained size is relatively large.
Next, look at the GC roots of the DAO: in the context menu choose Path to GC Roots > exclude weak references. Weak references are filtered out because they are not the key to the problem here; the collector can clear them, so only a strong reference chain can actually keep the DAO alive.
From there we can see that the reference to the Daos is held inside org.quartz.simpl.SimpleThreadPool. So we can conclude that the large number of Daos objects produced while the scheduler runs is the memory leak. But why are there so many Daos instances? Isn't the DAO a stateless singleton that can be reused? Checking the Spring configuration file reveals that the Daos bean is configured with scope="prototype", which makes the scheduled task produce a new Daos instance on every invocation. Since the Daos is stateless, changing it to a singleton solves the problem.
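Reduced to a sketch, the leak pattern looks something like this (everything except Daos and the Quartz types is made up for illustration; Quartz's SimpleThreadPool reuses its worker threads for the scheduler's lifetime, so anything a job parks in a ThreadLocal stays strongly reachable from the thread):

```java
import org.quartz.Job;
import org.quartz.JobExecutionContext;

// Each prototype-scoped Daos carries its OWN ThreadLocal, so every run
// adds another entry to the long-lived worker thread's ThreadLocalMap.
class Daos {
    // e.g. the lightweight ORM's per-thread session holder
    private final ThreadLocal<Daos> sessionHolder = new ThreadLocal<>();

    void openSession() {
        // strong chain: worker Thread -> ThreadLocalMap -> Entry.value -> Daos;
        // the Daos also strongly references the ThreadLocal key, so the
        // entry is never expunged and the instance is never reclaimed
        sessionHolder.set(this);
    }
}

public class ReportJob implements Job {
    @Override
    public void execute(JobExecutionContext ctx) {
        // scope="prototype": a FRESH Daos per scheduled run, standing in
        // for applicationContext.getBean("daos", Daos.class)
        Daos daos = new Daos();
        daos.openSession();
        // the pooled worker thread outlives the job, so entries accumulate
    }
}
```

The fix from the post, in Spring XML terms (the bean id and exact XML are reconstructed; the class name and scope attribute are as reported by MAT and the configuration file):

```xml
<!-- before: a new Daos for every scheduled invocation -->
<bean id="daos" class="bingo.persister.dao.Daos" scope="prototype"/>

<!-- after: default singleton scope, one shared stateless instance -->
<bean id="daos" class="bingo.persister.dao.Daos"/>
```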
The above is the whole analysis: using MAT on the Tomcat application's heap dump, we found the cause of the memory leak and resolved it.