Java Memory Leak

Source: Internet
Author: User

A J2EE product of ours recently experienced a memory leak in the production environment. The following are the main steps of our diagnosis:

1. We downloaded the GC logs and found that memory usage had been normal for the past year, showing the usual sawtooth pattern: when usage rose, GC reclaimed it and brought it back down to a lower level. Starting around January, however, memory began to grow slowly. It is worth noting that this product had not changed at all since last year, neither in code nor in configuration.

By the beginning of March, memory was growing rapidly, and running continuously for more than a week would end in an OutOfMemory error.

Fortunately, this product only needs to provide 24x6 service, so restarting every weekend did not affect users.

2. When memory usage was high, we generated a heap dump to see which objects occupied the most memory, and found that instances of the following class accounted for a large share:

java.awt.EventQueueItem

We searched the code for where this class is used and found the following:

This is a piece of code that communicates with a third party. Basically, the third party sends us a large number of events to handle; the current logs show roughly 100 events per second. Each time an event is sent, our onEventNotify callback is invoked, and we then need to call doProcess, an interface in the jar provided by the third party. The third-party code does its own internal processing and then calls back into an onProcess method that we implement to actually handle the event.
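To make the flow easier to follow, here is a rough sketch of the callback chain based on the description above; only onEventNotify, doProcess, and onProcess come from the original text, while the interface name, parameter types, and wiring are illustrative assumptions.

// Hypothetical shape of the third-party callback API, reconstructed from the
// description above. Names other than onEventNotify/doProcess/onProcess are
// assumptions for illustration, not the vendor's real types.
public interface ThirdPartyCallbacks {

    // Called by the vendor library roughly 100 times per second, once per event;
    // the vendor expects it to return quickly.
    void onEventNotify(Object eventHandle);

    // Implemented by us and called back from inside the vendor's doProcess(),
    // which pulls the pending event from the vendor's internal queue.
    void onProcess(Object event);
}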

// The flow seems rather convoluted. The third party is a foreign company, and this part of our code was originally written by foreign colleagues. I have never figured out why notification and processing had to be separated; couldn't it simply be done directly?!

onEventNotify(***)
{
    java.awt.EventQueue.invokeLater(new Runnable() {
        public void run() {
            doProcess();
        }
    });
}

There is clearly a problem with this code. Looking at what invokeLater does: it wraps the newly created Runnable in a java.awt.event.InvocationEvent, posts that event to the system's default java.awt.EventQueue, and lets the java.awt.EventDispatchThread schedule it, executing each queued run() method one by one.

First of all, this means that a new Runnable object is created for every incoming event, and then an InvocationEvent object on top of it, which consumes a large amount of memory.

But why was there no problem before? This same version had been running normally on our development server for several months.

Through testing, log analysis, and comparison across the different environments, we finally worked out from the log timestamps that doProcess is now too slow to keep up: events cannot be processed in time, so large numbers of Runnable and InvocationEvent objects accumulate in the java.awt.EventQueue, and that is where the memory leak occurs.

The older application logs are no longer available, so we cannot tell whether there were this many events back then.

In the development environment, the data source we connect to is the third party's own development environment, so much less data arrives than in production. In addition, a third-party time-related configuration parameter differs from the production server, which makes doProcess faster in development. Processing therefore keeps up with the rate of incoming events, Runnable and InvocationEvent objects do not accumulate, and there is no memory leak.

I wrote a small test program to demonstrate how the memory builds up. The output of a run on my machine is:

Thu Dec 09 07:19:38 CST 2010 before memory: 2031616

Thu Dec 09 07:19:38 CST 2010 after memory: 3432448

Thu Dec 09 07:20:22 CST 2010 now system exit

As you can see, roughly 1.4 MB of memory (3432448 - 2031616 bytes) built up during this short burst. In the real production environment events are sent continuously and processing always lags behind, so memory keeps accumulating until OutOfMemory.

The test code is as follows:

package com.hetaoblog.demo;

import java.util.Date;

public class MemoryLeakTest {

    public static int n = 1;
    static int count = 20000;

    public static void main(String[] args) {

        System.out.println(new Date() + " before memory: " + Runtime.getRuntime().totalMemory());

        for (int i = 0; i < count; ++i) {
            java.awt.EventQueue.invokeLater(new Runnable() {

                public void run() {
                    try {
                        Thread.sleep(1);    // simulate a doProcess() that is too slow
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }

                    n++;

                    if (n == count) {
                        System.out.println(new Date() + " now system exit");
                    }
                }
            });
        }

        System.out.println(new Date() + " after memory: " + Runtime.getRuntime().totalMemory());
    }
}

-------------------------------------------------------------------------------
Yesterday I wrote about the memory leak problem. The code in question uses java.awt.EventQueue.invokeLater(new Runnable() ...).
Of course, the original author's intention was to receive events and process them asynchronously, to improve performance: according to the third-party API, onEventNotify() should return as quickly as possible, otherwise subsequent events are blocked.

onEventNotify(***)
{
    java.awt.EventQueue.invokeLater(new Runnable() {
        public void run() {
            doProcess();
        }
    });
}

There are two major issues with this code:
1. Every call to java.awt.EventQueue.invokeLater creates a new java.awt.event.InvocationEvent object;
2. The anonymous Runnable could be defined once and reused; as written, a new Runnable instance is created for every single event, wasting a lot of memory (see the sketch below).
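As a small illustration of issue 2 only, and under the assumption that doProcess() takes no per-event arguments (as described in the first post), the Runnable could be a single shared instance:

// Illustrative sketch only: sharing one stateless Runnable removes the per-event
// Runnable allocation (issue 2), but invokeLater still creates one
// InvocationEvent per call (issue 1).
public class SharedProcessorExample {

    private static final Runnable PROCESSOR = new Runnable() {
        public void run() {
            doProcess();   // the third-party call pulls the event from its own queue
        }
    };

    public void onEventNotify(Object eventHandle) {
        java.awt.EventQueue.invokeLater(PROCESSOR);
    }

    private static void doProcess() {
        // stand-in for the third-party jar's doProcess()
    }
}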

The memory leak happens when events are frequent and doProcess takes too long, so that large numbers of java.awt.event.InvocationEvent objects pile up.
The solution is therefore clear:
1. Either speed up processing so that events do not accumulate,
2. Or switch to a different asynchronous mechanism that avoids creating a java.awt.event.InvocationEvent for every event.

We pointed the development environment at the third party's QA environment so that the data volume matched production and the memory leak could be reproduced. After analysis, debugging, and testing, the following solutions ran well in the development environment: memory stayed stable and both the GC logs and the heap dump looked normal, resolving the memory leak:
1. Modify two configuration parameters related to the third-party API, which speeds up doProcess; even at the current event rate, events no longer accumulate;
2. Because event processing is the core of this module, and events will inevitably keep arriving while the program runs, change the code as follows:
onEventNotify(***)
{
    // do nothing here
}
Then, at program startup, a dedicated thread is started that keeps calling doProcess.
This works because onEventNotify is only a notification; the real event data sits in a separate internal queue, which is why the callback can be left empty. If the processing did depend on the callback's parameters, you could instead hand the work off to a thread pool from inside onEventNotify.
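A minimal sketch of that arrangement, assuming (as in our system) that the third-party doProcess() blocks on or polls the vendor's internal event queue; class and member names other than doProcess and onEventNotify are invented for illustration:

import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the "empty notification + dedicated worker thread" approach described above.
public class EventWorker {

    private final AtomicBoolean running = new AtomicBoolean(true);

    // Called once at application startup.
    public void start() {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                while (running.get()) {
                    doProcess();   // stand-in for the third-party doProcess() call
                }
            }
        }, "event-worker");
        worker.start();
    }

    public void stop() {
        running.set(false);
    }

    // The notification callback now does nothing, exactly as in the fixed code above.
    public void onEventNotify(Object eventHandle) {
        // intentionally empty; the worker thread drives doProcess() continuously
    }

    private void doProcess() {
        // stand-in for the third-party jar's doProcess()
    }
}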
-------------------------------------------------------------------------------
I wrote two posts about this Java memory leak (diagnosis and code analysis, then the solution). When I reposted them to the Shuimu BBS there was some discussion, so let me address a few of the points raised.

1. The following view is the one I agree with most: the foreign author on the other side seems to have written too much GUI code, and that is probably how this line, java.awt.EventQueue.invokeLater(new Runnable()), ended up here:
Kobe2000:
How can a non-GUI program use invokeXXX for asynchronous execution?

Icespace:
Why does a server program need to put doProcess on the AWT thread?
There is no GUI here at all.
The likely result is loading a pile of AWT classes and forcing the system to run a pile of useless GUI event machinery.

These events should be taken off a queue by a set of worker threads,
not handled with a pile of new Runnable instances.

hetaoblog's note:
The following is the JDK source of java.awt.EventQueue.invokeLater:

public static void invokeLater(Runnable runnable) {
    Toolkit.getEventQueue().postEvent(
        new InvocationEvent(Toolkit.getDefaultToolkit(), runnable));
}

As you can see, every call creates a new InvocationEvent object on top of the caller's new Runnable, wasting a lot of memory and CPU time by constantly allocating new objects.
The right approach is to create a background thread or a thread pool to handle these events.
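For example, here is a sketch of that suggestion using a standard ThreadPoolExecutor; the pool size, the queue capacity, and the assumption that doProcess() tolerates concurrent calls are my illustrative choices, not taken from the original posts:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of the "worker thread pool" suggestion from the discussion above.
public class EventProcessorPool {

    // A bounded queue plus CallerRunsPolicy gives back-pressure instead of the
    // unbounded growth you get with the AWT EventQueue.
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 4, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(10000),
            new ThreadPoolExecutor.CallerRunsPolicy());

    public void onEventNotify(Object eventHandle) {
        pool.execute(new Runnable() {
            public void run() {
                doProcess();   // stand-in for the third-party doProcess() call
            }
        });
    }

    private void doProcess() {
        // stand-in for the third-party jar's doProcess()
    }
}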

2. This is not a memory leak.

It is actually hard to define this strictly. My working understanding is that if memory keeps growing without bound, it is a memory leak.
Here, because the memory growth is tied to processing speed, it can be seen as a memory problem caused by a performance problem: if incoming events dropped off sharply for a while, the backlog could be drained and memory would recover. In reality, though, the event rate stays steady, so the queued events make memory grow slowly but continuously. This is admittedly different from the classic leak described in textbooks, where a logic error means the objects can never be released at all.
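To see that difference concretely with the MemoryLeakTest above, a small follow-up (my sketch, not part of the original test) can wait for the queue to drain and check memory again; used memory drops back once the backlogged InvocationEvents have been dispatched and collected:

package com.hetaoblog.demo;

import java.awt.EventQueue;

// Follow-up check: after MemoryLeakTest has posted its events, block until the
// AWT queue has dispatched everything posted before this call, then re-check memory.
public class DrainCheck {

    public static void main(String[] args) throws Exception {
        MemoryLeakTest.main(args);   // posts 20000 events, each sleeping ~1 ms on the EDT

        // invokeAndWait returns only after every previously posted event has run.
        EventQueue.invokeAndWait(new Runnable() {
            public void run() {
                // marker event; nothing to do
            }
        });

        System.gc();
        Runtime rt = Runtime.getRuntime();
        System.out.println("used memory after drain: " + (rt.totalMemory() - rt.freeMemory()));
    }
}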

3. Some people proposed persisting the events to a database and processing them later.
In my opinion this is completely unworkable, mainly because:
A. A system that keeps sending a large volume of events generally needs them handled as soon as possible, otherwise the business suffers; if not, there would be no reason to keep sending them in the first place. That is exactly the case for this system, so saving the events to a database and processing them slowly later defeats the purpose.
B. It adds database load for no good reason. As mentioned in the first post, the current volume is about 100 events per second, which would mean roughly 100 inserts per second plus reading those events back out for processing; that is far too much extra load on the database.
C. In addition, although I have not actually verified it, I cautiously suspect the database could not even keep up if events were saved one at a time: 100 events per second leaves an average of 10 ms per event. In my project experience an insert costs roughly one disk I/O, and on Oracle 10g a disk I/O takes close to 10 ms; when the database is under heavy pressure, even 10 ms is not guaranteed. And since the events are not evenly spaced, at peak times writing them to the database would itself become the bottleneck.

4. In the solution post I listed two approaches:
A. Either speed up processing so that events do not accumulate,
B. Or switch to a different asynchronous mechanism that avoids creating a java.awt.event.InvocationEvent every time.
The following is piggestbaby's reply:

1. Either speed up processing so that events do not accumulate,
2. Or switch to a different asynchronous mechanism to avoid creating a java.awt.event.InvocationEvent every time.

1. The right direction.
2. Are you sure its effect is not being masked by improvement 1?

If only 1 is applied, the problem behind 2 can still occur.
There may be two reasons:
A. The AWT event dispatcher is much less efficient than processing the data serially and directly.
B. The InvocationEvent object is bigger than an entry in your system's internal queue, so it occupies more memory, and constantly allocating new objects is less efficient.

My answer: 1 and 2 were tested separately, so neither one masks the other.
In fact, testing showed that as long as processing keeps up, even the flawed java.awt.EventQueue.invokeLater approach does not accumulate objects, and there is no memory leak.
-----------------------------------------------------------------
As mentioned in part 2 of this series (the solution), the main fix was to speed up event processing. One puzzle was that the same version of the code always processed at different speeds in our DEV environment than in QA/UAT/PROD, even though all the related configuration (mainly the data sources) was identical; the difference remained large.

We then noticed that DEV runs Linux 2.6 while the other environments run Linux 2.4. That is a big difference, but at first it was not clear how 2.4 versus 2.6 could affect our Java code. Later we found that:

the single line wait(5) performs very differently on Linux 2.4 and Linux 2.6.

// Of course, based on the analysis, this step could simply have been skipped..

At one step in the processing, code like the following may run, i.e. it waits for a short period of time:


synchronized (o)
{
    try {
        o.wait(5);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

According to the JDK documentation, wait(5) returns when another thread calls notify() or notifyAll() on this object; if neither is called, it returns once the timeout expires, in this case after 5 ms.

However, testing shows that this code can perform very differently on Linux 2.4 versus Linux 2.6.

Test method: run the following WaitTimeTest code 500 times, then use a script to compute the average time.

package com.hetaoblog.demo;

public class WaitTimeTest {

    public static void main(String[] args) {

        for (int i = 0; i < 100; ++i) {
            long t1 = System.currentTimeMillis();

            WaitTimeTest o = new WaitTimeTest();

            synchronized (o) {
                try {
                    o.wait(5);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }

            System.out.println("wait(5) used time: " + (System.currentTimeMillis() - t1));
        }
    }
}
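If the averaging script is not handy, a small variant (my sketch, not from the original post) can accumulate the timings and print the average in-process:

package com.hetaoblog.demo;

// Variant of WaitTimeTest that computes the average wait(5) time directly.
public class WaitTimeAverage {

    public static void main(String[] args) throws InterruptedException {
        final int runs = 500;
        long total = 0;

        Object o = new Object();
        for (int i = 0; i < runs; ++i) {
            long t1 = System.currentTimeMillis();
            synchronized (o) {
                o.wait(5);   // no notify() anywhere, so this always waits out the 5 ms timeout
            }
            total += System.currentTimeMillis() - t1;
        }

        System.out.println("average wait(5) time: " + (total / (double) runs) + " ms");
    }
}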

Test environment:

JRE: J2RE 5.0 IBM J9 2.3 Linux x86
OS: Red Hat AS series
CPU: four x86 CPUs

Test results (average wait(5) time, in ms):

Linux 2.4: 14.8
Linux 2.6: 6.1

My understanding of the reason: from Linux 2.4 to 2.6 the kernel scheduler changed from O(n) to O(1) and the new NPTL thread library was adopted, which improves 2.6's performance for multi-threaded workloads on multi-CPU machines.
http://www.ibm.com/developerworks/cn/linux/l-web26/index.html
In addition, I tested Sun's JDK on my T60 running Windows XP; there, wait(5) usually takes about 16 ms.
So of these three environments, multi-threaded scheduling with the IBM JDK on Linux 2.6 performs the best :)

Author: ERDP Technical Architecture
