Recently, an application that used Amoeba to read tables containing LONGBLOB fields from several MySQL databases kept failing with a "Session was killed" error.
The cause is the read-buffer size in Amoeba. In the com.meidusa.amoeba.net.io.PacketInputStream class, maxPacketSize limits how large the buffer for the readable channel can grow. If the length of the record currently being read exceeds maxPacketSize, an error is reported, so the limit has to be raised.
Java code
/** Maximum capacity */
protected static final int MAX_BUFFER_CAPACITY = 1024 * 1024 * 2;

private int maxPacketSize = MAX_BUFFER_CAPACITY;

public int getMaxPacketSize() {
    return maxPacketSize;
}

public void setMaxPacketSize(int maxPacketSize) {
    this.maxPacketSize = maxPacketSize;
}
The default value of maxPacketSize is therefore 2 MB. However, setMaxPacketSize is never called anywhere else, so the value cannot be configured. I ended up modifying MAX_BUFFER_CAPACITY directly and repackaging the jar.
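A gentler alternative I would consider (this is my own sketch, not code that exists in Amoeba; the property name amoeba.maxPacketSize is invented) is to initialize the constant from a JVM system property, so the limit can be raised at startup without recompiling:
Java code
// Hypothetical tweak to PacketInputStream: read the cap from a system
// property, falling back to the original 2 MB default.
// "amoeba.maxPacketSize" is an invented property name, not part of Amoeba.
protected static final int MAX_BUFFER_CAPACITY =
        Integer.getInteger("amoeba.maxPacketSize", 1024 * 1024 * 2);
With that change, the limit could be raised with -Damoeba.maxPacketSize=16777216 in Amoeba's startup script.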
In addition, while SQLYog was connected to the proxy with an insufficient maxPacketSize, I noticed an interesting phenomenon. Suppose a table whose columns hold values of the following sizes per row:
field1 | field2
1 MB   | 3 MB
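Such a table can be reproduced through the proxy with plain JDBC; the following is a minimal sketch, assuming MySQL Connector/J is on the classpath, and with the URL, credentials, and table name as placeholders rather than values from the original setup:
Java code
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BlobTableRepro {
    public static void main(String[] args) throws Exception {
        // Connect through the Amoeba proxy; host, port, and credentials are placeholders.
        try (Connection c = DriverManager.getConnection(
                "jdbc:mysql://localhost:8066/test", "user", "password");
             Statement s = c.createStatement()) {
            s.execute("CREATE TABLE IF NOT EXISTS tab (field1 LONGBLOB, field2 LONGBLOB)");
            try (PreparedStatement p = c.prepareStatement("INSERT INTO tab VALUES (?, ?)")) {
                p.setBytes(1, new byte[1024 * 1024]);      // ~1 MB value for field1
                p.setBytes(2, new byte[3 * 1024 * 1024]);  // ~3 MB value for field2
                p.executeUpdate();
            }
        }
    }
}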
If I first execute: SELECT field2 FROM tab
I get the error: Lost connection to MySQL server during query
If I then execute: SELECT field1 FROM tab
SQLYog hangs with no response, and Amoeba throws an OutOfMemoryError, which made repeated attempts painful. Examining the heap dump showed that AuthingableConnectionManager and Log4j's DailyRollingFileAppender were occupying almost all of the memory.
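As an aside, the dump does not have to be taken by hand: adding the standard HotSpot flags -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=/tmp/amoeba-oom.hprof (the path is arbitrary) to Amoeba's JVM options makes the JVM write a heap dump automatically at the moment the OOM occurs, which makes this kind of analysis much less painful.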