Before I joined, the leader told me that the online production MySQL instance often has its memory blow up during the day and get killed by the OS. So on my very first day, the first tasks were to review the configuration against the online server's specs and, above all, to find out why MySQL's memory keeps growing until it explodes, even though my main line of work is neither databases nor being a DBA.
The business is basically OLTP with very simple SQL hitting disk, so although some statements are somewhat slow, after checking the slow log and Performance_schema the performance side can be ignored.
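The slow-statement check described above looks roughly like this when scripted; a minimal sketch, assuming MySQL/Percona 5.6+ with performance_schema enabled, the mysql-connector-python package, and placeholder host/credentials:

```python
# Minimal sketch: rank statement digests by average latency to confirm that
# "slow" SQL is negligible. Assumes MySQL/Percona 5.6+ with performance_schema
# enabled and mysql-connector-python; credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="monitor", password="***")
cur = conn.cursor()
cur.execute("""
    SELECT SCHEMA_NAME,
           DIGEST_TEXT,
           COUNT_STAR,
           ROUND(AVG_TIMER_WAIT / 1e12, 3) AS avg_sec,   -- picoseconds -> seconds
           ROUND(SUM_TIMER_WAIT / 1e12, 3) AS total_sec
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY AVG_TIMER_WAIT DESC
    LIMIT 20
""")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
```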
Initial findings: the application is written in Java, but the application side has never hit an OOM, nor has it shown hangs or progressive slowdown.
From Monday until this afternoon I went through practically every MySQL memory-leak report in the bug lists that could be verified by review, and tested almost every possibility I could think of, including but not limited to: a possible memory-allocation leak in CONCAT, memory growth possibly caused by the jemalloc allocator build, the vm.swappiness=0 behavior change in kernel 2.6.32 that can trigger memory problems, MySQL possibly failing to release temporary-table memory, and unbounded dict_mem growth caused by dropping/creating large numbers of tables. I verified against the MariaDB/Percona/MySQL community editions 5.5, 5.6 and 5.7, and ran the various scripts and tools that break down MySQL memory composition. No problem found ...
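Most of the "memory composition" scripts mentioned above boil down to adding the global buffers to the per-connection buffers multiplied by max_connections. A minimal sketch of that estimate, assuming mysql-connector-python and placeholder credentials; it only gives a theoretical upper bound and says nothing about allocator or dictionary growth:

```python
# Rough upper-bound estimate of MySQL memory: global buffers plus
# per-connection buffers multiplied by max_connections. This is the usual
# logic behind "memory composition" scripts; credentials are placeholders.
import mysql.connector

GLOBAL_BUFFERS = ("innodb_buffer_pool_size", "innodb_log_buffer_size",
                  "key_buffer_size", "query_cache_size")
PER_CONNECTION = ("sort_buffer_size", "read_buffer_size",
                  "read_rnd_buffer_size", "join_buffer_size",
                  "thread_stack", "tmp_table_size", "binlog_cache_size")

conn = mysql.connector.connect(host="127.0.0.1", user="monitor", password="***")
cur = conn.cursor()

def variable(name):
    cur.execute("SHOW GLOBAL VARIABLES LIKE %s", (name,))
    return int(cur.fetchone()[1])

global_total = sum(variable(v) for v in GLOBAL_BUFFERS)
per_conn_total = sum(variable(v) for v in PER_CONNECTION)
max_conn = variable("max_connections")

print("global buffers      : %.1f MB" % (global_total / 1024 / 1024))
print("per connection      : %.1f MB" % (per_conn_total / 1024 / 1024))
print("theoretical maximum : %.1f MB" %
      ((global_total + per_conn_total * max_conn) / 1024 / 1024))

cur.close()
conn.close()
```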
Tried switching MySQL's temporary tables from the MEMORY engine to InnoDB, and then to MyISAM, to check whether they were the problem; no obvious difference either way.
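A quick way to watch temporary-table behavior while the update job runs is the Created_tmp% counters, together with the settings that decide the temporary-table engine and when an implicit temporary table spills to disk. A minimal sketch, again assuming mysql-connector-python and placeholder credentials:

```python
# Quick check of temporary-table activity: Created_tmp_tables vs
# Created_tmp_disk_tables, plus the size limits and the default engine used
# for CREATE TEMPORARY TABLE without an ENGINE clause.
# Credentials are placeholders; mysql-connector-python is assumed.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="monitor", password="***")
cur = conn.cursor()

cur.execute("SHOW GLOBAL STATUS LIKE 'Created_tmp%'")
for name, value in cur.fetchall():
    print("%-28s %s" % (name, value))

cur.execute("SHOW GLOBAL VARIABLES WHERE Variable_name IN "
            "('tmp_table_size', 'max_heap_table_size', 'default_tmp_storage_engine')")
for name, value in cur.fetchall():
    print("%-28s %s" % (name, value))

cur.close()
conn.close()
```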
One post mentioned that LOBs may have release problems; I double-checked that the business tables contain no LOB columns, but in the process noticed scheduled tasks that constantly query mysql.proc, which I need to clarify with the developers.
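To pin down which client keeps hitting mysql.proc, the recent statement history can be joined with the thread table to get the user and host. A sketch assuming the events_statements_history consumer is enabled (it is off by default in 5.6) and placeholder credentials:

```python
# Sketch for tracking down which client keeps querying mysql.proc: join the
# recent statement history with the thread table to get user/host.
# Requires the events_statements_history consumer; credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="monitor", password="***")
cur = conn.cursor()
cur.execute("""
    SELECT t.PROCESSLIST_USER, t.PROCESSLIST_HOST, s.SQL_TEXT
    FROM performance_schema.events_statements_history s
    JOIN performance_schema.threads t USING (THREAD_ID)
    WHERE s.SQL_TEXT LIKE '%mysql.proc%'
""")
for user, host, sql_text in cur.fetchall():
    print(user, host, sql_text[:80])
cur.close()
conn.close()
```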
Another post mentioned that in version 5.0, Client_statistics with more than 16 hosts causes /tmp usage and memory to keep growing; we run Percona Server 5.6, and after testing this is not the cause either. But along the way I found an operations script written in Python that creates and drops a connection per request, and asked for it to be fixed.
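The rectification requested is essentially to reuse connections instead of creating and dropping one per call; a minimal sketch using mysql-connector-python's built-in pool, with illustrative names and placeholder credentials:

```python
# The requested fix for the ops script, sketched: reuse a small pool of
# connections instead of creating and dropping one per call.
# mysql-connector-python's built-in pooling is assumed; names are illustrative.
import mysql.connector.pooling

pool = mysql.connector.pooling.MySQLConnectionPool(
    pool_name="ops_pool",
    pool_size=2,
    host="127.0.0.1",
    user="ops",
    password="***",
)

def fetch_one(sql, params=()):
    conn = pool.get_connection()        # borrow a pooled connection
    try:
        cur = conn.cursor()
        cur.execute(sql, params)
        return cur.fetchone()
    finally:
        conn.close()                    # return the connection to the pool

print(fetch_one("SELECT NOW()"))
```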
In the online environment virtually all business logic lives in stored procedures, with one procedure calling another five or six, full of baroque logic ... With no better option, I sat with operations and stared at the console. Having ruled out problems in the MySQL server itself, and confirmed that it could not be InnoDB or any other memory visible on the server side, I started from the application. There are fewer than 50 connections; almost half belong to two scheduled tasks that refresh risk-control values in real time, plus a data-update program that runs the whole pipeline (or mostly skips it) and uses MySQL temporary tables internally. While no data update is running, MySQL memory does not grow; once data starts being written to disk, memory grows by a few to ten or twenty MB every few seconds until mysqld accounts for about 90% of total OS memory. After that, RES in top holds at that level while VIRT keeps growing, until the disk backup runs and mysqld is killed by the OS OOM killer. After staring at this for two days, I had planned yesterday to kill the application when memory reached 90% to see whether the client connections were the cause, but operations had already left. Today, after the close, I had operations restart the access machine, and mysqld's memory consumption instantly dropped back to 50%. The problem is finally reproducible.
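The staring itself is easy to automate: sampling mysqld's RES/VIRT from /proc every few seconds makes the 10-20 MB steps and the RES-flat/VIRT-growing phase show up in a log. A minimal sketch, assuming Linux and a single mysqld process (PID discovery via pgrep is an assumption about the environment):

```python
# Sample mysqld's resident (VmRSS) and virtual (VmSize) size from /proc every
# few seconds so memory steps and the RES-flat/VIRT-growing phase are logged.
# Assumes Linux and a single mysqld process; adjust PID discovery as needed.
import subprocess
import time

pid = subprocess.check_output(["pgrep", "-o", "mysqld"]).decode().strip()

def sample(pid):
    rss = vsz = None
    with open("/proc/%s/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                rss = int(line.split()[1])          # value in kB
            elif line.startswith("VmSize:"):
                vsz = int(line.split()[1])          # value in kB
    return rss, vsz

while True:
    rss_kb, vsz_kb = sample(pid)
    print("%s  RES=%8.1f MB  VIRT=%8.1f MB" %
          (time.strftime("%H:%M:%S"), rss_kb / 1024.0, vsz_kb / 1024.0))
    time.sleep(5)
```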
Reviewed the application configuration and confirmed with the framework team that the connection pool currently in use is C3P0.
All that is left for tomorrow is to check whether the problem is caused by the C3P0 connection configuration or by the MySQL temporary tables (I simulated every version in the test environment and could not reproduce the online problem, so the temporary tables are most likely not the cause). If it is neither, the only option left is to slowly debug and optimize the code until the root cause turns up.
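Since restarting the access machine instantly released about 40% of memory, per-session memory held by long-lived pooled connections is the prime suspect; before touching the C3P0 settings (maxIdleTime and maxConnectionAge are the obvious knobs), a quick look at how long each connection has been sitting idle helps confirm it. A sketch with placeholder credentials; note that TIME in the processlist is seconds in the current state, so connections sleeping for a long time are the long-lived pooled ones:

```python
# List current connections ordered by time in their current state; long
# Sleep times point at long-lived pooled connections whose per-session
# memory never gets released. Credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="monitor", password="***")
cur = conn.cursor()
cur.execute("""
    SELECT ID, USER, HOST, DB, COMMAND, TIME
    FROM information_schema.PROCESSLIST
    ORDER BY TIME DESC
""")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
```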
... And now for a serious rant ...
MySQL has essentially no tool for inspecting memory usage outside of InnoDB/MyISAM. The tools floating around online, mysqltuner.pl, my_memory and the like, and even the memory views in MySQL 5.7's Performance_schema, amount to little more than blind guessing.
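For reference, this is roughly what the 5.7 memory instrumentation looks like when queried; only instrumented code paths are counted (the memory/% instruments must be enabled), which is exactly why the totals rarely match what the OS sees. MySQL 5.7 and placeholder credentials assumed:

```python
# MySQL 5.7 Performance_schema memory instrumentation: top allocations by
# currently allocated bytes. Only instrumented code paths are counted, so the
# sum rarely matches what the OS reports. Credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="monitor", password="***")
cur = conn.cursor()
cur.execute("""
    SELECT EVENT_NAME,
           CURRENT_NUMBER_OF_BYTES_USED / 1024 / 1024 AS current_mb
    FROM performance_schema.memory_summary_global_by_event_name
    ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC
    LIMIT 15
""")
for name, mb in cur.fetchall():
    print("%-60s %10.1f MB" % (name, mb))
cur.close()
conn.close()
```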
The only reliable measurement is valgrind, but there is no way to attach that to the online process.
Analysis of an online MySQL instance whose memory keeps growing until it overflows and the process is killed.