MySQL OOM troubleshooting 1

Source: Internet
Author: User
Tags: server, memory


The MySQL database of a game business was hit by OOM kills on multiple servers. What caused the OOM? The direct cause is not complicated: server memory ran short, the kernel needed to reclaim memory, and to do so it killed the process using the most memory on the server. Since mysqld was that process, it was the one killed (out of memory, OOM).

First, check why memory is insufficient. The physical memory of this server is 64 GB.
# free -m
             total       used       free     shared    buffers     cached
Mem:         64375      57821       6553          0        554      16369
-/+ buffers/cache:      40897      23478
Swap:        16383          5      16378
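The -/+ buffers/cache line can be derived from the Mem: line: Linux counts buffers and page cache as "used", but that memory is reclaimable. A small sketch of the arithmetic, using the figures (in MB) from the output above (free's own output differs by 1 MB due to rounding):

```python
# Figures (in MB) from the free -m output above.
total, used, buffers, cached = 64375, 57821, 554, 16369

# Memory genuinely held by applications, excluding reclaimable caches.
app_used = used - buffers - cached
app_free = total - app_used

print(app_used, app_free)
```

So only about 40 GB is truly in use by applications; the rest is cache that the kernel can reclaim under pressure.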

The first suspect for the OOM is whether innodb_buffer_pool_size is set reasonably.
141205 18:47:57 [Note] Plugin 'federated' is disabled.
141205 18:47:57 InnoDB: The InnoDB memory heap is disabled
141205 18:47:57 InnoDB: Mutexes and rw_locks use GCC atomic builtins
141205 18:47:57 InnoDB: Compressed tables use zlib 1.2.3
141205 18:47:57 InnoDB: Using Linux native AIO
141205 18:47:57 InnoDB: Initializing buffer pool, size = 58.0G
141205 18:48:04 InnoDB: Completed initialization of buffer pool
141205 18:48:04 InnoDB: highest supported file format is Barracuda.
The MySQL startup log above shows that when MySQL was started in December 2014, innodb_buffer_pool_size was set to 58 GB, and it remained 58 GB right up until the OOM.
A 58 GB innodb_buffer_pool_size is indeed somewhat large for a server with 64 GB of physical memory.

Another thing to check is the vm.swappiness setting.
vm.swappiness on this server is currently set to 0. The behavior of swappiness=0 changed with the RHEL/CentOS 6.4 kernel (2.6.32-358): on newer kernels, swappiness=0 makes the kernel avoid swapping so aggressively that, under memory pressure, it is more likely to invoke the OOM killer instead.
For these newer kernels, the recommendation is to set vm.swappiness to 1.
# uname -r
2.6.32-431.el6.x86_64
Our kernel here (2.6.32-431) is newer than 2.6.32-358, so the new swappiness behavior applies.


However, the analysis below shows that the unreasonable innodb_buffer_pool_size and vm.swappiness settings are only contributing factors; neither is the direct trigger of the OOM.


You can view MySQL's memory usage with SHOW ENGINE INNODB STATUS.
.....
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 63979913216; in additional pool allocated 0
Internal hash tables (constant factor + variable factor)
Adaptive hash index 7893457408 (7887918656 + 5538752)
Page hash 61625224 (buffer pool 0 only)
Dictionary cache 247241428 (246498992 + 742436)
File system 111712 (82672 + 29040)
Lock system 154077792 (154061624 + 16168)
Recovery system 0 (0 + 0)
Dictionary memory allocated 742436
Buffer pool size 3801087 -- buffer pool size, in pages
Buffer pool size, bytes 62277009408 -- buffer pool size, in bytes
Free buffers 3686413 -- free (unused) pages in the buffer pool
Database pages 112368 -- pages actually in use in the buffer pool
Old database pages 22365
........
Here we can see that the vast majority of pages in the buffer pool are free: the MySQL service had not actually used most of the 58 GB it reserved, so the buffer pool itself was not what exhausted physical memory.
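The page counts above can be cross-checked: dividing the byte size by the page count recovers InnoDB's 16 KB page size, and the free-page ratio confirms that almost the whole pool was untouched. A quick sketch using only the numbers from the status output:

```python
# Numbers from the BUFFER POOL AND MEMORY section above.
pool_pages = 3801087
pool_bytes = 62277009408
free_pages = 3686413

page_size = pool_bytes // pool_pages        # InnoDB's default 16 KB page
pct_free = 100.0 * free_pages / pool_pages  # share of the pool never used

print(page_size, round(pct_free, 1))        # 16384 97.0
```

Roughly 97% of the buffer pool was free, consistent with the conclusion that the 58 GB was reserved but mostly unused.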


sar -B shows bursts of heavy paging and page reclaim, and these bursts recur every half hour.
10:00:01 AM pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s pgscand/s pgsteal/s %vmeff
10:10:01 AM 24472.34 13779.62 10542.81 0.88 8986.53 1547.25 42.00 1556.45 97.94
10:20:01 AM 24.55 11989.73 10669.42 0.14 4369.46 138.54 0.11 138.64 100.00
10:30:01 AM 1.29 8991.32 9832.53 0.01 3596.49 20.53 0.55 21.08 100.00
10:40:01 AM 24522.56 17124.60 11295.04 0.75 9649.33 1710.20 51.21 1666.83 94.63
10:50:01 AM 4.68 8691.01 10355.18 0.08 3934.04 21.22 0.11 21.33 100.00
11:00:01 AM 3.71 10738.47 10384.87 0.09 4017.96 94.24 0.28 94.47 99.96
11:10:02 AM 24924.74 14603.99 11129.07 0.86 9504.08 1751.06 40.43 1642.30 91.67
11:20:01 AM 11.95 8548.50 10206.84 0.11 3819.67 48.74 0.11 48.85 100.00
Average: 8944.00 11658.11 10479.06 0.40 5958.60 687.73 18.04 668.62 94.74
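The half-hour rhythm is easier to see if you pull out just the page-scan column. A small sketch over rows transcribed from the sar -B output above (pgscank/s is the rate at which the kernel's kswapd scans for reclaimable pages):

```python
# (time, pgscank/s) pairs transcribed from the sar -B output above.
rows = [
    ("10:10", 1547.25), ("10:20", 138.54), ("10:30", 20.53),
    ("10:40", 1710.20), ("10:50", 21.22), ("11:00", 94.24),
    ("11:10", 1751.06), ("11:20", 48.74),
]

# Intervals where the kernel was scanning hard for memory to reclaim.
spikes = [t for t, pgscank in rows if pgscank > 1000]
print(spikes)  # the reclaim spikes land exactly 30 minutes apart
```

The spikes at 10:10, 10:40, and 11:10 line up with a half-hourly job, which points straight at the crontab below.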

What causes this memory pressure every half hour?
[root@tlbb3d_yd_mxy_120-132-43-203_sx ~]# crontab -l
00 05 * * * /bin/bash /home/databak/scripts/xtrabackup.sh FULL_BACKUP
*/30 * * * * /bin/bash /home/databak/scripts/xtrabackup.sh INCRE_BACKUP
00 03 * * * /bin/bash /home/databak/dbbbak.sh xxx

The crontab shows that this server runs an xtrabackup incremental backup of MySQL every half hour.
Each incremental backup is based on that day's full backup, which means the amount of data to back up grows as the day goes on,
the backup takes longer and longer, and system resources become increasingly tight. The growth is visible in the backup sizes:
# du -sh *
2.4G 201602261030
2.5G 201602261100
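Because every incremental copies all changes since the daily full backup, its size grows roughly linearly through the day. A rough projection from the two snapshots above (a crude linear model; the 05:00 full-backup anchor comes from the crontab, and constant growth is an assumption, so treat the result as an order-of-magnitude estimate):

```python
# Sizes (GB) of two consecutive half-hourly incrementals, from du -sh above.
size_1030, size_1100 = 2.4, 2.5

growth_per_interval = size_1100 - size_1030  # ~0.1 GB per half hour

# Projected size of the last incremental before the next 05:00 full backup,
# assuming constant growth across the remaining half-hour intervals.
intervals_remaining = 36  # 11:00 -> 05:00 next day
projected_last = size_1100 + growth_per_interval * intervals_remaining
print(round(projected_last, 1))
```

Even at this modest rate, the last incrementals of the day are several times larger than the morning ones, which matches the observation that resource pressure builds over time.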

From the analysis above, three things need to be done to solve MySQL's OOM problem:
1. Reduce innodb_buffer_pool_size to 48 GB.
2. Set vm.swappiness to 1.
3. Adjust the incremental backup schedule (balanced against the needs of the business): lengthen the interval between incremental backups to reduce system resource consumption.
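A sketch of how the three fixes might be applied (the 48G value and script paths come from this article; the hourly interval is an example, not a prescription — verify everything against your own environment first):

```shell
# 1. Shrink the buffer pool: in /etc/my.cnf under [mysqld], then restart mysqld:
#      innodb_buffer_pool_size = 48G

# 2. Allow minimal swapping instead of OOM on post-2.6.32-358 kernels:
sysctl -w vm.swappiness=1
echo 'vm.swappiness = 1' >> /etc/sysctl.conf   # persist across reboots

# 3. Widen the incremental backup interval in crontab, e.g. hourly:
#      0 * * * * /bin/bash /home/databak/scripts/xtrabackup.sh INCRE_BACKUP
```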
