Mysqldump Backup Failure on a Table with Large Fields

Source: Internet
Author: User

A few days ago I received an email saying that the mysqldump logical backup of a business project's MySQL database had failed. This happened while I was on vacation, but in the spirit of serious, responsible, 7x24 operations I started down the DBA troubleshooting road. The backup had been succeeding until then, and by coincidence the problem appeared exactly while I was away, so my first suspicion was data volume and growth.

First, let us review how a mysqldump backup flows: the MySQL server reads data from its data files and returns it to the mysqldump client in batches, and mysqldump then writes the data out to NFS. When the backup target is NFS rather than SSD or even an ordinary local disk, writing to NFS can be much slower than the rate at which the server sends data to the mysqldump client. mysqldump may then be unable to receive the server's data in time, so the data retrieved on the server side backs up in memory waiting to be sent. Once the wait exceeds net_write_timeout (default 60s), the connection is dropped and an error is thrown.
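As a quick check of the two variables involved here, you can query them directly on the server; net_write_timeout and its 60-second default come from the description above, the statements themselves are just a minimal sketch:

-- server-side timeout that drops the connection when the client cannot keep up
SHOW GLOBAL VARIABLES LIKE 'net_write_timeout';
-- client/server packet limit discussed later in this article
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';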

1. Locating the problem

I logged on to the machine, looked at the logical backup file, checked the backup log and the backup file size, confirmed that the backup had failed, and determined that the mysqldump command had failed partway through (judging from the backup file being smaller than those from the script runs of the previous few days). Since the scheduled early-morning backup was no good, I manually triggered the script to run a backup and got the following error:

[root@mysql_query hk_sa]# bash /opt/shells/mysqldump.sh
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `rrd_api_log` at row: 2821866

2. Troubleshooting the problem

The table that failed to back up contains 4,982,704 rows, and the row at the point where the manual backup failed was written at 2017-02-05 04:03:18, i.e., before this round of backup failures began. I then started to wonder whether the recent data growth was too large or the table's fields were too wide (other tables in this database are larger, some even reaching 400G, and have never had this problem, so sheer table size seemed unlikely; since the backup failed on a single row, wide fields were the bigger suspect).
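For reference, checks along these lines could be used to gather those two numbers; the queries below are my own illustration rather than the article's script, and ordering by the primary key to locate the dump's row position is an assumption:

-- total row count of the table that failed to back up
SELECT COUNT(*) FROM rrd_api_log;
-- the row near the position reported by the failed dump (2821866); mysqldump
-- counts rows in dump order, assumed here to follow the primary key
SELECT id, create_time FROM rrd_api_log ORDER BY id LIMIT 2821865, 1;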

The table structure is as follows:

[root@localhost] |08:42:21| [heika0516] > desc rrd_api_log;
+---------------+-------------+------+-----+---------+----------------+
| Field         | Type        | Null | Key | Default | Extra          |
+---------------+-------------+------+-----+---------+----------------+
| id            | bigint(20)  | NO   | PRI | NULL    | auto_increment |
| api_command   | varchar(30) | NO   |     | NULL    |                |
| request_info  | text        | NO   |     | NULL    |                |
| response_info | text        | NO   |     | NULL    |                |
| create_time   | datetime    | NO   |     | NULL    |                |
+---------------+-------------+------+-----+---------+----------------+
5 rows in set (0.01 sec)
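Given the two TEXT columns above, one way to test the "fields too wide" suspicion is to measure how large those columns actually get; this query is my own illustration and is not part of the original troubleshooting:

-- maximum size (in bytes) of the two TEXT columns in rrd_api_log
SELECT MAX(LENGTH(request_info))  AS max_request_bytes,
       MAX(LENGTH(response_info)) AS max_response_bytes
FROM rrd_api_log;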

What remained was to verify this conjecture, so I started searching for information and found some useful material, including an entry in the MySQL bug tracker: bugs.mysql.com/bug.php?id=46103. Roughly, the problem appears in older versions, and the workaround is to increase the value of net_write_timeout or max_allowed_packet. The default value of net_write_timeout is 60s, and the default value of max_allowed_packet is 67108864 (64M). Further searching showed that for some people adjusting max_allowed_packet had no effect while adjusting net_write_timeout did work, so I stood on the shoulders of giants and did the same. Sure enough, the backup succeeded, and the problem that had been bothering me on vacation was gone for the moment.

However, a new question came up. Many people say that increasing net_write_timeout makes the server side consume more memory and can even push it into swap and hurt performance, but nobody seemed sure whether adjusting the parameter really carries that hidden risk. I do not quite agree with this claim, because during my run the free memory actually increased; a lot of what you hear second-hand turns out to be wrong.

# before running the mysqldump backup
[root@mysql_query hk_sa]# free -m
             total       used       free     shared    buffers     cached
Mem:         16080      13305       2775          0        121       3729
-/+ buffers/cache:       9454       6626
Swap:         8191        349       7842

# increase the value of net_write_timeout
[root@localhost] |08:51:53| [(none)] > SET @@global.net_write_timeout=500;
Query OK, 0 rows affected (0.01 sec)

# after running the script, the backup completed OK
[root@mysql_query hk_sa]# ls -lh /opt/app/mysql/data/heika0516/rrd_api_log.ibd
-rw-r--r-- 1 mysql mysql 4.1G  7 22:03 /opt/app/mysql/data/heika0516/rrd_api_log.ibd

# after running the mysqldump backup
[root@mysql_query hk_sa]# free -m
             total       used       free     shared    buffers     cached
Mem:         16080      12434       3646          0                  2890
-/+ buffers/cache:       9450       6630
Swap:         8191        349       7842

At this point the mysqldump backup failure does have a working solution. The concern others raised was that it consumes more memory, yet in my case it actually left more memory free, so whether changing the parameter ultimately carries a hidden risk was still unclear, and this parameter affects every session connection. I let it go for the moment and enjoyed the rest of the holiday. But one question kept nagging at me: the large tables I maintain run to 400G, some even bigger, and they also have large fields, yet they never hit this problem, so how could it suddenly appear here? That made me wonder whether the real cause was the amount of data in the table or the large fields.

After coming back from vacation I immediately set about reproducing and analysing the problem. To get a clean starting point I changed net_write_timeout back to its default of 60, which of course made the backup fail again. I then thought of the max_allowed_packet parameter, but adjusting it globally also affects network packets and every other session, so I kept digging into mysqldump itself and, to my delight, found that mysqldump has its own max_allowed_packet option. After adding this option to the mysqldump command, with a value chosen according to the table size (I used 500M here), the problem was completely solved, without touching the global variable or affecting other sessions.
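The article does not show the contents of /opt/shells/mysqldump.sh, so the following is only a sketch of what the final command could look like with the client-side option added; the output path is an assumption, while the 500M value and the heika0516.rrd_api_log table come from the text above:

# only the client-side packet buffer is raised; no global server variable is changed
# (the output path below is hypothetical)
mysqldump --max_allowed_packet=500M heika0516 rrd_api_log > /opt/backup/rrd_api_log.sql

Because --max_allowed_packet here is a mysqldump client option, the fix stays local to the backup job, which matches the point above about not changing a global value that affects every session.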
