How to Use mysqldump to Back Up MySQL Data



MySQL's mysqldump tool is single-threaded: tables are exported one after another with no parallelism, so it cannot back up large data sets quickly.

Mydumper is a utility that supports multithreading. It reads data from multiple tables in parallel and writes to different files at the same time, which makes it much faster than the traditional mysqldump. One caveat is that tables need to be locked during the dump, so a backup taken during working hours will block DML statements. However, MySQL deployments usually have master and slave databases, and most backups are taken from the slave, so the lock is usually not a concern. Under these conditions, mydumper handles the backup task well.

Mydumper features

  • Multi-threaded backup
  • Because it is a multi-threaded logical backup, multiple backup files are generated
  • Applies FTWRL (FLUSH TABLES WITH READ LOCK) to MyISAM tables during the backup, which blocks DML statements
  • Ensures consistency of the backup data
  • Supports file compression
  • Supports binlog export
  • Multi-threaded recovery
  • Supports daemon mode with scheduled snapshots and continuous binary logs
  • Supports chunking backup files

Mydumper backup mechanism

Mydumper Workflow

General steps

  • The main thread executes FLUSH TABLES WITH READ LOCK, taking a global read lock that blocks DML writes and guarantees data consistency
  • It then reads the binary log file name and the current write position and records them in the metadata file so they can be used as the recovery point
  • N dump threads (the number can be specified; the default is 4) execute START TRANSACTION WITH CONSISTENT SNAPSHOT to enable consistent reads
  • Non-InnoDB tables are dumped first (tables using non-transactional engines)
  • After the non-transactional tables are dumped, the main thread executes UNLOCK TABLES and the global read lock is released
  • InnoDB tables are dumped inside the consistent-snapshot transactions
  • The transactions end
  • The backup files are generated (the corresponding SQL sequence is sketched below)
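
A minimal sketch of the statement sequence behind these steps, roughly as it would appear in the server's general query log (the exact statements vary between mydumper versions):

FLUSH TABLES WITH READ LOCK;                 -- main thread: global read lock
SHOW MASTER STATUS;                          -- binlog file and position recorded into metadata
START TRANSACTION WITH CONSISTENT SNAPSHOT;  -- issued by each of the N dump threads
-- ... dump of non-InnoDB (non-transactional) tables ...
UNLOCK TABLES;                               -- main thread releases the global read lock
-- ... dump of InnoDB tables inside the snapshot transactions ...
COMMIT;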

All backup files go into a single directory, which you can specify yourself. The directory contains a metadata file that records the binary log file name and write position of the backed-up instance at the backup time point.

If the backup is taken from a slave, the metadata also records the master's binary log file and position that the slave had synchronized to.

Each table has two backup files:

  1. database.table-schema.sql: the table structure file
  2. database.table.sql: the table data file

If table data files are chunked, multiple data files are generated per table; the chunk size can be specified by number of rows or by file size.
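
For example, a dump of a database named TaeOss containing a single table t_example (hypothetical names) into /data/rocketzhang would leave a directory roughly like this; chunked dumps typically add a numeric suffix to the data files:

# ls /data/rocketzhang
metadata
TaeOss.t_example-schema.sql
TaeOss.t_example.sql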


Installation and usage example

Assume two existing DB servers, one for business A and one for business B. Business A is important, and one of its databases (TaeOss, about 40 GB of data) needs a hot standby; the DB server of business B is used as the standby machine. The servers are distributed as follows:
10.137.143.151 business A
10.137.143.152 business B
 
The requirements are as follows:
Exporting business A's database (TaeOss) must not affect business A, and the restore on business B's DB server must not have a major impact either; its impact should be kept within one minute as far as possible.
 
Solution:
1. mysqldump: a logical backup that locks tables. Given the large data volume, the tables would stay locked for too long, which the business cannot accept;
2. xtrabackup: a physical backup with no table lock. However, because the two databases use a shared tablespace, restoring onto business B's server would take a long time, and the restored data would not be correct; rejected (verified by testing);
3. mydumper: a logical backup. It is a multi-threaded, high-performance logical backup and recovery tool, its table lock time is short (under 10 minutes for 40 GB of data), and it records the binlog file and position, which the business can accept.
 
Mydumper has the following features:
(1) dump speed more than 6 times faster than mysqldump;
(2) consistent snapshots of transactional and non-transactional tables (0.2.2 and later);
(3) fast file compression;
(4) binlog export support;
(5) multi-threaded recovery (0.2.1 and later);
(6) daemon mode, scheduled snapshots, and continuous binary logs (0.5.0 and later).
 
Install mydumper:
https://launchpad.net/mydumper/0.6/0.6.2/+download/mydumper-0.6.2.tar.gz

# yum install glib2-devel mysql-devel zlib-devel pcre-devel
# tar zxvf mydumper-0.6.2.tar.gz
# cd mydumper-0.6.2
# cmake .
# make
# make install

 
The main command-line parameters are as follows:
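
The options used in this walkthrough, plus a few related ones, are summarized below. This is a partial list based on mydumper/myloader 0.6; run mydumper --help and myloader --help to confirm what your build supports.

mydumper:
  -h, --host             host to connect to
  -u, --user             user with privileges to dump the data
  -p, --password         password of the user
  -B, --database         database to dump
  -T, --tables-list      comma-separated list of tables to dump
  -t, --threads          number of dump threads (default 4)
  -o, --outputdir        directory to write the backup files to
  -r, --rows             split tables into chunks of this many rows
  -F, --chunk-filesize   split table data into files of roughly this size (MB)
  -c, --compress         compress the output files

myloader:
  -d, --directory        directory containing the mydumper output
  -o, --overwrite-tables drop tables if they already exist before restoring
  -t, --threads          number of restore threads (default 4)
  -B, --database         alternative database to restore into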

Because the database servers run an older SuSE Linux 10, installing mydumper on them requires many dependency libraries, which is cumbersome; a local backup would also consume a lot of disk I/O. So another server in the same network segment, a CentOS 6.4 machine (10.137.143.156), was chosen to run the backup.
 
The procedure is as follows:
1. On 10.137.143.151 and 10.137.143.152, temporarily grant access to the backup host 10.137.143.156:

# mysql -uroot -e "grant all privileges on *.* to 'backup'@'10.137.143.156' identified by 'backup2015';"# mysql -uroot -e "flush privileges;"

 
2. On 10.137.143.156, back up the TaeOss database from 10.137.143.151:

# mydumper -h 10.137.143.151 -u backup -p backup2015 -B TaeOss -t 8 -o /data/rocketzhang
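
If smaller files or more restore parallelism are needed, the same dump can in principle be chunked and compressed (option names as listed above; the row count here is only an illustrative value):

# mydumper -h 10.137.143.151 -u backup -p backup2015 -B TaeOss -t 8 -r 500000 -c -o /data/rocketzhang

Here -r 500000 splits each table's data into chunks of roughly 500,000 rows, producing multiple data files per table, and -c compresses the output files.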

 
3. Restore the backup data to 10.137.143.152:

# myloader -h 10.137.143.152 -u backup -p backup2015 -B TaeOss -t 8 -o -d /data/rocketzhang

 
4. Establish a master-slave relationship: 10.137.143.151 (master) and 10.137.143.152 (slave).
Create a replication account on 10.137.143.151:

# mysql -uroot -e "grant replication slave on *.* to 'repl'@'10.137.143.152' identified by 'repl123456';"# mysql -uroot -e "flush privileges;" 

Check the binlog file name and position recorded in the metadata file on 10.137.143.156:
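
For example, assuming the backup directory from step 2, the file looks roughly like this (the binlog name and position shown are the ones used in the CHANGE MASTER command below):

# cat /data/rocketzhang/metadata
Started dump at: ...
SHOW MASTER STATUS:
        Log: mysql-bin.002205
        Pos: 456584891

Finished dump at: ...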

Perform the following operations on 10.137.143.152:

# vim /etc/my.cnf
……
replicate-do-table = TaeOss.%
replicate-wild-do-table = TaeOss.%
……

# service mysqld reload

# mysql -uroot -e "change master to master_host='10.137.143.151',master_user='repl',master_password='repl123456',master_log_file='mysql-bin.002205',master_log_pos=456584891;"
# mysql -uroot -e "start slave;"
# mysql -uroot -e "show slave status\G;"

After starting the slave, show slave status displays information similar to the following.
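
A hypothetical excerpt of the relevant fields (error code 1062 is MySQL's duplicate-entry error; the actual values will differ):

             Slave_IO_Running: Yes
            Slave_SQL_Running: No
               Last_SQL_Errno: 1062
               Last_SQL_Error: Error 'Duplicate entry '...' for key 'PRIMARY'' on query ...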

There appears to be a primary key conflict, which caused master-slave replication to fail.
 
Problem Analysis:
Run the following commands on the primary DB (10.137.143.151):

# mysqlbinlog --no-defaults -v -v --base64-output=DECODE-ROWS mysql-bin.002205 > mysql-bin.002205.txt
# grep -C 8 529864938 mysql-bin.002205.txt

In short, an insert into the t_evil_detect_uin_blacklist table on the primary DB leads to a primary key conflict when the change is replayed on the slave, so master-slave synchronization fails.
 
Temporary solution:
Export the table TaeOss.t_evil_detect_uin_blacklist:

# mysqldump -uroot --opt TaeOss t_evil_detect_uin_blacklist > TaeOss.t_evil_detect_uin_blacklist.sql

 
Remove the primary key definition from TaeOss.t_evil_detect_uin_blacklist.sql:
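
As an illustration (the real column list comes from the dump; uin and add_time here are hypothetical), the edit amounts to removing the PRIMARY KEY clause, together with the comma that precedes it, from the CREATE TABLE statement:

-- before (as written by mysqldump)
CREATE TABLE `t_evil_detect_uin_blacklist` (
  `uin` bigint(20) NOT NULL,
  `add_time` datetime DEFAULT NULL,
  PRIMARY KEY (`uin`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- after (primary key removed)
CREATE TABLE `t_evil_detect_uin_blacklist` (
  `uin` bigint(20) NOT NULL,
  `add_time` datetime DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;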

Then import:

# mysql -uroot TaeOss < TaeOss.t_evil_detect_uin_blacklist.sql
# mysql -uroot -e "stop slave;"
# mysql -uroot -e "start slave;"
# mysql -uroot -e "show slave status\G;"

