How-to tutorial on using mydumper to back up MySQL data


MySQL's bundled mysqldump tool is single-threaded: it exports tables one after another with no parallelism, so it cannot back up large data sets quickly.

mydumper, by contrast, is multi-threaded: it reads several tables in parallel and writes them to separate files simultaneously, which makes it considerably faster than the traditional mysqldump. One caveat is that tables must be locked at the start of the dump, so running a backup during working hours can block DML statements. In practice, though, most MySQL deployments now use master-slave replication and backups are taken on the slave, where the locking is not a concern. Under those conditions mydumper handles the backup job well.
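The core idea, dumping tables in parallel, can be sketched in plain shell with `xargs -P`. This is a toy illustration only: the table names and the echo "dump" are placeholders, and real mydumper worker threads share one consistent snapshot, which separate processes cannot do.

```shell
#!/bin/sh
# Toy sketch of per-table parallel dumping, 4 workers at a time.
# Each "dump" is just an echo placeholder; mydumper's real worker threads
# read table data inside a shared consistent snapshot.
mkdir -p backup
printf '%s\n' t_user t_order t_log t_misc t_stats |
    xargs -P 4 -I{} sh -c 'echo "-- dump of {}" > backup/{}.sql'
ls backup
```

Each table lands in its own file, which is also why mydumper's output is a directory of per-table files rather than one large dump.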

Mydumper characteristics

    • Multi-threaded backup
    • Because it is a multi-threaded logical backup, the backup produces multiple output files
    • Applies FTWRL (FLUSH TABLES WITH READ LOCK) while dumping MyISAM tables, which blocks DML statements
    • Guarantees consistency of the backed-up data
    • Supports file compression
    • Supports exporting the binlog
    • Supports multi-threaded recovery
    • Supports daemon mode, with timed snapshots and continuous binary logs
    • Supports splitting backup files into chunks

Mydumper Backup mechanism

Mydumper Work Flow chart

Summary of the main steps

    • The main thread issues FLUSH TABLES WITH READ LOCK, taking a global read lock that stops DML writes and guarantees data consistency
    • It reads the binary log file name and write position at the current point in time and records them in the metadata file, for later use in point-in-time recovery
    • N dump threads (the count is configurable, default 4) each run START TRANSACTION WITH CONSISTENT SNAPSHOT to open a consistent-read transaction
    • Dump non-InnoDB tables: tables on non-transactional engines are exported first
    • Once the non-transactional tables are done, the main thread runs UNLOCK TABLES, releasing the global read lock
    • Dump InnoDB tables: the InnoDB tables are exported inside the open transactions
    • The transactions are committed
    • The generated backup files are written out

All backup files go into a single directory, which you can choose yourself.
The directory contains a metadata file
recording the binary log file name of the backed-up database at the time of the backup, along with the log write position.

If you are backing up from a slave, it also records the master's binary log file and write position that the slave had replicated up to at backup time.

Each table has two backup files:

    1. DATABASE.TABLE-schema.sql — the table structure file
    2. DATABASE.TABLE.sql — the table data file

If a table's data file is split, multiple data files are generated; the chunks can be sized by row count or by a specified file size.
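To make the layout concrete, here is a mock of such a backup directory. The `-schema.sql` / `.sql` split follows the naming described above; the numeric chunk suffixes shown for the split data files are an assumption about mydumper's chunk-naming scheme, not verified output.

```shell
#!/bin/sh
# Mock mydumper output directory for a database named "taeoss".
# The .00000/.00001 chunk suffixes are an assumed naming scheme for
# split data files; the -schema.sql / .sql split matches the text above.
mkdir -p dumpdir
touch dumpdir/metadata                         # binlog name + position
touch dumpdir/taeoss.t_user-schema.sql         # table structure
touch dumpdir/taeoss.t_user.sql                # table data, unsplit
touch dumpdir/taeoss.t_big-schema.sql
touch dumpdir/taeoss.t_big.00000.sql           # data chunk 0
touch dumpdir/taeoss.t_big.00001.sql           # data chunk 1
ls dumpdir
```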


Installation and a usage example

Suppose there are two DB servers, serving businesses A and B. Business A is the more important one, and its database (taeoss), roughly 40 GB of data, needs a hot standby; business B's DB server will act as that standby. The servers are laid out as follows:
10.137.143.151     business A
10.137.143.152     business B
 
The requirements to be met: exporting business A's database (taeoss) must not impact business A, and restoring it on business B's DB server must likewise have no significant impact, finishing as far as possible within 1 minute.
 
Options considered:
1. mysqldump: a logical backup that locks tables; with this much data the tables would stay locked too long, which the business cannot accept, so it is ruled out.
2. Xtrabackup: a physical backup with no table locking; but since both DBs use the shared tablespace, restoring onto business B's database both takes a long time and yields incorrect data (verified by testing), so it is ruled out.
3. mydumper: a multi-threaded, high-performance logical backup and recovery tool; the table-lock time is very short (the 40 GB of data is dumped within 10 minutes), and it records the binlog file and position, which the business can accept.
mydumper's main features:
(1) dumps up to 6 times faster than mysqldump;
(2) consistent snapshots across transactional and non-transactional tables (0.2.2 and later);
(3) fast file compression;
(4) binlog export;
(5) multi-threaded recovery (0.2.1 and later);
(6) daemon mode with timed snapshots and continuous binary logs (0.5.0 and later).
 
mydumper installation:
https://launchpad.net/mydumper/0.6/0.6.2/+download/mydumper-0.6.2.tar.gz

# yum install glib2-devel mysql-devel zlib-devel pcre-devel
# tar zxvf mydumper-0.6.2.tar.gz
# cd mydumper-0.6.2
# cmake .
# make
# make install


The parameters are as follows:

Because the DBs are deployed on older SuSE Linux 10 servers, installing mydumper there means chasing down many dependencies, which is cumbersome; running the backup locally would also consume a lot of disk I/O. So we chose another server on the same network segment, a CentOS 6.4 machine (10.137.143.156), to run the backup.

The steps are as follows:
1. On 10.137.143.151 and 10.137.143.152, temporarily grant access to 10.137.143.156

# mysql -uroot -e "grant all privileges on *.* to 'backup'@'10.137.143.156' identified by 'backup2015';"
# mysql -uroot -e "flush privileges;"


2. From 10.137.143.156, back up the database (taeoss) on 10.137.143.151

# mydumper -h 10.137.143.151 -u backup -p backup2015 -B taeoss -t 8 -o /data/rocketzhang


3. Restore the backup data onto 10.137.143.152

# myloader -h 10.137.143.152 -u backup -p backup2015 -B taeoss -t 8 -o -d /data/rocketzhang


4. Establish the master-slave relationship: 10.137.143.151 (master), 10.137.143.152 (slave)
Create the replication account on 10.137.143.151:

# mysql -uroot -e "grant replication slave on *.* to 'repl'@'10.137.143.152' identified by 'repl123456';"
# mysql -uroot -e "flush privileges;"
 

Check the binlog coordinates recorded in the metadata file on 10.137.143.156:
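These coordinates map directly onto the CHANGE MASTER statement run on the slave. A small script can pull them out of metadata; the metadata layout below is an assumption modeled on mydumper 0.6.x output, filled with the values from this article.

```shell
#!/bin/sh
# Extract binlog coordinates from a mydumper metadata file and print the
# matching CHANGE MASTER statement. The metadata layout is an assumption.
cat > metadata <<'EOF'
Started dump at: 2015-06-01 03:00:01
SHOW MASTER STATUS:
	Log: mysql-bin.002205
	Pos: 456584891
EOF

file=$(awk '/Log:/ {print $2}' metadata)
pos=$(awk '/Pos:/ {print $2}' metadata)
printf "change master to master_host='10.137.143.151', master_user='repl', master_log_file='%s', master_log_pos=%s;\n" "$file" "$pos"
```

Scripting this step avoids copy-paste mistakes in the file name or position, which would silently corrupt replication.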

On 10.137.143.152, do the following:

# vim /etc/my.cnf
...
replicate-wild-do-table = taeoss.%
...
 
# service mysqld reload
 
# mysql -uroot -e "change master to master_host='10.137.143.151', master_user='repl', master_password='repl123456', master_log_file='mysql-bin.002205', master_log_pos=456584891;"
# mysql -uroot -e "start slave;"

# mysql -uroot -e "show slave status\G"

The following message appears:

A primary key conflict appears to be causing master-slave replication to fail.

Problem Analysis:
Execute on the primary DB (10.137.143.151):

# mysqlbinlog --no-defaults -v -v --base64-output=decode-rows mysql-bin.002205 > mysql-bin.002205.txt
# grep -C 8 529864938 mysql-bin.002205.txt

It turns out there is an INSERT into the t_evil_detect_uin_blacklist table at that position on the primary DB; replaying it on the slave, where the row already exists, triggers a primary key conflict and breaks master-slave synchronization.
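The `grep -C 8` above prints eight lines of context around the offending binlog offset, enough to see which statement sits there. The same technique on a mocked, heavily simplified decode (real `--base64-output=decode-rows` output is far more verbose) looks like this:

```shell
#!/bin/sh
# Mocked, heavily simplified decoded-binlog text; real mysqlbinlog
# decode-rows output carries much more detail per event.
cat > mysql-bin.002205.txt <<'EOF'
# at 529864800
### INSERT INTO taeoss.t_other_table
# at 529864938
### INSERT INTO taeoss.t_evil_detect_uin_blacklist
### SET @1=12345
# at 529865100
EOF

# -C 2: two lines of context around the matching offset, enough to see
# which statement sits at that binlog position.
grep -C 2 '529864938' mysql-bin.002205.txt
```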

A temporary workaround:
Export the taeoss.t_evil_detect_uin_blacklist table from the slave:

# mysqldump -uroot --opt taeoss t_evil_detect_uin_blacklist > TaeOss.t_evil_detect_uin_blacklist.sql


Remove the PRIMARY KEY clause from TaeOss.t_evil_detect_uin_blacklist.sql:
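A sed sketch of that edit, assuming the PRIMARY KEY appears on its own line of the CREATE TABLE as mysqldump normally writes it; the sample table below is a made-up stand-in, and the second sed pass is tied to this sample's line layout:

```shell
#!/bin/sh
# Made-up sample of a mysqldump CREATE TABLE, for illustration only.
cat > TaeOss.t_evil_detect_uin_blacklist.sql <<'EOF'
CREATE TABLE `t_evil_detect_uin_blacklist` (
  `uin` bigint(20) NOT NULL,
  `flag` int(11) DEFAULT NULL,
  PRIMARY KEY (`uin`)
) ENGINE=InnoDB;
EOF

# Drop the PRIMARY KEY line, then remove the comma it strands at the end
# of the previous line so the statement stays valid SQL.
sed -i '/PRIMARY KEY/d' TaeOss.t_evil_detect_uin_blacklist.sql
sed -i '$!N; s/,\n)/\n)/' TaeOss.t_evil_detect_uin_blacklist.sql
cat TaeOss.t_evil_detect_uin_blacklist.sql
```

With the primary key gone, replayed inserts no longer collide, at the cost of allowing duplicate rows until the key is restored.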

And then import:

# mysql -uroot taeoss < TaeOss.t_evil_detect_uin_blacklist.sql

# mysql -uroot -e "stop slave;"
# mysql -uroot -e "start slave;"
# mysql -uroot -e "show slave status\G"
