Mydumper MySQL Database Backup Example


Mydumper is a lightweight, third-party, open-source logical backup tool for MySQL databases. It supports multi-threading, so backups are much faster than with the stock mysqldump, and it offers many other excellent features, which makes it a popular choice among DBAs. This article describes how to use the tool and gives examples.

For mydumper installation, see: Mydumper installation and installation failure summary.


1. Characteristics of Mydumper
A, multi-threaded logical backup that produces multiple backup files
B, like mysqldump, it applies FTWRL (FLUSH TABLES WITH READ LOCK) to MyISAM tables during backup, blocking DML statements
C, ensures the consistency of the backed-up data
D, supports compressing files, exporting binlogs, multi-threaded restore, and splitting backup files into chunks
E, supports running in daemon mode, with timed snapshots and continuous binary logs


2. Mydumper Syntax
mydumper -u [USER] -p [PASSWORD] -h [HOST] -P [PORT] -t [THREADS] -b -c -B [DB] -o [DIRECTORY]
Parameter description:
-B, --database the database to back up
-T, --tables-list tables to back up, separated by commas
-o, --outputdir directory for output files
-s, --statement-size attempted size of the generated INSERT statements in bytes, default 1000000
-r, --rows try to split tables into chunks of this many rows
-c, --compress compress output files
-e, --build-empty-files produce an empty dump file even if the table has no data
-x, --regex regular expression matching 'db.table'
-i, --ignore-engines storage engines to ignore, separated by commas
-m, --no-schemas do not export table structures
-k, --no-locks do not acquire the shared read lock. WARNING: this results in inconsistent backups
-l, --long-query-guard set the long-query threshold in seconds, default 60
--kill-long-queries kill long-running queries instead of aborting
-b, --binlogs export binary logs
-D, --daemon enable daemon mode
-I, --snapshot-interval interval between dump snapshots in minutes, default 60; requires daemon mode
-L, --logfile log file name to use
-h, --host the host to connect to
-u, --user username with privileges to run the dump
-p, --password user password
-P, --port TCP/IP port to connect to
-S, --socket UNIX domain socket file to use for the connection
-t, --threads number of threads to use, default 4
-C, --compress-protocol use the compressed protocol for the MySQL connection
-V, --version show the program version and exit
-v, --verbose verbosity of output: 0 = silent, 1 = errors, 2 = warnings, 3 = info; default 2
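
Putting the syntax together, a remote, compressed, multi-threaded backup of a single database could look like the following (the host, credentials and paths here are illustrative placeholders, not values from any real environment):

# mydumper -u backup_user -p backup_pwd -h 192.168.1.100 -P 3306 -t 8 -c -C -B sakila -o /tmp/bak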


3. The working process of mydumper
A, connect to the target database;
B, use SHOW PROCESSLIST to check for long-running queries and, based on the long-query-guard and kill-long-queries parameters, either abort or kill the long queries;
C, lock the MyISAM tables with FLUSH TABLES WITH READ LOCK and open a consistent transaction for the InnoDB tables with START TRANSACTION (the statements are sketched after this list);
D, create worker sub-threads, 4 by default;
E, determine the candidate tables and, according to their category, append them to the innodb_table, non_innodb_table and table_schemas linked lists (table structures);
F, push the candidate tables onto the task queue via g_async_queue_push (the last elements in the queue are thread-shutdown markers); the worker sub-threads read table information from the queue and export the data;
G, execute UNLOCK TABLES immediately after the MyISAM tables have been processed, to reduce the time the lock is held;
H, wait for the workers to exit.
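
In terms of the statements sent to the server, the steps above correspond roughly to the following sequence (a simplified sketch, not a verbatim trace of a mydumper session):

SHOW PROCESSLIST;              -- step B: look for long-running queries
FLUSH TABLES WITH READ LOCK;   -- step C: global read lock covering the MyISAM tables
START TRANSACTION;             -- step C: consistent transaction for InnoDB, opened in each worker thread
SELECT * FROM db.table;        -- step F: worker threads export table data into the backup files
UNLOCK TABLES;                 -- step G: release the lock as soon as the MyISAM tables are done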


4. Backup file related information
A, all backup files are placed in a single directory; if none is specified, a folder named with the backup date and time is automatically created under the current directory, e.g. export-20150703-145806
B, the directory contains a metadata file that records the binary log file name and write position at the time of the backup
C, if the backup is taken from a slave, it also records the master's binary log file and the position that the slave had replicated to at backup time
D, each table has two backup files: database.table-schema.sql (table structure) and database.table.sql (table data); see the example listing after this list
E, if a table is split into chunks, multiple backup data files are generated; you can specify either the number of rows per chunk or the chunk size
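
As an illustration, a backup of the actor and city tables of the sakila database might produce a directory along these lines (the exact file list depends on the tables and options used; this listing is only a sketch):

/tmp/bak/
    metadata
    sakila.actor-schema.sql
    sakila.actor.sql
    sakila.city-schema.sql
    sakila.city.sql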


5. Common Backup Examples:
### Back up a single database
# mydumper -u leshami -p pwd -B sakila -o /tmp/bak

### Back up all databases; during a full backup, all databases except information_schema and performance_schema are backed up
# mydumper -u leshami -p pwd -o /tmp/bak

### Back up a single table
# mydumper -u leshami -p pwd -B sakila -T actor -o /tmp/bak

### Back up multiple tables
# mydumper -u leshami -p pwd -B sakila -T actor,city -o /tmp/bak

### When the -o parameter and value are not specified, a backup date-time folder is automatically created in the current directory, e.g. export-20150703-145806
# mydumper -u leshami -p pwd -B sakila -T actor

### Back up a table without its table structure
# mydumper -u leshami -p pwd -B sakila -T actor -m

### Compress the backup and use the compressed protocol for the connection (when not backing up locally)
# mydumper -u leshami -p pwd -B sakila -o /tmp/bak -c -C

### Back up specific tables matching a pattern
# mydumper -u leshami -p pwd -B sakila --regex='actor*' -o /tmp/bak

### Filter out specific databases, e.g. do not back up the mysql and test databases
# mydumper -u leshami -p pwd --regex '^(?!(mysql|test))' -o /tmp/bak

### Generate dump files even for empty tables
# mydumper -u leshami -p pwd -B sakila -T actor -e -o /tmp/bak

### Set the upper limit for long queries; if a query runs longer than this, mydumper exits; you can also have it kill the long queries instead
# mydumper -u leshami -p pwd -B sakila --long-query-guard [SECONDS] --kill-long-queries

### Output verbose logs during the backup
# mydumper -u leshami -p pwd -B sakila -T actor -v 3 -o /tmp/bak

### Export binlogs; the -b parameter automatically creates a binlog_snapshot folder containing the binlogs in the export directory
# mydumper -u leshami -p pwd -P 3306 -b -o /tmp/bak
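
### The examples above are one-off backups; daemon mode (feature E in section 1) can be combined with a snapshot interval for continuous backups. The command below is only a sketch built from the -D, -I and -L parameters in section 2; the interval value and log file path are illustrative, not taken from the original examples
# mydumper -u leshami -p pwd -B sakila -D -I 60 -L /tmp/bak/mydumper.log -o /tmp/bak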

