Back up a MySQL database with mydumper (multi-threaded)


mysqldump: one of its characteristics is that tables are locked while they are being dumped, so running a backup during working hours can block DML. These days, however, most MySQL deployments are master-slave and backups are usually taken on the slave, so the locking problem is less of a concern. Even so, mydumper handles the backup task better.
mydumper is a high-performance, multi-threaded backup and recovery tool for MySQL and Drizzle, developed mainly by engineers from MySQL, Facebook, and SkySQL. Its main features:


1: Lightweight, written in C
2: Executes up to 10 times faster than mysqldump
3: Consistent snapshots of transactional and non-transactional tables (version 0.2.2 and above)
4: Fast file compression
5: Supports exporting binlogs
6: Multi-threaded recovery (version 0.2.1 and above)
7: Can run as a daemon, taking timed snapshots and continuously collecting binary logs (version 0.5.0 and above)
8: Open source (GNU GPLv3)

Download and installation (environment: Ubuntu 12.04):


wget https://launchpad.net/mydumper/0.5/0.5.2/+download/mydumper-0.5.2.tar.gz
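Extract the archive and enter the source directory (the usual tar invocation):

tar zxvf mydumper-0.5.2.tar.gz
cd mydumper-0.5.2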

Installation: the extracted source includes installation instructions. Install the dependencies and build:


apt-get install libglib2.0-dev libmysqlclient15-dev zlib1g-dev libpcre3-dev
cmake .
make

The output of cmake and make is as follows:

root@dd:~/mydumper-0.5.2# cmake .
-- The CXX compiler identification is GNU
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Using mysql-config: /usr/bin/mysql_config
-- Found MySQL: /usr/include/mysql, /usr/lib/x86_64-linux-gnu/libmysqlclient_r.so;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libz.so;/usr/lib/x86_64-linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/librt.so;/usr/lib/x86_64-linux-gnu/libdl.so

CMake Warning at docs/CMakeLists.txt:9 (message):
  Unable to find Sphinx documentation generator


-- ------------------------------------------------
-- MYSQL_CONFIG = /usr/bin/mysql_config
-- CMAKE_INSTALL_PREFIX = /usr/local
-- BUILD_DOCS = ON
-- RUN_CPPCHECK = OFF
-- Change a value with: cmake -D<Variable>=<Value>
-- ------------------------------------------------
--
-- Configuring done
-- Generating done
-- Build files have been written to: /root/mydumper-0.5.2
root@dd:~/mydumper-0.5.2# make
Scanning dependencies of target mydumper
[ 20%] Building C object CMakeFiles/mydumper.dir/mydumper.c.o
[ 40%] Building C object CMakeFiles/mydumper.dir/binlog.c.o
[ 60%] Building C object CMakeFiles/mydumper.dir/server_detect.c.o
[ 80%] Building C object CMakeFiles/mydumper.dir/g_unix_signal.c.o
Linking C executable mydumper
[ 80%] Built target mydumper
Scanning dependencies of target myloader
[100%] Building C object CMakeFiles/myloader.dir/myloader.c.o
Linking C executable myloader
[100%] Built target myloader

The build generates two tools: mydumper (backup) and myloader (restore).
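Optionally, the two binaries can be installed system-wide instead of being run from the source directory. A minimal sketch, assuming the default CMAKE_INSTALL_PREFIX of /usr/local shown in the cmake output above:

make install          # installs mydumper and myloader under the configured prefix
mydumper --version    # quick sanity check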

Parameters:
mydumper:


root@dd:~/mydumper-0.5.2# ./mydumper --help
Usage:
  mydumper [OPTION...] multi-threaded MySQL dumping

Help Options:
  -?, --help                Show help options

Application Options:
  -B, --database            Database to back up
  -T, --tables-list         Comma-separated list of tables to back up
  -o, --outputdir           Directory to write the output files to
  -s, --statement-size      Size of the generated INSERT statements in bytes, default 1000000
  -r, --rows                Split tables into chunks of this many rows
  -c, --compress            Compress the output files
  -e, --build-empty-files   Generate an empty file even if the table has no data
  -x, --regex               Regular expression matching 'db.table'
  -i, --ignore-engines      Comma-separated list of storage engines to ignore
  -m, --no-schemas          Do not export table schemas
  -k, --no-locks            Do not take the shared read lock. WARNING: this causes inconsistent backups
  -l, --long-query-guard    Long-query guard time in seconds, default 60
      --kill-long-queries   Kill long-running queries instead of aborting the dump
  -b, --binlogs             Export the binary logs
  -D, --daemon              Enable daemon mode
  -I, --snapshot-interval   Interval between dump snapshots, default 60; only used in daemon mode
  -L, --logfile             Log file to write to
  -h, --host                The host to connect to
  -u, --user                Username with privileges to run the dump
  -p, --password            User password
  -P, --port                TCP/IP port to connect to
  -S, --socket              UNIX domain socket file to use for the connection
  -t, --threads             Number of threads to use, default 4
  -C, --compress-protocol   Use compression on the MySQL connection
  -V, --version             Show the program version and exit
  -v, --verbose             Verbosity of output: 0 = silent, 1 = errors, 2 = warnings, 3 = info; default 2
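As a point of reference, a dump that combines several of these options might look like the following sketch; the user, database, and output path here are illustrative, not taken from this article:

./mydumper -u backup_user -p '#####' -h 127.0.0.1 -P 3306 -B mydb \
           -t 8 -r 500000 -c -o /backup/mydb/
# 8 threads (-t), tables split into 500000-row chunks (-r), compressed output (-c)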

myloader:


root@dd:~/mydumper-0.5.2# ./myloader --help
Usage:
  myloader [OPTION...] multi-threaded MySQL loader

Help Options:
  -?, --help                      Show help options

Application Options:
  -d, --directory                 Directory containing the backup files
  -q, --queries-per-transaction   Number of queries per transaction, default 1000
  -o, --overwrite-tables          Drop the table first if it already exists. Note: this requires the table-structure files to be in the backup, otherwise there will be problems
  -B, --database                  Database to restore into
  -e, --enable-binlog             Enable binary logging for the restored data
  -h, --host                      The host to connect to
  -u, --user                      Username with privileges to run the restore
  -p, --password                  User password
  -P, --port                      TCP/IP port to connect to
  -S, --socket                    UNIX domain socket file to use for the connection
  -t, --threads                   Number of threads to use, default 4
  -C, --compress-protocol         Use compression on the MySQL connection
  -V, --version                   Show the program version and exit
  -v, --verbose                   Verbosity of output: 0 = silent, 1 = errors, 2 = warnings, 3 = info; default 2
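Similarly, a restore that tunes the thread count and transaction size might look like this sketch (the user, database, and path are illustrative):

./myloader -u restore_user -p '#####' -h 127.0.0.1 -P 3306 -B mydb \
           -t 8 -q 500 -o -d /backup/mydb/
# 8 threads (-t), 500 queries per transaction (-q), drop existing tables first (-o)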

Testing: basic usage

1: Backup

The backup command is as follows:

./mydumper -u zjy -p ##### -h 192.168.220.245 -P 3306 -B chushihua -o /home/zhoujy/bak/

Back up the chushihua database to the /home/zhoujy/bak/ directory, then check whether multiple threads are used:


| 4937639 | zjy | 192.168.200.25:34781 | NULL      | Query |  0 | NULL           | show processlist                |
| 4937677 | zjy | 192.168.200.25:34791 | NULL      | Query | 10 | Writing to net | SELECT /*!40001 SQL_NO_CACHE */ |
| 4937678 | zjy | 192.168.200.25:34792 | NULL      | Query |  5 | Writing to net | SELECT /*!40001 SQL_NO_CACHE */ |
| 4937679 | zjy | 192.168.200.25:34793 | NULL      | Query | 10 | Writing to net | SELECT /*!40001 SQL_NO_CACHE */ |
| 4937680 | zjy | 192.168.200.25:34794 | NULL      | Query | 10 | Writing to net | SELECT /*!40001 SQL_NO_CACHE */ |

This shows that the backup uses 4 threads (the default). View the backup files:


root@zhoujy:/home/zhoujy/bak# ls -lh
-rw-r--r-- 1 root root  322 2013-11-14 17:59 chushihua.dba_hospital_all_name-schema.sql
-rw-r--r-- 1 root root  16M 2013-11-14 17:59 chushihua.dba_hospital_all_name.sql
-rw-r--r-- 1 root root  221 2013-11-14 17:59 chushihua.dba_hospital-schema.sql
-rw-r--r-- 1 root root  658 2013-11-14 17:59 chushihua.dba_hospital.sql
-rw-r--r-- 1 root root  198 2013-11-14 17:59 chushihua.dba_jobTitle-schema.sql
-rw-r--r-- 1 root root      2013-11-14 17:59 chushihua.dba_jobTitle.sql
-rw-r--r-- 1 root root  261 2013-11-14 17:59 chushihua.dba_locatedCity-schema.sql
-rw-r--r-- 1 root root 202K 2013-11-14 17:59 chushihua.dba_locatedCity.sql

Analysis: mydumper backs up the data and the table structure into separate files, and the binary logs are also written to their own files. The backup directory contains:


Metadata: records the start and end times of the backup and the binlog file name and position.
Table data: one file per table.
Table schemas: one table-structure file per table.
Binary logs: with the --binlogs option enabled, the binary logs are stored in the binlog_snapshot directory.
Daemon mode: in this mode there are five entries: 0, 1, binlogs, binlog_snapshot, and last_dump. Directories 0 and 1 hold the alternating interval backups, so if mydumper fails for some reason a good snapshot is still available; when a snapshot completes, last_dump points to it.
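Daemon mode is only described above, not demonstrated. A minimal sketch of how it might be started, reusing the connection details from the backup example; the interval and log-file values are illustrative:

./mydumper -u zjy -p '#####' -h 192.168.220.245 -P 3306 -B chushihua \
           -D -I 60 -L /home/zhoujy/bak/mydumper.log -o /home/zhoujy/bak/
# -D runs mydumper as a daemon, -I sets the snapshot interval,
# -L writes the log to a file instead of stdout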

2: Restore: restore to another server; first create the database to restore into (chushihua).
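For example, one way to create the target database first (a sketch, assuming a root login on the destination instance):

mysql -u root -p123456 -h 192.168.200.25 -P 3307 -e "CREATE DATABASE chushihua;"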

Then run myloader:

./myloader -u root -p 123456 -h 192.168.200.25 -P 3307 -B chushihua -d /home/zhoujy/bak/

Check whether the restore is also multi-threaded, as with the backup:


| 19 | root |  | NULL      | Query | 0 | init   | show processlist |
| 30 | root |  | chushihua | Query | 5 | update | INSERT INTO      |
| 31 | root |  | chushihua | Query | 5 | update | INSERT INTO      |
| 32 | root |  | chushihua | Query | 5 | update | INSERT INTO      |
| 33 | root |  | chushihua | Query | 5 | update | INSERT INTO      |

This shows that the restore also uses 4 threads (the default).
Further testing: test some of the commonly used parameters.
1) Back up specified tables (-T) without exporting the table structure (-m):


./mydumper -u root -p 123456 -h 192.168.220.252 -P 3306 -m -B test -T b,a,c,d,e,g,f,h,i -o /home/zhoujy/bak/

zhoujy@zhoujy:~/bak$ ls -lh
-rw-rw-r-- 1 zhoujy zhoujy 3.4K 2013-11-14 20:57 test.a.sql
-rw-rw-r-- 1 zhoujy zhoujy 1.6M 2013-11-14 20:57 test.b.sql
-rw-rw-r-- 1 zhoujy zhoujy 7.8M 2013-11-14 20:57 test.c.sql
-rw-rw-r-- 1 zhoujy zhoujy 1.7M 2013-11-14 20:57 test.d.sql
-rw-rw-r-- 1 zhoujy zhoujy 303K 2013-11-14 20:57 test.e.sql
-rw-rw-r-- 1 zhoujy zhoujy 517K 2013-11-14 20:57 test.f.sql
-rw-rw-r-- 1 zhoujy zhoujy 646K 2013-11-14 20:57 test.g.sql
-rw-rw-r-- 1 zhoujy zhoujy 394K 2013-11-14 20:57 test.h.sql
-rw-rw-r-- 1 zhoujy zhoujy  34K 2013-11-14 20:57 test.i.sql
-rw-rw-r-- 1 zhoujy zhoujy      2013-11-14 20:57 metadata

2) Compress the backup files (-c), back up the binlogs (-b), and select tables with a regular expression (-x):


./mydumper -u root -p 123456 -h 192.168.200.25 -P 3306 -m -c -b --regex='tmp.*' -B test -o /home/zhoujy/bak/

drwx------ 2 zhoujy zhoujy 4.0K 2013-11-14 21:16 binlog_snapshot
-rw-rw-r-- 1 zhoujy zhoujy      2013-11-14 21:16 metadata
-rw-rw-r-- 1 zhoujy zhoujy  94K 2013-11-14 21:16 test.tmp_0808.sql.gz
-rw-rw-r-- 1 zhoujy zhoujy  75K 2013-11-14 21:16 test.tmp_0809.sql.gz
-rw-rw-r-- 1 zhoujy zhoujy  25K 2013-11-14 21:16 test.tmp_0813.sql.gz
-rw-rw-r-- 1 zhoujy zhoujy 208K 2013-11-14 21:16 test.tmp_0826.sql.gz
-rw-rw-r-- 1 zhoujy zhoujy  915 2013-11-14 21:16 test.tmp_0827.sql.gz
-rw-rw-r-- 1 zhoujy zhoujy  901 2013-11-14 21:16 test.tmp_0912.sql.gz
-rw-rw-r-- 1 zhoujy zhoujy 2.1K 2013-11-14 21:16 test.tmp_0916.sql.gz
-rw-rw-r-- 1 zhoujy zhoujy 622K 2013-11-14 21:16 test.tmp_0918_a.sql.gz
-rw-rw-r-- 1 zhoujy zhoujy  28M 2013-11-14 21:16 test.tmp_0918_ff.sql.gz

As shown above, the backup files are compressed (they can be decompressed with gzip -d), all tables matching tmp.* were backed up, and the binary logs were backed up into the binlog_snapshot directory, also compressed.
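For example, to inspect one of the compressed dump files:

gzip -d test.tmp_0826.sql.gz          # or view it without unpacking: zcat test.tmp_0826.sql.gz | less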
3) Restore, dropping existing tables first (-o). Note that the backup directory must contain the table-structure files, otherwise there will be problems:


./myloader -u root -p 123456 -h 192.168.200.25 -P 3306 -o -B test -d /home/zhoujy/bak/

For the effects of the other parameters, please test them yourself.
Final test: compare mysqldump and mydumper with the following script:


#!/usr/bin/env python
# coding=utf-8
import MySQLdb
import os
import sys
import time


def mysqldump_data():
    t1 = time.time()
    backup = os.system('mysqldump --no-defaults -uroot -p123456 -h192.168.200.25 '
                       '--default-character-set=utf8 test > /home/zhoujy/test.bak')
    t2 = time.time()
    t = round(t2 - t1)
    print "mysqldump cost time %s" % t


def mydumper_data():
    t1 = time.time()
    backup = os.system('mydumper -u root -p 123456 -h 192.168.200.25 -P 3306 '
                       '-B test -o /home/zhoujy/bak/')
    t2 = time.time()
    t = round(t2 - t1)
    print "mydumper cost time %s" % t


if __name__ == '__main__':
    mysqldump_data()
    mydumper_data()
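One caveat when timing the two tools back to back: the second run benefits from a warm OS page cache on the server. For a fairer comparison, the cache can be dropped between runs (Linux-specific, requires root):

sync
echo 3 > /proc/sys/vm/drop_caches    # free page cache, dentries and inodes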

Two databases were tested:

Database 1:

mysqldump cost time: 162s
mydumper cost time: 61s

Database 2:

mysqldump cost time: 483s
mydumper cost time: 337s

Judging from the timings above, mydumper does indeed speed up the backup, and the same holds for the restore.

Summary:
As the tests above show, mydumper can improve backup and restore efficiency, but the improvement is ultimately limited by disk I/O capacity, so assess the disk I/O before relying on it. It is a tool worth trying.
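A rough way to do that disk I/O assessment before the first full run; iostat comes from the sysstat package, and the test path is illustrative:

# rough sequential-write throughput of the backup target (writes, then removes, a 1 GB file)
dd if=/dev/zero of=/home/zhoujy/bak/io_test bs=1M count=1024 oflag=direct
rm -f /home/zhoujy/bak/io_test
# watch device utilization while a trial dump is running
iostat -x 5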
