"MySQL" Using Xtrabackup online incremental backup and recovery database

Source: Internet
Author: User
Tags: dba, wrapper, install, perl, percona

I. Introduction

1. XtraBackup ships with several binaries:

innobackupex

innobackupex is a symbolic link to xtrabackup. It still supports all the features and syntax of version 2.2, but it is now deprecated and will be removed in the next major release.

xtrabackup

A binary compiled from C that can back up MySQL database instances with MyISAM, InnoDB, and XtraDB tables

xbcrypt

Utility used to encrypt or decrypt backup data

xbstream

Utility used to compress files into, or decompress files from, the xbstream format

xbcloud

Utility used to upload a full or partial xbstream archive to the cloud, or download one from it

2. About Percona XtraBackup

Percona XtraBackup is free, open-source hot-backup software for MySQL. It can perform non-blocking backups of databases using the InnoDB and XtraDB storage engines (backing up MyISAM tables additionally requires a table lock).

It provides the following benefits (https://www.percona.com/doc/percona-xtrabackup/2.3/intro.html):

    • Backups that complete quickly and reliably
    • Uninterrupted transaction processing during backups
    • Savings on disk space and network bandwidth
    • Automatic Backup Verification
    • Higher uptime due to faster restore time
Features
    • Create hot InnoDB backups without pausing your database
    • Make incremental backups of MySQL
    • Stream compressed MySQL backups to another server
    • Move tables between MySQL servers on-line
    • Create new MySQL replication slaves easily
    • Back up MySQL without adding load to the server
3. The XtraBackup tool supports incremental backups of the InnoDB storage engine; it works as follows

1. InnoDB maintains a redo/undo log, also called the transaction log, which records modifications to every InnoDB table. When InnoDB starts, it examines the data files and the transaction log and performs two steps: it applies (rolls forward) committed transactions from the log to the data files, and it rolls back data that was modified but not committed.

2. XtraBackup records the log sequence number (LSN) at startup and then copies all the data files. Copying takes time, so if the data files change during this period, the copies will reflect different points in time. Therefore, XtraBackup also runs a background process that watches the transaction log and copies its latest modifications. It must do this continuously, because the transaction log files are written in a circular fashion and can be reused. As a result, XtraBackup captures every change made to the data files in the transaction log from the moment it started.
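The LSN tracking described above is visible in the xtrabackup_checkpoints file that XtraBackup writes into every backup directory. A minimal sketch of reading it, where the field names match the real file format but the LSN values are invented for illustration:

```shell
# Create a sample xtrabackup_checkpoints file; a real backup writes this
# file into the backup directory, and the LSN values here are made up.
cat > /tmp/xtrabackup_checkpoints <<'EOF'
backup_type = full-backuped
from_lsn = 0
to_lsn = 2507443
last_lsn = 2507443
compact = 0
EOF

# to_lsn is where this backup ends; an incremental backup taken on top
# of it would copy only pages changed after this LSN.
to_lsn=$(awk -F' = ' '$1 == "to_lsn" {print $2}' /tmp/xtrabackup_checkpoints)
echo "next incremental starts from LSN $to_lsn"
```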

3. The above is XtraBackup's backup process. Next comes the prepare process, during which XtraBackup uses the previously copied transaction log to perform crash recovery on each data file (just as MySQL does at startup). When this process finishes, the database is ready to be restored; it is implemented in the compiled xtrabackup binary. The innobackupex program adds the ability to back up MyISAM tables and .frm files, adding convenience and functionality: innobackupex starts xtrabackup, waits until the data files have been copied, then executes FLUSH TABLES WITH READ LOCK to block new writes and flush MyISAM table data to disk, copies the MyISAM data files, and finally releases the lock.

4. The backed-up MyISAM and InnoDB tables end up consistent with each other: at the end of the prepare process, the InnoDB data has been rolled forward to the point where the entire backup ended, not rolled back to the point where XtraBackup started. That point in time is the same one at which FLUSH TABLES WITH READ LOCK was executed, so the MyISAM data is synchronized with the InnoDB data. As in Oracle, InnoDB's prepare process can be called recovery, and the MyISAM data-copying process can be called restore.

5. Both xtrabackup and innobackupex offer many features not mentioned above; each is described in detail in the manual. In brief, by copying data files, copying log files, and applying the log to the data files (rolling forward), these tools support a variety of combined backup methods, such as streaming and incremental backups.

II. Installing XtraBackup

1. Installation

yum -y install perl perl-devel libaio libaio-devel
yum -y install perl-DBI perl-DBD-MySQL perl-TermReadKey perl-devel perl-Time-HiRes
cd /usr/local/src
wget -c https://www.percona.com/downloads/XtraBackup/Percona-xtrabackup-2.3.2/binary/tarball/percona-xtrabackup-2.3.2-linux-x86_64.tar.gz
tar -zxf percona-xtrabackup-2.3.2-linux-x86_64.tar.gz
cd percona-xtrabackup-2.3.2-linux-x86_64/
mkdir /usr/local/xtrabackup
mv bin /usr/local/xtrabackup/
ln -s /usr/local/xtrabackup/bin/* /usr/bin/

2. Modify my.cnf

[mysqld]
datadir = /var/lib/mysql
innodb_data_home_dir = /data/mysql/ibdata
innodb_log_group_home_dir = /data/mysql/iblogs
innodb_data_file_path = ibdata1:10M;ibdata2:10M:autoextend
innodb_log_files_in_group = 2
innodb_log_file_size = 1G
III. Full database backup and restore

1. Backup

# Back up all databases
innobackupex --user=root --password=123456 /data/backup/
# Back up a single database
innobackupex --user=root --password=123456 --database=backup_test /data/backup/
# Back up multiple databases
innobackupex --user=root --password=123456 --include='dba.*|dbb.*' /data/backup/
# Back up multiple tables
innobackupex --user=root --password=123456 --include='dba.tablea|dbb.tableb' /data/backup/
# Back up all databases and compress the stream
log=zztx01_`date +%F_%H-%M-%S`.log
db=zztx01_`date +%F_%H-%M-%S`.tar.gz
innobackupex --user=root --stream=tar /data/backup 2>/data/backup/$log | gzip 1>/data/backup/$db
# Note: the archive must be unpacked manually, and tar needs the -i option;
# without it, not all files are extracted (a point that puzzled me for a long time).
2. Restore

service mysqld stop
mv /data/mysql /data/mysql_bak && mkdir -p /data/mysql
# --apply-log prepares the backup so that the MySQL service can be started from it
innobackupex --defaults-file=/etc/my.cnf --user=root --apply-log /data/backup/2015-09-18_16-35-12
# --copy-back copies the data, indexes, and logs from the backup directory back to the locations specified in my.cnf
innobackupex --defaults-file=/etc/my.cnf --user=root --copy-back /data/backup/2015-09-18_16-35-12
chown -R mysql.mysql /data/mysql
service mysqld start
IV. Incremental backup and restore

1. Create a test database and table
2. Incremental backup

# --incremental: directory that will hold the incremental backup
# --incremental-dir: which backup the increment is based on
# First, insert some test data
mysql> INSERT INTO backup (name) VALUES ('xx'), ('xxxx');
innobackupex
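The incremental backup command itself is cut off in the source above. A dry-run sketch of what it likely looked like; the commands are only printed, not executed (running innobackupex needs a live MySQL server), and the directory names are assumptions taken from the restore example later in this article:

```shell
# Dry-run sketch: build and print the commands rather than run them.
# Paths are assumed, matching the restore example later in this article.
FULL_DIR=/data/backup/2015-09-18_16-35-12   # initial full backup
INC1_DIR=/data/backup/2015-09-18_18-05-20   # first incremental

# First incremental, based on the full backup
cmd1="innobackupex --user=root --password=123456 --incremental /data/backup --incremental-basedir=$FULL_DIR"
# Second incremental, based on the first incremental directory
cmd2="innobackupex --user=root --password=123456 --incremental /data/backup --incremental-basedir=$INC1_DIR"

echo "$cmd1"
echo "$cmd2"
```

Each increment is based on the previous backup directory, so the chain full → inc1 → inc2 must be replayed in order during prepare.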
3. View incremental BACKUP log files

After the incremental backup is done, delete the backup_test database (DROP DATABASE backup_test) so that you can verify the restore afterwards.

4. Incremental restore

The restore is done in two steps:

A. prepare

innobackupex --apply-log /path/to/backup-dir

At this point the data can be accessed by applications. You can use the --use-memory option to specify the amount of memory used, which speeds up the process; the default is 100M.

B. recover

innobackupex --copy-back /path/to/backup-dir

This reads variables such as datadir, innodb_data_home_dir, and innodb_data_file_path from my.cnf, then copies the MyISAM tables first, the InnoDB tables next, and the log files last; the datadir directory must be empty.

innobackupex --user=root --apply-log --redo-only /data/backup/2015-09-18_16-35-12
innobackupex --user=root --apply-log --redo-only /data/backup/2015-09-18_16-35-12 --incremental-dir=/data/backup/2015-09-18_18-05-20
innobackupex --user=root --apply-log /data/backup/2015-09-18_16-35-12 --incremental-dir=/data/backup/2015-09-18_18-11-43
# /data/backup/2015-09-18_16-35-12: the full backup directory
# /data/backup/2015-09-18_18-05-20: the directory produced by the first incremental backup
# /data/backup/2015-09-18_18-11-43: the directory produced by the second incremental backup
# Note: --redo-only is used on the base backup and every increment except the
# last one, which is applied without it so that uncommitted transactions are rolled back.

The incremental restore is not finished at this point; the most important step remains: a full restore. The principle of incremental backup is that the data in the incremental directories is merged into the full backup directory, and the restore then proceeds from that combined full backup.
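Once the increments have been merged, the final full restore works exactly like restoring a plain full backup. A print-only sketch, assuming the paths used in the examples above (the real commands must be run as root with mysqld stopped):

```shell
# Print-only sketch of the final restore, using the paths from the
# examples above. After the increments have been merged into the full
# backup directory, it is restored like an ordinary full backup.
restore_steps='service mysqld stop
mv /data/mysql /data/mysql_bak && mkdir -p /data/mysql
innobackupex --defaults-file=/etc/my.cnf --user=root --copy-back /data/backup/2015-09-18_16-35-12
chown -R mysql.mysql /data/mysql
service mysqld start'
printf '%s\n' "$restore_steps"
```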

V. Common innobackupex parameters

--defaults-file
Same as the --defaults-file parameter of xtrabackup

--apply-log
A wrapper around xtrabackup's --prepare parameter

--copy-back
During data recovery, copies the backed-up data files to the MySQL server's datadir;

--remote-host=HOSTNAME
Stores the backup data on a remote server via SSH;

--stream=[tar]
Output format of the backup. tar archives are created with tar4ibd, which is included in the xtrabackup binary package. If you specify --stream=tar at backup time, the directory containing tar4ibd must be in $PATH (tar4ibd performs the compression and can be found in the xtrabackup binary package).
When backing up with --stream=tar, xtrabackup_logfile may be placed temporarily in /tmp. If there are many concurrent writes during the backup, xtrabackup_logfile can grow large (5 GB+) and may fill up /tmp; you can avoid this by specifying a different directory with --tmpdir.

--tmpdir=DIRECTORY
When --remote-host or --stream is specified, the transaction log is stored temporarily in this directory; it defaults to the temporary directory given by tmpdir in the MySQL configuration file

--redo-only (used with --apply-log)
Forces the log to be applied redo-only, skipping the rollback phase. This is necessary when applying incremental backups.

--use-memory=#
Used with prepare; controls the amount of memory the embedded InnoDB instance uses during the prepare phase

--throttle=IOS
Same as the --throttle parameter of xtrabackup

--sleep=MS is used with ibbackup; for every 1 MB of data copied, the process pauses for the given number of milliseconds, again to minimize the impact on normal traffic during the backup. See the ibbackup manual;

--compress[=LEVEL]
Compresses the backup data; supported only by ibbackup, not yet implemented in xtrabackup;

--include=REGEXP
A wrapper around xtrabackup's --tables parameter, also supported by ibbackup. Specifies which tables to include in the backup. For example, --include="test.*" backs up all tables in the test database. Omit this parameter for a full backup; to back up only the tables test1 and test2 in the test database, write --include="test.test1|test.test2". Wildcards are also allowed, e.g. --include="test.test*".

--databases=LIST
Lists the databases to back up. If this parameter is not specified, all databases containing MyISAM and InnoDB tables are backed up;

--uncompress
Decompresses the backed-up data files; supported only by ibbackup, not yet implemented in xtrabackup;

--slave-info
When backing up a slave, adding --slave-info creates an extra file, xtrabackup_slave_info, in the backup directory. It records the master's log file and offset, with contents similar to: CHANGE MASTER TO MASTER_LOG_FILE='', MASTER_LOG_POS=0

--socket=SOCKET
Specifies the location of mysql.sock so that the backup process can log in to MySQL.
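Several of the parameters above are typically combined. A print-only sketch (the hostname and remote path are invented for illustration) showing --stream together with --tmpdir:

```shell
# Print-only sketch combining --stream and --tmpdir: stream a tar backup
# to another host over SSH while keeping xtrabackup_logfile out of /tmp.
# The hostname and remote path are invented for illustration.
backup_cmd="innobackupex --user=root --stream=tar --tmpdir=/data/tmp /data/backup"
echo "$backup_cmd | ssh backup@remotehost 'cat > /backups/full.tar'"
```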

Reference
http://www.percona.com/doc/percona-xtrabackup/2.1/

"MySQL" Using Xtrabackup online incremental backup and recovery database
