Use XtraBackup to perform Online Incremental backup and database restoration for MySQL


I. Introduction to Percona XtraBackup

1. Files in the XtraBackup bin directory

1) innobackupex

innobackupex is a symbolic link to xtrabackup. It still supports all the features and syntax of version 2.2, but it is deprecated and will be removed in the next major release.

2) xtrabackup

A binary compiled from C that can back up a whole MySQL database instance with MyISAM, InnoDB, and XtraDB tables.

3) xbcrypt

Used to encrypt or decrypt backup data

4) xbstream

Used to compress files into, or extract files from, the xbstream archive format.

5) xbcloud

Utility for uploading or downloading a full or partial xbstream archive to or from cloud storage.

2. Percona XtraBackup info

Percona XtraBackup is free, open-source MySQL hot-backup software. It backs up InnoDB and XtraDB databases without blocking (a table lock is still required when backing up MyISAM tables).

You can achieve the following benefits (https://www.percona.com/doc/percona-xtrabackup/2.3/intro.html):
  • Backups that complete quickly and reliably
  • Uninterrupted transaction processing during backups
  • Savings on disk space and network bandwidth
  • Automatic backup verification
  • Higher uptime due to faster restore time

Features

  • Create hot InnoDB backups without pausing your database
  • Make incremental backups of MySQL
  • Stream compressed MySQL backups to another server
  • Move tables between MySQL servers on-line
  • Create new MySQL replication slaves easily
  • Backup MySQL without adding load to the server

3. The XtraBackup tool supports incremental backups of the InnoDB storage engine. Its working principle is as follows:

1. InnoDB maintains a redo/undo log, also called the transaction log. The transaction log records every modification to the data in InnoDB tables. When InnoDB starts, it checks the data files against the transaction log and performs two steps: it applies committed transactions to the data files (roll forward) and rolls back modifications that were never committed.

2. xtrabackup records the log sequence number (LSN) at startup and then copies all the data files. Copying takes time, so if the data files change while they are being copied, they end up reflecting different points in time. To handle this, xtrabackup runs a background process that watches the transaction log and copies each new change out of it. It must do this continuously, because the transaction log files are written cyclically and can be reused; this way xtrabackup captures every change made to the data files after the backup started.
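
If you want to see the LSN that a backup would start from, you can read the server's current log sequence number yourself. The check below is only illustrative (it assumes a local root login) and is not part of the backup procedure:

# Print the current InnoDB log sequence number (the LSN xtrabackup records at startup)
mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G" | grep -i "log sequence number"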

3. The above describes xtrabackup's backup process. The next step is the preparation (prepare) process. During prepare, xtrabackup uses the previously copied transaction log to perform crash recovery on each data file (exactly what MySQL does when it starts up). Once this is done, the database can be restored from the backup. This logic is implemented in the compiled xtrabackup binary. The innobackupex program adds convenience and functionality by also backing up MyISAM tables and .frm files: it starts xtrabackup, waits until xtrabackup has finished copying the data files, then executes FLUSH TABLES WITH READ LOCK to block new writes and flush MyISAM data to disk, copies the MyISAM data files, and finally releases the lock.

4. The backed-up MyISAM and InnoDB tables end up consistent with each other. When the preparation (prepare) process finishes, the InnoDB data has been rolled forward to the point where the backup ended, not rolled back to the point where xtrabackup started. That end point is the same moment at which FLUSH TABLES WITH READ LOCK was executed, so the MyISAM data is in sync with the InnoDB data. Borrowing Oracle terminology, the InnoDB prepare process can be called recovery, and copying the MyISAM data can be called restoration (restore).

5. xtrabackup and innobackupex provide more features than those mentioned above; each one is described in detail in the manual. For example, the tools offer streaming backups and incremental backups, and various combined backup methods built from copying data files, copying log files, and applying logs to data files.

II. Install XtraBackup

1. Installation

yum -y install perl-devel libaio-devel
yum -y install perl-DBI perl-DBD-MySQL perl-TermReadKey perl-devel perl-Time-HiRes
cd /usr/local/src
wget -c https://www.percona.com/downloads/XtraBackup/Percona-XtraBackup-2.3.2/binary/tarball/percona-xtrabackup-2.3.2-Linux-x86_64.tar.gz
tar -zxf percona-xtrabackup-2.3.2-Linux-x86_64.tar.gz
cd percona-xtrabackup-2.3.2-Linux-x86_64/
mkdir /usr/local/xtrabackup
mv bin /usr/local/xtrabackup/
ln -s /usr/local/xtrabackup/bin/* /usr/bin/
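
As an optional sanity check (not part of the original steps), confirm that the linked tools are on the PATH and report the expected version:

xtrabackup --version
which innobackupex        # should resolve to the symlink created above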

2. Modify my.cnf

[mysqld]
datadir = /var/lib/mysql
innodb_data_home_dir = /data/mysql/ibdata
innodb_log_group_home_dir = /data/mysql/iblogs
innodb_data_file_path = ibdata1:10M;ibdata2:10M:autoextend
innodb_log_files_in_group = 2
innodb_log_file_size = 1G
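
As an optional check (not in the original text), you can confirm which InnoDB file settings the running server is actually using:

mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_%file%';"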

III. Backup and restoration of all databases

1. Backup

# Back up all databases
innobackupex --user=root --password=123456 /data/backup/
# Back up a single database
innobackupex --user=root --password=123456 --databases=backup_test /data/backup/
# Back up multiple databases
innobackupex --user=root --password=123456 --include='dba.*|dbb.*' /data/backup/
# Back up multiple tables
innobackupex --user=root --password=123456 --include='dba.tablea|dbb.tableb' /data/backup/
# Back up and compress the databases
log=zztx01_`date +%F_%H-%M-%S`.log
db=zztx01_`date +%F_%H-%M-%S`.tar.gz
innobackupex --user=root --stream=tar /data/backup 2>/data/backup/$log | gzip 1>/data/backup/$db
# Note that the archive must be extracted manually with the -i option added; otherwise not all files
# are extracted, which can cost you a long time of head-scratching (see the extraction example below).
# If an error occurs, add --defaults-file=/etc/my.cnf.
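
As referenced in the note above, a streamed tar.gz backup has to be unpacked with tar's -i option. A sketch (the target directory is illustrative, and $db is the archive name set above):

mkdir -p /data/backup/extracted
tar -izxf /data/backup/$db -C /data/backup/extracted    # -i lets tar read past the zeroed blocks in the stream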

2. Restore

service mysqld stop
mv /data/mysql /data/mysql_bak && mkdir -p /data/mysql
# --apply-log prepares the backup so that the MySQL server can be started on top of it
innobackupex --defaults-file=/etc/my.cnf --user=root --apply-log /data/backup/2015-09-18_16-35-12
# --copy-back copies the data, index, and log files from the backup directory to the locations specified in my.cnf
innobackupex --defaults-file=/etc/my.cnf --user=root --copy-back /data/backup/2015-09-18_16-35-12
chown -R mysql.mysql /data/mysql
service mysqld start
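
An illustrative post-restore check (using the credentials from the backup commands) to confirm the server is up and the schemas are back:

mysql -u root -p123456 -e "SHOW DATABASES;"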

IV. Incremental backup and restoration
1. Create a test database and table
CREATE DATABASE backup_test;                          -- create a database
CREATE TABLE `backup` (                               -- create a table
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(20) NOT NULL DEFAULT '',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `del` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
2. Incremental backup
# --incremental: the directory in which to store the incremental backup
# --incremental-basedir: the directory of the full (or previous incremental) backup this increment is based on
# First incremental backup
mysql> INSERT INTO backup (name) VALUES ('xx'), ('xxxx');     -- insert data
innobackupex --user=root --incremental-basedir=/data/backup/2015-09-18_16-35-12 --incremental /data/backup/
# Second incremental backup
mysql> INSERT INTO backup (name) VALUES ('test'), ('testd');  -- insert more data
innobackupex --user=root --incremental-basedir=/data/backup/2015-09-18_18-05-20 --incremental /data/backup/
3. View the incremental backup checkpoint files
[root@localhost 2015-09-18_16-35-12]# cat xtrabackup_checkpoints     # file in the full backup directory
backup_type = full-prepared
from_lsn = 0            # a full backup always starts from LSN 0
to_lsn = 23853959
last_lsn = 23853959
compact = 0
[root@localhost 2015-09-18_18-05-20]# cat xtrabackup_checkpoints     # file in the first incremental backup directory
backup_type = incremental
from_lsn = 23853959
to_lsn = 23854112
last_lsn = 23854112
compact = 0
[root@localhost 2015-09-18_18-11-43]# cat xtrabackup_checkpoints     # file in the second incremental backup directory
backup_type = incremental
from_lsn = 23854112
to_lsn = 23854712
last_lsn = 23854712
compact = 0
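The from_lsn of each incremental backup must match the to_lsn of the backup it is based on. A quick way to eyeball the whole chain (a sketch, assuming the directory layout above):

grep -H lsn /data/backup/*/xtrabackup_checkpoints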
After the incremental backups are complete, delete the test database with DROP DATABASE backup_test; this gives you something to compare against once the database has been restored.
4. Incremental restoration
Restoration is done in two steps:
a. Prepare
innobackupex --apply-log /path/to/BACKUP-DIR
After this step the data can be accessed and used by applications. You can use the --use-memory option to specify how much memory to use to speed up the prepare phase; the default is 100 MB.
b. Restore (copy back)
innobackupex --copy-back /path/to/BACKUP-DIR
This reads variables such as datadir, innodb_data_home_dir, and innodb_data_file_path from my.cnf, copies the MyISAM tables first, then the InnoDB tables, and finally the log files. The target data directory must be empty.
Merge the incremental backups into the full backup:
innobackupex --apply-log --redo-only /data/backup/2015-09-18_16-35-12
innobackupex --apply-log --redo-only --incremental /data/backup/2015-09-18_16-35-12 --incremental-dir=/data/backup/2015-09-18_18-05-20
innobackupex --apply-log --incremental /data/backup/2015-09-18_16-35-12 --incremental-dir=/data/backup/2015-09-18_18-11-43
# /data/backup/2015-09-18_16-35-12: the full backup directory
# /data/backup/2015-09-18_18-05-20: the directory created by the first incremental backup
# /data/backup/2015-09-18_18-11-43: the directory created by the second incremental backup
# Note: --redo-only is used for the full backup and every incremental except the last one.
Restore the data:
service mysqld stop
innobackupex --copy-back /data/backup/2015-09-18_16-35-12
chown -R mysql.mysql /data/mysql
service mysqld start
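
After the server starts, the rows inserted before each incremental backup should be present again. An illustrative check, using the test data and credentials from the earlier steps:

mysql -u root -p123456 -e 'SELECT id, name FROM backup_test.`backup`;'   # should list xx, xxxx, test, testd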

5. Description of common innobackupex parameters

--defaults-file
Same as the --defaults-file parameter of xtrabackup.

--apply-log
A wrapper around xtrabackup's --prepare parameter.

--copy-back
During restoration, copies the backed-up data files back to the MySQL server's datadir.

--remote-host=HOSTNAME
Stores the backup on the specified remote server via ssh.

--stream=[tar]
Output format of the backup. For tar, tar4ibd is used; it can be found in the XtraBackup binary package. If --stream=tar is specified during backup, the directory containing tar4ibd must be in $PATH (tar4ibd does the packaging and ships with the XtraBackup binary package).
When backing up with --stream=tar, the xtrabackup_logfile may be temporarily stored in the /tmp directory. If there are many concurrent writes during the backup, xtrabackup_logfile can grow large (5 GB+) and fill up /tmp; you can avoid this by specifying a different directory with the --tmpdir parameter.

--tmpdir=DIRECTORY
The temporary directory for the transaction log when --remote-host or --stream is specified. By default, the tmpdir setting from the MySQL configuration file is used.

--redo-only
Used together with --apply-log; forces the prepare step to only apply the redo log and skip rolling back uncommitted transactions. This is necessary when merging incremental backups.

--use-memory=#
Used together with --apply-log (prepare); controls how much memory the embedded InnoDB instance may use during the prepare phase.

--throttle=IOS
Same as the --throttle parameter of xtrabackup.

--sleep=
Used with ibbackup; specifies how many milliseconds to pause after every 1 MB of data copied, to minimize the impact on normal traffic during the backup. See the ibbackup manual for details.

--compress[=LEVEL]
Compresses the backup data. Only supported by ibbackup; not yet implemented in xtrabackup.

--include=REGEXP
A wrapper around xtrabackup's --tables parameter; also supported by ibbackup. Specifies which databases and tables to back up. For example, --include="test.*" backs up all tables in the test database. For a full backup, omit this parameter. To back up only the tables test1 and test2 in the test database, write --include="test.test1|test.test2". Wildcards are also allowed, such as --include="test.test*".

--databases=LIST
Lists the databases to back up. If this parameter is not specified, all databases containing MyISAM and InnoDB tables are backed up.

--uncompress
Decompresses the backup data files. Only supported by ibbackup; not yet implemented in xtrabackup.

--slave-info
Use when backing up a replication slave. Writes an xtrabackup_slave_info file into the backup directory, recording the master's binary log file and offset. The file content looks like: CHANGE MASTER TO MASTER_LOG_FILE='', MASTER_LOG_POS=0

--socket=SOCKET
Specifies the location of mysql.sock so that the backup process can connect to MySQL.
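
For example, when backing up a replica to seed a new one, --slave-info and --socket might be combined like this (a sketch; the socket path and credentials are illustrative):

innobackupex --user=root --password=123456 --slave-info --socket=/var/lib/mysql/mysql.sock /data/backup/
# Then, in the timestamped directory this backup creates:
cat xtrabackup_slave_info      # contains the CHANGE MASTER TO statement for the new replica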

