XtraBackup Stream Backup, Incremental Backup, and Compression: Summary

Source: Internet
Author: User


[Problem background]

1. Large MySQL databases need compressed backups. For a 500 GB database, the xtrabackup backup file is itself hundreds of GB, and it is compressed and packaged only after the backup completes. This amounts to three full read/write passes over the data: backup, then compress, then package.

2. On slave databases with poor disk I/O performance, the whole process can last several hours. Disk utilization sometimes reaches 100%, causing high replication latency.

3. With xtrabackup's xbstream stream backup, the backup output is piped directly into a compressor. This cuts the original three I/O passes down to one and shortens the total elapsed time.
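The single-pass idea can be illustrated with generic tools (tar stands in for the xbstream producer and gzip for lbzip2 in this sketch; the real xtrabackup commands appear later in this article):

```shell
# Throwaway "datadir" and output path for the illustration.
src=$(mktemp -d)
out=$(mktemp -u).tar.gz
echo "some table data" > "$src/t1.ibd"
# One read of the source, one write of the compressed archive --
# no intermediate uncompressed copy ever touches the disk:
tar -C "$src" -cf - . | gzip > "$out"
```

The same shape applies to the backup pipeline: the producer writes the stream to stdout, and the compressor consumes it immediately.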

[Applicable scenarios]

1. Databases whose files are large and whose backups must be compressed, where a plain backup takes a long time and slave replication delay must not be allowed to grow too large.

[Online use example]

This has been tested on a group of production machines. A simple comparison:

1. xtrabackup full backup without compression, directly tar-packaged: 170 GB, taking 80 minutes. xtrabackup stream backup + lbzip2 compression: 46 GB, taking 40 minutes.

2. I/O util comparison during the backup process (charts not reproduced here): regular backup without compression vs. xbstream backup + compression.

[Basic usage principle]

1. Backup steps

Because xbstream + compression leaves no backup directory on disk, xtrabackup can be given --extra-lsndir=DIR, a directory that stores only the backup's xtrabackup_checkpoints file. --incremental-basedir then points at the previous day's extra-lsndir directory.
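As a concrete (hypothetical) layout, the per-day checkpoint directories referenced by $lsndir and $last_lsndir in the commands below could be derived from the date; BAK_ROOT and the naming scheme are placeholder assumptions:

```shell
# Hypothetical layout: one extra-lsndir per day, named by date.
BAK_ROOT=${TMPDIR:-/tmp}/xb_bak
today=$(date +%F)                      # e.g. 2013-07-14
yesterday=$(date -d yesterday +%F)     # GNU date syntax
lsndir="$BAK_ROOT/lsn/$today"          # checkpoints of today's backup
last_lsndir="$BAK_ROOT/lsn/$yesterday" # --incremental-basedir for today
mkdir -p "$lsndir"
```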

1) Full BACKUP command:

innobackupex --user=dump --password=xxx --host=127.0.0.1 --slave-info --stream=xbstream --extra-lsndir="$lsndir" "$baseDir" 2>"$backupLog" | lbzip2 -kv -n 10 > "$backup_file" 2>>"$backupLog"

2) Incremental Backup command:

innobackupex --user=dump --password=xxx --host=127.0.0.1 --slave-info --stream=xbstream --extra-lsndir="$lsndir" --incremental-basedir="$last_lsndir" "$baseDir" 2>"$backupLog" | lbzip2 -kv -n 10 > "$backup_file" 2>>"$backupLog"
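A daily job combining the two commands above might look like the following sketch; the credentials, paths, and the full-on-Sunday scheduling policy are all placeholder assumptions:

```shell
# Hedged sketch: full backup on Sundays, incremental on other days.
daily_backup() {
    local common="--user=dump --password=xxx --host=127.0.0.1 \
--slave-info --stream=xbstream --extra-lsndir=$lsndir"
    if [ "$(date +%u)" -eq 7 ]; then  # Sunday: full backup
        innobackupex $common "$baseDir" 2>"$backupLog" \
            | lbzip2 -kv -n 10 > "$backup_file"
    else                              # other days: incremental
        innobackupex $common --incremental-basedir="$last_lsndir" \
            "$baseDir" 2>"$backupLog" \
            | lbzip2 -kv -n 10 > "$backup_file"
    fi
}
```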

2. Restore steps

The file produced by the backup is compressed, so you must first decompress it and then use xbstream to unpack the stream file (this is one extra step compared with a normal restore).

1) Decompress the compressed file: lbzip2 -dkv -n 10 2013-07-14_bak_full.bz2

2) Unpack the stream file into a directory: xbstream -x < 2013-07-14_bak_full -C /work/bak/2013-07-14_full/

The remaining steps are the same as for a normal restore.
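The restore steps can be wrapped into one function, as a hedged sketch; the --apply-log prepare step follows the standard innobackupex restore procedure and, like all paths here, is an assumption:

```shell
# Hedged sketch of the restore: decompress, unpack, prepare.
restore_stream_backup() {
    local archive=$1 target=$2        # e.g. 2013-07-14_bak_full.bz2
    mkdir -p "$target"
    lbzip2 -dkv -n 10 "$archive"      # 1) decompress, keeping the .bz2
    xbstream -x < "${archive%.bz2}" -C "$target"  # 2) unpack the stream
    innobackupex --apply-log "$target"            # 3) prepare as usual
}
```

After the prepare step, files are copied back to the datadir as in any innobackupex restore.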

