MySQL database backup based on Xtrabackup

Source: Internet
Author: User
Tags: benchmark

First, the principle of xtrabackup backup and recovery

When backing up InnoDB tables, xtrabackup starts several threads to copy the .ibd files of the independent tablespaces, and while copying it continuously monitors changes in the redo log and appends them to its own transaction log file (xtrabackup_logfile). The more physical writes occur during this period, the larger xtrabackup_logfile grows. In the prepare phase after the copy completes, xtrabackup uses a mechanism similar to InnoDB crash recovery: it brings the data files into a state consistent with the log and rolls back uncommitted transactions. If files such as MyISAM tables and the InnoDB table definitions also need to be backed up, xtrabackup issues FLUSH TABLES WITH READ LOCK to obtain a global read lock, copies those files (which can no longer change while the lock is held), records the binlog position, releases the lock once the copy is finished, and stops monitoring the redo log.
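As a minimal sketch of this two-phase flow (copy, then prepare), the commands below run a standalone full backup with innobackupex and then apply the collected redo log to it. The target directory /data/mysql_backup/full and the credentials are placeholders matching the recovery examples later in this article, not values you must use:

innobackupex --user=root --defaults-file=/etc/my.cnf --no-timestamp /data/mysql_backup/full
innobackupex --user=root --defaults-file=/etc/my.cnf --use-memory=8G --apply-log /data/mysql_backup/full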

Many readers, based on the description above, assume that copying the .ibd files is the same as copying files at the operating system level. In fact this runs into the fractured-page problem: if xtrabackup reads a page that is being written at the same moment, it has to re-read that page (with a limited number of retries; if the page still cannot be read consistently, the backup fails).
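To see why a naive OS-level copy is not sufficient, you can run MySQL's innochecksum utility against a copied .ibd file: a page torn by a concurrent write will fail its checksum. This is only an illustration under assumed paths (the file name below is hypothetical), and the tool must not be run against a file the running server still has open:

# Verify page checksums of a copied tablespace file (hypothetical path)
innochecksum /data/mysql_backup/copy_test/t1.ibd && echo "all pages consistent" || echo "torn or corrupt page detected"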

The principle is in fact very simple; it becomes clear once you understand doublewrite. The following paragraph is excerpted from "MySQL Technology Insider: InnoDB Storage Engine":
If the insert buffer brings performance to the InnoDB storage engine, then doublewrite brings reliability of data. When the database crashes, it can happen that InnoDB is in the middle of writing a page and only part of it has reached disk (for example, only the first 4 KB of a 16 KB page); this is called a partial page write. Before the InnoDB storage engine used doublewrite, there were cases where data was lost because of partial page writes.

One might think that a failed write can simply be recovered by replaying the redo log. That is one approach. It must be clear, however, that the redo log records physical operations on a page, such as "write 'aaaa' at offset 800". If the page itself is already corrupted, replaying the redo log against it is meaningless. In other words, before applying the redo log we need a copy of the page: when a partial write occurs, the page is first restored from that copy, and only then is the redo log applied. This is doublewrite.
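If you want to confirm that doublewrite is active on your own instance, the server variables and status counters below can be queried; the credentials are placeholders:

# Check that the doublewrite buffer is enabled and see the InnoDB page size
mysql -uroot -p -e "SHOW VARIABLES WHERE Variable_name IN ('innodb_doublewrite','innodb_page_size');"
# These counters grow as pages pass through the doublewrite buffer
mysql -uroot -p -e "SHOW GLOBAL STATUS LIKE 'Innodb_dblwr%';"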

Second, backup script

#!/bin/sh
#========================================================================================
##  Backup tool:
##    percona-xtrabackup-2.2.6
##
##  Backup strategy:
##    1. Full backup at 4 o'clock every morning;
##    2. Incremental backup every hour;
#========================================================================================

PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:/usr/local/sbin

## DB backup base path
BACKUP_BASE_DIR="/data/mysql_backup"

## Base directory list file used by incremental backups
## Content format: base directory|backup directory|backup type "FULL|INC"
INC_BASE_LIST="${BACKUP_BASE_DIR}/incremental_basedir_list.txt"

## Backup tool path
XTRABACKUP_PATH="/usr/local/xtrabackup/bin/innobackupex"

## MySQL configuration path
MYSQL_CNF_PATH="/etc/my.cnf"

## Number of parallel copy threads
THREAD_CNT=6

#========================================================================================

function print_help_info()
{
    echo "--------------------------------------------------------------"
    echo "Usage: $0 full | inc | help"
    echo "--------------------------------------------------------------"

    exit 1
}

[[ $# -lt 1 ]] && print_help_info

[[ -d ${BACKUP_BASE_DIR} ]] || mkdir -p ${BACKUP_BASE_DIR}

## Allow only one copy of the script to run, to avoid full and incremental backups
## overlapping and possibly corrupting the backup data
[[ -n `ps uax | grep innobackupex | grep -v grep` ]] && exit 1

CURRENT_BAK_PATH="${BACKUP_BASE_DIR}/`date +%F_%H-%M`"

#========================================================================================

## Full backup
if [[ "$1" == "full" ]]; then
    ${XTRABACKUP_PATH} --user=root --defaults-file=${MYSQL_CNF_PATH} --parallel=${THREAD_CNT} --no-timestamp ${CURRENT_BAK_PATH}
    echo "NULL|${CURRENT_BAK_PATH}|FULL" >> ${INC_BASE_LIST}

## Incremental backup
elif [[ "$1" == "inc" ]]; then
    ## If the base directory list file does not exist or is empty, do a full backup
    if [[ ! -f ${INC_BASE_LIST} || `sed '/^$/d' ${INC_BASE_LIST} | wc -l` -eq 0 ]]; then
        ${XTRABACKUP_PATH} --user=root --defaults-file=${MYSQL_CNF_PATH} --parallel=${THREAD_CNT} --no-timestamp ${CURRENT_BAK_PATH}
        echo "NULL|${CURRENT_BAK_PATH}|FULL" >> ${INC_BASE_LIST}

    ## If no backup directory exists yet, do a full backup to avoid an incremental backup failure
    elif [[ `find ${BACKUP_BASE_DIR} -maxdepth 1 -type d | wc -l` -eq 1 ]]; then
        ${XTRABACKUP_PATH} --user=root --defaults-file=${MYSQL_CNF_PATH} --parallel=${THREAD_CNT} --no-timestamp ${CURRENT_BAK_PATH}
        echo "NULL|${CURRENT_BAK_PATH}|FULL" >> ${INC_BASE_LIST}

    ## Otherwise, take an incremental backup based on the last backup
    else
        PREV_BACKUP_DIR=`sed '/^$/d' ${INC_BASE_LIST} | tail -1 | awk -F '|' '{print $2}'`
        ${XTRABACKUP_PATH} --user=root --defaults-file=${MYSQL_CNF_PATH} --parallel=${THREAD_CNT} --no-timestamp --incremental ${CURRENT_BAK_PATH} --incremental-basedir=${PREV_BACKUP_DIR}
        echo "${PREV_BACKUP_DIR}|${CURRENT_BAK_PATH}|INC" >> ${INC_BASE_LIST}
    fi

elif [[ "$1" == "help" ]]; then
    print_help_info
else
    print_help_info
fi

## Delete backups older than 2 weeks
DELETE_FLAG=`date -d "14 days ago" +"%F"`
cd ${BACKUP_BASE_DIR} && rm -rf ${DELETE_FLAG}_*
sed -i "/${DELETE_FLAG}/d" ${INC_BASE_LIST}

#========================================================================================
## The end
#########################################################################################
##  Crontab entries to add:
##      (1) Full backup
##      00 04 * * * /data/scripts/mysql_backup.sh full >/dev/null 2>&1
##
##      (2) Incremental backup
##      00 * * * * /data/scripts/mysql_backup.sh inc >/dev/null 2>&1
#########################################################################################

#########################################################################################
##  DB data recovery steps:
##      (1) Apply the log to the base (full) backup
##      innobackupex --user=root --defaults-file=/etc/my.cnf --use-memory=8G --apply-log --redo-only /data/mysql_backup/full
##
##      (2) Apply the first incremental backup
##      innobackupex --user=root --defaults-file=/etc/my.cnf --use-memory=8G --apply-log --redo-only /data/mysql_backup/full --incremental-dir=/data/mysql_backup/inc_one
##
##      (3) Apply the second (last) incremental backup
##      innobackupex --user=root --defaults-file=/etc/my.cnf --use-memory=8G --apply-log /data/mysql_backup/full --incremental-dir=/data/mysql_backup/inc_two
##
##      (4) Apply the log to the base backup again (rolls back uncommitted transactions)
##      innobackupex --user=root --defaults-file=/etc/my.cnf --use-memory=8G --apply-log /data/mysql_backup/full
##
##      (5) Restore (copy back)
##      innobackupex --user=root --defaults-file=/etc/my.cnf --copy-back /data/mysql_backup/full
#########################################################################################
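Before running a restore it is worth verifying that the incremental chain is unbroken. Each directory produced by innobackupex contains an xtrabackup_checkpoints file, and the to_lsn of one backup should match the from_lsn of the next incremental. A small sketch, assuming the directory layout created by the script above:

for d in /data/mysql_backup/*/; do
    echo "== ${d}"
    # backup_type is full-backuped or incremental; compare from_lsn/to_lsn across directories
    grep -E '^(backup_type|from_lsn|to_lsn)' "${d}xtrabackup_checkpoints"
done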

Third, the results

[Screenshots of the backup runs: 11.png, 33.png, 22.png]

This article is from the "The ideal of life is Perseverance" blog. Please be sure to keep this source: http://sofar.blog.51cto.com/353572/1584366
