MySQL Backup Series (2) -- mysqldump Backup (Full + Incremental) Scheme Operation Record


In daily operations, backing up the MySQL database is extremely important: if a table is lost or corrupted, the data can be recovered in time.

Production database backup scheme:
Perform a full backup every Sunday, then perform a mysqldump incremental backup every day at 1:00 PM.

Below is a detailed description of this backup scenario:
1. mysqldump Incremental Backup Configuration
The prerequisite for incremental backups is that MySQL has the binlog feature enabled. Add the following to my.cnf:
log-bin=/opt/data/mysql-bin
The string after "log-bin=" is the directory and prefix for the binary log files; it is generally recommended to place them on a different disk than the MySQL data directory.
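After restarting MySQL, you can confirm that binary logging is active; a quick check from the mysql client (assuming you can log in as root):
mysql> SHOW VARIABLES LIKE 'log_bin';
mysql> SHOW MASTER STATUS;
The second statement prints the binary log file currently being written and its position.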

-------------------------------------------------------------------------------
mysqldump >    export data
mysql <        import data (or use the source command: switch to the target database first, then import)
Note one detail: if mysqldump exports a database's data to a file a.sql, and mysql then imports that data into a new empty database whose name differs from the old one, you need to change the old database name in the a.sql file to the new database name before importing with source.
-------------------------------------------------------------------------------
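As a minimal sketch of that detail (olddb and newdb are placeholder names, not from the original scheme): a dump taken without --databases contains no CREATE DATABASE or USE statement, so it can be loaded straight into a differently named empty database:
mysqldump -u root -p olddb > a.sql
mysql -u root -p -e "CREATE DATABASE newdb;"
mysql -u root -p newdb < a.sql
If the dump was taken with --databases, the old database name is embedded in a.sql and must be edited first.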

2. mysqldump Incremental Backup
Assume a full backup is performed at 1:00 PM on Sunday for tables using the MyISAM storage engine:
[root@localhost ~]# mysqldump --lock-all-tables --flush-logs --master-data=2 -u root -p test > backup_sunday_1_PM.sql

For InnoDB, replace --lock-all-tables with --single-transaction.
--flush-logs closes the current log file and starts a new one;
--master-data=2 records the name of the new log file in the dump output as a comment, for later reference during recovery.

For example, the output backup SQL file contains a line like:
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=106;
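To locate these coordinates later without opening the whole dump, something like the following works (file name from the example above):
grep "CHANGE MASTER" backup_sunday_1_PM.sql | head -1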

3. mysqldump Incremental Backup, Additional Instructions:
Adding --delete-master-logs to mysqldump clears the previous logs to free up space. However, if the server is a replication master, deleting the binary logs with mysqldump --delete-master-logs is dangerous, because the slaves may not yet have fully processed the contents of those logs. In that case it is safer to use PURGE MASTER LOGS.
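For example, to delete all binary logs older than mysql-bin.000002 while keeping that file and everything after it:
mysql> PURGE MASTER LOGS TO 'mysql-bin.000002';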

Run mysqladmin flush-logs daily to close the current log and start writing a new one, then back up the logs that were closed, for example the mysql-bin.000002 log file saved in the data directory in the example above, ...
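A minimal daily sketch of that routine (the destination directory /backup/binlogs is an assumed path, not part of the original scheme):
mysqladmin -u root -p flush-logs
cp /opt/data/mysql-bin.000002 /backup/binlogs/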

1. Recovering a full backup
mysql -u root -p < backup_sunday_1_PM.sql

2. Recovering an incremental backup
mysqlbinlog mysql-bin.000002 ... | mysql -u root -p
Note that the recovery process itself is also written to the binary log; if the amount of data is large, it is recommended to disable binary logging during recovery.
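mysqlbinlog can also replay only part of a log, which is useful for point-in-time recovery; for example (the position and datetime are illustrative values):
mysqlbinlog --start-position=106 --stop-datetime="2016-11-29 11:00:00" mysql-bin.000002 | mysql -u root -p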

--compatible=name
Tells mysqldump to make the exported data compatible with the named database system or with an older version of the MySQL server. Values include ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_table_options, no_field_options, etc.; to use several values, separate them with commas. Full compatibility is of course not guaranteed, only best effort.

--complete-insert, -c
Export data using complete INSERT statements that include the column names. This makes the dump more robust, but the statements become larger and may be affected by the max_allowed_packet parameter, causing the import to fail. Therefore use this option with caution; at least I do not recommend it.

--default-character-set=charset
Specifies the character set used when exporting data. If a table does not use the default latin1 character set, you must specify this option when exporting, otherwise you will get garbled characters after importing the data again.
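For example, to dump a utf8 database (mydb is a placeholder name):
mysqldump -u root -p --default-character-set=utf8 mydb > mydb.sql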

--disable-keys
Tells mysqldump to wrap the INSERT statements with /*!40000 ALTER TABLE table DISABLE KEYS */; and /*!40000 ALTER TABLE table ENABLE KEYS */; statements, which greatly speeds up the inserts because the indexes are rebuilt only after all data has been inserted. This option is only effective for MyISAM tables.

--extended-insert=true|false
By default, mysqldump enables --extended-insert mode (multiple-row INSERT statements); if you do not want it, use this option to set the value to false (or use --skip-extended-insert).

--hex-blob
Export binary string fields in hexadecimal format. This option is required if you have binary data. The affected field types are BINARY, VARBINARY, and BLOB.

--lock-all-tables, -x
Before the export starts, submits a request to lock all tables in all databases to ensure data consistency. This is a global read lock, and it automatically turns off the --single-transaction and --lock-tables options.

--lock-tables
Similar to --lock-all-tables, but locks the tables currently being exported rather than all tables in all databases at once. This option applies only to MyISAM tables; for InnoDB tables, use the --single-transaction option instead.

--no-create-info, -t
Export only the data, without adding CREATE TABLE statements.

--no-data, -d
Export no data, only the database table structure.
mysqldump --no-data --databases mydatabase1 mydatabase2 mydatabase3 > test.dump
Only the table structures are backed up. --databases indicates the databases on the host to back up.

--opt
This is just a shortcut option, equivalent to specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset at the same time. It lets mysqldump export data quickly and produces a dump that can be imported quickly. This option is on by default but can be disabled with --skip-opt. Note that if you run mysqldump without specifying --quick or --opt, the entire result set is held in memory, which can cause problems when exporting a large database.

--quick, -q
Useful when exporting large tables; it forces mysqldump to retrieve rows from the server one at a time and write them out directly, instead of buffering the whole result set in memory.

--routines, -R
Export stored procedures and stored functions.

--single-transaction
Issues a BEGIN statement before exporting the data. BEGIN does not block any application and ensures a consistent database state at the time of the export. It applies only to transactional tables, such as InnoDB and BDB. This option and --lock-tables are mutually exclusive, because LOCK TABLES implicitly commits any pending transaction. To export large tables, combine it with the --quick option.
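A typical invocation for a large InnoDB database therefore combines these options (mydb is a placeholder name):
mysqldump -u root -p --single-transaction --quick --master-data=2 mydb > mydb.sql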

--triggers
Also export triggers. This option is enabled by default; disable it with --skip-triggers.

Cross-host backup
Use the following command to copy sourcedb on host1 to targetdb on host2; the prerequisite is that the targetdb database has already been created on host2:
-C enables compression for the data transferred between the hosts
mysqldump --host=host1 --opt sourcedb | mysql --host=host2 -C targetdb
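On a slow link, one possible variant (a sketch, assuming ssh access from the machine running the dump to host2) compresses the stream in transit instead:
mysqldump --host=host1 --opt sourcedb | gzip | ssh user@host2 "gunzip | mysql targetdb"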

Timed backup with Linux cron commands
For example, to back up all databases on a host at 1:30 AM every day and compress the dump file to gz format (note the escaped % signs, which cron would otherwise treat as newlines):
30 1 * * * mysqldump -u root -ppassword --all-databases | gzip > /mnt/disk2/database_`date '+\%m-\%d-\%Y'`.sql.gz

A complete shell script backup example for a MySQL database, e.g. backing up the database opspc:
[root@localhost ~]# vim /root/backup.sh
#!/bin/bash
echo "Begin backup MySQL database"
mysqldump -u root -ppassword opspc > /home/backup/mysqlbackup-`date +%Y-%m-%d`.sql
echo "Your database backup successfully completed"

[root@localhost ~]# crontab -e
30 1 * * * /bin/bash -x /root/backup.sh > /dev/null 2>&1

mysqldump Full Backup + mysqlbinlog Binary Log Incremental Backup
1) Recovering data from a mysqldump backup file loses any updates made after the backup point, so it must be combined with mysqlbinlog binary log incremental backups.
First make sure the binlog feature is enabled. Include the following configuration in my.cnf to enable binary logging:
[mysqld]
log-bin=mysql-bin

2) The mysqldump command must include the --flush-logs option so that a new binary log file is generated:
mysqldump --single-transaction --flush-logs --master-data=2 > backup.sql
where the --master-data=[0|1|2] parameter means:
0: do not record the binlog coordinates
1: record them as a CHANGE MASTER statement
2: record the CHANGE MASTER statement as a comment (see the example after this list)
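With --master-data=2, the coordinates appear as a comment near the top of backup.sql, for example:
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=106;
With --master-data=1 the same statement appears uncommented and is executed when the dump is imported.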

For the specific operation of the mysqldump full + incremental backup scheme, the following two documents can be consulted:
Data recovery operation notes after accidental database deletion
Explanation of MySQL's binlog and using the binlog to recover data

--------------------------------------------------------------------------
Below I share my own mysqldump full and incremental backup scripts.

Application scenario:
1) The incremental backup runs Monday through Saturday at 3:00 AM and copies the mysql-bin.00000* files to the specified directory;
2) The full backup uses mysqldump to export all databases; it runs every Sunday at 3:00 AM and deletes the mysql-bin.00000* files left over from the previous week. All MySQL backup operations are logged to the bak.log file.

Script implementation:
1) Full backup script (assuming the MySQL login password is 123456; note the command paths in the script):
[root@localhost ~]# vim /root/mysql-fullybak.sh
#!/bin/bash
# Program
#   Use mysqldump to fully back up MySQL data per week!
# History
# Path
BakDir=/home/mysql/backup
LogFile=/home/mysql/backup/bak.log
Date=`date +%Y%m%d`
Begin=`date +"%B %d, %Y %H:%M:%S"`
cd $BakDir
DumpFile=$Date.sql
GZDumpFile=$Date.sql.tgz
/usr/local/mysql/bin/mysqldump -uroot -p123456 --quick --events --all-databases --flush-logs --delete-master-logs --single-transaction > $DumpFile
/bin/tar -zvcf $GZDumpFile $DumpFile
/bin/rm $DumpFile
Last=`date +"%B %d, %Y %H:%M:%S"`
echo Start: $Begin End: $Last $GZDumpFile succ >> $LogFile
cd $BakDir/daily
/bin/rm -f *
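To restore from this full backup, a sketch using the sample archive produced in the test below (20161129.sql.tgz):
cd /home/mysql/backup
tar -zxvf 20161129.sql.tgz
/usr/local/mysql/bin/mysql -uroot -p123456 < 20161129.sql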

2) Incremental backup script (the MySQL data directory in the script is /home/mysql/data; adjust it to your actual situation):
[root@localhost ~]# vim /root/mysql-dailybak.sh
#!/bin/bash
# Program
#   Use cp to back up MySQL binlogs every day!
# History
# Path
BakDir=/home/mysql/backup/daily     # target directory for the copied mysql-bin.00000* files; create it manually in advance
BinDir=/home/mysql/data             # MySQL data directory
LogFile=/home/mysql/backup/bak.log
BinFile=/home/mysql/data/mysql-bin.index    # path of MySQL's binlog index file, located in the data directory
/usr/local/mysql/bin/mysqladmin -uroot -p123456 flush-logs
# flush-logs generates a new mysql-bin.00000* file
Counter=`wc -l $BinFile | awk '{print $1}'`
NextNum=0
# this for loop compares $Counter and $NextNum to decide whether each file already exists or is the latest one
for file in `cat $BinFile`
do
    base=`basename $file`
    # basename extracts the mysql-bin.00000* file name, removing the leading ./ from ./mysql-bin.000005
    NextNum=`expr $NextNum + 1`
    if [ $NextNum -eq $Counter ]
    then
        echo $base skip! >> $LogFile
    else
        dest=$BakDir/$base
        # test -e checks whether the target file already exists; if it does, "exist!" is written to $LogFile
        if (test -e $dest)
        then
            echo $base exist! >> $LogFile
        else
            cp $BinDir/$base $BakDir
            echo $base copying >> $LogFile
        fi
    fi
done
echo `date +"%B %d, %Y %H:%M:%S"` Bakup succ! >> $LogFile
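After restoring the full dump, the copied binlogs can be replayed in order to roll the data forward; a sketch (file names follow the test output below):
/usr/local/mysql/bin/mysqlbinlog /home/mysql/backup/daily/mysql-binlog.0000* | /usr/local/mysql/bin/mysql -uroot -p123456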

3) Set up crontab tasks to execute the backup scripts. The incremental backup script is executed first, and then the full backup script:
[root@localhost ~]# crontab -e
# run the full backup script at 3:00 AM every Sunday
0 3 * * 0 /bin/bash -x /root/mysql-fullybak.sh > /dev/null 2>&1
# run the incremental backup script at 3:00 AM Monday through Saturday
0 3 * * 1-6 /bin/bash -x /root/mysql-dailybak.sh > /dev/null 2>&1

4) Manually run the two scripts to test the backup behavior:
[root@localhost backup]# pwd
/home/mysql/backup
[root@localhost backup]# mkdir daily
[root@localhost backup]# ll
total 4
drwxr-xr-x. 2 root root 4096 Nov 29 11:29 daily
[root@localhost backup]# ll daily/
total 0

Run the incremental backup script first:
[root@localhost backup]# sh /root/mysql-dailybak.sh
[root@localhost backup]# ll
total 8
-rw-r--r--. 1 root root  121 Nov 29 11:29 bak.log
drwxr-xr-x. 2 root root 4096 Nov 29 11:29 daily
[root@localhost backup]# ll daily/
total 8
-rw-r-----. 1 root root Nov 29 11:29 mysql-binlog.000030
-rw-r-----. 1 root root Nov 29 11:29 mysql-binlog.000031
[root@localhost backup]# cat bak.log
mysql-binlog.000030 copying
mysql-binlog.000031 copying
mysql-binlog.000032 skip!
November 29, 2016 11:29:32 Bakup succ!

Then run the full backup script:
[root@localhost backup]# sh /root/mysql-fullybak.sh
20161129.sql
[root@localhost backup]# ll
total 152
-rw-r--r--. 1 root root 145742 Nov 29 11:30 20161129.sql.tgz
-rw-r--r--. 1 root root    211 Nov 29 11:30 bak.log
drwxr-xr-x. 2 root root   4096 Nov 29 11:30 daily
[root@localhost backup]# ll daily/
total 0
[root@localhost backup]# cat bak.log
mysql-binlog.000030 copying
mysql-binlog.000031 copying
mysql-binlog.000032 skip!
November 29, 2016 11:29:32 Bakup succ!
Start: November 29, 2016 11:30:38 End: November 29, 2016 11:30:38 20161129.sql.tgz succ

