This article walks through backing up and restoring MySQL with XtraBackup, with the specific commands for your reference. The details are as follows.
Install XtraBackup from RPM packages
## install dependency packages
yum -y install perl perl-devel libaio libaio-devel perl-Time-HiRes perl-DBD-MySQL rsync
## install the libev4 package
rpm -ivh libev4-4.15-7.1.x86_64.rpm
## install percona-xtrabackup
rpm -ivh percona-xtrabackup-24-2.4.4-1.el6.x86_64.rpm
Create a MySQL account for the backup operation.
## create the user for XtraBackup backups
CREATE USER 'backuper'@'localhost' IDENTIFIED BY 'backup@123';
GRANT SELECT, RELOAD, PROCESS, SHOW DATABASES, SUPER, LOCK TABLES, REPLICATION CLIENT, SHOW VIEW, EVENT ON *.* TO 'backuper'@'localhost';
FLUSH PRIVILEGES;
Create a backup directory
mkdir /export/mysql_backup
Make a full backup of the database. If you are backing up from a replica, add the --slave-info and --safe-slave-backup options to capture the binlog coordinates of the replication master. By default, the backup saves the current server's own binlog coordinates to the file xtrabackup_binlog_info, while --slave-info saves the master's binlog coordinates to xtrabackup_slave_info. If you intend to use the backup to set up replication, be careful to pick the correct binlog coordinates.
## run the full backup and compress it
innobackupex --defaults-file="/export/servers/mysql/etc/my.cnf" \
--host="localhost" \
--port=3358 \
--user="backuper" \
--password="backup@123" \
--socket="/export/data/mysql/tmp/mysql.sock" \
--stream=tar \
/export/mysql_backup/ | gzip - > /export/mysql_backup/mysql_full_backup.tar.gz
Note: check that the command completed successfully. Streaming the backup through tar and gzip effectively reduces the size of the backup file, but compression and decompression consume a lot of CPU.
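Once the archive is extracted, the binlog coordinates described above can be read back out of xtrabackup_binlog_info (or xtrabackup_slave_info when --slave-info was used). A minimal sketch, assuming the usual whitespace-separated "file position" layout of that file; the sample content below is fabricated for illustration:

```shell
# Sketch: extract binlog coordinates from xtrabackup_binlog_info.
# Sample content for illustration only; a real file comes from the backup.
workdir=$(mktemp -d)
printf 'mysql-bin.000003\t1600\n' > "$workdir/xtrabackup_binlog_info"

# First column: binlog file name; second column: position.
binlog_file=$(awk '{print $1}' "$workdir/xtrabackup_binlog_info")
binlog_pos=$(awk '{print $2}' "$workdir/xtrabackup_binlog_info")
echo "CHANGE MASTER TO MASTER_LOG_FILE='$binlog_file', MASTER_LOG_POS=$binlog_pos;"
```

These coordinates are what you would feed into CHANGE MASTER TO when using the backup to seed a new replica.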
Assuming the steps above succeeded, copy the backup file to the new server and place it under /export/mysql_backup/.
Extract the backup file
## switch to the directory holding the compressed backup
# it is strongly recommended to extract into an empty directory, to avoid mixing the extracted files with other files
cd /export/mysql_backup/
## decompress the backup (the -i option is required for tar streams produced by innobackupex)
tar -izxvf mysql_full_backup.tar.gz
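Before preparing the backup, it is worth confirming that the key XtraBackup metadata files made it through the stream and the extraction. A minimal check, assuming the standard file names produced by innobackupex:

```shell
# Sketch: check that the standard XtraBackup metadata files are present
# in an extracted backup directory before running --apply-log.
check_backup_dir() {
  local dir="$1" f missing=0
  for f in backup-my.cnf xtrabackup_checkpoints xtrabackup_logfile; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  return $missing
}
# Usage: check_backup_dir /export/mysql_backup
```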
Data backed up with XtraBackup reflects the state at the end point of the backup, and the transaction log generated during the backup is saved to the file xtrabackup_logfile. The backup therefore needs the transaction log applied and uncommitted transactions rolled back before it can be restored. --apply-log also generates new transaction log files based on backup-my.cnf.
# use the --apply-log option to process the transaction log
innobackupex --apply-log /export/mysql_backup/
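Whether the prepare step succeeded can also be confirmed from xtrabackup_checkpoints: after --apply-log, its backup_type field reads full-prepared instead of full-backuped. A sketch assuming the usual "key = value" layout of that file; the sample content and LSN values below are fabricated for illustration:

```shell
# Sketch: read backup_type from xtrabackup_checkpoints.
# Sample content for illustration; the real file lives in the backup directory.
ckpt=$(mktemp)
cat > "$ckpt" <<'EOF'
backup_type = full-prepared
from_lsn = 0
to_lsn = 2512733
last_lsn = 2512733
EOF

backup_type=$(awk -F' = ' '$1 == "backup_type" {print $2}' "$ckpt")
if [ "$backup_type" = "full-prepared" ]; then
  echo "backup is prepared and ready to restore"
fi
```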
The restore requires the target directory to be empty, but even a freshly installed MySQL server already has some data files, such as those of the system databases. Before restoring the backup into MySQL, the current MySQL data directory must therefore be dealt with; to be safe, it is recommended to rename it with the mv command rather than delete it.
# set aside the current MySQL data directory
mv /export/data/mysql /export/data/mysql_bak
The XtraBackup backup only covers data-related files; it does not back up files or directories such as the error log or slow log, and the mv above moved the whole directory tree away, so some directories need to be created again manually.
## create the MySQL data directories
mkdir -p /export/data/mysql/tmp /export/data/mysql/data /export/data/mysql/dumps /export/data/mysql/log
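The same layout can be recreated with a small loop, which makes it easier to adjust if your my.cnf points elsewhere. A sketch using the directory names from this article; for safety the demo below creates the tree under a temporary base, where in practice you would set base=/export/data/mysql:

```shell
# Sketch: recreate the MySQL directory layout under a configurable base.
# Demo uses a temporary base; in practice: base=/export/data/mysql
base=$(mktemp -d)
for d in tmp data dumps log; do
  mkdir -p "$base/$d"
done
ls "$base"
```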
Use the --move-back option to move the data into the MySQL data directory; alternatively, use --copy-back to copy the files instead of moving them, which keeps the prepared backup intact.
innobackupex --defaults-file="/export/servers/mysql/etc/my.cnf" --move-back /export/mysql_backup/
After the data has been moved into the MySQL data directory, change the owner of the files so that the MySQL service has permission to operate on them.
# change the ownership of the MySQL data directory
chown -R mysql:mysql /export/data/mysql
Finally, start the MySQL service and check that the data is intact.
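After starting the service (for example with `service mysql start` or `systemctl start mysql`, depending on your init system), a simple way to script the "is it up yet" check is to poll for the socket file configured in my.cnf. A hedged sketch; the socket path and timeout in the usage comment are assumptions taken from this article's configuration:

```shell
# Sketch: wait until a file (e.g. the MySQL socket) appears, with a timeout.
wait_for_file() {
  local path="$1" timeout="${2:-30}" waited=0
  while [ ! -e "$path" ]; do
    if [ "$waited" -ge "$timeout" ]; then
      echo "timed out waiting for $path"
      return 1
    fi
    sleep 1
    waited=$((waited + 1))
  done
  echo "$path is present"
}
# In practice: wait_for_file /export/data/mysql/tmp/mysql.sock 60
```

Once the socket is up, connect with the mysql client and spot-check a few tables to confirm the restore.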
That is the entire content of this article. I hope it helps you in your studies, and I hope you will continue to support the community.