Backup and recovery of MySQL database

1. mysqldump
1.1 Backup
mysqldump is a SQL-level backup mechanism: it dumps tables as SQL script files, which makes it well suited to upgrades between different MySQL versions, and it is also the most commonly used backup method. Let's look at some of mysqldump's main parameters:

--compatible=name
Tells mysqldump which database system or older MySQL server version the exported data should be compatible with. Possible values include ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_tables_options, and no_field_options; to use several values, separate them with commas. Of course, full compatibility is not guaranteed; the output is only made as compatible as possible.

--complete-insert, -c
The exported data uses complete INSERT statements that include the field names. This can improve insert efficiency, but the resulting statements may exceed max_allowed_packet and cause the insert to fail, so use this parameter with caution; I do not recommend it.

--default-character-set=charset
Specifies the character set to use when exporting data. If your tables do not use the default latin1 character set, you must specify this option when exporting; otherwise the data will be garbled when you import it again.

--disable-keys
Tells mysqldump to wrap the INSERT statements with /*!40000 ALTER TABLE tbl_name DISABLE KEYS */; at the beginning and /*!40000 ALTER TABLE tbl_name ENABLE KEYS */; at the end. This greatly speeds up the inserts, because the indexes are rebuilt only after all the data has been inserted. This option applies only to MyISAM tables.

--extended-insert=true|false
By default, mysqldump uses extended (multiple-row) INSERT syntax. If you do not want that, use this option to set its value to false.

--hex-blob
Exports binary string fields in hexadecimal format. You must use this option if you have binary data; the affected field types are BINARY, VARBINARY, and BLOB.

--lock-all-tables,-x
Submits a request to lock all tables in all databases before the export starts, to guarantee data consistency. This takes a global read lock and automatically turns off the --single-transaction and --lock-tables options.

--lock-tables
Similar to --lock-all-tables, but locks only the tables currently being exported rather than the tables in all databases at once. This option applies only to MyISAM tables; for InnoDB tables, use the --single-transaction option instead.

--no-create-info,-t
Exports only data without adding a CREATE TABLE statement.

--no-data,-d
Does not export any data, only the database table structure is exported.

--opt
This is just a shortcut option, equivalent to specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset at once. It lets mysqldump export data quickly and produces a dump that can be reloaded quickly. This option is enabled by default and can be disabled with --skip-opt. Note that if you run mysqldump without specifying --quick or --opt, the entire result set is held in memory, which can cause problems when exporting a large database.

--quick,-q
This option is useful when exporting large tables; it forces mysqldump to fetch rows from the server one at a time instead of caching the whole result set in memory.

--routines, -R
Exports stored procedures and user-defined functions.

--single-transaction
This option issues a BEGIN statement before the data is exported. BEGIN does not block any application and guarantees a consistent database state at the time of the export. It applies only to transactional tables, such as InnoDB and BDB. This option and --lock-tables are mutually exclusive, because LOCK TABLES implicitly commits any pending transaction. To export large tables, combine it with the --quick option.

--triggers
Exports triggers as well. This option is enabled by default; disable it with --skip-triggers.

For details of the other parameters, please refer to the manual. I usually use the following command to back up MyISAM tables:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr --default-character-set=utf8 --opt --extended-insert=false \
    --triggers -R --hex-blob -x db_name > db_name.sql

Use the following command to back up InnoDB tables:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr --default-character-set=utf8 --opt --extended-insert=false \
    --triggers -R --hex-blob --single-transaction db_name > db_name.sql
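The two commands above differ only in the lock-related flag. A small wrapper can make that choice explicit and timestamp the output file; this is a sketch only, with the user name, paths, and database name taken as placeholders from the examples above, not values the script verifies:

```shell
#!/bin/sh
# Pick the lock-related mysqldump flag by storage engine:
# MyISAM gets a global read lock (-x), InnoDB a consistent snapshot.
dump_flags() {
    case "$1" in
        innodb) echo "--single-transaction" ;;
        *)      echo "-x" ;;
    esac
}

# db_name-YYYY-MM-DD.sql, so daily dumps do not overwrite each other.
backup_name() {
    echo "$1-$(date +%Y-%m-%d).sql"
}

# Example invocation (commented out; requires a running server):
# /usr/local/mysql/bin/mysqldump -uyejr -pyejr --default-character-set=utf8 \
#     --opt --extended-insert=false --triggers -R --hex-blob \
#     $(dump_flags innodb) db_name > "$(backup_name db_name)"
```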


1.2 Restore
A file backed up with mysqldump is a SQL script that can be imported directly. There are two ways to import the data.

Directly with MySQL client
For example:

/usr/local/mysql/bin/mysql -uyejr -pyejr db_name < db_name.sql


Using the SOURCE syntax
In fact, this is not standard SQL syntax but functionality provided by the MySQL client, for example:

source /tmp/db_name.sql;

You need to specify the absolute path of the file, and it must be a file that the user running the mysql client has permission to read.

2. mysqlhotcopy
2.1 Backup
mysqlhotcopy is a Perl program originally written by Tim Bunce. It uses LOCK TABLES, FLUSH TABLES, and cp or scp to back up a database quickly. It is the fastest way to back up a database or a single table, but it can only run on the machine where the database files (the table definition, data, and index files) reside. mysqlhotcopy can only back up MyISAM tables and only runs on Unix-like and NetWare systems.

mysqlhotcopy supports copying multiple databases at once and also supports regular expressions. Here are a few examples:

root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name /tmp   (copy the database directory db_name to /tmp)
root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name_1 ... db_name_n /tmp
root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name./regex/ /tmp

For more detailed usage, check the manual, or view mysqlhotcopy's help with:

perldoc /usr/local/mysql/bin/mysqlhotcopy

Note that to use mysqlhotcopy you must have the SELECT and RELOAD privileges (the latter to execute FLUSH TABLES), and you must also have permission to read the datadir/db_name directory.


2.2 Restore
mysqlhotcopy backs up the entire database directory, so to restore you can copy it directly into the datadir specified for mysqld (here /usr/local/mysql/data/). Pay attention to file permissions, as in the following example:

root# cp -rf db_name /usr/local/mysql/data/
root# chown -R nobody:nobody /usr/local/mysql/data/db_name   (change the owner of the db_name directory to the user mysqld runs as)

3. SQL Syntax Backup
3.1 Backup
The BACKUP TABLE syntax works essentially like mysqlhotcopy: it locks the tables and then copies the data files. It can achieve an online backup, but the result is not ideal, so it is not recommended. It copies only the table definition files and data files, not the index files, so recovery is slower. Example:

BACKUP TABLE tbl_name TO '/tmp/db_name/';

Note that you must have the FILE privilege to execute this statement, the directory /tmp/db_name/ must be writable by the mysqld user, and for security reasons the exported files must not overwrite existing files.


SELECT INTO OUTFILE exports data to a plain text file, and you can customize the field separators to make the data easier to process. Example:

SELECT * INTO OUTFILE '/tmp/db_name/tbl_name.txt' FROM tbl_name;

Note that you must have the FILE privilege to execute this statement, the directory /tmp/db_name/ must be writable by the mysqld user, and for security reasons /tmp/db_name/tbl_name.txt must not overwrite an existing file.
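To customize the field separators mentioned above, the statement can include FIELDS and LINES clauses. The sketch below only writes such a statement into a script file (tbl_name and the /tmp paths are placeholders from the examples); running it still requires a live server:

```shell
# Generate an export statement with custom separators; the 'EOF' quoting
# keeps the backslash escapes literal in the written file.
cat > /tmp/export_tbl.sql <<'EOF'
SELECT * INTO OUTFILE '/tmp/db_name/tbl_name.txt'
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
FROM tbl_name;
EOF
# mysql -uyejr -pyejr db_name < /tmp/export_tbl.sql   # needs a running server
```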


3.2 Recovery
For files backed up with the BACKUP TABLE method, run the RESTORE TABLE statement to recover the tables. Example:

RESTORE TABLE tbl_name FROM '/tmp/db_name/';

Permission requirements are similar to those described above.


For files backed up with the SELECT INTO OUTFILE method, run the LOAD DATA INFILE statement to recover the tables. Example:

LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' INTO TABLE tbl_name;

The privilege requirements are similar to those described above. The table must already exist before you import the data. If you are worried about duplicate rows, add the REPLACE keyword to replace existing records, or the IGNORE keyword to skip them.

Addendum:

shell> mysqldump --quick db_name | gzip > db_name.contents.gz
(the file created in this example is compressed)

The commands to restore, or to load into another database, are as follows:

shell> gunzip < db_name.contents.gz | mysql db_name

The commands above are for machines running *nix operating systems.
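The compress-and-restore pipeline can be demonstrated without a server by using a plain text file as a stand-in for the dump (the file contents below are made up for illustration):

```shell
# Create a fake dump, compress it, then decompress it the same way the
# restore pipeline would, verifying the round trip loses nothing.
printf 'CREATE TABLE t (id INT);\nINSERT INTO t VALUES (1);\n' > /tmp/fake_dump.sql
gzip -c /tmp/fake_dump.sql > /tmp/fake_dump.sql.gz
gunzip < /tmp/fake_dump.sql.gz > /tmp/fake_dump_restored.sql
# With a real server, the decompressed stream would instead be piped into:
#   gunzip < db_name.contents.gz | mysql db_name
```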

4. Enable binary log (Binlog)
The binlog approach is more flexible and less hassle, and it also supports incremental backups.

You must restart mysqld to enable the binlog. First shut down mysqld, open my.cnf, and add the following lines:

server-id     = 1
log-bin       = binlog
log-bin-index = binlog.index

Then start mysqld. During operation this produces binlog.000001 and binlog.index: the former records all update operations on the data, and the latter is the index of all binlog files. Neither should be deleted casually. For more information about the binlog, please check the manual.


When you need a backup, first execute the following SQL statement, which makes mysqld stop writing to the current binlog file; the closed binlog files can then be backed up directly, achieving an incremental backup:

FLUSH LOGS;

If you are backing up a slave in a replication setup, you should also back up the master.info and relay-log.info files.
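After FLUSH LOGS, every binlog listed in binlog.index except the newest (last) entry is closed and safe to copy. The selection step can be sketched as follows; a fake index file is used here, whereas with a real server the file lives in the data directory:

```shell
# Fake binlog.index standing in for the one mysqld maintains.
printf './binlog.000001\n./binlog.000002\n./binlog.000003\n' > /tmp/binlog.index

# Print all entries of the index except the last (still-active) one.
closed_logs() {
    sed '$d' "$1"
}

closed_logs /tmp/binlog.index
# In a real incremental backup, each closed log would then be copied:
#   for f in $(closed_logs /path/to/binlog.index); do cp "$f" /backup/; done
```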


A backed-up binlog file can be viewed with mysqlbinlog, a tool provided by MySQL, for example:

/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001

This tool lets you display all the SQL statements for a specified database, and you can also limit the output to a time range, which is quite handy. Please see the manual for details.


When recovering, you can use the following command:

/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001 | mysql -uyejr -pyejr db_name

This executes the SQL statements output by mysqlbinlog directly as input to mysql.


If you have a spare machine, you might as well back up this way. Since the performance requirements on a slave are relatively low, the cost is low too: for little money you get incremental backup, and the slave can also take over part of the query load. Why not?

5. Directly backing up data files
Compared with the previous methods, backing up the data files directly is the most direct, fast, and convenient approach; the disadvantage is that incremental backup is basically impossible. To guarantee data consistency, execute the following SQL statement before copying the files:

FLUSH TABLES WITH READ LOCK;
That is, the data in memory is flushed to disk and the tables are locked, guaranteeing that no new data is written during the copy. Restoring data backed up this way is also very simple: just copy it back into the original database directory.
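One caveat worth noting: the global read lock lasts only as long as the client session that took it, so the copy must happen while that session stays open, for example via the mysql client's `system` command. The sketch below only writes such a session script to a file; all paths and credentials are placeholders from the earlier examples:

```shell
# Write a client-session script that holds the lock across the copy.
# `system` is the mysql client's shell-escape command on Unix.
cat > /tmp/lock_and_copy.sql <<'EOF'
FLUSH TABLES WITH READ LOCK;
system cp -r /usr/local/mysql/data/db_name /backup/
UNLOCK TABLES;
EOF
# mysql -uyejr -pyejr < /tmp/lock_and_copy.sql   # run against a live server
```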


Note that for InnoDB tables you also need to back up the log files, i.e. the ib_logfile* files (and the shared tablespace files, ibdata*), because if an InnoDB table is damaged, these files may be needed to recover it.

6. Backup strategy
For a system with a medium volume of business, the backup strategy can be: first a full backup, then one incremental backup per day plus one full backup per week, repeating on that cycle. For important and busy systems, a full backup every day and an incremental backup every hour, or even more frequent ones, may be needed. To avoid affecting online service while still getting online and incremental backups, the best approach is to use the master-slave replication mechanism (replication) and take the backups on the slave machine.
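The "weekly full, daily incremental" plan can be encoded as a small helper that a cron-driven script might call. This is only a sketch: the day numbering follows `date +%w` (0 = Sunday), and choosing Sunday for the full dump is an arbitrary assumption, not something the text specifies:

```shell
# Decide which kind of backup to take for a given day of the week.
backup_kind() {
    if [ "$1" -eq 0 ]; then
        echo full          # Sunday: full dump (e.g. mysqldump)
    else
        echo incremental   # other days: copy the closed binlogs
    fi
}

# A cron job could invoke it as:
# backup_kind "$(date +%w)"
```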

7. Data maintenance and disaster recovery
As a DBA (which I am not at the moment, hehe), one of the most important tasks is to ensure that data tables can be used safely, stably, and at high speed. Therefore, you need to maintain your tables regularly. The following SQL statements are useful:

CHECK TABLE or REPAIR TABLE: check or repair MyISAM tables
OPTIMIZE TABLE: optimize MyISAM tables
ANALYZE TABLE: analyze MyISAM tables

Of course, all of the above can also be done with the myisamchk tool; I will not go into detail here.


For InnoDB tables, you can defragment, and thereby speed up indexed lookups, by executing the following statement:

ALTER TABLE tbl_name ENGINE = InnoDB;

This is in effect a null operation: it appears to do nothing, but it actually rebuilds the table and defragments it.
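To defragment every table in a database, one could generate the null ALTER for each table name. The sketch below uses made-up table names; with a live server, the list could instead come from `mysql -N -e "SHOW TABLES" db_name`:

```shell
# Emit one defragmenting ALTER statement per table name given.
defrag_sql() {
    for t in "$@"; do
        printf 'ALTER TABLE %s ENGINE = InnoDB;\n' "$t"
    done
}

defrag_sql orders customers
# With a server:  defrag_sql $(mysql -N -e "SHOW TABLES" db_name) | mysql db_name
```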


Damaged MyISAM tables can usually be repaired with the recovery methods mentioned above. If an index is broken, you can rebuild it with the myisamchk tool. For InnoDB tables it is not so straightforward, because InnoDB keeps all its tables in a single tablespace. However, InnoDB has a checking mechanism called the fuzzy checkpoint: as long as the log files are preserved, errors can be fixed from them. You can add the following parameter in my.cnf to make mysqld check the log files automatically at startup:

innodb_force_recovery = 4

See the manual for information about this parameter.
