MySQL Database Data Backup and Recovery


This article discusses MySQL's backup and recovery mechanisms and how to maintain data tables, covering the two most important table types: MyISAM and InnoDB. The MySQL version covered in this article is 5.0.22.

Currently, MySQL supports the following free backup tools: mysqldump and mysqlhotcopy. You can also back up with SQL syntax (BACKUP TABLE or SELECT INTO OUTFILE), back up the binary log (binlog), or directly copy the data files and related configuration files. MyISAM tables are stored as individual files, so they are relatively easy to back up, and all of the methods above can be used. InnoDB tables are all stored in the same data file, ibdata1 (which may be multiple files, or separate per-table tablespace files), which makes them harder to back up; the free options are copying the data files, backing up the binlog, or using mysqldump.

1. mysqldump

1.1 Backup

mysqldump uses an SQL-level backup mechanism: it dumps data tables into an SQL script file, which is suitable for upgrading between different MySQL versions. It is also the most common backup method. Here are some of the main mysqldump parameters:

--compatible=name: tells mysqldump that the exported data should be compatible with a given database or an older MySQL server. The values can be ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_table_options, and no_field_options; separate multiple values with commas. Of course, full compatibility is not guaranteed, only best-effort compatibility.

--complete-insert, -c: exports data using complete INSERT statements that include the column names. This can improve insertion efficiency, but may be affected by the max_allowed_packet parameter and cause inserts to fail, so use this parameter with caution; at least I do not recommend it.

--default-character-set=charset: specifies the character set used when exporting data. If the data tables do not use the default latin1 character set, this option must be specified during export; otherwise, you will get garbled characters after importing the data again.

--disable-keys: tells mysqldump to wrap the INSERT statements with /*!40000 ALTER TABLE table DISABLE KEYS */; and /*!40000 ALTER TABLE table ENABLE KEYS */; statements. This greatly speeds up the inserts because the indexes are rebuilt only after all the data has been inserted. This option applies only to MyISAM tables.

--extended-insert=true|false: by default, mysqldump uses extended (multiple-row) INSERT syntax. If you do not want this, set the option to false (see the illustration after this option list).

--hex-blob: exports binary string fields in hexadecimal format. This option is required if there is binary data. The affected field types are BINARY, VARBINARY, and BLOB.

--lock-all-tables, -x: acquires a global read lock on all tables across all databases before the export begins, to ensure data consistency. This option automatically turns off --single-transaction and --lock-tables.

--lock-tables: similar to --lock-all-tables, but locks the tables of the database currently being dumped rather than all databases at once. This option applies only to MyISAM tables; for InnoDB tables, use the --single-transaction option instead.

--no-create-info, -t: exports only the data, without the CREATE TABLE statements.

--no-data, -d: exports only the table structure, without any data.

--opt: a shorthand option, equivalent to specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset at the same time. It lets mysqldump export the data quickly and produces a dump that can be reloaded quickly. This option is enabled by default and can be disabled with --skip-opt. Note: if you run mysqldump without --quick or --opt, the entire result set is held in memory, which may cause problems when exporting a large database.

--quick, -q: useful when exporting large tables. It forces mysqldump to write rows to the output as they are retrieved from the server instead of buffering all rows in memory first.

--routines, -R: exports stored procedures and user-defined functions.

--single-transaction: issues a BEGIN statement before dumping data. BEGIN does not block any application and ensures a consistent database state during the dump. It applies only to transactional tables such as InnoDB and BDB. This option and --lock-tables are mutually exclusive, because LOCK TABLES implicitly commits any pending transaction. To dump large tables, combine it with the --quick option.

--triggers: exports triggers as well. This option is enabled by default; use --skip-triggers to disable it.
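
To illustrate the difference between the two INSERT styles mentioned above, here is a minimal sketch using a hypothetical table t with columns id and name (not taken from the article):

-- extended insert (the default): many rows per INSERT statement
INSERT INTO t VALUES (1,'a'),(2,'b'),(3,'c');

-- complete insert (--complete-insert): the column names are listed explicitly
INSERT INTO t (id, name) VALUES (1,'a'),(2,'b'),(3,'c');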

For details about other parameters, see the manual. I usually use the following command to back up MyISAM tables:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr --default-character-set=utf8 --opt --extended-insert=false --triggers -R --hex-blob -x db_name > db_name.sql

And the following command to back up InnoDB tables:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr --default-character-set=utf8 --opt --extended-insert=false --triggers -R --hex-blob --single-transaction db_name > db_name.sql

1.2 Restore

The file backed up with mysqldump is an SQL script that can be directly imported. There are two ways to import data.

Directly use the mysql client, for example:

/usr/local/mysql/bin/mysql -uyejr -pyejr db_name < db_name.sql

Use the SOURCE command. This is not standard SQL syntax, but a feature provided by the mysql client, for example: SOURCE /tmp/db_name.sql;

Here, you must specify the absolute path of the file, and the file must be readable by the user that mysqld runs as (such as nobody).
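
For example, a minimal session with the mysql client, assuming the dump was saved as /tmp/db_name.sql:

mysql> USE db_name;

mysql> SOURCE /tmp/db_name.sql;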

2. mysqlhotcopy

2.1 Backup

mysqlhotcopy is a Perl program originally written by Tim Bunce. It uses LOCK TABLES, FLUSH TABLES, and cp or scp to quickly back up a database. It is the fastest way to back up a database or a single table, but it can only run on the machine where the database files (table definition files, data files, and index files) are located. mysqlhotcopy can only back up MyISAM tables and runs only on Unix-like and NetWare systems.

mysqlhotcopy can copy multiple databases at a time and also supports regular expressions. Some examples:

root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name /tmp

(copies the database directory db_name to /tmp)

root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name_1 ... db_name_n /tmp

root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name./regex/ /tmp

For more detailed usage, refer to the manual or call the following command to view the help of mysqlhotcopy:

perldoc /usr/local/mysql/bin/mysqlhotcopy

Note: To use mysqlhotcopy, you must have the SELECT and RELOAD privileges (the latter is needed to execute FLUSH TABLES), and you must be able to read the datadir/db_name directory.
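
For example, a hedged sketch of granting those privileges to the yejr account used in the examples above (the password here is just a placeholder):

GRANT SELECT, RELOAD ON *.* TO 'yejr'@'localhost' IDENTIFIED BY 'yejr';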

2.2 Restore

mysqlhotcopy backs up the entire database directory. To restore, copy it directly back into the datadir specified by mysqld (here /usr/local/mysql/data/), paying attention to file permissions, as in the following example:

root# cp -rf db_name /usr/local/mysql/data/

root# chown -R nobody:nobody /usr/local/mysql/data/

(Change the owner of the db_name directory to the mysqld running user)
  

3. SQL syntax backup

3.1 Backup

The BACKUP TABLE syntax works in essentially the same way as mysqlhotcopy: it locks the table and copies its data files. It can do online backups, but the result is not ideal, so it is not recommended. It copies only the table structure file and data file, not the index file, so recovery is slow. Example:

BACKUP TABLE tbl_name TO '/tmp/db_name/';

Note: You must have the FILE privilege to execute this SQL statement, and the directory /tmp/db_name/ must be writable by the mysqld user. To avoid security issues, the exported files must not overwrite existing files.

SELECT INTO OUTFILE exports data to a plain text file. You can customize the field separators, which makes the data easy to process.

Example:

SELECT * INTO OUTFILE '/tmp/db_name/tbl_name.txt' FROM tbl_name;

Note: You must have the FILE privilege to execute this SQL statement, and the file /tmp/db_name/tbl_name.txt must be writable by the mysqld user. To avoid security issues, the exported file must not overwrite an existing file.

3.2 Recovery

To restore a table backed up with BACKUP TABLE, run the RESTORE TABLE statement. Example: RESTORE TABLE tbl_name FROM '/tmp/db_name/'; The privilege requirements are similar to those described above.

If you backed up a file with SELECT INTO OUTFILE, you can run the LOAD DATA INFILE statement to restore the data. Example:

LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' INTO TABLE tbl_name;

The privilege requirements are similar to those described above. The table must already exist before you import the data. If you are worried about duplicate data, add the REPLACE keyword to replace existing records, or the IGNORE keyword to skip them.
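
For example:

LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' REPLACE INTO TABLE tbl_name;

LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' IGNORE INTO TABLE tbl_name;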

Supplement:

shell> mysqldump --quick db_name | gzip > db_name.contents.gz

(The file created in this example is compressed.)

The command for restoring/transferring to another instance is as follows:

shell> gunzip < db_name.contents.gz | mysql db_name

The preceding commands apply to *nix operating systems.

4. Enable the binary log (binlog)

The binlog method is more flexible and labor-saving, and supports incremental backup.

mysqld must be restarted to enable binlog. First, shut down mysqld, open my.cnf, and add the following lines:

server-id = 1

log-bin = binlog

log-bin-index = binlog.index

Then start mysqld. As it runs, the files binlog.000001 and binlog.index are generated. The former records all data-update operations performed by mysqld, and the latter is the index of all binlog files; neither should be deleted casually. For more information about binlog, see the manual.

When you need to back up, first execute the SQL statement FLUSH LOGS; to make mysqld stop writing to the current binlog, then back up the binlog files directly. This achieves an incremental backup. If the server being backed up is a slave in a replication setup, you should also back up the master.info and relay-log.info files.
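
As a minimal sketch of one incremental backup step (the /backup/binlog/ destination is an assumption, not from the article):

/usr/local/mysql/bin/mysql -uyejr -pyejr -e 'FLUSH LOGS;'

cp /usr/local/mysql/data/binlog.[0-9]* /backup/binlog/

(The cp copies the binlog files; only the ones closed by FLUSH LOGS actually need to be kept.)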

A backed-up binlog file can be examined with mysqlbinlog, a tool provided by MySQL, for example:

/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001

This tool lets you display all the SQL statements for a specified database, and you can also limit the time range, which is quite convenient. For details, refer to the manual.
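
For example, a sketch that limits the output to one database and an assumed time window (the dates are placeholders):

/usr/local/mysql/bin/mysqlbinlog --database=db_name --start-datetime="2006-06-01 00:00:00" --stop-datetime="2006-06-02 00:00:00" /tmp/binlog.000001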

You can use a statement similar to the following to restore data:

/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001 | mysql -uyejr -pyejr db_name

That is, the SQL statements output by mysqlbinlog are executed directly as input to the mysql client.
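
For example, to replay everything up to, but not including, a mistaken statement executed at an assumed time of 10:00:00 (the cutoff is a placeholder):

/usr/local/mysql/bin/mysqlbinlog --stop-datetime="2006-06-01 10:00:00" /tmp/binlog.000001 | mysql -uyejr -pyejr db_name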

If you have an idle machine, you might as well use this method for backups. Because the performance requirements for the slave are not that high, the cost is low: at a low cost you get incremental backups, and the slave can also share part of the query load. Why not?

5. Back up data files directly

Compared with the previous methods, backing up data files directly is the most direct, fast, and convenient; the disadvantage is that incremental backup is basically not supported. To ensure data consistency, execute the following SQL statement before copying the files: FLUSH TABLES WITH READ LOCK; This flushes the data in memory to disk and locks the tables, so that no new data is written during the copy. Restoring from this kind of backup is also very simple: just copy the files back into the original database directory.

Note: For InnoDB tables, you also need to back up the log files, that is, the ib_logfile* files, because when an InnoDB table is corrupted these log files can be used to recover it. The whole sequence is sketched below.
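
A minimal sketch of that sequence, assuming the /backup/ destination and the datadir used in earlier examples:

mysql> FLUSH TABLES WITH READ LOCK;

(keep this mysql session open so the lock is held, and copy from another shell)

root# cp -r /usr/local/mysql/data/db_name /backup/

root# cp /usr/local/mysql/data/ibdata1 /usr/local/mysql/data/ib_logfile* /backup/

mysql> UNLOCK TABLES;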

6. Backup policy

For systems with a medium level of business activity, the backup policy can be: one full backup at the start, an incremental backup every day, and a full backup every week. For important and busy systems, you may need a full backup every day, an incremental backup every hour, or even more frequently. To get online backups and incremental backups without affecting the service, the best approach is to use the master-slave replication mechanism (replication) and take the backups on the slave.
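
As an illustration only (the schedule, paths, and credentials below are assumptions, not recommendations from the article), such a policy could be wired up with cron on the slave:

# full backup every Sunday at 03:00
0 3 * * 0 /usr/local/mysql/bin/mysqldump -uyejr -pyejr --opt --single-transaction db_name > /backup/full/db_name_$(date +\%F).sql

# incremental backup (rotate and copy the binlogs) at 03:00 on the other days
0 3 * * 1-6 /usr/local/mysql/bin/mysql -uyejr -pyejr -e 'FLUSH LOGS;' && cp /usr/local/mysql/data/binlog.[0-9]* /backup/incr/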

7. Data maintenance and disaster recovery

As a DBA (I am not one yet, haha), one of your most important tasks is to make sure data tables can be used securely, stably, and quickly. So maintain your data tables regularly. The following SQL statements are useful:

CHECK TABLE or REPAIR TABLE: check or repair MyISAM tables

OPTIMIZE TABLE: optimize MyISAM tables

ANALYZE TABLE: analyze MyISAM tables

Of course, all of the above can also be done with the myisamchk tool, which is not covered here.
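
For example, run them against a specific table (tbl_name here is a placeholder):

CHECK TABLE tbl_name;

REPAIR TABLE tbl_name;

OPTIMIZE TABLE tbl_name;

ANALYZE TABLE tbl_name;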

For an InnoDB table, you can execute the following statement to defragment the data and speed up the indexes:

ALTER TABLE tbl_name ENGINE = InnoDB;

This is actually a null operation: on the surface nothing is changed, but internally the fragmented data is reorganized.

In general, a MyISAM table can be repaired using the restore methods mentioned above. If an index is broken, you can use myisamchk to rebuild it. For InnoDB tables it is not so straightforward, because all tables are stored together in a tablespace. However, InnoDB has a checkpoint mechanism called fuzzy checkpointing: as long as the log files are intact, errors can be repaired from them. You can add the following parameter to my.cnf so that mysqld checks the log files automatically at startup:

innodb_force_recovery = 4

For more information about this parameter, see the manual.

8. Summary

Data must be backed up, so we have to adopt a proper backup policy. This is only a small part of what a DBA does. Everything is difficult at the beginning; let's start now!
