MySQL Database Backup and Restore Methods

Translator: Ye Jinlong (Email:), Source: http://imysql.cn/. When reprinting, please credit the author/translator and the source; commercial use is not permitted, and violators will be held responsible.

Date: 2006/10/01

This article discusses MySQL's backup and recovery mechanisms and how to maintain data tables, covering the two main table types: MyISAM and InnoDB. The MySQL version used for reference is 5.0.22.

The free backup tools currently available for MySQL are mysqldump and mysqlhotcopy. You can also back up with SQL syntax (BACKUP TABLE or SELECT INTO OUTFILE), back up the binary log (binlog), or directly copy the data files and related configuration files. MyISAM tables are stored as individual files, so they are relatively easy to back up, and all of the methods above apply. InnoDB keeps all its tables in the same data file, ibdata1 (possibly several files, or separate tablespace files), which makes it relatively difficult to back up; the free options are to copy the data files, back up the binlog, or use mysqldump.

1. mysqldump

1.1 Backup

mysqldump is an SQL-level backup mechanism that exports data tables into SQL script files. It is relatively suitable when upgrading between different MySQL versions, and it is also the most commonly used backup method.
Here are some of mysqldump's main parameters:

  • --compatible=name

    Tells mysqldump which database, or which older version of the MySQL server, the exported data should be compatible with. Possible values include ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_tables_options and no_field_options; to use several values, separate them with commas. Of course, it does not guarantee full compatibility, only the best compatibility possible.

  • --complete-insert,-c

    The exported data uses complete INSERT statements that include the field names, i.e. all values are written in one statement. This can improve insert efficiency, but it may run into the max_allowed_packet limit and cause inserts to fail. Use this parameter with caution; at least, I do not recommend it.

  • --default-character-set=charset

    Specifies which character set to use when exporting data. If the data tables do not use the default latin1 character set, you must specify this option when exporting; otherwise you will get garbled characters after re-importing the data.

  • --disable-keys

    Tells mysqldump to add /*!40000 ALTER TABLE table DISABLE KEYS */; and /*!40000 ALTER TABLE table ENABLE KEYS */; statements before and after the INSERT statements. This greatly speeds up the inserts, because the indexes are rebuilt only after all the data has been inserted. This option applies only to MyISAM tables.

  • --extended-insert=true|false

    By default mysqldump runs with --complete-insert turned on, so if you do not want that, use this option and set its value to false.

  • --hex-blob

    Exports binary string fields using hexadecimal format. This option is required if you have binary data. The affected field types are BINARY, VARBINARY and BLOB.

  • --lock-all-tables,-x

    Requests a lock on all tables in all databases before the export begins, to guarantee data consistency. This is a global read lock, and it automatically turns off the --single-transaction and --lock-tables options.

  • --lock-tables

    Similar to --lock-all-tables, but it locks only the data tables currently being exported instead of locking the tables in all databases at once. This option applies only to MyISAM tables; for InnoDB tables, use the --single-transaction option.

  • --no-create-info,-t

    Export data only, without adding CREATE TABLE statements.

  • --no-data,-d

    Does not export any data, only the database table structure is exported.

  • --opt

    This is just a shortcut option, equivalent to specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset at the same time. It lets mysqldump export the data quickly and lets the dump be imported back quickly. This option is on by default and can be disabled with --skip-opt. Note that if you run mysqldump without specifying --quick or --opt, the entire result set is placed in memory, which can cause problems when exporting a large database.

  • --quick,-q

    This option is useful when exporting large tables. It forces mysqldump to write records to the output as it retrieves them from the server instead of caching all of them in memory first.

  • --routines,-R

    Export stored procedures and custom functions.

  • --single-transaction

    This option issues a BEGIN SQL statement before exporting the data. BEGIN does not block any application and guarantees a consistent database state at the time of the export. It applies only to transactional tables, such as InnoDB and BDB.
    This option and the --lock-tables option are mutually exclusive, because LOCK TABLES implicitly commits any pending transaction.
    If you want to export a large table, combine this option with --quick.

  • --triggers

    Also exports triggers. This option is enabled by default; use --skip-triggers to disable it.

For details on the other parameters, please refer to the manual. I usually use the following command to back up MyISAM tables:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr \
--default-character-set=utf8 --opt --extended-insert=false \
--triggers -R --hex-blob -x db_name > db_name.sql

Use the following command to back up InnoDB tables:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr \
--default-character-set=utf8 --opt --extended-insert=false \
--triggers -R --hex-blob --single-transaction db_name > db_name.sql

In addition, if you want an online backup, you can also add the --master-data parameter, as follows:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr \
--default-character-set=utf8 --opt --master-data=1 \
--single-transaction --flush-logs db_name > db_name.sql

This only requests the table lock for a moment at the very beginning and then flushes the binlog; a CHANGE MASTER statement is added to the exported file to record the binlog position at the moment of the backup. If you want to restore this file onto a slave, this is the way to do it.
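
As a rough illustration (the binlog file name and position below are made-up values), with --master-data=1 the dump file contains an executable line near the top along the lines of:

CHANGE MASTER TO MASTER_LOG_FILE='binlog.000002', MASTER_LOG_POS=98;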

1.2 Restore

A file backed up with mysqldump is an SQL script that can be imported directly. There are two ways to import the data.

    • Directly with the mysql client

      For example:

      /usr/local/mysql/bin/mysql -uyejr -pyejr db_name < db_name.sql
      

    • Using the SOURCE syntax

      In fact, this is not standard SQL syntax but a feature provided by the mysql client, for example:

      SOURCE /tmp/db_name.sql;
      

      Here you need to give the absolute path to the file, and it must be readable by the user that mysqld runs as (for example, nobody); see the session sketch below.
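
      A minimal session sketch, assuming the target database already exists and you are connected with the mysql client:

      mysql> USE db_name;
      mysql> SOURCE /tmp/db_name.sql;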

2. mysqlhotcopy

2.1 Backup

mysqlhotcopy is a Perl program originally written by Tim Bunce. It uses LOCK TABLES, FLUSH TABLES and cp or scp to quickly back up a database. It is the fastest way to back up a database or a single table, but it can only run on the machine where the database files (including the table definition files, data files and index files) are located. mysqlhotcopy can only back up MyISAM tables and only runs on Unix-like systems and NetWare.

mysqlhotcopy supports copying multiple databases at once and also supports regular expressions. Here are a few examples:

root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr \
db_name /tmp
root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr \
db_name_1 ... db_name_n /tmp
root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr \
db_name./regex/ /tmp

For more detailed usage, check the manual, or view mysqlhotcopy's help with the following command:

perldoc /usr/local/mysql/bin/mysqlhotcopy

Note that to use mysqlhotcopy you must have the SELECT and RELOAD privileges (the latter is needed to execute FLUSH TABLES), and you must also have permission to read the datadir/db_name directory.
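
A sketch of granting the SQL privileges mentioned above to the backup account used in these examples (the account name is taken from the examples and is an assumption):

GRANT SELECT, RELOAD ON *.* TO 'yejr'@'localhost';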

2.2 Restore

mysqlhotcopy backs up the entire database directory. To restore, simply copy that directory back into the datadir specified for mysqld (here /usr/local/mysql/data/), paying attention to permissions, as in the following example:

root# cp -rf db_name /usr/local/mysql/data/
(then change the owner of the db_name directory to the user mysqld runs as)
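
For example, if mysqld runs as the mysql user (an assumption; substitute your own run user and data directory), the ownership fix could look like:

root# chown -R mysql:mysql /usr/local/mysql/data/db_name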

3. SQL Syntax Backup

3.1 Backup

BACKUP TABLE works on a similar principle to mysqlhotcopy: it locks the table and then copies the data files. It can achieve an online backup, but the result is not ideal, so it is not recommended. It copies only the table definition file and the data file, not the index file, so recovery is slower.
Example:

BACKUP TABLE tbl_name TO '/tmp/db_name/';

Note that you must have the FILE privilege to execute this SQL, the directory /tmp/db_name/ must be writable by the user mysqld runs as, and the exported files must not overwrite existing files (to avoid security problems).

SELECT INTO OUTFILE exports the data as a plain text file, and you can customize the field separators to make the data easier to process later.
Example:

SELECT * INTO OUTFILE '/tmp/db_name/tbl_name.txt' FROM tbl_name;

Note that you must have the FILE privilege to execute this SQL, the file /tmp/db_name/tbl_name.txt must be writable by the user mysqld runs as, and the exported file must not overwrite an existing file (to avoid security problems).
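
A sketch of the custom field separators mentioned above (the separators chosen here are only an illustration):

SELECT * INTO OUTFILE '/tmp/db_name/tbl_name.txt'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
FROM tbl_name;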

3.2 Recovery

Files backed up with BACKUP TABLE can be restored to a data table by running a RESTORE TABLE statement.
Example:

RESTORE TABLE tbl_name FROM '/tmp/db_name/';

Permission requirements are similar to those described above.

Files backed up with SELECT INTO OUTFILE can be restored to a data table by running a LOAD DATA INFILE statement.
Example:

LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' INTO TABLE tbl_name;

The permission requirements are similar to those above. Before importing the data, the data table must already exist. If you are worried about duplicate data, you can add the REPLACE keyword to replace existing records, or the IGNORE keyword to ignore them.
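
For example, a sketch using the REPLACE keyword (add the same FIELDS/LINES options as at export time if custom separators were used):

LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' REPLACE INTO TABLE tbl_name;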

4. Enable binary log (Binlog)

The binlog method is relatively flexible and saves trouble, and it supports incremental backups.

Enabling binlog requires restarting mysqld. First shut down mysqld, open my.cnf, and add the following lines:

server-id = 1
log-bin = binlog
log-bin-index = binlog.index

Then start mysqld. During operation, binlog.000001 and binlog.index will be generated: the former records all update operations on the data, and the latter is the index of all binlog files; neither should be deleted casually. For more information about binlog, please check the manual.

When you need a backup, first execute the following SQL statement. It makes mysqld stop writing to the current binlog file, so you can back that file up directly, which achieves an incremental binlog backup:

FLUSH LOGS;

If you are backing up a slave server in a replication setup, you should also back up the master.info and relay-log.info files.

Backed-up binlog files can be examined with mysqlbinlog, a tool provided by MySQL, for example:

/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001

This tool lets you display all the SQL statements for a specified database, and you can also limit the output to a time range, which is quite handy; see the manual for details.
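
For example, a sketch combining both filters (the database name and timestamps are placeholders):

/usr/local/mysql/bin/mysqlbinlog --database=db_name \
--start-datetime='2006-10-01 00:00:00' \
--stop-datetime='2006-10-01 12:00:00' /tmp/binlog.000001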

When recovering, you can use a command like the following:

/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001 | mysql -uyejr -pyejr db_name

That is, the SQL statements output by mysqlbinlog are executed directly as input to mysql.

If you have a spare machine, you might as well back up this way. Since the performance requirements for a slave machine are relatively low, the cost is low; for a small cost you get incremental backups and can also offload part of the data query load, so why not?

5. Back up the data files directly

Compared with the previous methods, backing up the data files directly is the most direct, fast and convenient; the disadvantage is that incremental backup is basically not possible. To ensure data consistency, execute the following SQL statement before copying the files:

FLUSH TABLES WITH READ LOCK;

That is, flush the data in memory to disk and lock the data tables so that no new data can be written while the files are being copied. Restoring data backed up this way is also very simple: just copy the files back into the original database directory.
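
A rough sketch of the whole procedure, assuming a database named db_name and the paths used elsewhere in this article:

mysql> FLUSH TABLES WITH READ LOCK;   (keep this session open so the lock stays held)
root# cp -rf /usr/local/mysql/data/db_name /backup/db_name   (run in another shell)
mysql> UNLOCK TABLES;   (release the lock once the copy has finished)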

Note that for InnoDB tables you also need to back up the log files, i.e. the ib_logfile* files, because when an InnoDB table is corrupted you may be able to rely on these log files to recover it.

6. Backup strategy

For a system with a medium volume of business, the backup strategy can be set as follows: the first backup is a full backup, followed by an incremental backup every day and a full backup once a week, repeated indefinitely. For important and busy systems, a full backup every day and an incremental backup every hour, or even more frequent backups, may be needed. To avoid affecting the online business while still getting online and incremental backups, the best approach is to use the master-slave replication mechanism (replication) and run the backups on the slave machine.
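
As a very rough crontab sketch of the weekly-full / daily-incremental idea (paths, credentials and schedule are assumptions, not part of the original article):

# Full backup every Sunday at 03:00
0 3 * * 0 /usr/local/mysql/bin/mysqldump -uyejr -pyejr --single-transaction --flush-logs --master-data=2 db_name > /backup/db_name-full-$(date +\%F).sql
# On the other days, rotate the binlog so the finished binlog files can be copied off as incrementals
0 3 * * 1-6 /usr/local/mysql/bin/mysql -uyejr -pyejr -e 'FLUSH LOGS;'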

7. Data maintenance and disaster recovery

As a DBA (I am not one at the moment, hehe), one of the most important tasks is to ensure that data tables can be used safely, stably and at high speed. Therefore, you need to maintain your data tables regularly. The following SQL statements are useful:

CHECK TABLE or REPAIR TABLE: check or repair a MyISAM table
OPTIMIZE TABLE: optimize a MyISAM table
ANALYZE TABLE: analyze a MyISAM table
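
For example, run against a single table (tbl_name is a placeholder):

CHECK TABLE tbl_name;
REPAIR TABLE tbl_name;
OPTIMIZE TABLE tbl_name;
ANALYZE TABLE tbl_name;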

Of course, all of the above can also be done with the myisamchk tool, which is not covered in detail here.

For InnoDB tables, you can defragment and thereby improve index speed by executing the following statement:

ALTER TABLE tbl_name ENGINE = Innodb;

This is actually a NULL operation: on the surface it does nothing, but in fact it rebuilds the table and rearranges the fragments.

Normally, damaged MyISAM tables can be fixed with the recovery methods mentioned above. If an index is broken, you can use the myisamchk tool to rebuild it. For InnoDB tables things are not so straightforward, because InnoDB keeps all its tables in one tablespace. However, InnoDB has a checking mechanism called the fuzzy checkpoint; as long as the log files are preserved, errors can be fixed based on them. You can add the following parameter to my.cnf so that mysqld checks the log files automatically at startup:

innodb_force_recovery = 4

See the manual for information about this parameter.

8. Summary

Backing up data properly and setting up the right backup strategy is a small part of what a DBA has to do; everything is hard at the beginning, so start now!
