MySQL Quick Backup


1. mysqldump

1.1 Backup

mysqldump uses an SQL-level backup mechanism: it dumps data tables into SQL script files. It is suitable for migrating between different MySQL versions, and it is also the most common backup method.

Here are some main parameters of mysqldump:

--compatible=name

Tells mysqldump to make the exported data compatible with the specified database or an older MySQL server. The value can be ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_table_options, or no_field_options; separate multiple values with commas. Of course, it does not guarantee full compatibility, only the best compatibility possible.

--complete-insert, -c

Exports the data as complete INSERT statements that include the column names. This makes the dump more robust, but the statements become longer, which may run into the max_allowed_packet limit and cause the insert to fail. Therefore, use this parameter with caution; at the very least, I do not recommend it.

--default-character-set=charset

Specifies the character set used when exporting the data. If the data tables do not use the default latin1 character set, this option must be specified when exporting; otherwise, you will get garbled characters after importing the data again.

--disable-keys

Tells mysqldump to wrap the INSERT statements with /*!40000 ALTER TABLE table DISABLE KEYS */; and /*!40000 ALTER TABLE table ENABLE KEYS */; statements. This greatly speeds up the import, because the indexes are rebuilt only after all the data has been inserted. This option applies only to MyISAM tables.

--extended-insert=true|false

By default, mysqldump enables --extended-insert mode (multiple rows per INSERT statement), so if you do not want that, use this option to set its value to false.

--hex-blob

Export binary string fields in hexadecimal format. This option is required if binary data exists. The affected field types include BINARY, VARBINARY, and BLOB.

--lock-all-tables, -x

Before starting the export, locks all tables in all databases to guarantee data consistency. This takes a global read lock, and it automatically turns off the --single-transaction and --lock-tables options.

--lock-tables

Similar to --lock-all-tables, but instead of locking all tables at once, it locks the tables of each database being dumped in turn. This option applies only to MyISAM tables; for InnoDB tables, use the --single-transaction option instead.

--no-create-info, -t

Only export the data, without the CREATE TABLE statements.

--no-data, -d

Only export the table structure, without any data.

--opt

This is just a shortcut option, equivalent to specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset at the same time. It lets mysqldump export the data quickly and produce a dump that can be imported quickly. This option is enabled by default and can be disabled with --skip-opt. Note: if you run mysqldump without the --quick or --opt option, the entire result set is buffered in memory, which can cause problems when exporting a large database.

--quick, -q

This option is useful when exporting large tables. It forces mysqldump to fetch rows from the server and write them out directly instead of buffering the whole result set in memory.

--routines, -R

Export stored procedures and user-defined functions.

--single-transaction

This option issues a BEGIN statement before exporting the data. BEGIN does not block any application and guarantees a consistent state of the database during the export. It applies only to transactional tables, such as InnoDB and BDB.

This option and the --lock-tables option are mutually exclusive, because LOCK TABLES implicitly commits any pending transaction.

To export a large table, combine it with the --quick option.

--triggers

Export triggers as well. This option is enabled by default; use --skip-triggers to disable it.

For details about other parameters, see the manual. I usually use the following command to back up MyISAM tables:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr --default-character-set=utf8 --opt --extended-insert=false \
--triggers -R --hex-blob -x db_name > db_name.sql

Use the following command to back up InnoDB tables:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr --default-character-set=utf8 --opt --extended-insert=false \
--triggers -R --hex-blob --single-transaction db_name > db_name.sql
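As a hedged sketch, a dump like the one above might be wrapped into a small script that produces one dated, compressed backup per run; the MYSQLDUMP path, BACKUP_DIR, and the credentials are assumptions to adapt to your installation, not part of the original commands:

```shell
#!/bin/sh
# Sketch: one compressed, dated dump per run. MYSQLDUMP, BACKUP_DIR,
# DB_NAME, and the credentials are placeholders.
MYSQLDUMP=/usr/local/mysql/bin/mysqldump
BACKUP_DIR=/backup/mysql
DB_NAME=db_name
STAMP=$(date +%Y%m%d)                        # e.g. 20240131
OUTFILE="$BACKUP_DIR/$DB_NAME-$STAMP.sql.gz"

mkdir -p "$BACKUP_DIR"
"$MYSQLDUMP" -uyejr -pyejr --default-character-set=utf8 --opt \
    --extended-insert=false --triggers -R --hex-blob \
    --single-transaction "$DB_NAME" | gzip > "$OUTFILE"
```

Restoring such a file is then just a matter of piping it back through gunzip into the mysql client, the same import path described in section 1.2.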

1.2 Restore

The file produced by mysqldump is an SQL script that can be imported directly. There are two ways to import the data.

Directly use the mysql client

For example:

/usr/local/mysql/bin/mysql -uyejr -pyejr db_name < db_name.sql

Use SOURCE syntax

In fact, this is not a standard SQL syntax, but a function provided by the mysql client, for example:

SOURCE /tmp/db_name.sql;

Here, you must specify the absolute path of the file, and the file must be readable by the system user running the mysql client.

2. mysqlhotcopy

2.1 Backup

mysqlhotcopy is a Perl program originally written by Tim Bunce. It uses LOCK TABLES, FLUSH TABLES, and cp or scp to quickly back up a database. It is the fastest way to back up a database or a single table, but it can only run on the machine where the database files (table definition files, data files, and index files) are located. mysqlhotcopy can only back up MyISAM tables, and it only runs on Unix-like and NetWare systems.

mysqlhotcopy supports copying multiple databases at once and also supports regular expressions. Here are some examples:

root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name /tmp (copy the database directory db_name to /tmp)

root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name_1 ... db_name_n /tmp

root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name./regex/ /tmp

For more detailed usage, refer to the manual or call the following command to view the help of mysqlhotcopy:

perldoc /usr/local/mysql/bin/mysqlhotcopy

Note: to use mysqlhotcopy, you must have the SELECT and RELOAD privileges (the latter to execute FLUSH TABLES), and you must be able to read the datadir/db_name directory.

2.2 Restore

mysqlhotcopy backs up the whole database directory, so to restore it you can simply copy the directory back into the datadir specified by mysqld (e.g. /usr/local/mysql/data/), paying attention to permissions, as follows:

root# cp -rf db_name /usr/local/mysql/data/

root# chown -R nobody:nobody /usr/local/mysql/data/db_name (change the owner of the db_name directory to the mysqld running user)

3. SQL syntax backup

3.1 Backup

The BACKUP TABLE syntax works in much the same way as mysqlhotcopy: it locks the tables and copies the data files. It allows online backup, but the result is not ideal, so it is not recommended. It copies only the table structure file and the data file, not the index file, so recovery is slow.

Example:

BACKUP TABLE tbl_name TO '/tmp/db_name/';

Note: you must have the FILE privilege to execute this statement, the directory /tmp/db_name/ must be writable by the mysqld user, and for security reasons the exported files must not overwrite existing files.

SELECT INTO OUTFILE exports data to a plain text file, and you can customize the field separators, which makes the data convenient to process.

Example:

SELECT * INTO OUTFILE '/tmp/db_name/tbl_name.txt' FROM tbl_name;

Note: you must have the FILE privilege to execute this statement, the file /tmp/db_name/tbl_name.txt must be writable by the mysqld user, and for security reasons the exported file must not overwrite an existing file.

3.2 Restore

To restore a table backed up with BACKUP TABLE, use the RESTORE TABLE statement.

Example:

RESTORE TABLE tbl_name FROM '/tmp/db_name/';

The permission requirements are similar to those described above.

If you backed up with SELECT INTO OUTFILE, use the LOAD DATA INFILE statement to restore the data table.

Example:

LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' INTO TABLE tbl_name;

The permission requirements are similar to those described above. The table must already exist before you import the data. If you are worried about duplicate data, add the REPLACE keyword to replace existing records, or the IGNORE keyword to skip them.

4. Enable binlog

The binlog method is more flexible and labor-saving, and supports incremental backup.

mysqld must be restarted to enable the binlog. First stop mysqld, then open my.cnf and add the following lines:

server-id = 1
log-bin = binlog
log-bin-index = binlog.index

Then start mysqld. The files binlog.000001 and binlog.index will be created as it runs: the former records all of mysqld's data update operations, and the latter is an index of all the binlog files. Neither should be deleted casually. For more information about the binlog, see the manual.

When you need to back up, first execute an SQL statement that makes mysqld stop writing to the current binlog, and then back up the binlog files directly. This achieves the goal of an incremental backup:

FLUSH LOGS;

When backing up a slave server in a replication setup, the master.info and relay-log.info files should also be backed up.
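The FLUSH LOGS step can be sketched as a small script; DATADIR, BACKUP_DIR, and the credentials below are assumptions. The key point is that after FLUSH LOGS, every binlog listed in binlog.index except the last (still active) one is closed and safe to copy away:

```shell
#!/bin/sh
# Sketch of one incremental backup step. DATADIR, BACKUP_DIR, and the
# credentials are placeholders for your installation.
DATADIR=/usr/local/mysql/data
BACKUP_DIR=/backup/binlog
mkdir -p "$BACKUP_DIR"

# Rotate: mysqld closes the current binlog and opens a new one.
/usr/local/mysql/bin/mysql -uyejr -pyejr -e 'FLUSH LOGS;'

# binlog.index lists one binlog file per line, oldest first; copy all
# but the last line, which is the binlog now being written to.
sed '$d' "$DATADIR/binlog.index" | while read -r f; do
    cp -p "$DATADIR/$(basename "$f")" "$BACKUP_DIR/"
done
```

Already-copied files are simply copied again here; a production script would also prune or skip files that were saved on a previous run.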

The backed-up binlog files can be examined with mysqlbinlog, a tool provided by MySQL, for example:

/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001

This tool lets you display all the SQL statements for a specified database, and you can also restrict the time range, which is quite convenient. For details, refer to the manual.
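For example, limiting the output to one database and a time window might look like the following; the timestamps and paths are illustrative placeholders, while --database, --start-datetime, and --stop-datetime are standard mysqlbinlog options:

```shell
# Extract only db_name's statements from a one-hour window of the binlog.
# The path, database name, and timestamps are placeholders.
/usr/local/mysql/bin/mysqlbinlog \
    --database=db_name \
    --start-datetime="2005-04-20 09:00:00" \
    --stop-datetime="2005-04-20 10:00:00" \
    /tmp/binlog.000001 > /tmp/db_name_incr.sql
```

The resulting file is ordinary SQL and can be reviewed before being replayed with the mysql client.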

You can use a statement similar to the following to restore data:

/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001 | mysql -uyejr -pyejr db_name

That is, the SQL statements output by mysqlbinlog are fed directly to mysql as input and executed.

If you have a spare machine, you might as well use this method for backup. Since the performance requirements on a slave server are not that high, the cost is low: with little expense you get incremental backup and can also offload part of the data query load. Why not?

5. Back up data files directly

Compared with the previous methods, backing up the data files directly is the most direct, fast, and convenient; the drawback is that incremental backup is basically impossible. To ensure data consistency, execute the following SQL statement before backing up the files:

FLUSH TABLES WITH READ LOCK;

This flushes the data in memory to disk and locks the tables, ensuring that no new data is written while the files are copied. Restoring from this kind of backup is also very simple: just copy the files back into the original database directory.
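One subtlety worth spelling out: the read lock is released as soon as the client session that took it disconnects, so the copy has to happen while that session is still open. A sketch, with placeholder paths and credentials, using the mysql client's \! shell-escape command:

```shell
# The lock from FLUSH TABLES WITH READ LOCK lasts only as long as this
# client session, so the copy runs from inside it via the client's \!
# command. Paths, database name, and credentials are placeholders.
/usr/local/mysql/bin/mysql -uyejr -pyejr <<'EOF'
FLUSH TABLES WITH READ LOCK;
\! cp -a /usr/local/mysql/data/db_name /backup/db_name
UNLOCK TABLES;
EOF
```

Running FLUSH TABLES WITH READ LOCK in one invocation and cp in a separate one would not work, because the lock is gone by the time the copy starts.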

Note: for InnoDB tables, you also need to back up the log files, i.e. the ib_logfile* files, because when an InnoDB table is corrupted, these log files may be needed for recovery.

6. Backup policy

For a system with a moderate business volume, the backup policy can be set as follows: an initial full backup, then one incremental backup every day and one full backup every week. For important and busy systems, you may need a full backup every day, an incremental backup every hour, or even more frequent backups. To achieve online backup and incremental backup without affecting online services, the best way is to use the master-slave replication mechanism (replication) and perform the backups on the slave machine.
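As an illustration only, the moderate-volume policy above could map to crontab entries like these; the script names are hypothetical stand-ins for the mysqldump and binlog-copy commands shown earlier, not files from this article:

```shell
# Hypothetical crontab fragment: weekly full dump, daily incremental.
# Both script names are placeholders.
0 2 * * 0   /usr/local/bin/mysql_full_backup.sh    # full dump, Sunday 02:00
0 3 * * 1-6 /usr/local/bin/mysql_binlog_backup.sh  # incremental, other days 03:00
```

Busier systems would tighten the incremental schedule (e.g. hourly) rather than change the structure.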

7. Data maintenance and disaster recovery

As a DBA (I am not yet, haha), one of the most important tasks is to ensure the security, stability, and high-speed use of data tables. Therefore, you must maintain your data tables on a regular basis. The following SQL statements are useful:

CHECK TABLE or REPAIR TABLE to check or repair MyISAM tables

OPTIMIZE TABLE to optimize MyISAM tables

ANALYZE TABLE to analyze MyISAM tables

Of course, all of the above can also be done with the myisamchk tool, which I will not describe here.

For InnoDB tables, you can defragment and speed up indexing by executing the following statement:

ALTER TABLE tbl_name ENGINE = InnoDB;

This is in fact a no-op: on the surface it does nothing, but it actually rebuilds the table and reorganizes the fragments.

In general, a broken MyISAM table can be repaired using the methods mentioned above; if an index is broken, you can use myisamchk to rebuild it. For InnoDB it is not so straightforward, because it stores all tables in a single tablespace. However, InnoDB has a checking mechanism called the fuzzy checkpoint: as long as the log files are preserved, errors can be fixed from them. You can add the following parameter to my.cnf to make mysqld check the log files automatically at startup:

innodb_force_recovery = 4

This article is from the "technology" blog
