MySQL Backup and recovery


Contents

Classification of backups

Physical Backup and logical backup

Hot and cold backup

Full Backup and Incremental backup


Full backup and Recovery

Import and export databases in SQL statement format

Export data in SQL statement format

Import a file in the form of an SQL statement

Import and export databases in delimiter format

Export a database in delimiter format

Import delimiter-formatted files

Mysqldump Tips

Copy the contents of one database to another database

Copy a database to another host

Export stored programs

Export the definition and data contents of a database table separately

Using mysqldump to test the compatibility of upgrade databases

Incremental backup and point-in-time recovery

Working with log files

Use point-in-time to recover data

Using event positions to recover data

Classification of backups: physical and logical backups

Physical backups consist of copies of the original database files. They are suitable for large, important databases that must be recovered quickly when a problem occurs.

A logical backup stores the database as logical statements, such as CREATE DATABASE, CREATE TABLE, and INSERT. This type of backup is suitable for smaller amounts of data, especially when you may want to edit the data values or table structure.

Physical backups have the following characteristics:

1. The backup consists of the directories and files of the database; it is essentially a replica of the MySQL data directory

2. Physical backups are faster than logical backups, because they simply copy files without any logical conversion

3. The output is more compact

4. Because backup speed and compactness matter for important, busy databases, MySQL's Enterprise Edition performs physical backups

5. The backed-up files can only be restored on machines with the same or similar hardware architecture

6. Backup granularity depends on the storage engine: a backup usually covers the whole data directory, but engines that keep each table in its own files (for example, InnoDB with file-per-table tablespaces) allow individual tables to be backed up by copying their files

7. In addition to the databases, related files such as logs and configuration files can be backed up

8. A physical backup can be performed while the MySQL server is stopped; if the server is running, the tables must be locked during the backup to prevent data from being written while the files are copied

9. Backup tools include mysqlbackup (MySQL Enterprise Edition only), file copy commands (cp, scp, tar, rsync), and mysqlhotcopy (MyISAM tables only)

A logical backup has the following characteristics:

1. A logical backup is produced by querying the MySQL server for the database structure and content

2. Logical backups are slower than physical backups, because the server must access the data and convert it into logical statements

3. The backup file is larger than the corresponding physical backup

4. Backup granularity can be the server level (all databases), the database level (all tables of one database), or a single table

5. The backup does not include files outside the database, such as logs and configuration files

6. The backup files are machine-independent and can be restored on any computer architecture

7. Logical backups can only be performed while the MySQL server is running

8. Logical backup tools include mysqldump and the SELECT ... INTO OUTFILE statement

Hot and cold backup

A hot backup is taken while the MySQL server is running; a cold backup is taken after the MySQL server has been stopped; a warm backup is taken while the server is running, but with the tables locked so that the database files cannot change during the backup.

Features of hot backup:

1. It interferes little with other clients (it is less intrusive), because other clients can still access the database while the backup is being performed

2. Be sure to lock the tables being backed up, otherwise changes made during the backup will compromise its integrity; MySQL Enterprise Edition locks tables automatically during the backup

Features of Cold backup:

1. During the backup, clients cannot access the database, so cold backups are usually taken on a slave server (a replica configured against the master), leaving the master available

2. The backup procedure is simpler

Full Backup and Incremental backup

A full backup backs up all data on a MySQL server as of a point in time, while an incremental backup backs up the data that has changed since a specified point in time. Incremental backups rely on the MySQL server's binary log (the log that records data changes).

Backup scheduling automates the backup procedure, compression reduces the size of the backup files, and encryption provides greater security. MySQL itself does not provide these features; they are available in MySQL Enterprise Edition or third-party solutions.

Full backup and Recovery

This section describes how to use the mysqldump command to generate an export file, and how to import these export files.

(The original article includes a screenshot of the example databases and tables used below.)

mysqldump produces two types of output, depending on whether the --tab option is given:

1. Without the --tab option, mysqldump writes standard SQL statements to its output: CREATE statements that create the exported objects (databases, tables, and so on) and INSERT statements that load the data. The file can be replayed with the mysql command to recreate the exported objects, and options let you control the SQL statement format and which objects are exported. (See "Import and export databases in SQL statement format" below.)

(Screenshot in the original: a file exported in SQL format.)

2. With the --tab option, mysqldump produces two output files for each table in the database: the MySQL server writes a tab-delimited data file named tbl_name.txt, and mysqldump creates a tbl_name.sql file containing only the CREATE TABLE statement. (See "Import and export databases in delimiter format" below.)

(Screenshot in the original: files exported in delimiter format.)

Export data in SQL statement format

By default, mysqldump writes SQL statements to standard output. The typical invocation is as follows (make sure the MySQL server is running first):

shell> mysqldump [arguments] > file_name

Note: When you actually use the mysqldump command in the shell, you need to add the MySQL connection information explicitly; the commands below omit it for brevity.

You can list all of mysqldump's options by entering the following directly in the shell:

[root@localhost ~]# mysqldump --help

Here are a few common commands:

To export all databases, use the --all-databases option:

shell> mysqldump --all-databases > dump.sql

Demo command: this is what the statement looks like in actual use. The output file is named with the date; -uroot logs in as root, and -p'123456' supplies the password (note there must be no space after -p). Later commands will not be demonstrated one by one:

[root@localhost ~]# mysqldump --all-databases -uroot -p'123456' > /server/backup/bak_$(date +%F).sql
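The date-stamped file name comes from shell command substitution. A minimal sketch of how the name is built, independent of MySQL (note the format sequence must be a capital %F):

```shell
# %F is shorthand for %Y-%m-%d, giving names like bak_2015-07-07.sql.
TODAY=$(date +%F)
NAME="bak_${TODAY}.sql"
echo "$NAME"
```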

To export specific databases (one or more), use the --databases option (or -B):

shell> mysqldump --databases db1 db2 db3 > dump.sql

To export specific tables of one database, omit the --databases option; the first name is then treated as the database and the remaining names as tables (here, table1 and table2 in database db1):

shell> mysqldump db1 table1 table2 > dump.sql

To export a single database, besides using --databases as above, you can also write:

shell> mysqldump db1 > dump.sql

However, when the --databases option is omitted, be aware of the following: the output file contains no CREATE DATABASE or USE statement, so when you re-import it you must specify a default database (or create one yourself). On the other hand, this also lets you import the file into a database with a different name.

Import a file in the form of an SQL statement

When the export file was produced with the --all-databases or --databases option, it already contains CREATE DATABASE and USE statements, so you do not need to specify a database; you can read the file in directly:

shell> mysql < dump.sql

Demo statement:

[root@localhost ~]# mysql -uroot -p'123456' < /server/backup/dump.sql

Or, log in to MySQL first and import the file from within MySQL:

mysql> source /server/backup/dump.sql;

However, if the export file does not contain CREATE DATABASE and USE statements, you must create the database manually before reading the file in:

shell> mysqladmin create db1

Demo statement:

[root@localhost backup]# mysqladmin -uroot -p'123456' create db_test

Then specify the database name when importing the file:

shell> mysql db1 < dump.sql

Or log in to MySQL, select the database with USE, and then import the file:

mysql> source dump.sql;
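Whether the manual creation step is needed can be checked by looking for a CREATE DATABASE line in the dump. A small sketch; the dump file here is a tiny fabricated example, not real mysqldump output:

```shell
# Check whether a dump file contains a CREATE DATABASE statement.
# /tmp/dump_check.sql is fabricated for illustration; a dump made without
# --databases, like "mysqldump db1 > dump.sql", has no such line.
DUMP=/tmp/dump_check.sql
printf 'CREATE TABLE t1 (id INT);\nINSERT INTO t1 VALUES (1);\n' > "$DUMP"
if grep -q '^CREATE DATABASE' "$DUMP"; then
    NEED_CREATE=no    # "mysql < dump.sql" is enough
else
    NEED_CREATE=yes   # run "mysqladmin create db1" first, then "mysql db1 < dump.sql"
fi
echo "$NEED_CREATE"
```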

Export a database in delimiter format

When the --tab=dir_name option is added to the mysqldump command, two files are generated in dir_name for each table of the database: a .txt file holding the data and a .sql file holding the CREATE TABLE statement. The following statement exports the contents of database db1 to the /tmp directory:

shell> mysqldump --tab=/tmp db1

Demo statement:

[root@localhost backup]# mysqldump -uroot -p'123456' --tab=/server/backup/tmp/ kylin_default

Note: you may see an error like the following:

mysqldump: Got error: 1: Can't create/write to file '/server/backup/tmp/course.txt' (Errcode: 13 - Permission denied) when executing 'SELECT INTO OUTFILE'

This happens because the permissions on the /server/backup/tmp directory prevent the MySQL server from writing to it; the workaround is to grant write permission on the directory:

[root@localhost backup]# chmod a+rwx tmp/
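To avoid the Errcode 13 failure up front, the export directory can be created and opened up before running mysqldump --tab. A sketch; the path is an example, and note that [ -w ] only checks the current shell user, while the .txt files are actually written by the mysqld server account:

```shell
# Prepare an export directory for mysqldump --tab.
# /tmp/tab_export_demo is an example path; the article uses /server/backup/tmp.
DIR=/tmp/tab_export_demo
mkdir -p "$DIR"
chmod a+rwx "$DIR"    # lets every account write, including the one mysqld runs as
if [ -w "$DIR" ]; then
    echo "ok: $DIR is writable"
fi
```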

When using the --tab option, it is best to export only databases of the local MySQL server. If you export from a remote server, the export directory must exist on both machines: the .txt files are written by the server into the directory on the remote machine, while the .sql files are written to the directory on the local machine.

You can customize the export format of the TXT file by using the options:

    1. --fields-terminated-by=str : the string that separates column values in the .txt file (default is tab)
    2. --fields-enclosed-by=char : the character used to quote every value (for example, enclose all values in double quotes; default is none)
    3. --fields-optionally-enclosed-by=char : the character used to quote non-numeric values only (default is none)
    4. --fields-escaped-by=char : the character used to escape special characters (default is backslash)
    5. --lines-terminated-by=str : the line terminator for each row (default is newline)

For example, to separate the values of each column with commas, enclose them in double quotation marks, and terminate each line with \r\n (the newline sequence on Windows):

[root@localhost tmp]# mysqldump -uroot -p'123456' --tab=/server/backup/tmp/ --fields-terminated-by=, --fields-enclosed-by='"' --lines-terminated-by=0x0d0a kylin_default

(Screenshots in the original show the exported .txt file changing from the earlier format to the custom one.)
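Independently of MySQL, you can preview what one row looks like in this custom format: values quoted with double quotes, separated by commas, line ending with the two bytes \r\n (0x0d0a). A sketch with a made-up row:

```shell
# Write one sample data row in the custom format: "val","val" terminated by \r\n.
printf '"1","Alice"\r\n' > /tmp/sample_row.txt
od -c /tmp/sample_row.txt    # od shows the trailing \r \n bytes explicitly
```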

Import delimiter-formatted files

Import the .sql file (to create the table) before importing the .txt file:

shell> mysqlimport db1 t1.txt

Demo statement:

[root@localhost tmp]# mysql -uroot -p'123456' kylin_default < test.sql
[root@localhost tmp]# mysqlimport -uroot -p'123456' kylin_default /server/backup/tmp/test.txt

Or, after logging in to MySQL:

mysql> LOAD DATA INFILE 't1.txt' INTO TABLE t1;

Note: if you used a custom format when exporting the database, you must specify the same format when importing the files into the database, or errors will occur.

Such as:

shell> mysqlimport --fields-terminated-by=, --fields-enclosed-by='"' --lines-terminated-by=0x0d0a db1 t1.txt

Or

mysql> LOAD DATA INFILE 't1.txt' INTO TABLE t1
    -> FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    -> LINES TERMINATED BY '\r\n';

Mysqldump tips

Copy the contents of one database to another database

shell> mysqldump db1 > dump.sql
shell> mysql db2 < dump.sql

Using the mysqldump command without the --databases option means the exported SQL file has no USE db1 statement, so the file can be imported into a database with a different name.

Copy a database to another host

Execute on server 1:

shell> mysqldump --databases db1 > dump.sql

The Dump.sql file is then copied to server 2 and executed on server 2:

shell> mysql < dump.sql

Adding the --databases option makes the exported SQL file contain CREATE DATABASE and USE db1 statements, so you can import the file directly on a server that does not yet have db1 (the database is created automatically).

Of course, you can also omit the --databases option, as long as you manually create a database on server 2 and then import the file into it.

Export stored programs

mysqldump can also handle stored programs (stored procedures, functions, triggers, and events):

    1. --events: dump Event Scheduler events
    2. --routines: dump stored procedures and functions
    3. --triggers: dump triggers for tables

The --triggers option is enabled by default, while the other two must be added explicitly. To explicitly suppress exporting these programs, use the options --skip-events, --skip-routines, and --skip-triggers.

Export the definition and data contents of a database table separately

Use the --no-data option to tell mysqldump to export only the table definition statements of the database, without the data:

shell> mysqldump --no-data test > dump-defs.sql

Demo statement:

[root@localhost backup]# mysqldump -uroot -p'123456' --no-data kylin_default > /server/backup/no_data.sql

You can see that the file contains only the statements that create the table

Use the --no-create-info option to tell mysqldump to export only the data manipulation statements, without the table definitions:

shell> mysqldump --no-create-info test > dump-data.sql

Demo statement:

[root@localhost backup]# mysqldump -uroot -p'123456' --no-create-info kylin_default > /server/backup/no_create_into.sql

You can see that the file contains only data manipulation statements (here, just INSERTs).

Using mysqldump to test the compatibility of upgrade databases

When you plan to upgrade your MySQL server, you should first test the new version of the database. You can import the data from the old server to the new server and test whether the new version of the server will handle the data correctly.

Execute on the old server:

shell> mysqldump --all-databases --no-data --routines --events > dump-defs.sql

Execute on the new server:

shell> mysql < dump-defs.sql

Because the exported file contains no data, it executes quickly; pay attention to any warnings and errors that occur during execution.

Once you have confirmed that the database structures were created correctly on the new server, import the data.

Execute on the old server:

shell> mysqldump --all-databases --no-create-info > dump-data.sql

Execute on the new server:

shell> mysql < dump-data.sql

Then check the data content and run some test programs

Incremental backup and point-in-time recovery

Point-in-time recovery means restoring the changes made to the data after a given point in time. It is normally run after a full backup has been restored, because the full backup records the server's state at a known time. (In other words, given two servers A and B: first restore A's full backup onto B; then, after A's data has changed, B can be brought back in sync with A simply by replaying the statements that changed A's data.)

Working with log files

Prerequisites for point-in-time recovery:

Point-in-time recovery requires the binary log files generated after the full backup was taken, so the server must be started with the --log-bin option to produce them. By default the server stores the logs in the data directory, but you can specify another location.

Modify your my.cnf file, add log-bin=mysql-bin under the [mysqld] section, and restart the database. Then perform a full backup immediately, because the log files will contain the SQL statements executed after that backup. If you do not re-import the full backup before each import of a log file, errors will occur: for example, replaying an INSERT statement for a row that already exists produces ERROR 1062 (23000) at line xx: Duplicate entry 'X' for key 'PRIMARY'.
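The my.cnf change described above looks like this (mysql-bin is the base name used in this article; the server appends a sequence number such as .000001):

```
[mysqld]
log-bin=mysql-bin
```

After restarting, files named mysql-bin.000001, mysql-bin.000002, and so on appear in the data directory.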

Therefore, for the tests below, re-import the backed-up database each time before importing a log file.

Log in to MySQL and run:

mysql> SHOW BINARY LOGS;

You can see the list of binary logs (only one here, because I had purged all earlier logs).

To check which binary log file is currently in use:

mysql> SHOW MASTER STATUS\G

The binary logs are stored in the MySQL data directory. Viewing a file directly shows garbled characters; use the mysqlbinlog command to display it correctly:

[root@localhost data]# mysqlbinlog mysql-bin.000001

The last part of the file (shown in the original screenshot) indicates, at the timestamps marked by the arrows, that the database executed three statements at 15:47:28 on June 30, 2015:

COMMIT;

DELIMITER;

ROLLBACK;

Then I log into the database, insert a piece of data, and then view this log content:

As you can see, the log file has grown and now records an INSERT statement at 15:48:37, which is exactly the statement I just executed. The binary log thus completely records all operations on the database.

Use the mysqlbinlog command to execute the SQL statements in a log file:

shell> mysqlbinlog binlog_files | mysql -u root -p

Suppose I accidentally executed a DELETE statement, and now want to restore the database to its state before that statement.

First, save the contents of the log file as a readable file

[root@localhost data]# mysqlbinlog mysql-bin.000002 > tmpfile
[root@localhost data]# vi tmpfile

Then locate the statement in the file that deletes the data, and remove it.

You can see that a DELETE statement was executed at 15:49:38 on June 30, 2015. Delete that statement, then save and exit.

Remember: import the previous full backup first, then import this file into the MySQL server:

shell> mysql -uroot -p < lastest-backup.sql
shell> mysql -uroot -p < tmpfile

You'll see that the deleted data appears.

If you are importing multiple binary logs, it is best not to import each log in a separate connection. For example, importing two logs like this:

shell> mysqlbinlog mysql-bin.000001 | mysql -u root -p
shell> mysqlbinlog mysql-bin.000002 | mysql -u root -p

The approach above is risky: if importing the first log fails partway (for example, its CREATE TABLE statement fails to execute), the tables it should have created will be missing, and importing the second log, whose statements use those tables, will fail as well.

To avoid this error, import multiple logs in a single connection:

shell> mysqlbinlog mysql-bin.000001 mysql-bin.000002 | mysql -u root -p

Another method is as follows

shell> mysqlbinlog mysql-bin.000001 > /tmp/tmpfile
shell> mysqlbinlog mysql-bin.000002 >> /tmp/tmpfile
shell> mysql -uroot -p -e "source /tmp/tmpfile;"

The -e option runs the statement in the quotes without an interactive login.

Use point-in-time to recover data

Using a log file, you can indicate precisely which point in time to recover the data to. As mentioned above, a DELETE statement was executed at 15:49:38 on June 30, 2015, but I did not notice the mistake until many more statements had been executed. How can the data be saved now?

1. First, import the most recent full backup:

shell> mysql < lastest-backup.sql

2. Export the log file to a readable file:

shell> mysqlbinlog mysql-bin.000002 > /tmp/tmpfile

3. Open the file and find the point in time when the DELETE statement was executed (as shown in the previous figure)

4. Import the log file, executing only the statements up to 15:49:38 on June 30, 2015:

shell> mysqlbinlog --stop-datetime="2015-06-30 15:49:37" mysql-bin.000002 | mysql -u root -p

Note that --stop-datetime must point to a time before the offending statement.

5. Import the log file again, this time executing only the statements after 15:49:38 on June 30, 2015:

shell> mysqlbinlog --start-datetime="2015-06-30 15:50:11" mysql-bin.000002 | mysql -u root -p

Note that --start-datetime must point to a time after the offending statement.
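Since the two commands differ only in the option and the timestamp, the whole recovery can be reviewed before running it. A sketch that only assembles the command strings; nothing is executed here, and the file name and times are the ones from this example:

```shell
# Assemble the two halves of the point-in-time recovery shown above.
BINLOG=mysql-bin.000002
STOP='2015-06-30 15:49:37'     # a moment BEFORE the bad DELETE
START='2015-06-30 15:50:11'    # a moment AFTER it
CMD1="mysqlbinlog --stop-datetime=\"$STOP\" $BINLOG | mysql -u root -p"
CMD2="mysqlbinlog --start-datetime=\"$START\" $BINLOG | mysql -u root -p"
printf '%s\n%s\n' "$CMD1" "$CMD2"    # review, then paste into the shell to run
```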

Attention:

    1. It would be easier to delete the offending statement from the log file and then import the whole file into MySQL; the purpose here is only to demonstrate how to recover data to a specified point in time
    2. Recovering by point in time is not always dependable, because more than one statement may execute in the same second; in that case, use event positions to recover the data, as described next
Using event positions to recover data

In the log file, the numbers after "at" (for example, at 508) are event positions; they increase monotonically and are unique.

So, to recover the data up to (but not including) the offending event:

shell> mysqlbinlog --stop-position=411 mysql-bin.000001 | mysql -u root -p

As with --stop-datetime, the stop position must come before the offending event.

Then recover the data from after this event:

shell> mysqlbinlog --start-position=627 mysql-bin.000001 | mysql -u root -p

As with --start-datetime, the start position must come after the offending event.

Automating backups with a scheduled backup script

After you have written your backup script, you can automate the backup by adding the script to the crontab.

1. Confirm that the Crond service is running

[root@localhost backup]# service crond status

If it is not running, start it with:

[root@localhost backup]# service crond start

2. Then add the script to the crontab (automatic backup at 01:00 daily):

[root@localhost backup]# crontab -e

# backup all databases in MySQL at 01:00 every day by root
00 01 * * * /bin/sh /server/backup/backup_automatically.sh >/dev/null 2>&1

Add the line above to the crontab, then save and exit.
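For reference, here is a minimal sketch of what /server/backup/backup_automatically.sh might contain. The script name comes from the crontab line above, but its contents are not shown in the original article, so the paths, credentials, and the compression step are all assumptions:

```shell
#!/bin/sh
# Minimal nightly backup sketch; adapt paths and credentials before real use.
BACKUP_DIR=/server/backup
BACKUP_FILE="$BACKUP_DIR/bak_$(date +%F).sql"
mkdir -p "$BACKUP_DIR" 2>/dev/null || echo "cannot create $BACKUP_DIR; adjust BACKUP_DIR"
# The commands are echoed rather than executed, so the sketch is safe to run
# without a MySQL server; remove the echos to perform the real backup.
echo "mysqldump --all-databases -uroot -p'123456' > $BACKUP_FILE"
echo "tar -cjf ${BACKUP_FILE%.sql}.tar.bz2 -C $BACKUP_DIR ${BACKUP_FILE##*/}"
```

The .tar.bz2 name matches the 2015-07-07.tar.bz2 file uploaded in the FTP section below.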

Upload to FTP

We perform a backup every day; over time this generates many files and wastes disk space. You can therefore automatically upload each day's backup files to an FTP server and then delete them from the local disk to save space.

1. I am using an FTP server built on Windows, following this article: http://jingyan.baidu.com/article/63f23628f04e420209ab3d70.html

2. Then install FTP on Linux

[root@localhost backup]# yum install ftp -y

3. Test whether the upload is successful

[root@localhost backup]# ftp -i -n <<EOF
open 192.168.1.205
user Administrator 123456
cd MySQL
lcd /server/backup
hash
binary
put 2015-07-07.tar.bz2
prompt
close
bye
EOF

Here 192.168.1.205 is the FTP server's IP address, "user" supplies the user name and password, "cd" selects the destination folder on the FTP server, "lcd" sets the local directory containing the file to upload, and "put" names the file to upload.

Note that the binary command above is important: transfers from Linux to a Windows server may otherwise alter the file's format, and binary mode guarantees the file is transferred unchanged.

To download a file:

[root@localhost backup]# ftp -i -n <<EOF
open 192.168.1.205
user Administrator 123456
cd MySQL
binary
get 2015-07-07.tar.bz2
close
bye
EOF
