SELECT ... INTO OUTFILE backup and recovery (LOAD DATA) vs. mysqldump: a time comparison

Source: Internet
Author: User

SELECT ... INTO OUTFILE 'path' backup

Recovery from this kind of backup is very fast, much faster than INSERT statements. Unlike the more feature-rich mysqldump, however, it can only back up the data in a table; it does not include the table structure. If the table is dropped after the backup is made, it cannot be restored (unless the table structure is available elsewhere).
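Since INTO OUTFILE exports only the row data, it is worth capturing the table structure separately so that a dropped table can be recreated. A minimal sketch, assuming the same 3307 instance used below; the schema file name is my own example:

# structure-only dump (no row data)
mysqldump -S /tmp/mysql3307.sock -uroot -pmysql --no-data test t1 > /tmp/t1_schema.sql

# or capture the DDL from inside the server
mysql> SHOW CREATE TABLE test.t1;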

mysql> SELECT * FROM t1 INTO OUTFILE '/mydata/mysql/mysql3307/data/t1.sql';

ERROR 1290 (HY000): The MySQL server is running with the --secure-file-priv option so it cannot execute this statement

mysql> SHOW VARIABLES LIKE '%secure%';

+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| require_secure_transport | OFF   |
| secure_auth              | ON    |
| secure_file_priv         | NULL  |
+--------------------------+-------+
3 rows in set (0.00 sec)

mysql> SET secure_file_priv='/tmp';

ERROR 1238 (HY000): Variable 'secure_file_priv' is a read only variable

mysql> SET SESSION secure_file_priv=='/tmp';

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '='/tmp'' at line 1

mysql>

secure_file_priv is read-only at runtime, so shut down the MySQL instance and set the secure_file_priv directory in the configuration file.

# mysqladmin -S /tmp/mysql3307.sock -uroot -pmysql shutdown

mysqladmin: [Warning] Using a password on the command line interface can be insecure.


# ps -ef | grep mysql

root      3506  2071  0 01:24 pts/1    00:00:00 grep mysql


vi /etc/my3307.cnf

Add the following under the [mysqld] section:

secure_file_priv=/tmp

Start MySQL again and check the variable:

mysql> SHOW GLOBAL VARIABLES LIKE '%secu%';

+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| require_secure_transport | OFF   |
| secure_auth              | ON    |
| secure_file_priv         | /tmp/ |
+--------------------------+-------+
3 rows in set (0.01 sec)


mysql> SELECT * FROM t1 INTO OUTFILE '/tmp/t1_onlydata.sql';

Query OK, 972864 rows affected (1.12 sec)

mysql> DESC t1;

+-------+--------+------+-----+---------+-------+
| Field | Type   | Null | Key | Default | Extra |
+-------+--------+------+-----+---------+-------+
| id    | int(4) | YES  | MUL | NULL    |       |
+-------+--------+------+-----+---------+-------+
1 row in set (0.00 sec)

The output file contains only plain-text row data.
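By default, INTO OUTFILE writes one line per row, with columns separated by tabs and NULL written as \N, so the file is text rather than SQL statements. For this single-column table, each line is roughly just an id value, for example:

1
2
3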

SELECT ... INTO OUTFILE 'path' recovery

Clear the t1 table data, then restore it.

mysql> TRUNCATE TABLE t1;

Query OK, 0 rows affected (0.04 sec)

mysql> DESC t1;

+-------+--------+------+-----+---------+-------+
| Field | Type   | Null | Key | Default | Extra |
+-------+--------+------+-----+---------+-------+
| id    | int(4) | YES  | MUL | NULL    |       |
+-------+--------+------+-----+---------+-------+
1 row in set (0.00 sec)

LOAD DATA recovery (data only)

Syntax:

LOAD DATA [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [PARTITION (partition_name [, partition_name] ...)]
    [CHARACTER SET charset_name]
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
    [IGNORE number {LINES | ROWS}]
    [(col_name_or_user_var
        [, col_name_or_user_var] ...)]
    [SET col_name={expr | DEFAULT}
        [, col_name={expr | DEFAULT}] ...]

LOAD DATA INFILE '/tmp/t1_onlydata.sql' INTO TABLE test.t1;
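No FIELDS or LINES clauses are needed here because LOAD DATA's defaults (tab-separated fields, newline-terminated lines, backslash escapes) match what SELECT ... INTO OUTFILE produced. Written out explicitly, the same restore would look roughly like this:

LOAD DATA INFILE '/tmp/t1_onlydata.sql'
    INTO TABLE test.t1
    FIELDS TERMINATED BY '\t' ESCAPED BY '\\'
    LINES TERMINATED BY '\n';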

To view table data:
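For example, a quick row-count check (a sketch; it should report the same 972864 rows that were exported):

mysql> SELECT COUNT(*) FROM test.t1;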

The restore succeeded.

LOAD DATA vs. INSERT speed

The idea: use mysqldump to back up the table structure, use SELECT ... INTO OUTFILE to back up the data, restore with LOAD DATA, and compare the timing against a plain mysqldump restore.

Create a test table, then populate it with a stored procedure.
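A minimal sketch of a table definition consistent with the INSERT in the procedure below (the column names are my own assumptions):

CREATE TABLE test.l (
    id   int(4),        -- assumed column name; holds the loop counter i
    name varchar(10)    -- assumed column name; holds the 'aaaaa' string
);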

delimiter //
CREATE PROCEDURE per2()
BEGIN
    DECLARE i INT;
    SET i=1;
    WHILE i <= 1000000 DO
        INSERT INTO test.l VALUES (i, 'aaaaa');
        SET i=i+1;
    END WHILE;
END
//

Execute the stored procedure:

CALL per2();

//

Check the data (in practice I only inserted about 140,000 rows).

Back up the metadata (table structure only) with mysqldump:

mysqldump -S /tmp/mysql3307.sock -uroot -pmysql --single-transaction --set-gtid-purged=OFF test l -d > /tmp/l_meta.sql

Back up the data with SELECT ... INTO OUTFILE:

SELECT * FROM test.l INTO OUTFILE '/tmp/20180525test_ldata.sql';

Back up the entire table with mysqldump:

mysqldump -S /tmp/mysql3307.sock -uroot -pmysql --single-transaction --set-gtid-purged=OFF test l > /tmp/l_table.sql

Recovery time comparison

1. mysqldump: restore the table structure first (or simply recreate the table)

First drop the table, then restore the structure.

Restore table structure:
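A sketch of this step, feeding the metadata dump taken earlier back into the instance:

mysql -S /tmp/mysql3307.sock -uroot -pmysql test < /tmp/l_meta.sql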

This step is very fast (it is only the time needed to create the table), so whether or not it is counted has little effect on the overall total.

For example, creating the table directly:

Creating the table takes 0.03s.

2. LOAD DATA recovery time

LOAD DATA INFILE '/tmp/20180525test_ldata.sql' INTO TABLE test.l;

This takes 1.59s.

3. mysqldump (INSERT-based) recovery time

time mysql -S /tmp/mysql3307.sock -uroot -pmysql test < /tmp/l_table.sql

The mysqldump restore takes nearly 2s.

Summary

Total time for metadata restore plus LOAD DATA:

t1 = 0.03 + 1.59 = 1.62s

Total time for the mysqldump restore:

t2 = 1.99s

For 140,000 rows, LOAD DATA is nearly a quarter faster than mysqldump, and with larger data volumes the gap should widen, so restoring a single table from a metadata dump (or the table structure) plus LOAD DATA is a workable approach.
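Putting it together, a rough end-to-end sketch of the single-table backup and restore, reusing the socket, credentials, and file names from the examples above:

# backup: structure with mysqldump, data with SELECT ... INTO OUTFILE
mysqldump -S /tmp/mysql3307.sock -uroot -pmysql --single-transaction --set-gtid-purged=OFF -d test l > /tmp/l_meta.sql
mysql -S /tmp/mysql3307.sock -uroot -pmysql -e "SELECT * FROM test.l INTO OUTFILE '/tmp/20180525test_ldata.sql'"

# restore: recreate the structure, then bulk-load the data
mysql -S /tmp/mysql3307.sock -uroot -pmysql test < /tmp/l_meta.sql
mysql -S /tmp/mysql3307.sock -uroot -pmysql -e "LOAD DATA INFILE '/tmp/20180525test_ldata.sql' INTO TABLE test.l"

In this test the combined approach came out at 1.62s versus 1.99s for the plain mysqldump restore.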

