Logical Backup: mysqldump, SELECT ... INTO OUTFILE, and Recovery


Logical Backup: mysqldump

The mysqldump backup tool was originally written by Igor Romanenko. It is typically used to dump a database for backup or to port data between databases, for example when upgrading from a lower-version MySQL database to a higher one, or when migrating from MySQL to another database such as Oracle or SQL Server.

The syntax for mysqldump is as follows:

mysqldump [arguments] > file_name

If you want to back up all the databases, use the --all-databases option:

mysqldump --all-databases > dump.sql

If you want to back up specific databases, use the --databases option:

mysqldump --databases db1 db2 db3 > dump.sql

If you want to back up the test schema, you can use the following statement:

mysqldump --single-transaction test > test_backup.sql

We use the --single-transaction option to ensure a consistent backup. The resulting test_backup.sql is a text file; its contents can be viewed with the cat command:

cat test_backup.sql

As you can see, the backup file contains the table structure and the data, all represented as SQL statements. The comments at the beginning and end of the file set MySQL session parameters, generally so that the restore runs more efficiently and accurately. After that come the CREATE TABLE statements, followed by the INSERT statements.
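For orientation, an abridged sketch of what such a dump file typically looks like is shown below; the header comments vary by MySQL version, and the table definition here is a hypothetical example:

-- MySQL dump (version and host header comments)
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
......
CREATE TABLE `a` (
  `b` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `a` VALUES (1),(2);
......
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;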

mysqldump has many parameter options; you can list all of them with the mysqldump --help command. Some parameters have abbreviations, for example -l for --lock-tables. Some of the more important parameters are highlighted below.

--single-transaction: Execute START TRANSACTION before the backup begins to obtain a consistent backup. This is currently valid only for the InnoDB storage engine. When this parameter is enabled, make sure no DDL statements are executed during the backup, because consistent reads are not isolated from DDL statements.

--lock-tables (-l): During the backup, lock all tables under each schema in turn. This is typically used for the MyISAM storage engine: the database can only be read while it is being backed up, but the backup still guarantees consistency. For the InnoDB storage engine this parameter is unnecessary; use --single-transaction instead. --lock-tables and --single-transaction are mutually exclusive and cannot be used at the same time, so if your MySQL database contains both MyISAM and InnoDB tables, your only choice is --lock-tables. Also note that, as mentioned, --lock-tables locks the tables of each schema in turn, so it only guarantees the consistency of the table backups within each schema, not of the tables across all schemas.
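For example, a minimal sketch for a schema that contains only MyISAM tables (the schema name myisam_db is an assumption):

mysqldump --lock-tables myisam_db > myisam_db_backup.sql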

--lock-all-tables (-x): Lock all tables in all schemas during the backup. This avoids the problem just mentioned, that --lock-tables cannot lock all tables at the same time.
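For example, a sketch of a dump that must be consistent across all schemas of a server with mixed storage engines:

mysqldump --lock-all-tables --all-databases > dump.sql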

--add-drop-database: Emit a DROP DATABASE statement before each CREATE DATABASE. This parameter needs to be used together with the --all-databases or --databases option, because by default the exported text file contains no CREATE DATABASE statement unless one of those options is specified. So you might see something like this:

mysqldump --single-transaction --add-drop-database --databases test > test_backup.sql

cat test_backup.sql

--master-data[=value]: The dump file generated with this parameter is primarily used to set up a replication slave. When value is 1, the CHANGE MASTER statement is recorded in the dump file; when value is 2, the CHANGE MASTER statement is written as an SQL comment. By default value is empty.

When value is 1, you will see in the backup file:

mysqldump --single-transaction --add-drop-database --master-data=1 --databases test > test_backup.sql

cat test_backup.sql

CHANGE MASTER TO MASTER_LOG_FILE='xen-server-bin.000006', MASTER_LOG_POS=8095;

When value is 2, you will see the CHANGE MASTER statement commented out in the backup file:

mysqldump --single-transaction --add-drop-database --master-data=2 --databases test > test_backup.sql

cat test_backup.sql

-- Position to start replication or point-in-time recovery from
-- CHANGE MASTER TO MASTER_LOG_FILE='xen-server-bin.000006', MASTER_LOG_POS=8095;

--master-data automatically turns off the --lock-tables option. If the --single-transaction option is not also used, the --lock-all-tables option is enabled automatically.

--events (-E): Back up events in the event scheduler.

--routines (-R): Back up stored procedures and functions.

--triggers: Back up triggers.
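Note that --routines and --events are not enabled by default, so a backup meant to include stored programs usually adds them explicitly; a minimal sketch reusing the test schema from the earlier examples:

mysqldump --single-transaction --events --routines --triggers test > test_backup.sql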

--hex-blob: Export BINARY, VARBINARY, BLOB, and BIT columns in hexadecimal format. mysqldump output is generally a text file, but if the exported data contains these types, some characters may not display correctly in text mode; with the --hex-blob option the values are shown in hexadecimal format instead, for example:

mysqldump --single-transaction --add-drop-database --master-data=2 --no-autocommit --databases test3 > test3_backup.sql

cat test3_backup.sql

LOCK TABLES `a` WRITE;
set autocommit=0;
INSERT INTO `a` VALUES (0x61000000000000000000);
UNLOCK TABLES;

As you can see, the data here is exported as 0x61000000000000000000 (hexadecimal format).

--tab=path (-T path): Produce tab-separated data files. For each table, mysqldump creates a tbl_name.sql file containing the CREATE TABLE statement and a tbl_name.txt file containing the data. You can use --fields-terminated-by=..., --fields-enclosed-by=..., --fields-optionally-enclosed-by=..., --fields-escaped-by=..., and --lines-terminated-by=... to change the default delimiters, line terminator, and so on, for example:

mysqldump --single-transaction --add-drop-database --tab="/usr/local/mysql/data/test" test

Most DBAs prefer to export a table with SELECT ... INTO OUTFILE, but mysqldump can do this work as well, and in addition it can export multiple tables at once and guarantee the consistency of the exported data.

--where='where_condition' (-w 'where_condition'): Export only the rows matching the given condition.

For example, export the rows of table a in the test schema where column b is greater than 2, as shown below.

mysqldump --single-transaction --where='b>2' test a > a.sql

SELECT ... INTO OUTFILE

The SELECT ... INTO OUTFILE statement is also a logical backup method; more precisely, it exports the data of a table.

The syntax of SELECT ... INTO OUTFILE is as follows:

SELECT [column 1], [column 2] ...
INTO OUTFILE 'file_name'
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
FROM tbl_name WHERE ...

FIELDS TERMINATED BY 'string' specifies the delimiter between columns; [[OPTIONALLY] ENCLOSED BY 'char'] specifies the character used to enclose strings; ESCAPED BY 'char' specifies the escape character; LINES STARTING BY 'string' specifies the prefix of each line; and LINES TERMINATED BY 'string' specifies the terminator of each line.

If you do not specify any FIELDS or LINES options, the following defaults are used:

FIELDS TERMINATED BY '\t'
ENCLOSED BY ''
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
STARTING BY ''

file_name is the exported file. The directory containing the file must be writable by the mysql user (for example, owned mysql:mysql); otherwise MySQL reports that it has no permission to export:

SELECT * INTO OUTFILE '/root/a.txt' FROM a;
ERROR 1 (HY000): Can't create/write to file '/root/a.txt' (Errcode: 13)

View the file exported from table a through SELECT ... INTO OUTFILE:

SELECT * INTO OUTFILE '/home/mysql/a.txt' FROM a;
quit

cat /home/mysql/a.txt
1	a
2	b

You can see that the exported file is tab-delimited by default. If you want another delimiter, such as ",", use the FIELDS TERMINATED BY 'string' option, for example:

MySQL test-e "select * to OutFile '/home/mysql/a.txt ' fields terminated by ', ' from a ';

cat /home/mysql/a.txt
1,a
2,b

On the Windows platform, because the newline character is "\r\n", you may need to specify the LINES TERMINATED BY option when exporting, for example:

MySQL test-e "select * to OutFile '/home/mysql/a.txt ' fields terminated by ', ' lines terminated by ' \ r \ n ' from a ';

od -c a.txt
0000000   1   ,   a  \r  \n   2   ,   b  \r  \n   3
0000017

Recovering a mysqldump Logical Backup

The mysqldump recovery operation is relatively simple: because the backup file consists of exported SQL statements, you generally only need to execute this file. You can use the following method:

mysql -uroot -p123456 < test_backup.sql

If the export included SQL statements that create and drop databases, then when a schema is dropped you must ensure that its directory contains no files unrelated to the database; otherwise you may receive the following error:

drop database test;
ERROR 1010 (HY000): Error dropping database (can't rmdir './test', errno: 39)

Because a logical backup file is made up of SQL statements, you can also execute it with the source command, as follows:

source /home/mysql/test_backup.sql;
Query OK, 0 rows affected (0.00 sec)
......
Query OK, 0 rows affected (0.00 sec)

A database can be recovered through mysqldump, but a common problem is that mysqldump can export stored procedures, triggers, events, and data, yet it cannot export views. Therefore, if your database also uses views, after backing up with mysqldump you need to export the view definitions as well, or save the views' .frm files, and import them on recovery to ensure a complete restore of the mysqldump backup.
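As a stopgap, the view definitions can be extracted from information_schema before recovery; a minimal sketch, where the output file name view_defs.txt is an assumption:

mysql test -e "SELECT table_name, view_definition FROM information_schema.views WHERE table_schema='test';" > view_defs.txt

Alternatively, SHOW CREATE VIEW can be run for each view and its output saved.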

LOAD DATA INFILE

If data exported with mysqldump --tab or SELECT ... INTO OUTFILE needs to be restored, it must be imported with the LOAD DATA INFILE command.

The syntax of LOAD DATA INFILE is as follows:

LOAD DATA [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [CHARACTER SET charset_name]
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
    [IGNORE number LINES]
    [(col_name_or_user_var, ...)]
    [SET col_name = expr, ...]

To use LOAD DATA INFILE on a server-side file, you must have the FILE privilege. The import format options are exactly the same as those of the SELECT ... INTO OUTFILE command described earlier. The IGNORE number LINES option skips the first number lines of the import.
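If the account lacks this privilege, it can be granted; FILE is a global privilege, so it must be granted ON *.* (the account name backup_user below is hypothetical):

GRANT FILE ON *.* TO 'backup_user'@'localhost';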

Here is an example of importing a file with the LOAD DATA INFILE command while ignoring the first line of input:

LOAD DATA INFILE '/home/mysql/a.txt' INTO TABLE a IGNORE 1 LINES;

To speed up imports into the InnoDB storage engine, you may want the import to skip foreign key checks, which can be done as follows.

SET @@foreign_key_checks = 0;
LOAD DATA INFILE '/home/mysql/a.txt' INTO TABLE a;
SET @@foreign_key_checks = 1;

You can also import into specified columns; for example, import data into columns a and b, and set column c to a+b:

CREATE TABLE b (a INT, b INT, c INT, PRIMARY KEY (a)) ENGINE=InnoDB;
LOAD DATA INFILE '/home/mysql/a.txt' INTO TABLE b FIELDS TERMINATED BY ',' (a, b) SET c=a+b;

The LOAD DATA INFILE command is not just for importing data; it can also be used to monitor the Linux operating system. For example, if you need to monitor CPU usage, you can do so by loading /proc/stat.

First we need to create a table cpu_stat to record the CPU statistics; its structure is as follows:

CREATE TABLE IF NOT EXISTS dba.cpu_stat (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    value CHAR(25) NOT NULL,
    user BIGINT,
    nice BIGINT,
    system BIGINT,
    idle BIGINT,
    iowait BIGINT,
    irq BIGINT,
    softirq BIGINT,
    steal BIGINT,
    guest BIGINT,
    other BIGINT,
    time DATETIME
);

You can then load the /proc/stat file with the LOAD DATA INFILE command. Some of the values need to be converted, as shown in the following statement:

LOAD DATA INFILE '/proc/stat'
IGNORE INTO TABLE dba.cpu_stat
FIELDS TERMINATED BY ' '
(@value, @val1, @val2, @val3, @val4, @val5, @val6, @val7, @val8, @val9, @val10)
SET
value = @value,
user = IF(@value NOT LIKE 'cpu%', NULL, IF(@value != 'cpu', IFNULL(@val1,0), IFNULL(@val2,0))),
nice = IF(@value NOT LIKE 'cpu%', NULL, IF(@value != 'cpu', IFNULL(@val2,0), IFNULL(@val3,0))),
system = IF(@value NOT LIKE 'cpu%', NULL, IF(@value != 'cpu', IFNULL(@val3,0), IFNULL(@val4,0))),
idle = IF(@value NOT LIKE 'cpu%', NULL, IF(@value != 'cpu', IFNULL(@val4,0), IFNULL(@val5,0))),
iowait = IF(@value NOT LIKE 'cpu%', NULL, IF(@value != 'cpu', IFNULL(@val5,0), IFNULL(@val6,0))),
irq = IF(@value NOT LIKE 'cpu%', NULL, IF(@value != 'cpu', IFNULL(@val6,0), IFNULL(@val7,0))),
softirq = IF(@value NOT LIKE 'cpu%', NULL, IF(@value != 'cpu', IFNULL(@val7,0), IFNULL(@val8,0))),
steal = IF(@value NOT LIKE 'cpu%', NULL, IF(@value != 'cpu', IFNULL(@val8,0), IFNULL(@val9,0))),
guest = IF(@value NOT LIKE 'cpu%', NULL, IF(@value != 'cpu', IFNULL(@val9,0), IFNULL(@val10,0))),
other = IF(@value LIKE 'cpu%', user + nice + system + idle + iowait + irq + softirq + steal + guest, @val1),
time = NOW();

Then look at the table cpu_stat; you should see something like this:

SELECT * FROM dba.cpu_stat\G
*************************** 1. row ***************************
     id: 1
  value: cpu
   user: 2632896
   nice: 67761
 system: 688488
   idle: 4136329105
 iowait: 1468238
    irq: 0
softirq: 106303
  steal: 148300
  guest: 0
  other: 4141441091
   time: 2010-09-10 12:01:04
......
*************************** 14. row ***************************
     id: 14
  value: procs_running
   user: NULL
   nice: NULL
 system: NULL
   idle: NULL
 iowait: NULL
    irq: NULL
softirq: NULL
  steal: NULL
  guest: NULL
  other: 1
   time: 2010-09-10 12:01:04
*************************** 15. row ***************************
     id: 15
  value: procs_blocked
   user: NULL
   nice: NULL
 system: NULL
   idle: NULL
 iowait: NULL
    irq: NULL
softirq: NULL
  steal: NULL
  guest: NULL
  other: 0
   time: 2010-09-10 12:01:04
15 rows in set (0.00 sec)

You can then set up a scheduled task so that the LOAD DATA INFILE statement runs automatically, logging the CPU statistics at each point in time into the table cpu_stat, as sketched below.
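Because LOAD DATA INFILE is not permitted inside stored programs (including events), an external scheduler such as cron is one option. A minimal sketch, where the credentials are assumptions and /home/mysql/load_cpu_stat.sql is a hypothetical script containing the LOAD DATA INFILE statement above:

# crontab entry: sample /proc/stat into dba.cpu_stat once a minute
* * * * * mysql -uroot -pPASSWORD < /home/mysql/load_cpu_stat.sql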

Executing the following statement then gives the CPU usage between adjacent sampling points:

SELECT
    100 * ((new.user - old.user) / (new.other - old.other)) user,
    100 * ((new.nice - old.nice) / (new.other - old.other)) nice,
    100 * ((new.system - old.system) / (new.other - old.other)) system,
    100 * ((new.idle - old.idle) / (new.other - old.other)) idle,
    100 * ((new.iowait - old.iowait) / (new.other - old.other)) iowait,
    100 * ((new.irq - old.irq) / (new.other - old.other)) irq,
    100 * ((new.softirq - old.softirq) / (new.other - old.other)) softirq,
    100 * ((new.steal - old.steal) / (new.other - old.other)) steal,
    100 * ((new.guest - old.guest) / (new.other - old.other)) guest,
    new.time
FROM dba.cpu_stat old,
     dba.cpu_stat new
WHERE new.id - 15 = old.id
    AND old.value = 'cpu'
    AND new.value = old.value\G

Similarly, we can apply the approach shown above to the /proc/diskstats file, so that disks can be monitored as well; a sketch follows.
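A minimal sketch of such a disk-monitoring table; /proc/diskstats carries more fields per line than are captured here, and all column names are assumptions:

CREATE TABLE IF NOT EXISTS dba.disk_stat (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    device CHAR(32),
    reads_completed BIGINT,
    sectors_read BIGINT,
    writes_completed BIGINT,
    sectors_written BIGINT,
    time DATETIME
);

The file can then be loaded with the same user-variable and SET-clause technique used for /proc/stat, with the field positions adjusted to the /proc/diskstats layout.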

mysqlimport

mysqlimport is a command-line program provided with the MySQL database. Essentially it is a command-line interface to LOAD DATA INFILE, and most of its options correspond directly to the LOAD DATA INFILE syntax. Its syntax is as follows:

mysqlimport [options] db_name textfile1 [textfile2 ...]

Unlike LOAD DATA INFILE, the mysqlimport command can import multiple tables at once and, through the --use-threads parameter, import different files concurrently. Concurrency here refers to importing multiple files in parallel; it does not mean mysqlimport can import a single file concurrently, which is a different matter. Moreover, concurrent imports into the same table generally perform no better than serial ones.

The following example imports two tables concurrently with mysqlimport:

mysqlimport --use-threads=2 test /home/mysql/t.txt /home/mysql/s.txt

If you look at the MySQL thread list while the above command is running, you should see something like this:

SHOW FULL PROCESSLIST\G
*************************** 1. row ***************************
     Id: 46
   User: rep
   Host: www.dao.com:1028
     db: NULL
Command: Binlog Dump
   Time: 37651
  State: Master has sent all binlog to slave; waiting for binlog to be updated
   Info: NULL
*************************** 2. row ***************************
     Id: 77
   User: root
   Host: localhost
     db: test
Command: Query
   Time: 0
  State: NULL
   Info: SHOW FULL PROCESSLIST
*************************** 3. row ***************************
     Id: 83
   User: root
   Host: localhost
     db: test
Command: Query
   Time: 73
  State: NULL
   Info: LOAD DATA INFILE '/home/mysql/t.txt' INTO TABLE `t` IGNORE 0 LINES
*************************** 4. row ***************************
     Id: 84
   User: root
   Host: localhost
     db: test
Command: Query
   Time: 73
  State: NULL
   Info: LOAD DATA INFILE '/home/mysql/s.txt' INTO TABLE `s` IGNORE 0 LINES
4 rows in set (0.00 sec)

mysqlimport actually executes two LOAD DATA INFILE statements to complete the concurrent import operation.
