MySQL InnoDB data recovery tool usage summary

This article introduces an open-source MySQL InnoDB data recovery tool, innodb-tools, which extracts table row records from raw data files in order to recover data from lost or corrupted MySQL tables. For example, if you accidentally execute DROP TABLE, TRUNCATE TABLE, or DROP DATABASE, you can recover the data using the approach described below.

Most of the following content is based on the Percona Data Recovery Tool for InnoDB documentation, which is in English and rather hard to follow. Here is a summary of my practical experience, for your reference.

Before introducing data recovery with innodb-tools, a few points should be made clear:

1. This tool only works on InnoDB/XtraDB tables; it cannot recover MyISAM tables (note: Percona claims to have a set of tools for recovering MyISAM tables, but I have not tried them).

2. This tool works on a saved copy of the MySQL data files; it does not require a running MySQL server.

3. Not all data can be recovered. For example, data that has been overwritten cannot be recovered; such cases may require system-level or physical recovery, which is beyond the scope of this tool.

4. The best time to act is to back up the MySQL data files as soon as you discover the data loss.

5. Using this tool requires manual work; it is not fully automated.

6. The recovery process depends on your knowledge of the lost data; you may need to choose between different versions of the data during recovery. The better you know your data, the better your chances of recovering it.

Next, we will walk through an example of how to use this tool for recovery.

1. Prerequisites
First, it is important to understand that innodb-tools does not recover data by connecting to an online database; it works offline on a copy of the data. Note: do not copy InnoDB files directly while MySQL is running; this is unsafe and will compromise the recovery process.
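For example, a minimal snapshot might look like this (a sketch only: the service name and datadir path are assumptions, so adjust them to your installation):

# Stop the server so the files on disk are consistent, then copy them aside.
sudo service mysql stop
sudo mkdir -p /backup/mysql-raw
sudo cp -a /var/lib/mysql/ibdata1 /backup/mysql-raw/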

To recover data, you must know the structure of the table to be recovered (column names and data types). The simplest way to obtain it is SHOW CREATE TABLE; several alternative methods are introduced later. Therefore, having another MySQL server as a backup can help with an innodb-tools recovery even if its data is very old or the table holds no records, although this is not required.
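If such a server is reachable, the structure can be captured up front, for example (a sketch; the host, credentials, and output file name are placeholders):

mysql --host=localhost --user=root --password=123456 sakila -e "SHOW CREATE TABLE customer\G" > customer_structure.txt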


2. Simple example
mysql> truncate table customer;
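For context, the accident being recovered from can be reproduced against the sakila sample database roughly as follows (a sketch; the credentials are placeholders, and a stock sakila customer table holds 599 rows):

mysql -uroot -p123456 sakila -e "SELECT COUNT(*) FROM customer"    # 599 before the accident
mysql -uroot -p123456 sakila -e "TRUNCATE TABLE customer"
mysql -uroot -p123456 sakila -e "SELECT COUNT(*) FROM customer"    # 0 afterwards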
3. Build tools
Building the innodb-tools tool requires a C compiler, the make tool, and so on.
1. Download and unpack the innodb-tools source code:
wget https://launchpad.net/percona-data-recovery-tool-for-innodb/trunk/release-0.5/+download/percona-data-recovery-tool-for-innodb-0.5.tar.gz
tar -zxvf percona-data-recovery-tool-for-innodb-0.5.tar.gz

2. Enter the mysql-source directory under the unpacked root directory and run the configure command (note: do not run make yet):
cd percona-data-recovery-tool-for-innodb-0.5/mysql-source
./configure

3. After the configuration step completes, return to the unpacked root directory and run make to compile the page_parser and constraints_parser tools:
cd ..
make

The page_parser tool parses a table's page and row structure according to InnoDB's underlying storage format. The constraints_parser tool is not used yet; it must be recompiled later, after the table definition has been generated.
If a problem occurs during compilation, refer to the troubleshooting notes in the original documentation. No problems occurred in this article, so they are not listed one by one.
4. Extract the required pages
The default InnoDB page size is 16 KB, and each page belongs to one index of one specific table. The page_parser tool reads the data file and copies each page into a separate file according to the index ID in the page header.
If your MySQL server is configured with innodb_file_per_table=1, the system has already done this for you: all the required pages are in the table's .ibd file, and you usually do not need to split it further. However, if the .ibd file may contain multiple indexes, splitting the pages out separately is still worthwhile. If innodb_file_per_table is not enabled, the data is stored in a global tablespace (usually a file named ibdata1, as is the case in this article), and the file must be split into individual pages.
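To make this concrete, the index ID of any single page can be read by hand. The sketch below is not part of the toolset; the offsets follow the InnoDB on-disk format, where the 8-byte index ID sits at byte offset 66 of a page (38 bytes of FIL header plus 28 bytes of page-header fields). For the customer PRIMARY index in this article's example, reading its root page 50 should print 000000000000011e, i.e. index ID 286:

# Extract one 16 KB page (page 50) from ibdata1 and dump its 8-byte index ID as hex.
dd if=/path/to/ibdata1 bs=16384 skip=50 count=1 2>/dev/null | od -An -tx1 -j66 -N8 | tr -d ' '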
4.1 Splitting into pages
Run page_parser to split the data file:
If MySQL is a version earlier than 5.0 and InnoDB uses the REDUNDANT row format, run the following command:
./page_parser -4 -f /path/to/ibdata1
If MySQL is 5.0 or later and InnoDB uses the COMPACT row format, run the following command:
./page_parser -5 -f /path/to/ibdata1
After it runs, the page_parser tool creates a pages-<TIMESTAMP> directory, where TIMESTAMP is the UNIX system timestamp. Inside this directory, it creates a subdirectory for each index ID and places that index's pages there. For example:
pages-1330842944/FIL_PAGE_INDEX/0-1/1-00000008.page
pages-1330842944/FIL_PAGE_INDEX/0-1/6-00000008.page
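A quick way to see how the pages were distributed across indexes (a sketch, using the directory name from the example above):

for d in pages-1330842944/FIL_PAGE_INDEX/*/ ; do printf '%s: %s pages\n' "$d" "$(ls "$d" | wc -l)"; done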

4.2 Select the desired index ID
In general, we recover a table from its PRIMARY key (the PRIMARY index), because it contains all the rows. The steps are as follows:
If the database is still running and the table has not been dropped, you can start the InnoDB Table Monitor, which writes all tables and indexes, along with their index IDs, to the MySQL server's error log. Create an innodb_table_monitor table to have the InnoDB storage engine dump how its tables and their indexes are stored:
mysql> CREATE TABLE innodb_table_monitor (id int) ENGINE=InnoDB;
If innodb_table_monitor already exists, drop it and create it again. Once output appears in the MySQL error log, you can drop the table to stop any further monitoring output. An example of the output is as follows:
TABLE: name sakila/customer, id 0 142, columns 13, indexes 4, appr. rows 0
COLUMNS: customer_id: DATA_INT len 2 prec 0; store_id: DATA_INT len 1 prec 0; first_name: type 12 len 135 prec 0; last_name: type 12 len 135 prec 0; email: type 12 len 150 prec 0; address_id: DATA_INT len 2 prec 0; active: DATA_INT len 1 prec 0; create_date: DATA_INT len 8 prec 0; last_update: DATA_INT len 4 prec 0; DB_ROW_ID: DATA_SYS prtype 256 len 6 prec 0; DB_TRX_ID: DATA_SYS prtype 257 len 6 prec 0; DB_ROLL_PTR: DATA_SYS prtype 258 len 7 prec 0;
INDEX: name PRIMARY, id 0 286, fields 1/11, type 3
  root page 50, appr. key vals 0, leaf pages 1, size pages 1
  FIELDS: customer_id DB_TRX_ID DB_ROLL_PTR store_id first_name last_name email address_id active create_date last_update
INDEX: name idx_fk_store_id, id 0 287, fields 1/2, type 0
  root page 56, appr. key vals 0, leaf pages 1, size pages 1
  FIELDS: store_id customer_id
INDEX: name idx_fk_address_id, id 0 288, fields 1/2, type 0
  root page 63, appr. key vals 0, leaf pages 1, size pages 1
  FIELDS: address_id customer_id
INDEX: name idx_last_name, id 0 289, fields 1/2, type 0
  root page 1493, appr. key vals 0, leaf pages 1, size pages 1
  FIELDS: last_name customer_id

In this article we are recovering the customer table of the sakila database, and from the output above we can obtain its primary key information:
INDEX: name PRIMARY, id 0 286, fields 1/11, type 3
Its index ID is 0 286, so the InnoDB pages to be recovered are located in the 0-286 subdirectory.
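If the error log is long, the relevant block can be pulled out directly, for example (the log path is an assumption; adjust it to your installation):

grep -A 20 'TABLE: name sakila/customer' /var/log/mysql/error.log | grep 'INDEX: name PRIMARY'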
Note: the above follows the original documentation's method of obtaining a table's index ID. In practice, this article takes a simpler approach and recovers directly from all the InnoDB pages generated by page_parser. Practice shows that this also works :)
5. Generate table definitions
In step 4 we located the required data. Next we need to determine the table structure, create a table definition, compile it into constraints_parser, and then use that tool to extract the table's rows from the InnoDB pages.
A table definition includes the table's columns, their order, and their data types. If the MySQL server is still running and the table has not been dropped, SHOW CREATE TABLE can collect this information. The table structure is then turned into a table definition written as a C struct and compiled into the constraints_parser tool. The C struct definition is stored in include/table_defs.h.
If possible, the easiest way to create the table definition is the create_defs.pl Perl script: it connects to the MySQL server, reads the SHOW CREATE TABLE output, and prints the generated table definition to standard output. The following example redirects the result into include/table_defs.h:

$ ./create_defs.pl --host=localhost --user=root --password=123456 --db=sakila --table=customer > include/table_defs.h
The table structure in the example is as follows:
CREATE TABLE `customer` (
  `customer_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  `store_id` tinyint(3) unsigned NOT NULL,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `email` varchar(50) DEFAULT NULL,
  `address_id` smallint(5) unsigned NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '1',
  `create_date` datetime NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`customer_id`),
  KEY `idx_fk_store_id` (`store_id`),
  KEY `idx_fk_address_id` (`address_id`),
  KEY `idx_last_name` (`last_name`),
  CONSTRAINT `fk_customer_address` FOREIGN KEY (`address_id`) REFERENCES `address` (`address_id`) ON UPDATE CASCADE,
  CONSTRAINT `fk_customer_store` FOREIGN KEY (`store_id`) REFERENCES `store` (`store_id`) ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8

The following table definition is generated:
#ifndef table_defs_h
#define table_defs_h

// Table definitions
table_def_t table_definitions[] = {
  {
    name: "customer",
    {
      { /* smallint(5) unsigned */
        name: "customer_id",
        type: FT_UINT,
        fixed_length: 2,
        has_limits: TRUE,
        limits: {
          can_be_null: FALSE,
          uint_min_val: 0,
          uint_max_val: 65535
        },
        can_be_null: FALSE
      },
      { /* InnoDB's internally used field */
        name: "DB_TRX_ID",
        type: FT_INTERNAL,
        fixed_length: 6,
        can_be_null: FALSE
      },
      { /* InnoDB's internally used field */
        name: "DB_ROLL_PTR",
        type: FT_INTERNAL,
        fixed_length: 7,
        can_be_null: FALSE
      },
      { /* tinyint(3) unsigned */
        name: "store_id",
        type: FT_UINT,
        fixed_length: 1,
        has_limits: TRUE,
        limits: {
          can_be_null: FALSE,
          uint_min_val: 0,
          uint_max_val: 255
        },
        can_be_null: FALSE
      },
      { /* varchar(45) */
        name: "first_name",
        type: FT_CHAR,
        min_length: 0,
        max_length: 45,
        has_limits: TRUE,
        limits: {
          can_be_null: FALSE,
          char_min_len: 0,
          char_max_len: 45,
          char_ascii_only: TRUE
        },
        can_be_null: FALSE
      },
      { /* varchar(45) */
        name: "last_name",
        type: FT_CHAR,
        min_length: 0,
        max_length: 45,
        has_limits: TRUE,
        limits: {
          can_be_null: FALSE,
          char_min_len: 0,
          char_max_len: 45,
          char_ascii_only: TRUE
        },
        can_be_null: FALSE
      },
      { /* varchar(50) */
        name: "email",
        type: FT_CHAR,
        min_length: 0,
        max_length: 50,
        has_limits: TRUE,
        limits: {
          can_be_null: TRUE,
          char_min_len: 0,
          char_max_len: 50,
          char_ascii_only: TRUE
        },
        can_be_null: TRUE
      },
      { /* smallint(5) unsigned */
        name: "address_id",
        type: FT_UINT,
        fixed_length: 2,
        has_limits: TRUE,
        limits: {
          can_be_null: FALSE,
          uint_min_val: 0,
          uint_max_val: 65535
        },
        can_be_null: FALSE
      },
      { /* tinyint(1) */
        name: "active",
        type: FT_INT,
        fixed_length: 1,
        can_be_null: FALSE
      },
      { /* datetime */
        name: "create_date",
        type: FT_DATETIME,
        fixed_length: 8,
        can_be_null: FALSE
      },
      { /* timestamp */
        name: "last_update",
        type: FT_UINT,
        fixed_length: 4,
        can_be_null: FALSE
      },
      { type: FT_NONE }
    }
  },
};

#endif


If necessary, edit include/table_defs.h to adjust it as needed, then recompile the constraints_parser tool against it:
$ make
gcc -DHAVE_OFFSET64_T -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -g -I include -I mysql-source/innobase/include -c tables_dict.c -o lib/tables_dict.o
gcc -DHAVE_OFFSET64_T -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -g -I include -I mysql-source/innobase/include -o constraints_parser constraints_parser.c lib/tables_dict.o lib/print_data.o lib/check_data.o lib/libut.a lib/libmystrings.a
gcc -DHAVE_OFFSET64_T -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -g -I include -I mysql-source/innobase/include -o page_parser page_parser.c lib/tables_dict.o lib/libut.a

6. Extract row records from the pages
6.1 Merge pages into a single file
As mentioned above, we need to recover index ID 0 286; the pages containing its data are located in the pages-1246363747/0-286/ directory:
total 120
-rw-r--r-- 1 root root 16384 Jun 30 05:09 1254-00001254.page
-rw-r--r-- 1 root root 16384 Jun 30 05:09 1255-00001255.page
-rw-r--r-- 1 root root 16384 Jun 30 05:09 1256-00001256.page
-rw-r--r-- 1 root root 16384 Jun 30 05:09 1257-00001257.page
-rw-r--r-- 1 root root 16384 Jun 30 05:09 50-00000050.page
-rw-r--r-- 1 root root 16384 Jun 30 05:09 74-00000074.page

Run the following command to merge the pages in numeric order:
$ find pages-1246363747/0-286/ -type f -name '*.page' | sort -n | xargs cat > pages-1246363747/0-286/customer_pages_concatenated
The resulting file, pages-1246363747/0-286/customer_pages_concatenated, is used as input to the constraints_parser tool.
6.2 Run the constraints_parser tool
Now comes the core step of data recovery: running the constraints_parser tool to extract the row records. As with page_parser, you specify the InnoDB page format (COMPACT/REDUNDANT) with the -5 or -4 option, and the input file with -f.
Returning to the example, we can run constraints_parser like this (the following command recovers a single page; it can just as well be run against the merged file produced in step 6.1):
$ ./constraints_parser -5 -f pages-1246363747/0-286/50-00000050.page
Each line of output contains the table name followed by the table's column values. Note: the output may contain both valid and garbage row records. The original documentation describes how to adjust the table definition to obtain as much valid data as possible while filtering out the garbage rows:
customer 0 120 "32770 0" 0000-00-00 00:12:80 "0
customer 0 0 "0 0" 9120-22-48 "2
customer 61953 0 "2816 0" 7952-32-67 11:43:49 "0
customer 0 0 "0 0" 0000-00-00 00:00:00 "0
... snip ...
customer 0 0 "0 0" 0000-00-00 00:00:00 "16777728
customer 28262 114 "NULL 25965 117" 4603-91-96 76:21:28 "5111809
customer 0 82 "22867 77" 2775-94-58 03:19:18 "1397573972
customer 2 1 "PATRICIA" "JOHNSON" "PATRICIA.JOHNSON@sakilacustomer.org" 6 1 "2006-02-14 22:04:36" 1140008240
customer 3 1 "LINDA" "WILLIAMS" "LINDA.WILLIAMS@sakilacustomer.org" 7 1 "2006-02-14 22:04:36" 1140008240
customer 4 2 "BARBARA" "JONES" "BARBARA.JONES@sakilacustomer.org" 8 1 "2006-02-14 22:04:36" 1140008240
customer 5 1 "ELIZABETH" "BROWN" "ELIZABETH.BROWN@sakilacustomer.org" 9 1 "2006-02-14 22:04:36" 1140008240
customer 6 2 "JENNIFER" "DAVIS" "JENNIFER.DAVIS@sakilacustomer.org" 10 1 "2006-02-14 22:04:36" 1140008240
customer 7 1 "MARIA" "MILLER" "MARIA.MILLER@sakilacustomer.org" 11 1 "2006-02-14 22:04:36" 1140008240
customer 8 2 "SUSAN" "WILSON" "SUSAN.WILSON@sakilacustomer.org" 12 1 "2006-02-14 22:04:36" 1140008240
customer 9 2 "MARGARET" "MOORE" "MARGARET.MOORE@sakilacustomer.org" 13 1 "2006-02-14 22:04:36" 1140008240
... snip ...
customer 0 0 "0 0" 0000-00-00 00:00:00 "0
customer 0 0 "0 0" 7679-35-98 86:44:53 "720578985
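To feed the import in step 7, rerun the parser over the merged file from step 6.1 and redirect the rows to a file; this is presumably how the /tmp/customer_data.tsv used below was produced (a sketch; stderr is redirected separately so that diagnostics do not end up mixed into the row data):

$ ./constraints_parser -5 -f pages-1246363747/0-286/customer_pages_concatenated > /tmp/customer_data.tsv 2> /tmp/customer_data.err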

7. Import data into the database
Finally, to complete the recovery, import the output of the constraints_parser tool from step 6 into the database with the LOAD DATA INFILE command:
LOAD DATA INFILE '/tmp/customer_data.tsv'
REPLACE INTO TABLE customer
FIELDS TERMINATED BY '\t'
OPTIONALLY ENCLOSED BY '"'
LINES STARTING BY 'customer\t'
(customer_id, store_id, first_name, last_name, email,
 address_id, active, create_date, @last_update)
SET last_update = FROM_UNIXTIME(@last_update);
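A quick sanity check after the import (a sketch; the credentials are placeholders, and a stock sakila customer table holds 599 rows):

mysql -uroot -p123456 sakila -e "SELECT COUNT(*) FROM customer"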

At this point, the data recovery and import process is complete. I hope you never have occasion to practice the methods described in this article.

