MySQL Database InnoDB Data Recovery Tool Usage Summary (Detailed)

Based on practical experience, this article introduces an open-source MySQL InnoDB data recovery tool, innodb-tools, which recovers data from lost or corrupted MySQL tables by extracting row records from the raw data files. For example, after you accidentally execute DROP TABLE, TRUNCATE TABLE, or DROP DATABASE, you can recover the data in the following way.


Most of what follows is drawn from the Percona Data Recovery Tool for InnoDB documentation, which is in English and rather obscurely written. This article is a personal summary of hands-on experience, offered for reference and learning.


Before introducing the innodb-tools tool for data recovery, a few points need to be made clear:
1. This tool only works on InnoDB/XtraDB tables; it cannot recover MyISAM tables (note: Percona claims to have a set of tools for recovering MyISAM tables, but I have not tried them).
2. This tool works on a saved copy of the MySQL data files, not on a running MySQL server.
3. There is no guarantee that the data can always be recovered. For example, data that has been overwritten cannot be recovered; in such cases recovery at the file-system or hardware level may be required, which is beyond the scope of this tool.
4. The best time to recover is as soon as you discover the data loss: back up the MySQL data files as quickly as possible.
5. Using this tool requires some manual work; it is not fully automated.
6. The recovery process depends on your knowledge of the lost data, and you may need to choose between different versions of the data during recovery. The better you know your data, the better your chances of recovering it.
Next, let's use an example to illustrate how to recover data with this tool.


1. Prerequisites
First, it must be understood that the innodb-tools tool does not recover data by connecting to a running database; it works on an offline copy of the data. Note: do not copy InnoDB files directly while MySQL is running. This is unsafe and will affect the data recovery process.
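As a minimal sketch of taking that offline copy (the paths and init script are assumptions; adjust the data directory to your installation):
Copy Code code as follows:

/etc/init.d/mysql stop       # stop the server first: never copy InnoDB files while mysqld runs
mkdir -p /mnt/backup
# -a preserves ownership, permissions, and timestamps of the data files
cp -a /var/lib/mysql /mnt/backup/mysql-$(date +%Y%m%d)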


To complete a recovery, you must know the structure of the table to be recovered (column names and data types). The easiest way to obtain it is SHOW CREATE TABLE; several alternative methods will be introduced later as well. Therefore, a backup MySQL server, even one whose data is very old or whose tables hold no records, can help with an innodb-tools recovery. This is not a hard requirement, though.
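For example (a sketch; the host name and credentials are placeholders), the table structure can be pulled from such a backup server like this:
Copy Code code as follows:

mysql -h backup-host -uroot -p -e "SHOW CREATE TABLE sakila.customer\G"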


2. Simple example
Suppose the customer table in the sakila sample database is accidentally truncated:
Copy Code code as follows:

mysql> TRUNCATE TABLE customer;



3. Building the tools
Building the innodb-tools tools requires a C compiler, the make utility, and so on.
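On a Debian/Ubuntu-style system, the prerequisites might be installed as follows (an assumption; package names vary by distribution):
Copy Code code as follows:

# gcc/make build the tools; perl and DBD::mysql are needed later for create_defs.pl
apt-get install gcc make perl libdbd-mysql-perl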


1. Download and extract the innodb-tools source:
Copy Code code as follows:

wget https://launchpad.net/percona-data-recovery-tool-for-innodb/trunk/release-0.5/+download/percona-data-recovery-tool-for-innodb-0.5.tar.gz
tar -zxvf percona-data-recovery-tool-for-innodb-0.5.tar.gz



2. Enter the mysql-source directory under the extracted root directory and run the configure command (note: do not run the make command yet):
Copy Code code as follows:

cd percona-data-recovery-tool-for-innodb-0.5/mysql-source
./configure



3. After the configuration step completes, return to the extracted root directory and run the make command to build the page_parser and constraints_parser tools:
Copy Code code as follows:

cd ..
make



The page_parser tool parses a table's pages and row structure according to InnoDB's underlying storage format. The constraints_parser tool is not used for the moment; it will need to be recompiled later, after the table definition has been created.


If problems occur during the build, consult the official documentation. No problems were encountered in this article, so they are not enumerated here.


4. Extract the required pages
The default size of an InnoDB page is 16KB, and each page belongs to one specific index of one specific table. The page_parser tool reads the data file and, based on the index ID in each page header, copies every page into a separate file.


If your MySQL server is configured with innodb_file_per_table=1, the system has already done the above for you: all the pages you need are in a single .ibd file, and you usually do not need to split it further. However, since an .ibd file can contain more than one index, it may still be necessary to separate the pages. If the MySQL server is not configured with innodb_file_per_table, the data is stored in the global tablespace (usually a file named ibdata1, as is the case in this article), and you must split the file into pages.
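To check which case applies (a sketch; credentials are placeholders):
Copy Code code as follows:

mysql -uroot -p -e "SHOW VARIABLES LIKE 'innodb_file_per_table'"
# OFF means the data lives in the shared tablespace (typically ibdata1)
# and must be split by page as described below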


4.1 Splitting pages
Run the page_parser tool to split the data file:


• If MySQL is a version before 5.0 and InnoDB pages use the REDUNDANT format, run the following command:


Copy Code code as follows:

./page_parser -4 -f /path/to/ibdata1



• If MySQL is version 5.0 or later and InnoDB pages use the COMPACT format, run the following command:


Copy Code code as follows:

./page_parser -5 -f /path/to/ibdata1



After it runs, the page_parser tool creates a pages-<TIMESTAMP> directory, where TIMESTAMP is a UNIX timestamp. Inside it, a subdirectory is created for each index ID, named after the index ID of the pages it holds. For example:


Copy Code code as follows:

pages-1330842944/FIL_PAGE_INDEX/0-1/1-00000008.page
pages-1330842944/FIL_PAGE_INDEX/0-1/6-00000008.page



4.2 Selecting the needed index ID
In general, we need to recover from the table's primary key (the PRIMARY index), whose leaf pages contain all the row data. Here are some possible steps:


If the database is still running and the table has not been dropped, you can start the InnoDB Tablespace Monitor, which writes all tables and indexes, including their index IDs, to the MySQL server error log. Create an innodb_table_monitor table to collect how the InnoDB storage engine stores the table and its indexes:


Copy Code code as follows:

mysql> CREATE TABLE innodb_table_monitor (id int) ENGINE=InnoDB;



If innodb_table_monitor already exists, drop it first and then re-create it. Once the report has been written to the MySQL error log, you can drop the table again to stop further monitoring output.
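A sketch of that workflow from the shell (the error-log path and credentials are assumptions):
Copy Code code as follows:

# re-create the monitor table, wait for InnoDB to write its report, then drop it
mysql -uroot -p -e "DROP TABLE IF EXISTS test.innodb_table_monitor"
mysql -uroot -p -e "CREATE TABLE test.innodb_table_monitor (id int) ENGINE=InnoDB"
sleep 60                                               # give InnoDB time to write to the error log
grep -A 20 'sakila/customer' /var/log/mysql/error.log  # locate the table's section in the report
mysql -uroot -p -e "DROP TABLE test.innodb_table_monitor"   # stop further monitoring output

An example of the monitor output is as follows: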


Copy Code code as follows:



TABLE: name sakila/customer, id 0, columns, indexes 4, appr.rows 0
  COLUMNS: customer_id: DATA_INT len 2 prec 0; store_id: DATA_INT len 1 prec 0;
           first_name: type len 135 prec 0; last_name: type len 135 prec 0;
           email: type len prec 0; address_id: DATA_INT len 2 prec 0;
           active: DATA_INT len 1 prec 0; create_date: DATA_INT len 8 prec 0;
           last_update: DATA_INT len 4 prec 0; DB_ROW_ID: DATA_SYS prtype 256 len 6 prec 0;
           DB_TRX_ID: DATA_SYS prtype 257 len 6 prec 0; DB_ROLL_PTR: DATA_SYS prtype 258 len 7 prec 0;
  INDEX: name PRIMARY, id 0 286, fields 1/11, type 3
    root page, appr.key vals 0, leaf pages 1, size pages 1
    FIELDS: customer_id DB_TRX_ID DB_ROLL_PTR store_id first_name last_name email address_id active create_date last_update
  INDEX: name idx_fk_store_id, id 0 287, fields 1/2, type 0
    root page, appr.key vals 0, leaf pages 1, size pages 1
    FIELDS: store_id customer_id
  INDEX: name idx_fk_address_id, id 0 288, fields 1/2, type 0
    root page, appr.key vals 0, leaf pages 1, size pages 1
    FIELDS: address_id customer_id
  INDEX: name idx_last_name, id 0 289, fields 1/2, type 0
    root page 1493, appr.key vals 0, leaf pages 1, size pages 1
    FIELDS: last_name customer_id





Here we are recovering the customer table of the sakila database, and from the output we can read off its primary key information:


Copy Code code as follows:

INDEX: name PRIMARY, id 0 286, fields 1/11, type 3



The index ID is 0 286, so the InnoDB pages we need to recover are located in the 0-286 subdirectory.


Note: the method above for obtaining a table's index ID is the one described in the original reference document. In actual operation a simpler approach was taken: directly recovering all of the InnoDB pages that page_parser generated. Practice has shown that this method also works, as sketched below.
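A sketch of that shortcut, runnable once constraints_parser has been built in step 5 (the timestamp directory is the one from this article's example):
Copy Code code as follows:

# feed every extracted page to constraints_parser and collect whatever parses as rows
for p in pages-1330842944/FIL_PAGE_INDEX/*/*.page; do
    ./constraints_parser -5 -f "$p" >> all_rows.tsv
done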


5. Generate the table definition
In step 4 we found the data we need. Next we need to determine the table structure, create a table definition, compile it into constraints_parser, and then use that tool to extract the row records from the InnoDB pages.


The table definition contains the table's columns, the column order, and the data types. If the MySQL server is still running and the table has not been dropped, a simple SHOW CREATE TABLE collects this information. The table structure is then used to create a table definition expressed as a C struct, which is compiled into the constraints_parser tool. The C struct definitions are stored in include/table_defs.h.


The easiest way is to use the create_defs.pl Perl script. It connects to the MySQL server, reads the SHOW CREATE TABLE output, and prints the generated table definition to its standard output. Here is an example in which the result is redirected directly into include/table_defs.h:


Copy Code code as follows:

$ ./create_defs.pl --host=localhost --user=root --password=123456 --db=sakila --table=customer > include/table_defs.h



Here is the table structure used in this example:
Copy Code code as follows:



CREATE TABLE `customer` (
  `customer_id` smallint(5) unsigned NOT NULL auto_increment,
  `store_id` tinyint(3) unsigned NOT NULL,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `email` varchar(50) DEFAULT NULL,
  `address_id` smallint(5) unsigned NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '1',
  `create_date` datetime NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`customer_id`),
  KEY `idx_fk_store_id` (`store_id`),
  KEY `idx_fk_address_id` (`address_id`),
  KEY `idx_last_name` (`last_name`),
  CONSTRAINT `fk_customer_address` FOREIGN KEY (`address_id`) REFERENCES `address` (`address_id`) ON UPDATE CASCADE,
  CONSTRAINT `fk_customer_store` FOREIGN KEY (`store_id`) REFERENCES `store` (`store_id`) ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8





The following is the generated table definition:
Copy Code code as follows:



#ifndef table_defs_h
#define table_defs_h

// Table definitions
table_def_t table_definitions[] = {
    {
        name: "customer",
        {
            { /* smallint(5) unsigned */
                name: "customer_id",
                type: FT_UINT,
                fixed_length: 2,
                has_limits: TRUE,
                limits: {
                    can_be_null: FALSE,
                    uint_min_val: 0,
                    uint_max_val: 65535
                },
                can_be_null: FALSE
            },
            { /* InnoDB's internally used field */
                name: "DB_TRX_ID",
                type: FT_INTERNAL,
                fixed_length: 6,
                can_be_null: FALSE
            },
            { /* InnoDB's internally used field */
                name: "DB_ROLL_PTR",
                type: FT_INTERNAL,
                fixed_length: 7,
                can_be_null: FALSE
            },
            { /* tinyint(3) unsigned */
                name: "store_id",
                type: FT_UINT,
                fixed_length: 1,
                has_limits: TRUE,
                limits: {
                    can_be_null: FALSE,
                    uint_min_val: 0,
                    uint_max_val: 255
                },
                can_be_null: FALSE
            },
            { /* varchar(45) */
                name: "first_name",
                type: FT_CHAR,
                min_length: 0,
                max_length: 45,
                has_limits: TRUE,
                limits: {
                    can_be_null: FALSE,
                    char_min_len: 0,
                    char_max_len: 45,
                    char_ascii_only: TRUE
                },
                can_be_null: FALSE
            },
            { /* varchar(45) */
                name: "last_name",
                type: FT_CHAR,
                min_length: 0,
                max_length: 45,
                has_limits: TRUE,
                limits: {
                    can_be_null: FALSE,
                    char_min_len: 0,
                    char_max_len: 45,
                    char_ascii_only: TRUE
                },
                can_be_null: FALSE
            },
            { /* varchar(50) */
                name: "email",
                type: FT_CHAR,
                min_length: 0,
                max_length: 50,
                has_limits: TRUE,
                limits: {
                    can_be_null: TRUE,
                    char_min_len: 0,
                    char_max_len: 50,
                    char_ascii_only: TRUE
                },
                can_be_null: TRUE
            },
            { /* smallint(5) unsigned */
                name: "address_id",
                type: FT_UINT,
                fixed_length: 2,
                has_limits: TRUE,
                limits: {
                    can_be_null: FALSE,
                    uint_min_val: 0,
                    uint_max_val: 65535
                },
                can_be_null: FALSE
            },
            { /* tinyint(1) */
                name: "active",
                type: FT_INT,
                fixed_length: 1,
                can_be_null: FALSE
            },
            { /* datetime */
                name: "create_date",
                type: FT_DATETIME,
                fixed_length: 8,
                can_be_null: FALSE
            },
            { /* timestamp */
                name: "last_update",
                type: FT_UINT,
                fixed_length: 4,
                can_be_null: FALSE
            },
            { type: FT_NONE }
        }
    },
};

#endif





If needed, you can edit include/table_defs.h, and then recompile the constraints_parser tool against it:
Copy Code code as follows:

$ make
gcc -DHAVE_OFFSET64_T -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -g -I include -I mysql-source/include -I mysql-source/innobase/include -c tables_dict.c -o lib/tables_dict.o
gcc -DHAVE_OFFSET64_T -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -g -I include -I mysql-source/include -I mysql-source/innobase/include -o constraints_parser constraints_parser.c lib/tables_dict.o lib/print_data.o lib/check_data.o lib/libut.a lib/libmystrings.a
gcc -DHAVE_OFFSET64_T -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -g -I include -I mysql-source/include -I mysql-source/innobase/include -o page_parser page_parser.c lib/tables_dict.o lib/libut.a



6. Extract row records from the pages
6.1 Merging pages into one file
As mentioned earlier, we need to recover index ID 0 286; the pages containing its data are located in the pages-1246363747/0-286/ directory:


Copy Code code as follows:

total 120
-rw-r--r-- 1 root root 16384 Jun 05:09 1254-00001254.page
-rw-r--r-- 1 root root 16384 Jun 05:09 1255-00001255.page
-rw-r--r-- 1 root root 16384 Jun 05:09 1256-00001256.page
-rw-r--r-- 1 root root 16384 Jun 05:09 1257-00001257.page
-rw-r--r-- 1 root root 16384 Jun 05:09 50-00000050.page
-rw-r--r-- 1 root root 16384 Jun 05:09 74-00000050.page



Enter the following command to merge the pages:


Copy Code code as follows:

$ find pages-1246363747/0-286/ -type f -name '*.page' | sort -n | xargs cat > pages-1246363747/0-286/customer_pages_concatenated



The resulting file, pages-1246363747/0-286/customer_pages_concatenated, serves as the input to the constraints_parser tool.


6.2 Running the constraints_parser tool
Below is the most central step in recovering the data: running the constraints_parser tool to extract row records. Like the page_parser tool, it takes a -5 or -4 option to specify the InnoDB page format (COMPACT or REDUNDANT) and -f to specify the input file.


Returning to the example, we can run the constraints_parser tool as follows (the command below recovers a single page; you can also directly process the concatenated file of all pages produced in step 6.1):


Copy Code code as follows:

$ ./constraints_parser -5 -f pages-1246363747/0-286/50-00000050.page



Each row of output contains the table name followed by each column of the table. Note: the output may contain both valid and garbage row records. The corresponding section of the official documentation shows how to adjust the table definition to extract as much valid data as possible while filtering out garbage rows; that is not described in detail here.


Copy Code code as follows:



customer  0      120  ""           ""          ""                                     32770  0    "0000-00-00 00:12:80"  0
customer  0      0    ""           ""          ""                                     0      0    "9120-22-48 29:44:00"  2
customer  61953  0    ""           ""          ""                                     2816   0    "7952-32-67 11:43:49"  0
customer  0      0    ""           ""          ""                                     0      0    "0000-00-00 00:00:00"  0
... snip ...
customer  0      0    ""           ""          ""                                     0      0    "0000-00-00 00:00:00"  16777728
customer  28262  114  ""           ""          NULL                                   25965  117  "4603-91-96 76:21:28"  5111809
customer  0      82   ""           ""          ""                                     22867  77   "2775-94-58 03:19:18"  1397573972
customer  2      1    "PATRICIA"   "JOHNSON"   "PATRICIA.JOHNSON@sakilacustomer.org"  6      1    "2006-02-14 22:04:36"  1140008240
customer  3      1    "LINDA"      "WILLIAMS"  "LINDA.WILLIAMS@sakilacustomer.org"    7      1    "2006-02-14 22:04:36"  1140008240
customer  4      2    "BARBARA"    "JONES"     "BARBARA.JONES@sakilacustomer.org"     8      1    "2006-02-14 22:04:36"  1140008240
customer  5      1    "ELIZABETH"  "BROWN"     "ELIZABETH.BROWN@sakilacustomer.org"   9      1    "2006-02-14 22:04:36"  1140008240
customer  6      2    "JENNIFER"   "DAVIS"     "JENNIFER.DAVIS@sakilacustomer.org"    10     1    "2006-02-14 22:04:36"  1140008240
customer  7      1    "MARIA"      "MILLER"    "MARIA.MILLER@sakilacustomer.org"      11     1    "2006-02-14 22:04:36"  1140008240
customer  8      2    "SUSAN"      "WILSON"    "SUSAN.WILSON@sakilacustomer.org"      12     1    "2006-02-14 22:04:36"  1140008240
customer  9      2    "MARGARET"   "MOORE"     "MARGARET.MOORE@sakilacustomer.org"    13     1    "2006-02-14 22:04:36"  1140008240
... snip ...
customer  0      0    ""           ""          ""                                     0      0    "0000-00-00 00:00:00"  0
customer  0      0    ""           ""          ""                                     0      0    "7679-35-98 86:44:53"  720578985
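Since step 7 imports from a file, one rough way to post-filter this output is sketched below (my own assumption, not from the official docs): keep only rows whose create_date looks like a plausible datetime, and save them as /tmp/customer_data.tsv. Such a crude pattern may also drop valid rows, so adjust it to your data:
Copy Code code as follows:

# keep only rows with a sane-looking quoted datetime column
./constraints_parser -5 -f pages-1246363747/0-286/customer_pages_concatenated \
    | grep -E '"20[0-9]{2}-[01][0-9]-[0-3][0-9] [0-2][0-9]:[0-5][0-9]:[0-5][0-9]"' \
    > /tmp/customer_data.tsv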





7. Import data into the database
Finally, to complete the recovery, import the output of the constraints_parser tool from step 6 (redirected to a file such as /tmp/customer_data.tsv) into the database using the LOAD DATA INFILE command. The command is as follows:


Copy Code code as follows:

LOAD DATA INFILE '/tmp/customer_data.tsv'
REPLACE INTO TABLE customer
FIELDS TERMINATED BY '\t'
OPTIONALLY ENCLOSED BY '"'
LINES STARTING BY 'customer\t'
(customer_id, store_id, first_name, last_name, email,
 address_id, active, create_date, @last_update)
SET last_update = FROM_UNIXTIME(@last_update);
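An optional sanity check after the import (a sketch; credentials are placeholders):
Copy Code code as follows:

mysql -uroot -p sakila -e "SELECT COUNT(*) FROM customer; SELECT * FROM customer LIMIT 5;"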



At this point, the data recovery and import process is complete. I hope you never have occasion to put the method introduced in this article into practice.

