Recovering data from InnoDB data files in MySQL

Source: Internet
Author: User
Tags: percona

1. Brief description of the restoration principle
The tool's documentation describes the principle in more detail, so only a brief summary is given here. All InnoDB data is organized as indexes, and all records are stored in 16KB pages. Recovery proceeds in a few steps: split the data files into individual 16KB pages, then try to match records against each page's record markers, and output a record whenever its fields fit the sizes defined by the given table definition; such a fit is treated as a successful match.
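The page-carving step above can be sketched in a few lines of shell. This is only an illustration, not part of the Percona tool: it cuts a file into 16KB pieces with dd and reads each page's FIL_PAGE_TYPE field (2 bytes, big-endian, at offset 24 of every page; the value 17855, hex 0x45BF, marks an index page). The file name ibdata1.copy is a placeholder; for demonstration the script synthesizes a two-page dummy file if none exists.

```shell
#!/bin/bash
# Sketch of the first recovery step: split a data file into 16KB
# pages and print each page's FIL_PAGE_TYPE.
datafile=${1:-ibdata1.copy}   # placeholder name; point at your copy
pagesize=16384
# Demonstration only: synthesize a 2-page dummy file whose first page
# is marked FIL_PAGE_INDEX (0x45BF = 17855) if no real file is given.
if [ ! -f "$datafile" ]; then
    head -c $((2 * pagesize)) /dev/zero > "$datafile"
    printf '\x45\xbf' | dd of="$datafile" bs=1 seek=24 conv=notrunc 2>/dev/null
fi
outdir=pages
mkdir -p "$outdir"
size=$(stat -c %s "$datafile")
for ((i = 0; i < size / pagesize; i++)); do
    # carve page i out into its own 16KB file
    dd if="$datafile" of="$outdir/page-$i" bs=$pagesize skip=$i count=1 2>/dev/null
    # FIL_PAGE_TYPE: 2 bytes at offset 24, big-endian
    ptype=$(od -An -t u1 -j 24 -N 2 "$outdir/page-$i" | awk '{print $1 * 256 + $2}')
    echo "page $i type $ptype"
done
```

The real tool's page_parser does this splitting (and sorts pages by index); the sketch only shows the mechanism.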

2. Parallel recovery
Data recovery is usually a race against the clock. The PDRTI tool (Percona Data Recovery Tool for InnoDB) itself works serially, and a serial run can take a very long time. A simple shell script can run constraints_parser in parallel, which greatly shortens the recovery time. In practical experience, on reasonably good hardware, the actual recovery time can be cut to one-twentieth of the serial run; in other words, what originally needed 40 hours may take about 2 hours in parallel.

The following are two scripts for parallel recovery, for reference:


#!/bin/bash
ws=/u01/recovery
pagedir=/u01/recovery/pages-1372436970/fil_page_index
logdir=/u01/recovery/log
rectool=/u01/recovery/percona-data-recovery-tool-for-innodb-0.5/constraints_parser
cd `dirname $rectool`
count=0
page_count=353894
page_done=0
startdate=`date +%s`
for d1 in `ls $pagedir`
do
    count=$(($count + 1))
    echo "at dir $d1" > $logdir/$count.log
    thedate=`date +%s`
    # progress: pages done / total pages, with timestamps
    echo "$page_done/$page_count at $thedate from $startdate"
    total=`ls -l $pagedir/$d1/ | wc -l`
    page_done=$(($page_done + $total))
    threads=`ps axu | grep parser_jobs | grep -v grep | wc -l`
    echo $threads
    # throttle: wait until fewer than 48 parser_jobs workers are running
    while [ $threads -gt 48 ]
    do
        sleep 1
        threads=`ps axu | grep parser_jobs | grep -v grep | wc -l`
    done
    $ws/parser_jobs.sh $pagedir/$d1 > $ws/job.log 2>&1 &
done

parser_jobs.sh, the worker script invoked above:

#!/bin/bash
pagedir=/u01/recovery/pages-1372436970/fil_page_index
logdir=/u01/recovery/log
rectool=/u01/recovery/percona-data-recovery-tool-for-innodb-0.5/constraints_parser
logfile="$logdir/`basename $1`.log"
echo "$1" > $logfile
if [ -d $1 ]; then
    for d2 in `ls $1`
    do
        $rectool -5 -f $1/$d2 >> $logfile 2>/dev/null
    done
fi

3. Recovering from the Index
If you know the index structure of the table, and the data pages are partially corrupted but some index pages are still intact, you can extract additional field values from the surviving index pages in the same way.
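Pages belong to one index each, identified by the index_id stored in the index page header (8 bytes at offset 66 on an ordinary 16KB uncompressed page: 38-byte FIL header plus 28 bytes of index header). This is what makes it possible to point constraints_parser at only one index's pages, for example a surviving secondary index. A minimal sketch of reading that field; demo.page and the index_id value 123 are made-up demonstration values, not from the original article:

```shell
#!/bin/bash
# Sketch: read the index_id of an InnoDB index page.
page=${1:-demo.page}   # placeholder name; point at a carved page
if [ ! -f "$page" ]; then
    # demonstration page: zeros, with index_id 123 (big-endian)
    # written at offset 66 where PAGE_INDEX_ID lives
    head -c 16384 /dev/zero > "$page"
    printf '\x00\x00\x00\x00\x00\x00\x00\x7b' | dd of="$page" bs=1 seek=66 conv=notrunc 2>/dev/null
fi
# 8 bytes at offset 66, big-endian, accumulated into one integer
index_id=$(od -An -t u1 -j 66 -N 8 "$page" |
           awk '{id = 0; for (i = 1; i <= NF; i++) id = id * 256 + $i; print id}')
echo "index_id $index_id"
```

The real tool's page_parser already groups carved pages into per-index directories this way; the sketch only shows where the grouping key comes from.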

4. Problem-handling in emergency situations
A technical post-mortem of the Xiachufang (下厨房) incident claimed that "stopping MySQL first, to keep the disk from being written further, was the wrong emergency response." Normally, as long as the process has not been closed, the files it holds open will not be overwritten, and files that are still open can be recovered by copying them out of the /proc filesystem (reference: Recovering files from /proc). If both the data files and the log files can be copied this way, there is hope that MySQL can be started on them and will restore a consistent state from the transaction log.
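The /proc technique can be demonstrated safely on a throwaway process instead of mysqld; the same steps apply to a still-running mysqld's data and log files (the PID, fd numbers, and file names will of course differ):

```shell
#!/bin/bash
# Sketch: a deleted file stays recoverable for as long as some
# process still holds it open, via /proc/<pid>/fd.
echo "important data" > demo.file
tail -f demo.file >/dev/null & pid=$!
sleep 1              # give tail time to open the file
rm demo.file         # "accidental" deletion while still open
# deleted-but-open files show "(deleted)" in ls -l of /proc/<pid>/fd;
# the fd number is the 4th-from-last field of that line
fd=$(ls -l /proc/$pid/fd | awk '/demo\.file \(deleted\)/ {print $(NF-3)}')
cp /proc/$pid/fd/$fd recovered.file   # copy the contents back out
kill $pid 2>/dev/null
cat recovered.file   # the deleted contents are back
```

For mysqld you would copy every still-open data file and redo log this way, then point a scratch MySQL instance at the copies and let crash recovery replay the transaction log.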
