Infortrend ESDS RAID6 Data Recovery Process

[Data Recovery Failure Description]

See the earlier article <Data recovery scheme after an Infortrend ESDS RAID6 failure> for the background. The storage is an Infortrend ESDS-S12F-G1440 with twelve 2TB hard drives in a RAID6, carrying one GPT partition with an NTFS file system, 18.2TB in size. Three drives went offline; after they were forcibly brought back online and a rebuild ran for a few minutes, the data was found to be corrupted.

[Data Recovery Process]

1. Use a Dell R720 server running Windows Server 2008 R2 as the recovery platform. Install a Dell H200 6Gb SAS expansion card in the R720 and connect two Dell MD1200 disk arrays to it: the Group A MD1200 holds all twelve 2TB source disks, and the Group B MD1200 holds twelve 2TB target disks.

2. In Windows 2008 R2, keep all Group A disks offline and bring all Group B disks online. Use the North Asia disk mirroring tool to mirror all twelve Group A disks onto the twelve Group B disks, sector by sector (a minimal sketch of what such a copy amounts to is shown below).
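
The actual mirroring was done with the North Asia tool; purely as an illustration of what a sector-level mirror does, a minimal Windows copy loop might look like the following. The drive paths are hypothetical, the program must run with administrator rights, and the copy is destructive on the target disk.

```cpp
// Minimal sketch of a sector-level disk mirror (NOT the actual North Asia tool).
// Drive paths are hypothetical; the target disk must carry no mounted volumes,
// since every sector on it is overwritten.
#include <windows.h>
#include <cstdio>

int main() {
    HANDLE hSrc = CreateFileW(L"\\\\.\\PhysicalDrive1",  GENERIC_READ,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                              OPEN_EXISTING, 0, nullptr);
    HANDLE hDst = CreateFileW(L"\\\\.\\PhysicalDrive13", GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                              OPEN_EXISTING, 0, nullptr);
    if (hSrc == INVALID_HANDLE_VALUE || hDst == INVALID_HANDLE_VALUE) return 1;

    const DWORD CHUNK = 4 * 1024 * 1024;   // 4 MiB, a multiple of the sector size
    void* buf = VirtualAlloc(nullptr, CHUNK, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!buf) return 1;

    unsigned long long copied = 0;
    DWORD rd = 0, wr = 0;
    while (ReadFile(hSrc, buf, CHUNK, &rd, nullptr) && rd > 0) {
        if (!WriteFile(hDst, buf, rd, &wr, nullptr) || wr != rd) return 2;
        copied += rd;
    }
    std::printf("copied %llu bytes\n", copied);
    CloseHandle(hSrc);
    CloseHandle(hDst);
    return 0;
}
```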

3. After mirroring completes, shut down, remove all the source disks and seal them away; from this point on, no further operations are performed on the source disks.

4. Using a disk editor, analyze the structure of the twelve mirrored disks. The beginning of each disk shows clear traces of RAID metadata; from these, work out where the LUN's allocated data area starts within the RAID (a sketch of locating such a starting offset is shown below).
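
As a simple illustration of how a starting offset can be pinned down, the sketch below scans one mirrored member image for the GPT header signature "EFI PART", which the LUN exposes near its beginning. The image file name is hypothetical; in the real case the RAID metadata itself was the primary clue.

```cpp
// Sketch: scan a mirrored member disk for a known on-disk signature to locate
// where the LUN's data area begins.  "EFI PART" (the GPT header, normally at
// LBA 1 of the exposed volume) is used here purely as an example marker; the
// image path is hypothetical.
#include <cstdint>
#include <cstring>
#include <fstream>
#include <iostream>

int main() {
    const char* path = "member00.img";        // hypothetical mirrored disk image
    std::ifstream img(path, std::ios::binary);
    if (!img) { std::cerr << "cannot open " << path << "\n"; return 1; }

    char sector[512];
    for (uint64_t lba = 0; img.read(sector, sizeof(sector)); ++lba) {
        if (std::memcmp(sector, "EFI PART", 8) == 0)
            std::cout << "GPT header signature at member LBA " << lba << "\n";
    }
    return 0;
}
```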

5. Work out the RAID6 algorithm by inference. It turns out to be based on the standard P parity plus an unknown Q rule, in a right-asynchronous rotation. Applying the Reed-Solomon algorithm does not fit, yet according to all the material that could be found online, the only RAID6 scheme with P and Q distributed in this uniform helical pattern is Reed-Solomon based, so a variant of it was suspected at first; however, the values observed at the check positions of all-zero regions of the same stripe contradicted that assumption, so the judgment was overturned.
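
For reference, this is the check that failed: in textbook RAID6, P is the plain XOR of the data blocks and Q is a Reed-Solomon sum over GF(2^8). A minimal sketch of testing one stripe against that definition follows; the block size and data-disk count are illustrative, and the buffers (left zeroed here) would be filled from the mirrored members.

```cpp
// Sketch: test whether one stripe matches textbook RAID6, i.e. P = XOR of the
// data blocks and Q = sum over GF(2^8) of g^i * D_i (generator g = 2, reducing
// polynomial x^8 + x^4 + x^3 + x^2 + 1 = 0x11D).  The buffers are placeholders,
// left zeroed; in the real analysis they come from the 12 mirrored members.
#include <cstdint>
#include <cstdio>
#include <vector>

// Multiply by 2 in GF(2^8) with the RAID6 polynomial 0x11D.
static uint8_t gf_mul2(uint8_t v) {
    return static_cast<uint8_t>((v << 1) ^ ((v & 0x80) ? 0x1D : 0x00));
}

int main() {
    const int data_disks = 10;                 // 12 members minus P and Q
    const int block = 4096;                    // bytes checked per stripe (example)
    std::vector<std::vector<uint8_t>> D(data_disks, std::vector<uint8_t>(block, 0));
    std::vector<uint8_t> P_disk(block, 0), Q_disk(block, 0);   // parity as read from disk

    // ... fill D, P_disk, Q_disk from the member images at this stripe's offsets ...

    bool p_ok = true, q_ok = true;
    for (int b = 0; b < block; ++b) {
        uint8_t p = 0, q = 0;
        for (int i = data_disks - 1; i >= 0; --i) {
            p ^= D[i][b];                                       // P is a plain XOR
            q = static_cast<uint8_t>(gf_mul2(q) ^ D[i][b]);     // Horner form of sum g^i * D_i
        }
        p_ok &= (p == P_disk[b]);
        q_ok &= (q == Q_disk[b]);
    }
    std::printf("P matches: %s, Q matches: %s\n", p_ok ? "yes" : "no", q_ok ? "yes" : "no");
    return 0;
}
```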

6. Combined with tests against the controller, it was found that the Q check is built from seemingly arbitrary XOR combinations, reminiscent of the Park code. The combinations look random, but the distribution of the check data is completely different from Park, so although the idea is similar, the algorithm is entirely different.
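
To make the idea concrete: an XOR-based Q can be described by stating which data units of a stripe participate in each Q unit, for example as a bitmask. The mask below is made up purely for illustration; recovering the controller's real combinations was the substance of the analysis.

```cpp
// Sketch of the general idea: an XOR-based Q, where each Q unit is the XOR of
// some subset of the stripe's data units, and the subset is described by a
// bitmask.  The mask below is invented for illustration only.
#include <cstdint>
#include <cstdio>

int main() {
    const int data_units = 10;
    uint8_t D[10] = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xAA};

    // One hypothetical Q rule: XOR of units 0, 2, 5 and 9 (bits set in the mask).
    uint32_t mask = (1u << 0) | (1u << 2) | (1u << 5) | (1u << 9);

    uint8_t q = 0;
    for (int i = 0; i < data_units; ++i)
        if (mask & (1u << i)) q ^= D[i];

    std::printf("Q for mask 0x%03X = 0x%02X\n", (unsigned)mask, (unsigned)q);
    return 0;
}
```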

7. To be able to recover the array with any two of the twelve disks missing, the complete algorithm must be worked out in advance for every case: C(12,2) = 66 missing-disk combinations, each requiring at least 16 recovery rules. After the solver program had run (given the complexity, the rules could not be derived analytically), it turned out that rebuilding a single unit takes roughly 30 to 50 XOR operations. A sketch of the derivation idea follows.
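
A hedged sketch of how recovery formulas for one missing-disk case can be derived mechanically: write every parity relation as "the XOR of these units is zero" and run Gaussian elimination over GF(2). The two relations below are placeholders (a P-like rule plus a made-up Q-like rule); the real controller's relations span several stripe rows, which is why a single rebuilt unit ended up needing 30-50 XORs.

```cpp
// Sketch: derive XOR recovery formulas for one "two members missing" case by
// Gaussian elimination over GF(2).  Each relation is a bitmask over the
// stripe's units, meaning "the XOR of the flagged units is zero".  The example
// relations are placeholders, not the controller's real rules.
#include <cstdint>
#include <cstdio>
#include <vector>

// Eliminate `other` from the relation set, then print any relation that
// expresses `target` purely in terms of surviving units.
static void solve_for(std::vector<uint32_t> eq, int target, int other, int units) {
    const int n = (int)eq.size();
    int pivot = -1;
    for (int i = 0; i < n; ++i)
        if (eq[i] & (1u << other)) { pivot = i; break; }
    if (pivot >= 0)
        for (int i = 0; i < n; ++i)
            if (i != pivot && (eq[i] & (1u << other))) eq[i] ^= eq[pivot];

    for (int i = 0; i < n; ++i) {
        if ((eq[i] & (1u << target)) && !(eq[i] & (1u << other))) {
            std::printf("unit %d = XOR of units:", target);
            for (int u = 0; u < units; ++u)
                if (u != target && (eq[i] & (1u << u))) std::printf(" %d", u);
            std::printf("\n");
            return;
        }
    }
    std::printf("unit %d: not recoverable from these relations alone\n", target);
}

int main() {
    const int units = 12;            // 10 data units + P + Q in one stripe row (illustrative)
    const int missA = 0, missB = 3;  // one of the C(12,2) = 66 missing-member cases

    std::vector<uint32_t> eq = {
        0x07FFu,                                                    // P rule: units 0..9 and P (unit 10)
        (1u<<11) | (1u<<0) | (1u<<2) | (1u<<5) | (1u<<7) | (1u<<9)  // made-up Q rule (excludes unit 3)
    };

    solve_for(eq, missA, missB, units);
    solve_for(eq, missB, missA, units);
    return 0;
}
```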

8. The formulas generated by the program amount to more than 140 KB of text, i.e. roughly 140,000 characters. Operations this complex would stretch the data recovery cycle considerably, so the algorithm had to be optimized.

9. Algorithm optimization, step one: introduce a layer of intermediate variables that factor out sub-expressions shared between formulas, compressing the formulas to about 50% of their original size (as plain text). A sketch of the idea follows.
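
One way to build such an intermediate-variable layer is greedy common-subexpression elimination over the XOR formulas: find the pair of terms that co-occurs most often, compute it once as a new term, and substitute it everywhere. The sample formulas below are made up; the real input was the ~140 KB of generated rules, and the exact compression achieved depends on the rules themselves.

```cpp
// Sketch of the "intermediate variable layer": greedy common-subexpression
// elimination over XOR formulas.  Each formula is a set of term ids whose XOR
// gives the result; a pair that occurs in many formulas is factored into a new
// intermediate term and reused.  The sample formulas are placeholders.
#include <cstdio>
#include <iterator>
#include <map>
#include <set>
#include <utility>
#include <vector>

int main() {
    // Formulas as sets of term ids (0..11 = original stripe units).
    std::vector<std::set<int>> f = {
        {1, 4, 6, 8, 10},
        {1, 4, 6, 9, 11},
        {1, 4, 7, 8, 11},
        {2, 5, 6, 8, 10},
    };
    int next_id = 12;  // ids >= 12 are intermediate variables

    for (;;) {
        // Count how often each unordered pair of terms co-occurs in a formula.
        std::map<std::pair<int, int>, int> freq;
        for (const auto& s : f)
            for (auto a = s.begin(); a != s.end(); ++a)
                for (auto b = std::next(a); b != s.end(); ++b)
                    ++freq[{*a, *b}];

        // Pick the most frequent pair; stop when nothing repeats.
        std::pair<int, int> best{-1, -1};
        int best_n = 1;
        for (const auto& kv : freq)
            if (kv.second > best_n) { best = kv.first; best_n = kv.second; }
        if (best.first < 0) break;

        // Introduce t = a ^ b and substitute it wherever the pair appears.
        int t = next_id++;
        std::printf("t%d = %d ^ %d  (used %d times)\n", t, best.first, best.second, best_n);
        for (auto& s : f)
            if (s.count(best.first) && s.count(best.second)) {
                s.erase(best.first);
                s.erase(best.second);
                s.insert(t);
            }
    }
    return 0;
}
```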

10. Take a region of data blocks with a clearly recognizable stepping pattern and write a program that runs the reconstruction for all C(12,2) combinations, then compares each result against the expected content. After a few passes of this kind, the disks that had dropped offline were pinned down as disk 0 and disk 3 (see the sketch below).
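
A skeleton of that search, hedged: enumerate all 66 "these two members are stale" hypotheses, rebuild a reference region under each, and keep the hypothesis that best matches the expected content. `reconstruct()` and `expected_byte()` below are hypothetical stand-ins for the real rebuild rules and the known pattern in the reference region.

```cpp
// Sketch of step 10: score every C(12,2) = 66 stale-member hypothesis against
// a reference region with known content.  reconstruct() and expected_byte()
// are placeholders for the real rebuild rules and the reference pattern.
#include <cstdint>
#include <cstdio>
#include <vector>

static const int MEMBERS = 12;
static const int TEST_BYTES = 4096;

// Hypothetical rebuild of a reference block assuming members a and b are stale.
static std::vector<uint8_t> reconstruct(int a, int b) {
    std::vector<uint8_t> out(TEST_BYTES, 0);
    // ... apply the derived XOR rules for the (a, b) missing case here ...
    (void)a; (void)b;
    return out;
}

// Hypothetical expected value at offset i (the real reference region had a
// clearly recognizable stepping pattern).
static uint8_t expected_byte(int i) { return static_cast<uint8_t>(i); }

int main() {
    int best_a = -1, best_b = -1, best_score = -1;
    for (int a = 0; a < MEMBERS; ++a) {
        for (int b = a + 1; b < MEMBERS; ++b) {           // 66 combinations in total
            std::vector<uint8_t> blk = reconstruct(a, b);
            int score = 0;
            for (int i = 0; i < TEST_BYTES; ++i)
                if (blk[i] == expected_byte(i)) ++score;  // bytes matching the expectation
            if (score > best_score) { best_score = score; best_a = a; best_b = b; }
        }
    }
    std::printf("best hypothesis: members %d and %d are stale (score %d/%d)\n",
                best_a, best_b, best_score, TEST_BYTES);
    return 0;
}
```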

11. Algorithm optimization, step two (binary optimization): abandon the STL for all operations in favour of plain arrays, and represent the members of every expression as a bitmap, so as to squeeze the maximum performance out of the algorithm. A sketch of the representation follows.
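
The point of the bitmap representation, sketched under the assumption of at most 128 terms per expression (the term count and values here are illustrative): combining two XOR expressions becomes a single machine XOR of their membership bitmaps, since terms present in both cancel, and evaluation simply walks the set bits.

```cpp
// Sketch of step 11: drop STL containers and represent the members of each XOR
// expression as bits in a fixed-size bitmap.  Combining two expressions with
// XOR is then one machine XOR per word; evaluation walks the set bits.
#include <cstdint>
#include <cstdio>

// Up to 128 terms per expression: two 64-bit words instead of a std::set<int>.
struct Expr {
    uint64_t lo, hi;
};

static inline Expr expr_xor(Expr a, Expr b) {   // symmetric difference of members
    return Expr{a.lo ^ b.lo, a.hi ^ b.hi};
}

static uint8_t expr_eval(Expr e, const uint8_t* term_values) {
    uint8_t v = 0;
    for (int i = 0; i < 64; ++i) if (e.lo & (1ull << i)) v ^= term_values[i];
    for (int i = 0; i < 64; ++i) if (e.hi & (1ull << i)) v ^= term_values[64 + i];
    return v;
}

int main() {
    uint8_t terms[128];
    for (int i = 0; i < 128; ++i) terms[i] = static_cast<uint8_t>(i * 7 + 1);  // dummy term values

    Expr a{(1ull << 1) | (1ull << 4) | (1ull << 6), 0};   // a = t1 ^ t4 ^ t6
    Expr b{(1ull << 4) | (1ull << 9), 0};                 // b = t4 ^ t9
    Expr c = expr_xor(a, b);                              // c = t1 ^ t6 ^ t9 (t4 cancels)

    std::printf("value of c = 0x%02X\n", (unsigned)expr_eval(c, terms));
    return 0;
}
```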

12. Reconstruct the data according to the derived algorithm and structure and run a preliminary analysis on it; no obvious data anomalies were found.

13. Generate the recovered data onto a separate 20TB target storage.

[Data Recovery Time]

Disk Mirroring: 7 hours

Algorithm analysis: about 60 days, on and off. This is the longest-running project of my career. Faced with a completely unprecedented algorithm, a great deal of research enthusiasm carried me through writing nearly a million lines of code to judge, analyze, optimize, test and recover. Thanks to the user for trusting the North Asia Data Recovery Center and giving us enough time. (I will describe the structure and part of the algorithm process in another blog post.)

Data export: approx. 100 hours

[Data Recovery Results]

100% of the data was recovered successfully (minor damage to some of the data cannot be ruled out, but as of this writing, random spot-check verification has found no anomalies).

This article is from the "Tommy (Data Recovery)" blog; please be sure to keep this source: http://zhangyu.blog.51cto.com/197148/1180307
