NTFS file system for build evaluation in Linux

Source: Internet
Author: User
Tags: md5, hash

Objective:

Generate an NTFS file system that satisfies the following requirements:

1. $MFT has at least 2 fragments.

2. The root directory contains 90 subdirectories numbered from 1, each holding 80-100 files whose names also start from 1.

3. A large number of files are composed of 2 or more fragments (this example aims for about 2 fragments per file).


1. The shell script is as follows:

#!/bin/bash
# Created by www.frombyte.com, Tommy, on 2017/3/29.
# The script may be updated; see the attachment.

mkdir /mnt/padding
cd /mnt/padding

# First loop: create 30 subdirectories with 80-100 files each,
# 16k-48k in size. These files are mostly contiguous.
for ((i=1;i<=30;i++));do
  mkdir /mnt/$i
  r1=$(($RANDOM % 20))
  for ((ii=1;ii<80+$r1;ii++));do
    r2=$(($RANDOM % 8 + 4))
    dd if=/dev/urandom of=/mnt/$i/$ii bs=4096 count=$r2
  done
done

# Sleep so the file system flushes to disk.
sleep 60

# Second loop: overwrite the files from the first loop with dd,
# writing 48k-80k starting somewhere in the first 0-16k. This
# basically guarantees about 2 fragments per file.
for ((i=1;i<=30;i++));do
  cd /mnt/$i
  r3=$(($RANDOM % 4))
  for ii in `ls`;do
    r2=$(($RANDOM % 8 + 12))
    dd if=/dev/urandom of=/mnt/$i/$ii bs=4096 seek=$r3 count=$r2
  done
done

# Fill with 65,000 small files so that the NTFS $MFT exhausts its first
# data run and has to allocate a second one, fragmenting $MFT itself.
for ((i=1;i<65000;i++));do
  touch /mnt/padding/$i
done

# Repeat the two loops above for directories 31-90; the MFT records of
# these files will normally be written into the second fragment of $MFT.
for ((i=31;i<=90;i++));do
  mkdir /mnt/$i
  r1=$(($RANDOM % 20))
  for ((ii=1;ii<80+$r1;ii++));do
    r2=$(($RANDOM % 8 + 4))
    dd if=/dev/urandom of=/mnt/$i/$ii bs=4096 count=$r2
  done
done
sleep 60
for ((i=31;i<=90;i++));do
  cd /mnt/$i
  r3=$(($RANDOM % 4))
  for ii in `ls`;do
    r2=$(($RANDOM % 8 + 12))
    dd if=/dev/urandom of=/mnt/$i/$ii bs=4096 seek=$r3 count=$r2
  done
done

# Delete the padding so the directory structure is not too bloated.
rm -rf /mnt/padding


2. Execute the following commands in the shell:

qemu-img create -f raw test2.img 1G
qemu-nbd -f raw -c /dev/nbd0 test2.img
fdisk /dev/nbd0     # this command partitions /dev/nbd0 interactively; to avoid the interaction, use parted with arguments
mkfs.ntfs -f /dev/nbd0p1
mount.ntfs-3g /dev/nbd0p1 /mnt
/bin/bash run.sh
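The note above suggests parted as the non-interactive alternative to fdisk. A minimal sketch of that substitution (my own; it assumes a single NTFS-typed MBR partition starting at 1MiB, matching what the interactive fdisk step produces — the commands are echoed rather than executed so the sketch is safe to run anywhere):

```shell
#!/bin/sh
# Non-interactive partitioning sketch using parted -s (script mode)
# instead of interactive fdisk. DEV defaults to /dev/nbd0 as in the
# walkthrough above.
DEV="${DEV:-/dev/nbd0}"

partition_cmds() {
    # One MBR partition table and a single primary NTFS partition.
    echo "parted -s $DEV mklabel msdos"
    echo "parted -s $DEV mkpart primary ntfs 1MiB 100%"
}

partition_cmds    # pipe to "sh" (as root) to actually execute them
```

Piping the echoed commands to a root shell keeps the destructive step explicit and reviewable.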

3. The test results are as expected:

Command one: ntfscluster -f -i 0 /dev/nbd0p1

The result matches expectations: $MFT has 2 fragments:

Forced to continue.

Dump:/$MFT

0x10-resident

0x30-resident

0x80-non-resident

VCN LCN Length

0 4 16387

16387 20488 1880

0xb0-non-resident

VCN LCN Length

0 2 2

2 16391 1
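The runlist tables above can be checked mechanically: extents are contiguous only when one run's LCN plus its Length equals the next run's LCN. A small sketch (the awk parsing is my own and assumes the three-column VCN/LCN/Length layout shown in these dumps, which may vary across ntfscluster builds):

```shell
#!/bin/sh
# Count the runs in a VCN/LCN/Length table and report how many joins
# between consecutive runs are non-contiguous on disk.
count_fragments() {
    awk '
        # Only rows whose first field is numeric are runlist entries;
        # the "VCN LCN Length" header line is skipped.
        NF == 3 && $1 ~ /^[0-9]+$/ {
            runs++
            if (prev_end != "" && $2 != prev_end) frag++
            prev_end = $2 + $3   # LCN + Length = expected next LCN
        }
        END { printf "runs=%d non_contiguous_joins=%d\n", runs, frag+0 }
    '
}

# The $MFT 0x80 ($DATA) runlist from the dump above:
count_fragments <<'EOF'
VCN LCN Length
0 4 16387
16387 20488 1880
EOF
# prints: runs=2 non_contiguous_joins=1
```

Here 4 + 16387 = 16391 ≠ 20488, which is exactly why the dump counts as 2 fragments.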



Command two: ntfscluster -f -F 1/ /dev/nbd0p1

The result matches expectations: the sample directory also has 2 fragments:

Forced to continue.

unnormalized Path 1/

Dump:/1

0x10-resident

0x30-resident

0x50-resident

0x90-resident

0xa0-non-resident

VCN LCN Length

0 53328 2

2 49238 1

0xb0-resident



Command three: ntfscluster -f -F 60/9 /dev/nbd0p1

The result matches expectations: the sample file also has 2 fragments:

Forced to continue.

Dump:/60/9

0x10-resident

0x30-resident

0x50-resident

0x80-non-resident

VCN LCN Length

0 211991 8

8 115298 7


4. Generate an MD5 hash of every file, to make it easy to produce the evaluation answers:

cd /mnt
find . -type f -print | xargs md5sum -b | tr A-Z a-z


5. Generate fragmentation information for every file, to make it easy to produce the evaluation answers:

cd /mnt
for i in `find . -type f`;do ntfscluster -f -F $i /dev/nbd0p1;done 2>/dev/null


6. Generate fragmentation information for every directory, to make it easy to produce the evaluation answers:

cd /mnt
for i in `find . -type d`;do ntfscluster -f -F $i /dev/nbd0p1;done 2>/dev/null


7. Generate the file-record information of the metafiles, to make it easy to produce the evaluation answers:

for ((i=0;i<16;i++));do ntfscluster -f -i $i /dev/nbd0p1;done 2>/dev/null

or with the command: ntfscluster -f /dev/nbd0p1

and read the following value from its output: Initialized MFT records: 73115

Then the following command prints the fragmentation information of every file record; with some post-processing, the evaluation answers can be generated:

for ((i=0;i<73115;i++));do ntfscluster -f -i $i /dev/nbd0p1;done 2>/dev/null
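The post-processing is left open above; one way to do it is to reduce the concatenated dumps to one "path runs=N" line per file. The awk sketch below is my own and assumes the `Dump:` and runlist lines look exactly like the samples in section 3:

```shell
#!/bin/sh
# Reduce concatenated ntfscluster dumps to one summary line per file,
# counting the numeric rows of every runlist table in each dump.
summarize_runs() {
    awk '
        /^Dump:/ { if (path != "") print path " runs=" runs
                   path = substr($0, 6)   # text after "Dump:"
                   runs = 0 }
        NF == 3 && $1 ~ /^[0-9]+$/ { if (path != "") runs++ }
        END      { if (path != "") print path " runs=" runs }
    '
}

# Example using the dump for file /60/9 shown above:
summarize_runs <<'EOF'
Dump:/60/9
0x10-resident
0x30-resident
0x50-resident
0x80-non-resident
VCN LCN Length
0 211991 8
8 115298 7
EOF
# prints: /60/9 runs=2
```

Piping the full `for ((i=0;i<73115;i++))` output through `summarize_runs` yields an answer table in one pass.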



Questions and answers generated:

1. $MFT:$DATA

2. $MFT:$BITMAP

3. For a file whose MFT record lies in the first fragment of $MFT and whose $DATA runlist has only one run, answer its MD5.

4. For a file whose MFT record lies in the second fragment of $MFT and whose $DATA runlist has two runs, answer its MD5.

5. Answer the MD5 of an index block of a directory that has at least two 0xA0 ($INDEX_ALLOCATION) attributes.

6. Given a starting cluster number, interpret the first 3 runs of a runlist (the sample has at least 3 runs).

7. Restore a deleted file (to generate an answer, delete a directory and then restore a file under it).
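For question 7, an answer key can be produced by recording a file's MD5 before deleting it and comparing after recovery; ntfs-3g ships ntfsundelete for the recovery step itself. The comparison helper below is my own sketch (the ntfsundelete invocations in the comments use its standard scan/undelete options, run against the unmounted volume):

```shell
#!/bin/sh
# Compare the MD5 of an original file against its recovered copy.
# Typical recovery flow on the unmounted volume (not run here):
#   ntfsundelete -s /dev/nbd0p1                      # scan for deleted files
#   ntfsundelete -u -i <inode> -d /tmp /dev/nbd0p1   # recover one by inode
check_recovered() {
    orig_md5="$(md5sum "$1" | cut -d' ' -f1)"
    rec_md5="$(md5sum "$2" | cut -d' ' -f1)"
    if [ "$orig_md5" = "$rec_md5" ]; then
        echo "match $orig_md5"
    else
        echo "mismatch $orig_md5 $rec_md5"
    fi
}
```

Recording the MD5s with the command from section 4 before the deletion step gives the reference side of the comparison for free.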

This article is from the "Zhang Yu (Data Recovery)" blog, please be sure to keep this source http://zhangyu.blog.51cto.com/197148/1911271
