MFS mount point: /mnt/mfs
First, simulate a client mistakenly deleting a file, then recover it
Recovery process:
a) Two directories, ./reserved and ./trash, appear under the meta mount point.
b) Under ./trash there is an ./undel directory. Deleted files appear under names built from an 8-digit hexadecimal number, with "|" used as the directory separator, followed by the path of the deleted file. (If the resulting name exceeds the 255-character maximum length supported by the system, it is truncated until it fits within 255 characters.)
Naming rule: 00000009|1,1 represents a deleted file.
0000002E|123|TST represents the file TST under the directory 123; if the directory 123 was deleted along with it, then recovering the file will restore the 123 directory as well.
c) To restore a file, move it (e.g. 00000009|1,1) into /mnt/mfsmeta/trash/undel, and the file is restored to its original location.
d) The ./reserved directory holds files that have already been completely deleted but are still held open by a process.
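The naming rule above can be illustrated with plain shell string operations. This is only a sketch mimicking the described format (8 hex digits, "|" as separator, truncation to 255 characters); it is not an MFS tool, and the inode value and path are made up for the example:

```shell
#!/bin/sh
# Sketch: build a trash-entry name the way the text describes it.
inode="00000009"                                  # 8-digit hexadecimal number
path="hello/epel-release-6-8.noarch.rpm"          # original path of the file
# Replace '/' with '|' and prepend the inode number.
name="${inode}|$(printf '%s' "$path" | tr '/' '|')"
# Names longer than 255 characters are truncated to fit the limit.
name=$(printf '%s' "$name" | cut -c1-255)
echo "$name"
```

Running this prints 00000009|hello|epel-release-6-8.noarch.rpm, matching the pattern seen in the trash listing later in this article.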
[email protected] mfs]# ll /mnt/mfs/hello/
total 16
-rw-r--r-- 1 root root     0 Nov  3 03:49 1.txt
-rw-r--r-- 1 root root 14540 Nov  3 07:12 epel-release-6-8.noarch.rpm
-rw-r--r-- 1 root root   931 Nov  3 03:49 passwdbak
1. Delete a file
# cd /mnt/mfs/hello/
[email protected] hello]# rm epel-release-6-8.noarch.rpm
rm: remove regular file 'epel-release-6-8.noarch.rpm'? y
2. Create a recovery directory and mount the metadata file system on it
# mkdir /mnt/test
# /usr/local/mfs/bin/mfsmount -m /mnt/test/ -H 192.168.50.119
3. Recover the file
# cd /mnt/test/trash/
[email protected] trash]# ll
total 15
-rw-r--r-- 1 root root 14540 Nov  3 07:12 00000006|hello|epel-release-6-8.noarch.rpm
d-w------- 2 root root     0 Dec  1 22:02 undel
[email protected] trash]# mv 00000006\|hello\|epel-release-6-8.noarch.rpm undel/
4. Check from the MFS client mount point: the file is back
[email protected] mfs]# ll /mnt/mfs/hello/
total 16
-rw-r--r-- 1 root root     0 Nov  3 03:49 1.txt
-rw-r--r-- 1 root root 14540 Nov  3 07:12 epel-release-6-8.noarch.rpm
-rw-r--r-- 1 root root   931 Nov  3 03:49 passwdbak
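The mv in step 3 needs the pipe characters escaped (or the whole name quoted), otherwise the shell treats them as a pipeline. This can be tried safely outside MFS with an ordinary temporary directory standing in for the trash mount; the file and directory names below just imitate the listing above:

```shell
#!/bin/sh
# Simulation only: a plain directory stands in for /mnt/test/trash.
trash=$(mktemp -d)
mkdir "$trash/undel"
touch "$trash/00000006|hello|epel-release-6-8.noarch.rpm"
# Quoting the name protects the '|' characters from the shell;
# backslash-escaping each '|' (as in the article) works the same way.
mv "$trash/00000006|hello|epel-release-6-8.noarch.rpm" "$trash/undel/"
ls "$trash/undel"
```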
Second, simulate the metadata server process being ended unexpectedly, then perform a recovery
1) Stop the metadata server
Use kill -9 on the mfsmaster process.
2) Back up the metadata server data
# cd /usr/local/mfs/var
# tar cvf mfs.tar mfs
3) Start the meta-data server
/usr/local/mfs/sbin/mfsmaster start
It fails, reporting that initialization of the metadata failed.
4) Perform recovery operations
# /usr/local/mfs/sbin/mfsmetarestore -a
5) Start the meta-data server
/usr/local/mfs/sbin/mfsmaster start
6) Mount from the client and verify that the data is still present.
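The backup in step 2 (tar the mfs data directory from inside the var directory) can be rehearsed on a scratch copy before touching the real server. The paths below are temporary stand-ins, not the live /usr/local/mfs/var; only the file name metadata.mfs.back mirrors a real MFS metadata file:

```shell
#!/bin/sh
# Rehearsal of the backup step on a scratch directory.
var=$(mktemp -d)
mkdir "$var/mfs"
echo dummy > "$var/mfs/metadata.mfs.back"   # stand-in for real metadata
cd "$var"
tar cvf mfs.tar mfs                          # same command as in the text
tar tf mfs.tar                               # verify the archive contents
```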
Third, simulate the process being closed unexpectedly with the log files destroyed, then recover
1) Stop the metadata server
Use kill -9 on the mfsmaster process.
2) Back up the metadata server data (this is a precaution: if the experiment fails and the cluster cannot be recovered, the backup lets you roll back)
# cd /usr/local/mfs/var
# tar cvf mfs.tar mfs
3) Delete the data directory to simulate the failure
# rm -rf mfs/*
4) Start the meta-data server
/usr/local/mfs/sbin/mfsmaster start
It fails, reporting that initialization of the metadata failed.
5) Copy the backed-up files over from the metadata logger server.
# rsync -alvr 192.168.242.133:/usr/local/mfs/var/mfs/ /usr/local/mfs/var/mfs/
Then rename the copied files to remove the _ml part from their names.
6) Perform recovery operations
# /usr/local/mfs/sbin/mfsmetarestore -a
7) Start the meta-data server
/usr/local/mfs/sbin/mfsmaster start
8) Mount from the client and verify that the data is still present.
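The renaming in step 5 (dropping the _ml part from the copied file names) can be scripted. The file names below mimic typical metalogger output, and the loop is a sketch run in a scratch directory, not an official MFS tool:

```shell
#!/bin/sh
# Sketch: strip the _ml part from file names copied off a metalogger.
dir=$(mktemp -d); cd "$dir"
touch changelog_ml.0.mfs changelog_ml.1.mfs metadata_ml.mfs.back
for f in *_ml*; do
    # Remove the first (and only) occurrence of "_ml" in each name.
    mv "$f" "$(printf '%s' "$f" | sed 's/_ml//')"
done
ls
```

After the loop the directory contains changelog.0.mfs, changelog.1.mfs and metadata.mfs.back, which is what mfsmetarestore expects to find.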
Commonly used MFS operations
Set the number of copies (goal) of a file or directory:
/usr/local/mfs/bin/mfssetgoal -r 3 hello/
hello/:
 inodes with goal changed:      1
 inodes with goal not changed:  0
 inodes with permission denied: 0
View the goal of a file:
/usr/local/mfs/bin/mfsgetgoal hello/
hello/: 3
Copy a file in:
cp /etc/passwd hello/passwdbak
View detailed information about a file:
[email protected] mfs]# /usr/local/mfs/bin/mfsfileinfo hello/passwdbak
hello/passwdbak:
 chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
  copy 1: 192.168.50.120:9422
View the actual number of copies of a file:
[email protected] mfs]# /usr/local/mfs/bin/mfscheckfile hello/passwdbak
hello/passwdbak:
 chunks with 1 copy: 1
Set the trash (recycle bin) retention time:
/usr/local/mfs/bin/mfssettrashtime 600 /mnt/mfs/
Note: 600 is in seconds, i.e. deleted files are kept in the trash for 10 minutes.
Snapshot
Another feature of the MooseFS system is the mfsmakesnapshot tool, which takes snapshots of files or directory trees. For example:
$ mfsmakesnapshot source ... destination
/usr/local/mfs/bin/mfsmakesnapshot /mnt/mfs/123/tmp2 /mnt/mfs/111/
This command behaves like cp: it copies the file tmp2 into the 111 directory. It can also operate on directories.
mfsappendchunks destination-file source-file ...
When there are multiple source files, their snapshots are appended to the same destination file (the append works in units of whole chunks).
/usr/local/mfs/bin/mfsappendchunks /mnt/mfs/111/shot /mnt/mfs/123/123 /mnt/mfs/123/tmp2
This packs one or more source files together: here the chunks of the two files 123 and tmp2 are appended to the snapshot file shot.
Note: the sources and the destination must all belong to the MFS file system; a file inside MFS cannot be snapshotted to another file system.
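As a rough illustration of what mfsappendchunks does from the user's point of view, appending several sources into one destination resembles concatenation. The cat below is only an analogy run on ordinary files in a temporary directory; the real command works lazily on whole chunks and only inside MFS:

```shell
#!/bin/sh
# Analogy only: simulate "append sources 123 and tmp2 into shot" with cat.
dir=$(mktemp -d); cd "$dir"
printf 'AAA' > 123       # first source file
printf 'BBB' > tmp2      # second source file
: > shot                 # empty destination file
cat 123 tmp2 >> shot     # contents of both sources end up in shot
cat shot
```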
Maintaining MFS
The most important part of maintaining MFS is protecting the metadata server's data directory, /usr/local/mfs/var/mfs/. All modifications, updates and other changes to MFS storage are recorded in this directory, so keeping this directory safe is enough to guarantee the safety and reliability of the data of the entire MFS system.
The order in which the MFS cluster is started. The safe boot sequence is:
start mfsmaster -> start all mfschunkserver processes -> start the mfsmetalogger process
Stopping the MFS cluster:
Unmount the MFS file system on all clients, run mfschunkserver -s to stop every data storage process, mfsmetalogger -s to stop the metadata log service process, and mfsmaster -s to stop the management server process.
Data storage server maintenance: if the goal of every file is no less than 2 and there are no under-goal files, then any single data storage server can be stopped or restarted at any time.
MFS management server recovery
If the metadata management server crashes, you need the last metadata changelog files and the main metadata file metadata.mfs; the recovery can be done with the mfsmetarestore tool:
mfsmetarestore -a
When this command is executed, it looks in /usr/local/mfs/var/mfs by default for the changelog files and the primary metadata that need to be replayed; with -a, mfsmetarestore automatically locates the metadata.mfs.back file.
The steps to recover the MFS management server from backup are as follows:
1) Install a new metadata management server and give it the same configuration as the original.
2) Retrieve the metadata.mfs.back file (it can also be found on a running metadata logger server) and put it in the data directory, e.g. /usr/local/mfs/var/mfs/.
3) From any metadata logger server that was running before the management server went down, copy the latest changelog.*.mfs files into the management server's data directory.
4) Run mfsmetarestore -a to recover.
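For step 3, the latest changelog.*.mfs files on a logger can be identified by modification time. The following sketch simulates a logger data directory with fake files and timestamps; the touch -t dates are invented purely so that ls -t has something to sort:

```shell
#!/bin/sh
# Sketch: pick the most recently modified changelog file.
dir=$(mktemp -d); cd "$dir"
touch -t 202001010000 changelog.2.mfs   # oldest
touch -t 202001020000 changelog.1.mfs
touch -t 202001030000 changelog.0.mfs   # newest
# ls -t sorts newest first; the newest changelog is what mfsmetarestore needs.
newest=$(ls -t changelog.*.mfs | head -n1)
echo "$newest"
```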
This article is from the "Never abandon! Never give up" blog; please keep this source: http://thedream.blog.51cto.com/6427769/1878475
Summary of using the MFS distributed file system