I. Structure Description
1. Data on the MFS client is created through the mount point, or a web server program writes content to be published onto the MFS client (equivalent to the original NFS image server).
2. The master metadata server is responsible for management and scheduling. It keeps only metadata and change logs (not the data files themselves) and hands the real data off to the chunk storage servers.
Multiple master nodes are not currently supported, so the master is a single point of failure (SPOF). You can add a metalogger server as a master backup: it downloads the metadata file regularly and synchronizes the changelog files in real time (by default the metadata is downloaded once every 24 hours; it can be set to download as often as once per hour).
3. The master schedules the distribution of the real data across the chunk servers: files are divided into chunk blocks, and copies are replicated among the chunk servers.
Chunk servers and disk space can be expanded later without stopping the service.
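For a concrete sense of scale (using the 64 MB chunk size noted later in this article): with a goal (copy count) of 2, a 150 MB file is split into three chunks (64 MB + 64 MB + 22 MB), and each chunk is stored on two different chunk servers, so the file consumes roughly 300 MB of raw space across the cluster.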
II. Installation and Configuration
1. Download the latest MooseFS 1.6.20 source package:
http://sourceforge.net/projects/moosefs/files/moosefs/1.6.20/mfs-1.6.20-2.tar.gz
http://sourceforge.net/search?q=fuse
RPM package downloads:
http://pkgs.org/centos-5-rhel-5/rpmforge-i386/mfs-1.6.20-1.el5.rf.i386.rpm/download
http://pkgs.org/search?keyword=fuse
2. Host distribution:
master server: 192.168.40.140
master backup (metalogger) server: 192.168.40.185
chunk01 server: 192.168.40.183
chunk02 server: 192.168.40.184
web client: 192.168.40.144
3. Installation specifications:
| Role       | Installation path | MFS user account | Data storage path      | Main configuration files                 |
| Master     | /usr              | mfs:mfs          | none                   | /etc/mfsmaster.cfg, /etc/mfsexports.cfg  |
| Metalogger | /usr              | mfs:mfs          | none                   | /etc/mfsmetalogger.cfg                   |
| Chunk01    | /usr              | mfs:mfs          | /data                  | /etc/mfschunkserver.cfg, /etc/mfshdd.cfg |
| Chunk02    | /usr              | mfs:mfs          | /data                  | /etc/mfschunkserver.cfg, /etc/mfshdd.cfg |
| MFS client | /usr              | none             | local mount point /mnt | none                                     |
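Every machine in the table must resolve the host name mfsmaster (the binding is repeated in each installation section below). A minimal /etc/hosts sketch for all five hosts might look like this; only the mfsmaster entry is required by MooseFS, and the other host names are illustrative:
192.168.40.140  mfsmaster
192.168.40.185  metalogger
192.168.40.183  chunk01
192.168.40.184  chunk02
192.168.40.144  mfsclient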
4. Master Server Installation
1) Create the mfs group and user:
# groupadd mfs
# useradd -g mfs mfs
2) Compile from source; installation of the chunk server and the MFS client mfsmount modules can be disabled:
# ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
# make && make install
Note: --localstatedir=/var/lib is the directory where the binary metadata file and the text changelog files will be saved.
3) After a successful installation, .dist sample configuration files are generated under /etc:
# cp mfsmaster.cfg.dist mfsmaster.cfg
# cp mfsmetalogger.cfg.dist mfsmetalogger.cfg
# cp mfsexports.cfg.dist mfsexports.cfg
Note: the commented-out lines in the configuration files are MooseFS's built-in defaults.
4) Main configuration changes: mfsmaster.cfg holds the parameter settings of the master server; mfsexports.cfg specifies which client hosts may mount remotely and with what access permissions.
# vi mfsmaster.cfg
# WORKING_USER = mfs
# WORKING_GROUP = mfs
# SYSLOG_IDENT = mfsmaster
# LOCK_MEMORY = 0
# NICE_LEVEL = -19
# EXPORTS_FILENAME = /etc/mfsexports.cfg
# DATA_PATH = /var/lib/mfs (metadata storage path specified at compile time; the metadata server's changelogs are kept here)
BACK_LOGS = 24 (must be consistent with the setting on the metalogger server; keeps the past 24 hours of metadata change logs, changelog.*.mfs)
# REPLICATIONS_DELAY_INIT = 300
# REPLICATIONS_DELAY_DISCONNECT = 3600
# MATOML_LISTEN_HOST = *
# MATOML_LISTEN_PORT = 9419
# MATOCS_LISTEN_HOST = *
# MATOCS_LISTEN_PORT = 9420 (the metadata server accepts connections from the chunkserver data storage servers on port 9420)
# MATOCU_LISTEN_HOST = *
# MATOCU_LISTEN_PORT = 9421 (the metadata server listens on port 9421 for remote client connections; clients mount MFS with mfsmount)
# CHUNKS_LOOP_TIME = 300
# CHUNKS_DEL_LIMIT = 100
# CHUNKS_WRITE_REP_LIMIT = 1
# CHUNKS_READ_REP_LIMIT = 5
# REJECT_OLD_CLIENTS = 0
# deprecated, to be removed in MooseFS 1.7:
# LOCK_FILE = /var/run/mfs/mfsmaster.lock (location of the lock file, which prevents starting the same daemon more than once)
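Once the master is running (step 7 below), a quick sanity check that the three listeners above are up; a sketch using the default ports from this file:
# netstat -ntlp | grep mfsmaster
(expect 9419 for metaloggers, 9420 for chunkservers, 9421 for clients)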
# vi mfsexports.cfg
192.168.40.144 / rw,alldirs,maproot=0 (grants the MFS client 192.168.40.144 mount permission and read/write permission)
5) When the master is installed for the first time, an empty metadata file, metadata.mfs.empty, is generated automatically. The master must have a metadata.mfs file to start:
# cp /var/lib/mfs/metadata.mfs.empty /var/lib/mfs/metadata.mfs
6) Modify /etc/hosts to bind the host name mfsmaster to the IP 192.168.40.140:
# vi /etc/hosts
192.168.40.140 mfsmaster
7) Start the services and set them to start at boot:
/usr/sbin/mfsmaster start
/usr/sbin/mfscgiserv (starts the CGI monitoring service bundled with MFS, which reports MooseFS's running status on port 9425; you can browse to http://192.168.40.140:9425 through the web interface)
# vi /etc/rc.local (set to start at boot)
/usr/local/mfs/sbin/mfsmaster start
/usr/local/mfs/sbin/mfscgiserv
8) Check the running processes:
# ps aux | grep mfs
# netstat -natlp
5. Master backup server (metalogger) installation and configuration
1) Install the same way as the master: create the mfs user and group, then compile and install.
2) Generate and edit the configuration file:
# cp /etc/mfsmetalogger.cfg.dist /etc/mfsmetalogger.cfg
# vi /etc/mfsmetalogger.cfg
# WORKING_USER = daemon
# WORKING_GROUP = daemon
# SYSLOG_IDENT = mfsmetalogger
# LOCK_MEMORY = 0
# NICE_LEVEL = -19
# DATA_PATH = /var/mfs
BACK_LOGS = 24 (total number of backup changelogs kept is 24; beyond that the logs are rotated)
META_DOWNLOAD_FREQ = 1 (download the metadata from the master every hour)
# MASTER_RECONNECTION_DELAY = 5
MASTER_HOST = 192.168.40.140 (IP address of the master host)
# MASTER_PORT = 9419 (metadata server port to connect to)
# MASTER_TIMEOUT = 60
# deprecated, to be removed in MooseFS 1.7:
# LOCK_FILE = /var/run/mfs/mfsmetalogger.lock
3) Modify /etc/hosts to bind the host name mfsmaster to the IP address 192.168.40.140:
# vi /etc/hosts
192.168.40.140 mfsmaster
4) Start the service and set it to start at boot (# vi /etc/rc.local):
/usr/sbin/mfsmetalogger start
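To confirm the metalogger is actually pulling data from the master, one can list its data directory. This is a sketch: the exact path depends on DATA_PATH above (with the compile options used here it would be /var/lib/mfs), and the file names follow the standard metalogger naming:
# ls -l /var/lib/mfs/
changelog_ml.0.mfs (changelog replicated from the master in real time)
metadata_ml.mfs.back (metadata downloaded at each META_DOWNLOAD_FREQ interval)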
6. Chunk server (storage block server) installation and configuration (identical on both chunk servers)
1) Create the mfs user and group:
# groupadd mfs
# useradd -g mfs mfs
2) Compile and install; installation of the master module can be disabled:
# ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster
# make && make install
3) Generate the configuration files required by the chunk server:
# cp /etc/mfschunkserver.cfg.dist /etc/mfschunkserver.cfg
# cp /etc/mfshdd.cfg.dist /etc/mfshdd.cfg (mfshdd.cfg configures the location of the shared space offered to clients)
# vi /etc/mfschunkserver.cfg
# WORKING_USER = mfs
# WORKING_GROUP = mfs
# SYSLOG_IDENT = mfschunkserver
# LOCK_MEMORY = 0
# NICE_LEVEL = -19
# DATA_PATH = /var/lib/mfs
# MASTER_RECONNECTION_DELAY = 5
# BIND_HOST = *
MASTER_HOST = 192.168.40.140 (IP address of the master metadata server to connect to)
# MASTER_PORT = 9420
# MASTER_TIMEOUT = 60
# CSSERV_LISTEN_HOST = *
# CSSERV_LISTEN_PORT = 9422 (used to listen for connections from other data storage servers for data replication, when multiple chunkservers are deployed)
# HDD_CONF_FILENAME = /etc/mfshdd.cfg (path of the configuration file listing the disk space allocated to MFS)
# HDD_TEST_FREQ = 10
# deprecated, to be removed in MooseFS 1.7:
# LOCK_FILE = /var/run/mfs/mfschunkserver.lock
BACK_LOGS = 24
# CSSERV_TIMEOUT = 5
BACK_LOGS explanation:
Metadata normally consists of two parts:
3.1) The main metadata file, metadata.mfs, which is named metadata.mfs.back while mfsmaster is running.
3.2) The changelogs, changelog.*.mfs, which store the file changes of the past N hours (the value of N is set by the BACK_LOGS parameter).
The main metadata file needs to be backed up regularly; the backup frequency depends on how many hours of changelogs are kept. The metadata changelogs should be replicated automatically in real time. Since MooseFS 1.6.5, both tasks are performed by the mfsmetalogger daemon.
# vi /etc/mfshdd.cfg
/data (specifies that the shared space used for client mounts is /data)
Note: it is recommended to set aside dedicated space on the chunk server, preferably a separate hard disk or a RAID volume, and each storage location should be no smaller than 2 GB (the chunks occupy a certain amount of space when the disk is initialized).
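Before starting the chunkserver, the storage path listed in mfshdd.cfg must exist and be writable by the mfs user (the permission requirement is restated in the test section below). A minimal sketch:
# mkdir -p /data
# chown -R mfs:mfs /data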
4) Modify /etc/hosts to bind the host name mfsmaster to the IP address 192.168.40.140:
# vi /etc/hosts
192.168.40.140 mfsmaster
5) Start the service and set it to start at boot (# vi /etc/rc.local):
/usr/sbin/mfschunkserver start
chunkserver working directory: /data
lockfile created and locked
initializing mfschunkserver modules ...
scanning folder /data/ ...
/data/: 0 chunks found
scanning complete
main server module: listen on *:9422
no charts data file - initializing empty charts
mfschunkserver daemon initialized properly
7. MFS client installation and configuration
1) Create the mfs user:
# useradd -s /sbin/nologin mfs
2) Install FUSE (the MFS client mounts the master server through the FUSE kernel interface):
# ./configure --prefix=/usr && make && make install
3) Compile and install the MFS client, cancelling installation of the master and chunk modules and enabling the mfsmount module:
# ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver --enable-mfsmount
# make && make install
Make sure the fuse module is loaded into the kernel:
# modprobe fuse
4) Modify /etc/hosts to bind the host name mfsmaster to the IP address 192.168.40.140:
# vi /etc/hosts
192.168.40.140 mfsmaster
5) Create the local mount points (with read/write permission for the mfs user and group) and mount the MFS volume from the master:
# mkdir /mnt/mfs
# mkdir /mnt/mfsmeta
# /usr/bin/mfsmount /mnt -H 192.168.40.140
# /usr/bin/mfsmount /mfsmeta -H 192.168.40.140 -o mfsmeta (mounts the trash/meta directory)
# ls /mnt/mfsmeta
reserved trash
# mount (check the MFS mounts)
mfs#192.168.40.140:9421 on /mnt type fuse (rw,nosuid,nodev,allow_other,default_permissions)
mfsmeta#192.168.40.140:9421 on /mfsmeta type fuse (rw,nosuid,nodev,allow_other,default_permissions)
Add the mounts to /etc/rc.local so MFS is mounted automatically at boot:
/usr/bin/mfsmount /mnt -H 192.168.40.140
/usr/bin/mfsmount /mfsmeta -H 192.168.40.140 -o mfsmeta
6) Check the partition status with df -h:
mfs#192.168.40.185:9421  101G  1.9G  99G  2%  /mnt
7) Replica settings with the mfsmount tools:
mfssetgoal 2 /mnt (sets the number of replicas to 2)
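A quick end-to-end check after mounting; a sketch, with the file name purely illustrative:
# cp /etc/hosts /mnt/testfile
# mfscheckfile /mnt/testfile
(should report chunks with 2 copies once both chunk servers hold the data)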
III. Post-Maintenance
1. Online capacity expansion can be performed without stopping the service. When a chunk server is added, data is automatically synchronized to the new chunk server to achieve data balancing; the master schedules this automatically and redistributes the data among the chunk servers.
2. Master-slave switchover
Master/slave switchover consists of two steps: first, restore the master from the metalogger; second, adjust the chunk servers and clients accordingly.
2.1 Restoring the master from the metalogger
1) The metalogger regularly downloads the metadata file from the master and records the changelog in real time. However, it is worth examining how real-time this "real time" actually is. The combination of downloading metadata and recording changelogs is somewhat like the SFRD client's daily baseline download plus incremental imports.
2) After the master fails, run the mfsmetarestore command to convert the baseline and incremental data on the metalogger into the metadata required by the master, and then start mfsmaster. The master and metalogger can be deployed on the same machine or on different machines.
3) On the metalogger:
cd /home/xxxx/local/mfs/sbin
./mfsmetarestore -a
./mfsmaster
4) Description:
The metalogger server needs to back up the master's two configuration files. Because the configuration files do not change often, they can be synchronized with a scheduled script (see the sketch after this list).
Before the metalogger has downloaded the metadata, it cannot take over for the master in use; in that case the mfsmetarestore program fails to run.
mfsmetarestore uses the metadata regularly downloaded by the metalogger plus the changelogs to restore the information about the entire MFS that the master held when it failed.
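A minimal cron-based sketch of the configuration file synchronization mentioned above; the hourly schedule and the /etc/mfs/ destination are assumptions, while the source host is the master from this article:
# crontab -e
0 * * * * rsync -a 192.168.40.140:/etc/mfsmaster.cfg /etc/mfs/
0 * * * * rsync -a 192.168.40.140:/etc/mfsexports.cfg /etc/mfs/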
2.2 Adjusting the chunk servers and clients
1) For the client, you need to umount the MFS partition and rerun mfsmount against the new master's IP address. However, if the master fails and you (1) restart the server, (2) use a metalogger on the same machine to restore the master data, and (3) start the master, then the client does not need to rerun mfsmount manually, because mfsmount retries automatically.
2) For the chunk servers, modify the master IP address in each configuration file and restart them one by one. However, if the master fails and you (1) restart the server, (2) use a metalogger on the same machine to restore the master data, and (3) start the master, then the chunk servers do not need to be restarted; the master detects the chunk servers automatically.
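When the master IP does change, the chunk-side edit in 2) can be scripted; a sketch, assuming the MASTER_HOST line shown in the chunkserver configuration earlier and using the metalogger host from this article as the new master:
# sed -i 's/^MASTER_HOST = .*/MASTER_HOST = 192.168.40.185/' /etc/mfschunkserver.cfg
# /usr/sbin/mfschunkserver restart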
2.3 Metalogger precautions
1) The open-file limit on the server where the metalogger is deployed must be at least 5000 (see the sketch after this list).
2) The metalogger does not download the metadata at startup; it waits until the first download point in its download cycle. The metalogger downloads the metadata at 10 minutes 30 seconds past the hour, at intervals that are an integer multiple of one hour.
3) Because the metalogger does not download the metadata at startup but only at the download time described in 2), the master and metalogger must both stay healthy during the first hour after startup to guarantee correctness.
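One way to raise the open-file limit from note 1); a sketch, with the value taken from the text and the mfs account being the one the metalogger runs as:
# ulimit -n 5000 (current shell only)
Or persistently, in /etc/security/limits.conf:
mfs soft nofile 5000
mfs hard nofile 5000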
IV. Test and Analysis
1. Chunk server test environment:
chunk01: 192.168.40.183
chunk02: 192.168.40.184
mfsmaster: 192.168.40.140
mfsclient: 192.168.40.144
chunk01 partitions:
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/xvda1  13G   1.9G  9.7G   17%   /
/dev/xvdb1  59G   206M  55G    1%    /data
tmpfs       129M  0     129M   0%    /dev/shm
/dev/xvdc1  122M  5.8M  110M   6%    /mnt/chunk1
/dev/xvdd1  122M  5.8M  110M   6%    /mnt/chunk2
/dev/xvde1  122M  5.8M  110M   6%    /mnt/chunk3
chunk02 partitions:
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/xvda1  8.7G  1.9G  6.5G   23%   /
/dev/xvdb1  49G   859M  46G    2%    /data
tmpfs       129M  0     129M   0%    /dev/shm
/dev/xvdc1  494M  11M   458M   3%    /mnt/chunks1
/dev/xvdd1  494M  11M   458M   3%    /mnt/chunks2
mfsclient:
Filesystem                       Size  Used  Avail  Use%  Mounted on
/dev/mapper/VolGroup00-LogVol00  19G   8.0G  9.4G   46%   /
/dev/sda1                        99M   14M   81M    15%   /boot
mfs#192.168.40.140:9421          47G   1.5G  45G    4%    /mnt
Note: the number of replicas is set with mfsrsetgoal 2 /mnt.
1) After uploading data from the MFS client, chunk01 and chunk02 occupy 6% and 3% of their space respectively.
2) Unmount the two chunks1/chunks2 partitions on chunk02, reconfigure the mfshdd space of both chunk servers to /data (the mfs user must have read/write permission there), and remount. Note: after the storage partitions are remounted, restart the chunk service so they are reinitialized. Result: the data is redistributed, and the usage of chunk01 and chunk02 is 7.7%. The current mounts and usage:
chunk01:
Filesystem  Size  Used   Avail  Use%  Mounted on
/dev/xvdb1  49G   1008M  45G    3%    /data
chunk02:
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/xvdb1  59G   995M  55G    2%    /data
3) Shut down the MFS master with /usr/local/sbin/mfsmaster -s. If you kill the process directly, the required files will be missing at the next startup and the master will not start normally; it can be recovered with mfsmetarestore -a (the PID directory named in the configuration will have been deleted; rebuild the directory and grant permissions).
4) Delete all data from the MFS client. By default the reclaim time for deleted files is one day (86400 seconds), so garbage collection may not have finished yet and the storage still shows a large proportion in use. Solution 1: set the file deletion reclaim time to 600 s and monitor the storage capacity. After testing by other MFS testers, with the reclaim time set to 300 s the capacity can be fully reclaimed. Solution 2: periodically delete the files in the trash directory of mfsmeta by hand. A separate mfsmeta file system must be mounted for this; in particular it contains the directory /trash (holding information about deleted files that can still be restored) and /trash/undel (used to recover files). Only the administrator (user with UID 0, usually root) has permission to access mfsmeta.
Use mfsrsettrashtime -r 600 /mnt (-r operates on the entire directory tree); in the current test it was changed to 120 seconds. Mounts and usage after deletion:
chunk01:
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/xvdb1  49G   220M  46G    1%    /data
chunk02:
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/xvdb1  59G   207M  55G    1%    /data
mfsclient:
mfs#192.168.40.140:9421  101G  0  101G  0%  /mnt
Note: /data also holds its original local data.
5) Re-upload the data from the client (787M under /mnt/) and check the chunk servers:
chunk01: /dev/xvdb1 49G 1008M 45G 3% /data
chunk02: /dev/xvdb1 59G 995M 55G 2% /data
mfsclient: mfs#192.168.40.140:9421 101G 1.6G 99G 2% /mnt
The storage analysis above is correct: after the upload chunk01 and chunk02 occupy 2003 MB in total; minus the 427 MB occupied before, 1576 MB was consumed. The uploaded data is 787 MB, and with 2 replicas the expected total storage is 787 MB * 2 = 1574 MB.
6) Stop the chunk02 server and upload a 1.8 M www_error_log from the client. Checking the system log /var/log/message on the master shows storage records only for chunk01; checking the space confirms the data is stored on chunk01. Repeat the upload, delete, and download operations, observe chunk01's storage changes, and view the storage change logs on the master machine in /var/lib/mfs/changelog. Note: during the test single files were uploaded, and the read speed did not exceed 10 MB/s. With chunk02 down, the remaining chunk server serves storage and reads normally.
7) Restore the service on the chunk02 server. The HDD space manager rescans the disk (scanning complete); then upload data from the client and check the logs and the chunk space: chunk02 is storing data again.
2. Master/slave switchover
Since version 1.6, an mfsmaster failure can be recovered from the changelog_ml.*.mfs log files generated by mfsmetalogger together with metadata.mfs.back, using the mfsmetarestore command. After recovery the MFS file system information is exactly the same as before.
2.1 Restoring the master from the metalogger
1) After the metalogger service starts, it regularly downloads the metadata file from the master and records the changelog in real time. Hourly synchronization can be set with the configuration parameter META_DOWNLOAD_FREQ = 1.
2) Back up the master's two configuration files, mfsmaster.cfg and mfsexports.cfg, to the metalogger. Because the configuration files essentially change only once, the first copy was tested with scp 192.168.40.140:/etc/mfsmaster.cfg /etc/mfs/; afterwards rsync synchronizes the configuration files through a scheduled script.
3) Take down the master (192.168.40.140) and run on the metalogger server (192.168.40.185):
# mfsmetarestore -a
or
# mfsmetarestore -a -d /var/lib/mfs (specifies the path of the master data store)
This converts the baseline and incremental data on the metalogger (the regularly downloaded metadata and changelogs) into the metadata the master needs, restoring the information about the entire MFS that the master held when it failed.
# /usr/sbin/mfsmaster start (start the master service)
2.2 Handling the chunk servers and clients
1) For the client, umount the MFS partition and rerun mfsmount against the new master's IP address.
2) Modify the master IP specified in the two chunk servers' configuration files and restart the chunk service.
3. Test notes
1) Disk planning: ideally each disk is its own mount point, so that if a disk fails, only the data on that disk needs to be migrated.
Distributing storage across multiple disks also improves read/write speed.
2) Hot switchover between master and slave nodes is not currently supported; a corosync + pacemaker + mfsmaster + metalogger solution is planned for later testing.
3) If the number of files and chunk blocks is large, it is recommended to increase CHUNKS_LOOP_TIME to reduce the load on the master.
4) "Currently MooseFS imposes a maximum file size limit of 2 TiB; however, we are considering removing this limitation in the near future" (the volume limit is currently 16 EiB). A single storage location should preferably be larger than 2 GB: I tried 512 M on two chunk servers and the test environment did not behave correctly, after which I fully appreciated that sentence on the official website.
5) The /data I mounted on the chunk servers already contained data I had forgotten to delete after an SVN test. When I checked whether the data sizes on the client and the two chunk servers added up, with one or two replicas, the numbers looked wrong. Later I went through the chunk block files from 00 to ff, found several extra SVN directories, and realized I had wasted a lot of time.
6) "Every chunkserver sends its own disk usage increased by 256 MB for each used partition/HDD, and a sum of these master sends to the client as total disk usage. If you have 3 chunkservers with 7 HDD each, your disk usage will be increased by 3*7*256 MB (about
5 GB). In practice this is not usually a concern; for example when you have 150 TB of HDD space. There is one other thing: if you use disks exclusively for MooseFS on chunkservers, df will show correct disk usage, but if you have other data on your MooseFS disks, df will count your own files too. If you want to see usage of your MooseFS files, use the mfsdirinfo command." In my setup the chunk disk initialization occupied 2*2*256 MB of space, which is why the monitoring system displayed more than 2 GB. I had kept assuming the chunks or the master had a scheduling problem and could not find the cause.
7) MFS caches the file system structure in the master's memory, so the more files there are, the more memory the master consumes: 8 GB corresponds to about 25 million files, and 0.2 billion files require about 64 GB of memory.
8) The storage directories under each mount point on a chunk server are 00 through ff, 256 in total. Each chunk file is at most 64 MB; when that size is exceeded, the next chunk file is generated, cycling through the directories.
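As a sanity check of the memory figures in 7): 8 GB for 25,000,000 files works out to roughly 8 * 1024^3 / 25,000,000, about 340 bytes of master RAM per file, and 200,000,000 files * 340 bytes is about 64 GB, so the two data points quoted above are consistent at roughly 300-350 bytes per file system object.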
V. Common Maintenance
1. Mounting the file system
mfsmount mountpoint [-d] [-f] [-s] [-m] [-n] [-p] [-H MASTER] [-P PORT] [-S PATH] [-o OPT[,OPT...]]
-H MASTER: the IP address of the management (master) server.
-P PORT: the port number of the management (master) server, matching the MATOCU_LISTEN_PORT variable in the mfsmaster.cfg configuration file. If the master uses the default port number, it does not need to be specified.
-S PATH: the subdirectory of the MFS volume to mount. The default is /, i.e. the entire MFS volume is mounted.
mountpoint: the previously created directory on which to mount MFS.
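For example, mounting only a subdirectory of the volume while naming the master and port explicitly; a sketch combining the options above, where the /web subdirectory is purely illustrative:
mfsmount /mnt/mfs -H mfsmaster -P 9421 -S /web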
When starting mfsmount, use the -m or -o mfsmeta option to mount the auxiliary file system mfsmeta. Its purpose is to recover files accidentally deleted from the MooseFS volume, or files removed to free disk space, for as long as they are still within the trash retention period. For example:
mfsmount -m /mnt/mfsmeta
Note that to mount mfsmeta, the following entry must be added to the mfsexports.cfg file on the mfsmaster:
*  .  rw
Check the remaining space of the MooseFS volume with df:
# df -h | grep mfs
It is important to remember that each file can be stored in multiple copies, in which case it occupies correspondingly more space than the file itself. In addition, deleted files are kept in a "trash bin" for their trashtime, so they also occupy space, and how much depends on their number of copies.
2. Checking file storage status. goal is the number of copies of a file; it can be checked with the mfsgetgoal command and changed with mfssetgoal.
# /usr/bin/mfsgetgoal /mnt
/mnt: 2
# /usr/bin/mfssetgoal 3 /mnt
/mnt: 3
The analogous mfsgetgoal -r and mfssetgoal -r operate recursively on an entire directory tree:
# mfsgetgoal -r /mnt/mfs-test/test2
/mnt/mfs-test/test2:
files with goal 2 : 36
directories with goal 2 : 1
# mfssetgoal -r 3 /mnt/mfs-test/test2
/mnt/mfs-test/test2:
inodes with goal changed: 37
inodes with goal not changed: 0
inodes with permission denied: 0
You can run the mfscheckfile and mfsfileinfo commands to view the actual number of copies:
# mfscheckfile /mnt/mfs-test/test1
/mnt/mfs-test/test1:
3 copies: 1 chunks
3. Setting the reclaim (trash) time for deleted files:
# /usr/bin/mfssettrashtime -r 600 /mnt (600 s; -r applies recursively to the directory tree; it can also be set for a single file; the default is one day, 86400 s)
4. Backing up the metadata. On the master, the metadata lives under /var/lib/mfs. The metadata.mfs file is the current metadata, updated hourly by default; the previous file is kept with the .back suffix, alongside the changelog.*.mfs logs. Of course, using the MooseFS metalogger for backup and recovery is recommended.
5. Safely stopping the MooseFS cluster:
Unmount the MooseFS file system on all clients (using umount or an equivalent command).
Stop the chunkserver processes with the mfschunkserver -s command.
Stop the metalogger process with the mfsmetalogger -s command.
Stop the master process with the mfsmaster -s command.
Recovering the metadata:
master: mfsmetarestore -a (automatic recovery mode)
master: mfsmetarestore -m metadata.mfs.back -o metadata.mfs changelog.*.mfs (manual mode, specifying the files yourself: take the final metadata file and merge the changelogs)
6. Viewing file information:
# /usr/bin/mfsfileinfo /mnt/pscs2.rar
/mnt/pscs2.rar:
chunk 0: 0000000000000027_00000002 / (id:39 ver:2)
copy 1: 192.168.40.183:9422
copy 2: 192.168.40.184:9422
7. Viewing directory information:
# /usr/bin/mfsdirinfo /mnt
/mnt:
inodes: 12
directories: 1
files: 11
chunks: 24
length: 1019325511
size: 1019863040
realsize: 2039726080
8. Maintenance tool list:
mfsappendchunks, mfscheckfile, mfsdeleattr, mfsdirinfo, mfsfileinfo, mfsgeteattr, mfsgetgoal, mfsgettrashtime, mfsmakesnapshot, mfsmount, mfsrgetgoal, mfsrgettrashtime, mfsrsetgoal, mfsrsettrashtime, mfsseteattr, mfssetgoal, mfssettrashtime, mfssnapshot, mfstools
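Of the tools above, mfsmakesnapshot is worth a note: it creates an instant copy of files inside the same MooseFS volume, sharing chunks until they are modified. A sketch with purely illustrative paths:
# mfsmakesnapshot /mnt/src-dir /mnt/snapshot-of-src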