Configuration of the MooseFS Distributed File System

MooseFS roles:
Management server (master): vm3
Metalogger server: vm6
Data servers (chunkservers): vm5 and vm6 (two servers, for load balancing)
Client: a separate machine used to mount the file system

1. Build the RPM packages for easy deployment:
[root@vm3 ~]# yum install -y fuse-devel zlib-devel gcc rpm-build.x86_64
[root@vm3 ~]# mv mfs-1.6.27-5.tar.gz mfs-1.6.27.tar.gz
[root@vm3 ~]# rpmbuild -tb mfs-1.6.27.tar.gz
(The tarball is renamed because rpmbuild -tb expects its name to match the Name-Version fields of the embedded spec file.)
[root@vm3 ~]# cd rpmbuild/
[root@vm3 rpmbuild]# ls
BUILD  BUILDROOT  RPMS  SOURCES  SPECS  SRPMS
[root@vm3 rpmbuild]# cd RPMS/x86_64/
[root@vm3 x86_64]# ls
mfs-cgi-1.6.27-4.x86_64.rpm          mfs-client-1.6.27-4.x86_64.rpm
mfs-cgiserv-1.6.27-4.x86_64.rpm      mfs-master-1.6.27-4.x86_64.rpm
mfs-chunkserver-1.6.27-4.x86_64.rpm  mfs-metalogger-1.6.27-4.x86_64.rpm
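Before copying packages around, you can sanity-check what was built by querying an RPM's metadata (an optional quick check; the exact header output depends on your build environment):

[root@vm3 x86_64]# rpm -qpi mfs-master-1.6.27-4.x86_64.rpm | head -n 5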

2. Master server installation:
[root@vm3 x86_64]# rpm -ivh mfs-master-1.6.27-4.x86_64.rpm mfs-cgi*
Preparing...                ########################################### [100%]
   1:mfs-cgi                ########################################### [ 33%]
   2:mfs-cgiserv            ########################################### [ 67%]
   3:mfs-master             ########################################### [100%]
[root@vm3 x86_64]# cd /etc/mfs/
[root@vm3 mfs]# ls
mfsexports.cfg.dist  mfsmaster.cfg.dist  mfstopology.cfg.dist
[root@vm3 mfs]# cp mfsexports.cfg.dist mfsexports.cfg
[root@vm3 mfs]# cp mfsmaster.cfg.dist mfsmaster.cfg
[root@vm3 mfs]# cp mfstopology.cfg.dist mfstopology.cfg
[root@vm3 mfs]# vim mfsexports.cfg
# Allow "meta".
172.25.254.0/24 . rw
This line grants the 172.25.254.0/24 network segment read-write access.
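Each entry in mfsexports.cfg has the form ADDRESS DIRECTORY [OPTIONS]; a lone "." as the directory denotes the meta file system used later for trash access. A hypothetical example entry exporting the whole tree read-write to the same subnet (the options shown are standard mfsexports options; adjust to your needs):

172.25.254.0/24  /  rw,alldirs,maproot=0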
[root@vm3 mfs]# vim /etc/hosts
172.25.254.3 vm3.example.com mfsmaster
[root@vm3 mfs]# cd /var/lib/mfs/
[root@vm3 mfs]# cp metadata.mfs.empty metadata.mfs
[root@vm3 mfs]# chown -R nobody /var/lib/mfs/
[root@vm3 mfs]# mfsmaster start
[root@vm3 mfs]# mfsmaster test
mfsmaster pid: 6643
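To confirm the master is listening on its three ports (9419 for metaloggers, 9420 for chunkservers, 9421 for clients), a quick check such as the following should show all three sockets (output omitted here):

[root@vm3 mfs]# netstat -ntlp | grep mfsmaster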
[root@vm3 mfs]# cd /usr/share/mfscgi/
[root@vm3 mfscgi]# chmod +x *.cgi
[root@vm3 mfscgi]# mfscgiserv    # start the CGI monitoring service

You can now open http://172.25.254.3:9425/ in a browser to see the full state of this MooseFS system, including the master and the chunkservers.
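If no browser is handy, a quick request from the shell confirms the CGI server is responding (a hypothetical check; it should print 200 if the service is up):

[root@vm3 mfscgi]# curl -s -o /dev/null -w "%{http_code}\n" http://172.25.254.3:9425/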

3. Configure the data storage servers (chunkservers) (vm5 and vm6):
[root@vm3 x86_64]# pwd
/root/rpmbuild/RPMS/x86_64
[root@vm3 x86_64]# scp mfs-chunkserver-1.6.27-4.x86_64.rpm 172.25.254.5:
[root@vm3 x86_64]# scp mfs-chunkserver-1.6.27-4.x86_64.rpm 172.25.254.6:

Switch to vm5:
[root@vm5 ~]# rpm -ivh mfs-chunkserver-1.6.27-4.x86_64.rpm
Preparing...                ########################################### [100%]
   1:mfs-chunkserver        ########################################### [100%]
[root@vm5 ~]# cd /etc/mfs/
[root@vm5 mfs]# ls
mfschunkserver.cfg.dist  mfshdd.cfg.dist
[root@vm5 mfs]# cp mfschunkserver.cfg.dist mfschunkserver.cfg
[root@vm5 mfs]# cp mfshdd.cfg.dist mfshdd.cfg
[root@vm5 mfs]# mkdir /var/lib/mfs
[root@vm5 mfs]# chown nobody /var/lib/mfs/
[root@vm5 mfs]# vim mfshdd.cfg
# mount points of HDD drives
#
#/mnt/hd1
#/mnt/hd2
# etc.
/mnt/chunk1
[root@vm5 mfs]# mkdir /mnt/chunk1
[root@vm5 mfs]# chown nobody /mnt/chunk1
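In this test setup the chunk directory lives on the root file system; in production each entry in mfshdd.cfg would normally be a dedicated disk or partition. A minimal sketch, assuming a hypothetical spare disk /dev/vdb:

[root@vm5 mfs]# mkfs.ext4 /dev/vdb          # format the spare disk (hypothetical device)
[root@vm5 mfs]# mount /dev/vdb /mnt/chunk1  # mount it at the chunk directory
[root@vm5 mfs]# chown nobody /mnt/chunk1    # the chunkserver runs as nobody and must own it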
[root@vm5 mfs]# mfschunkserver
working directory: /var/lib/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: path to scan: /mnt/chunk1/
hdd space manager: start background hdd scanning (searching for available chunks)
main server module: listen on *:9422
[root@vm5 mfs]# vim /etc/hosts
Add: 172.25.254.3 mfsmaster

Perform the same operations on vm6, using a different chunk directory:
[root@vm6 mfs]# vim mfshdd.cfg
# mount points of HDD drives
#
#/mnt/hd1
#/mnt/hd2
# etc.
/mnt/chunk2

4. Client mounting and use
[root@vm3 x86_64]# scp mfs-client-1.6.27-4.x86_64.rpm 172.25.254.1:
[root@benberba ~]# rpm -ivh mfs-client-1.6.27-4.x86_64.rpm
Preparing...                ########################################### [100%]
   1:mfs-client             ########################################### [100%]
[root@benberba ~]# cd /etc/mfs/
[root@benberba mfs]# ls
mfsmount.cfg.dist
[root@benberba mfs]# cp mfsmount.cfg.dist mfsmount.cfg
[root@benberba mfs]# mkdir /mnt/mfs
[root@benberba mfs]# vim mfsmount.cfg
mfsmaster=mfsmaster
/mnt/mfs
[root@benberba mfs]# vim /etc/hosts
172.25.254.3 mfsmaster
[root@benberba mfs]# mfsmount
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
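The same mount can also be made without editing mfsmount.cfg, by passing the mount point and master on the command line (a minimal sketch using mfsmount's -H option):

[root@benberba ~]# mfsmount /mnt/mfs -H mfsmaster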
MFS testing:
Create two directories under the MFS mount point and set different numbers of file copies (goals):
[root@benberba mfs]# cd /mnt/mfs/
[root@benberba mfs]# mkdir dir1
[root@benberba mfs]# mkdir dir2
[root@benberba mfs]# mfssetgoal -r 2 dir2    # store files in dir2 in two copies; the default is one
[root@benberba mfs]# cp /etc/passwd dir1
[root@benberba mfs]# cp /etc/passwd dir2
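The goals can be verified with the companion query tool mfsgetgoal; expected output along these lines:

[root@benberba mfs]# mfsgetgoal dir1
dir1: 1
[root@benberba mfs]# mfsgetgoal dir2
dir2: 2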

View file information:
[root@benberba mfs]# mfsfileinfo dir1/passwd
dir1/passwd:
    chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
        copy 1: 172.25.254.6:9422
[root@benberba mfs]# mfsfileinfo dir2/passwd
dir2/passwd:
    chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
        copy 1: 172.25.254.5:9422
        copy 2: 172.25.254.6:9422
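mfscheckfile summarizes how many chunks of a file have how many valid copies; a quick check (output format may vary slightly between versions):

[root@benberba mfs]# mfscheckfile dir2/passwd
dir2/passwd:
 chunks with 2 copies:      1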

Stop the chunkserver on vm6 ([root@vm6 mfs]# mfschunkserver stop) and check the file information again:
[root@benberba mfs]# mfsfileinfo dir1/passwd
dir1/passwd:
    chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
        no valid copies !!!
[root@benberba mfs]# mfsfileinfo dir2/passwd
dir2/passwd:
    chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
        copy 1: 172.25.254.5:9422
After the chunkserver on vm6 is started again ([root@vm6 mfs]# mfschunkserver start), the file returns to normal:
[root@benberba mfs]# mfsfileinfo dir2/passwd
dir2/passwd:
    chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
        copy 1: 172.25.254.5:9422
        copy 2: 172.25.254.6:9422

Restore an accidentally deleted file:
[root@benberba mfs]# rm -f dir1/passwd
[root@benberba mfs]# mfsgettrashtime dir1/
dir1/: 86400
A deleted file is kept in the trash for a quarantine period, which can be queried with mfsgettrashtime and changed with mfssettrashtime. The unit is seconds and the default is 86400 seconds (one day).
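For example, to shorten the trash time of dir1 to one hour (a hypothetical value) and confirm it:

[root@benberba mfs]# mfssettrashtime 3600 dir1/
[root@benberba mfs]# mfsgettrashtime dir1/
dir1/: 3600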
[root@benberba mfs]# mkdir /mnt/mfsmeta
[root@benberba mfs]# mfsmount -m /mnt/mfsmeta/ -H mfsmaster
mfsmaster accepted connection with parameters: read-write,restricted_ip
[root@benberba mfs]# cd /mnt/mfsmeta/trash
[root@benberba trash]# ls
00000004|dir1|passwd  undel
[root@benberba trash]# mv 00000004\|dir1\|passwd undel/
The passwd file is now restored to the dir1 directory:
[root@benberba ~]# mfsfileinfo /mnt/mfs/dir1/passwd
/mnt/mfs/dir1/passwd:
    chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
        copy 1: 172.25.254.6:9422
Besides trash and trash/undel, the MFSMETA directory contains a third directory, reserved, which holds files that have been deleted but are still held open by some user. Once all the open file handles are closed, the files disappear from reserved and their data is deleted immediately. This directory cannot be operated on manually.

To stop the MooseFS cluster safely, we recommend the following order:
# umount -l /mnt/mfs       # unmount the MooseFS file system on the client
# mfschunkserver stop      # stop the chunkserver processes
# mfsmetalogger stop       # stop the metalogger process
# mfsmaster stop           # stop the master server process
To start the MooseFS cluster safely:
# mfsmaster start          # start the master process
# mfschunkserver start     # start the chunkserver processes
# mfsmetalogger start      # start the metalogger process
# mfsmount                 # mount the MooseFS file system on the client
In practice, no anomaly was observed regardless of the order of starting or stopping; once the master is up, the metalogger, chunkservers, and clients all reconnect to it automatically.
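The recommended shutdown order is easy to capture in a small helper; a minimal sketch (hypothetical script, assuming passwordless SSH from the client to the hosts used in this article):

#!/bin/bash
# stop-mfs.sh - stop the MooseFS cluster in the recommended order
umount -l /mnt/mfs                        # client: unmount first
ssh 172.25.254.5 mfschunkserver stop      # stop each chunkserver
ssh 172.25.254.6 mfschunkserver stop
ssh 172.25.254.6 mfsmetalogger stop       # stop the metalogger (runs on vm6)
ssh 172.25.254.3 mfsmaster stop           # stop the master last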

Fault tests:
Client failure or network disconnection has no effect on the MFS system.
If the mfsmount process on the client is accidentally killed (killall -9 mfsmount), you must umount /mnt/mfs first and only then run mfsmount again; otherwise the mount point reports:
/mnt/mfs: Transport endpoint is not connected
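A sketch of that recovery, lazily unmounting the stale FUSE mount point before remounting:

[root@benberba ~]# umount -l /mnt/mfs    # lazy-unmount the stale mount
[root@benberba ~]# mfsmount              # remount using /etc/mfs/mfsmount.cfg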

Chunkserver:
Disconnecting the network or killing the mfschunkserver process has no impact on the MFS system.
Power failure:
# If no file is being transferred, neither chunkserver is affected.
# If a file is being transferred but is stored in only one copy, the storage of that file is unaffected.
# With the goal set to two copies, power off chunker1 during the transfer and start it again after the transfer has completed: once chunker1 is back up, it automatically replicates the missing data blocks from chunker2, and file access is unaffected throughout.
# With the goal set to two copies, power off chunker1 during the transfer and start it again while the transfer is still in progress: once chunker1 is back up, the client transmits data to it again and the missing blocks are recreated.
As long as the two chunkservers are not down at the same time, file transfer and service availability are unaffected.

Master:
Disconnecting the network or killing the mfsmaster process has no impact on the MFS system.
A power failure of the master may lead to the following situations:
# If no file was being transferred, run mfsmetarestore -a after the server restarts to rebuild the metadata, then run mfsmaster start to restore the master service:
# mfsmetarestore -a
loading objects (files, directories, etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... ok
checking filesystem consistency ... ok
connecting files and chunks ... ok
store metadata into file: /var/lib/mfs/metadata.mfs
# mfsmaster start
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
mfstopology configuration file (/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
loading objects (files, directories, etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... ok
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 5
directory inodes: 3
file inodes: 2
chunks: 2
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
master server module: listen on *:9421
mfsmaster daemon initialized properly
# If a file was being transferred, the metadata may be repairable with /usr/local/mfs/sbin/mfsmetarestore -a:
# mfsmetarestore -a
loading objects (files, directories, etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... ok
checking filesystem consistency ... ok
connecting files and chunks ... ok
115: error: 32 (Data mismatch)
In that case the metadata cannot be restored and the master service cannot be started. An emergency workaround is to copy metadata.mfs.back to metadata.mfs and then start the master; the data that was being transferred at the time of the failure is lost.
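A sketch of that emergency recovery on the master, using the paths from earlier in this article:

# cd /var/lib/mfs
# cp metadata.mfs.back metadata.mfs    # fall back to the last metadata snapshot
# mfsmaster start                      # data in flight at crash time is lost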
