MooseFS Distributed File System Cluster Configuration


1. Management server (master server): manages the data storage servers, schedules file reads and writes, reclaims and recovers file space, and maintains multiple copies of data.

2. Metadata log server (changelog server): backs up the master server's change logs (it can run on the same host as the master). The log files are named changelog_ml.*.mfs; the log server can take over when the master server fails.

3. Data storage server (chunkserver): connects to the master server, follows the master's scheduling, provides storage space, and transfers data to and from clients.

4. Clients: mount the storage managed by the remote master server through the FUSE kernel interface; the shared file system then behaves like a local UNIX file system.

Read/write principles of the MFS file system:



MFS distributed file system setup:

System Environment:

RHEL 6.4

SELinux is disabled

iptables rules are flushed

I. Yum repository definition, used to resolve package dependency problems
# cat yum.repo
[base]
name=yum
baseurl=ftp://192.168.2.234/pub/rhel6.4
gpgcheck=0
[HA]
name=ha
baseurl=ftp://192.168.2.234/pub/rhel6.4/HighAvailability
gpgcheck=0
[LB]
name=LB
baseurl=ftp://192.168.2.234/pub/rhel6.4/LoadBalancer
gpgcheck=0
[Storage]
name=ST
baseurl=ftp://192.168.2.234/pub/rhel6.4/ResilientStorage
gpgcheck=0
[SFS]
name=FS
baseurl=ftp://192.168.2.234/pub/rhel6.4/ScalableFileSystem
gpgcheck=0
II. Host name resolution
# cat /etc/hosts
192.168.2.88 node1 mfsmaster
192.168.2.89 node2
192.168.2.90 node3
192.168.2.82 node4
192.168.2.85 node5
In this experiment, node1 is the master server,
node3 and node4 are the chunkservers,
and node5 is the client.
All nodes need the above preparation in place.

III. Installation preparation
# yum install rpm-build gcc make fuse-devel zlib-devel -y    # install the dependencies needed to compile (missing dependencies are also reported during the build)
# rpmbuild -tb mfs-1.6.27.tar.gz    # build the .tar.gz source into RPM packages; note that the tarball name format matters (rpmbuild -tb only accepts a properly versioned name)


# ls /root/rpmbuild/RPMS/x86_64/    # the generated RPM packages
mfs-cgi-1.6.27-2.x86_64.rpm           mfs-client-1.6.27-2.x86_64.rpm
mfs-cgiserv-1.6.27-2.x86_64.rpm       mfs-master-1.6.27-2.x86_64.rpm
mfs-chunkserver-1.6.27-2.x86_64.rpm   mfs-metalogger-1.6.27-2.x86_64.rpm

1. Master server installation:
# yum localinstall mfs-cgi-1.6.27-2.x86_64.rpm mfs-master-1.6.27-2.x86_64.rpm -y
Installing mfs-cgi enables web page monitoring.

Master server: main files and directories
/var/lib/mfs        data directory
  metadata.mfs      metadata file required at startup
/etc/mfs            main directory (stores the configuration files)
  mfsmaster.cfg     MFS main configuration file (defines the working user, group, and other parameters)
  mfsexports.cfg    controls the exported (mountable) directories and their permissions
  mfstopology.cfg   defines the MFS network topology

The configuration files work with their defaults and need no modification.
# chown -R nobody /var/lib/mfs    # remember to give the data directory to the user MFS runs as (nobody)
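For reference, the shipped mfsexports.cfg defaults look roughly like the following (treat the exact lines as an assumption for this setup; the .dist file documents all options):

```
# /etc/mfs/mfsexports.cfg -- sample entries (assumed defaults)
# allow every host to access the meta filesystem (trash/undelete)
*    .    rw
# allow every host to mount the root read-write, mapping root to uid/gid 0
*    /    rw,alldirs,maproot=0
```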

# mfsmaster start    # start the MFS master
# mfsmaster stop     # stop the MFS master

# netstat -antlpe    # mfsmaster listens on three ports: 9421 for client connections, 9420 for chunkservers, and 9419 for metaloggers

# cd /usr/share/mfscgi/
# chmod +x *.cgi    # give all CGI pages execute permission (so status can be checked on the web)
# mfscgiserv    # start the CGI monitoring server

http://192.168.2.88:9425/
View the MFS monitoring information here.

2. Chunkserver installation and configuration (node3 and node4)
# rpm -ivh mfs-chunkserver-1.6.27-2.x86_64.rpm
# cd /etc/mfs/
# cp mfschunkserver.cfg.dist mfschunkserver.cfg
# cp mfshdd.cfg.dist mfshdd.cfg
# vim mfshdd.cfg    # define where chunks are stored
/mnt/chunk    # directory that stores the clients' /mnt/mfs files

# mkdir /mnt/chunk
# mkdir /var/lib/mfs
# chown nobody /var/lib/mfs/
# chown nobody /mnt/chunk

# mfschunkserver start    # start the chunkserver (note: the mfsmaster host name must resolve)

# ls -a    # a hidden lock file is generated
.mfschunkserver.lock


3. Client installation and configuration:
# yum localinstall mfs-client-1.6.27-2.x86_64.rpm
# cp mfsmount.cfg.dist mfsmount.cfg
# vim mfsmount.cfg
Set the master host and the mount directory /mnt/mfs.
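As a sketch, the edited mfsmount.cfg might look like this (the one-option-per-line layout followed by the mount point follows the .dist template; treat it as an assumption):

```
# /etc/mfs/mfsmount.cfg -- sample (assumed layout)
# one mount option per line, then the default mount point
mfsmaster=mfsmaster
/mnt/mfs
```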
# mkdir /mnt/mfs
# mfsmount    # mount the client
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root    # mounted successfully

# df    # view the mounted devices
mfsmaster:9421   6714624   0   6714624   0%   /mnt/mfs

# ll -d /mnt/mfs/    # readable and writable by everyone after mounting
drwxrwxrwx 2 root root 0 Jun 8 10:29 /mnt/mfs/




MFS test:
# mkdir hello{1,2}
# ls
hello1 hello2

# mfsdirinfo hello1/
hello1/:
 inodes: 1
  directories: 1
  files: 0
 chunks: 0
 length: 0
 size: 0
 realsize: 0

# mfssetgoal -r 3 hello1/    # set the number of copies (goal)
hello1/:
 inodes with goal changed: 1
 inodes with goal not changed: 0
 inodes with permission denied: 0

# mfsgetgoal hello1/    # view the goal of a file or directory
hello1/: 3
# mfsgetgoal hello2
hello2: 1

# cp /etc/fstab hello1/
# cp /etc/passwd hello2/

# mfsfileinfo hello1/fstab
fstab:
 chunk 0: 000000000000000B_00000001 / (id:11 ver:1)
  copy 1: 192.168.2.82:9422
  copy 2: 192.168.2.90:9422

# mfscheckfile passwd

Test the storage relationship (with a chunkserver stopped):

# mfsfileinfo fstab
fstab:
 chunk 0: 000000000000000B_00000001 / (id:11 ver:1)
  copy 1: 192.168.2.90:9422
# mfsfileinfo ../hello2/passwd
../hello2/passwd:
 chunk 0: 000000000000000C_00000001 / (id:12 ver:1)
  no valid copies !!!


Client: accidentally deleting a file (delete /mnt/mfs/hello*/passwd)
# mfsmount -m /mnt/test/ -H mfsmaster    # mount the recovery (meta) filesystem from mfsmaster
mfsmaster accepted connection with parameters: read-write,restricted_ip
# mount

# cd /mnt/test/trash
# mfscheckfile passwd
# mv 00000005\|hello2\|passwd undel/
The file is restored directly to its previous MFS directory.
# umount /mnt/test/
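Put together, the undelete procedure is roughly the following (the trash path layout and the escaped entry name are from this example; the exact name differs per deletion):

```shell
# mount the MFSMETA filesystem from the master
mfsmount -m /mnt/test/ -H mfsmaster
# deleted files live under trash/, named NNNNNNNN|path|name
cd /mnt/test/trash
# moving the entry into undel/ restores it to its original MFS path
mv '00000005|hello2|passwd' undel/
umount /mnt/test/
```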


Chunkserver outage and recovery test:
# mfschunkserver stop

Copy a file again on the client:
# cp /etc/inittab /mnt/mfs/hello1
# mfsgetgoal hello1/fstab
# mfsgetgoal hello1/inittab

# mfsfileinfo inittab    # with only one chunkserver up at first, only one copy can be stored

Start the chunkserver again:
# mfschunkserver

# mfsfileinfo inittab    # check the copies; the count recovers up to the number of running chunkservers
inittab:
 chunk 0: 0000000000000006_00000001 / (id:6 ver:1)
  copy 1: 192.168.2.184:9422
  copy 2: 192.168.2.185:9422


Note:
On mfsmaster, the data file during normal operation is metadata.mfs.back;
when the master is shut down normally, it is saved as metadata.mfs.

If the master exits abnormally (kill -9 PID), the data file is not converted back.

# mfsmetarestore -a    # after an abnormal shutdown the metadata.mfs file is missing and must be restored

Then restart mfsmaster (the metadata.mfs file is required for mfsmaster to start).
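The crash-recovery sequence described above, as a short sketch (assumes the default /var/lib/mfs data directory):

```shell
# after an unclean shutdown, rebuild metadata.mfs from
# metadata.mfs.back plus the changelog files
mfsmetarestore -a
# mfsmaster refuses to start without metadata.mfs
mfsmaster start
```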


This article is from the "O & M" blog. For more information, contact the author!
