Building the MFS (MooseFS) Distributed File System on Linux


Description

Architecture Planning

Role                  Hostname            IP address

Metadata Server       mfs-master-1        172.16.100.2

Backup Server         mfs-metalogger      172.16.100.4

Data Storage Server   mfs-chunkserver-1   172.16.100.5

Data Storage Server   mfs-chunkserver-2   172.16.100.6

Data Storage Server   mfs-chunkserver-3   172.16.100.7

Node Description:

Master Server:

Because the Master Server coordinates every component in MooseFS and is the node that serves all external requests, it must be kept in a very stable state: for example, give it redundant power supplies and redundant network paths, and build its disks as RAID1 or RAID10. As mentioned earlier, the Master Server holds all metadata in memory and serves it to clients, so memory consumption grows with the number of files; according to the official figures, the chunk information for 1 million files needs roughly 300 MB of memory. Disk requirements are modest and depend on the number of files and chunks (recorded in the master's metadata file) and on the number of operations performed on files (recorded in the metadata changelog); typically 20 GB is enough to keep the change records of about 25 million files for up to 50 hours. Memory capacity is therefore the top priority when sizing the Master Server.

Metalogger Server:

In MooseFS's design, the Metalogger Server simply collects backups of the Master Server's metadata (the changelog of file changes), so its hardware requirements are no higher than those of the server it backs up. Note, however, that if the Master Server is not deployed in a highly available setup, the Metalogger Server is what must take over after the master goes down. For that reason the Metalogger Server should be configured at least as well as the Master Server. Keep this in mind!

Chunk Server:

The Chunk Server is the actual carrier of the stored data, so its requirements are much simpler: what matters is hard-disk performance. For ordinary workloads you can combine several disks into RAID5; RAID0 or RAID10 also work. Note that, because of how the default MooseFS load-balancing algorithm behaves, I recommend keeping the disk capacity of all Chunk Servers the same. That way data usage stays roughly even across the Chunk nodes during operation; otherwise Chunk Servers with larger disks end up holding an ever larger share of the data while those with smaller disks hold less and less. Keep this in mind! Of course, if your staff have the ability, you can also improve the chunk-placement weighting in the MooseFS load-balancing algorithm to avoid this shortcoming of the default algorithm, so that stored data is distributed evenly across the Chunk Servers.




Deployment:

Deploy Master Server

1. Introduction to the configure parameters

--disable-mfsmaster # do not build the master (management) server (for installs on pure storage/client nodes)

--disable-mfschunkserver # do not build the chunkserver (data storage server)

--disable-mfsmount # do not build mfsmount and mfstools (they are built by default when the FUSE development package is installed)

--enable-mfsmount # build mfsmount and mfstools (if possible)

--prefix=DIRECTORY # installation directory (default is /usr/local)

--sysconfdir=DIRECTORY # configuration file directory (default is ${prefix}/etc)

--localstatedir=DIRECTORY # variable data directory (default is ${prefix}/var; MFS metadata is stored in the mfs subdirectory, i.e. ${prefix}/var/mfs by default)

--with-default-user=USER # user the daemons run as (defaults to nobody if no user is set in the configuration file)

--with-default-group=GROUP # group the daemons run as (defaults to nogroup if no group is set in the configuration file)


2. Install Master Server

yum install zlib-devel -y

groupadd mfs

useradd -u 1000 -g mfs -s /sbin/nologin mfs

cd /usr/local/src

wget http://moosefs.org/tl_files/mfscode/mfs-1.6.27-5.tar.gz

tar zxf mfs-1.6.27-5.tar.gz

cd mfs-1.6.27

./configure --prefix=/usr/local/mfs-1.6.27 --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount

make

make install

ln -s /usr/local/mfs-1.6.27 /usr/local/mfs
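
Before the freshly built master can be started, the sample configuration files installed by the build have to be activated. A minimal post-install sketch, assuming the default mfs-1.6.27 layout created above (the .dist file names and paths are taken from that version and may differ elsewhere):

cd /usr/local/mfs/etc
cp mfsmaster.cfg.dist mfsmaster.cfg        # main master configuration
cp mfsexports.cfg.dist mfsexports.cfg      # client export/access rules
cd /usr/local/mfs/var/mfs
cp metadata.mfs.empty metadata.mfs         # empty metadata file needed for the first start
/usr/local/mfs/sbin/mfsmaster start        # start the master daemon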


Official documents:

https://moosefs.com/download/centosfedorarhel.html

CentOS / RHEL 6:

Curl "Http://ppa.moosefs.com/MooseFS-3-el6.repo" >/etc/yum.repos.d/moosefs.repo

Curl "Http://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" >/etc/pki/rpm-gpg/rpm-gpg-key-moosefs

CentOS / RHEL 7:

Curl "Http://ppa.moosefs.com/MooseFS-3-el7.repo" >/etc/yum.repos.d/moosefs.repo

--------------------------------------------------------------

Install Master Server

yum install moosefs-master moosefs-cli moosefs-cgi moosefs-cgiserv

/etc/init.d/moosefs-master start

Check the logs and listening ports, and open ports 9419, 9420, and 9421 in the firewall.
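
For example, with plain iptables (an assumption; adjust to whatever firewall tooling the host actually uses):

iptables -A INPUT -p tcp --dport 9419 -j ACCEPT   # metalogger connections
iptables -A INPUT -p tcp --dport 9420 -j ACCEPT   # chunkserver connections
iptables -A INPUT -p tcp --dport 9421 -j ACCEPT   # client (mfsmount) connections
service iptables save                             # persist the rules on EL6-style systems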

--------------------------------------------------------------


Installing Chunkservers

yum install moosefs-chunkserver
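
Before starting a chunkserver it needs at least one data directory registered and the master's address configured. A minimal sketch, assuming the packaged configuration lives in /etc/mfs and that /mnt/mfschunk1 is a dedicated, already-mounted disk (both names are assumptions for illustration):

mkdir -p /mnt/mfschunk1                         # hypothetical data directory on a dedicated disk
chown -R mfs:mfs /mnt/mfschunk1                 # the daemon runs as mfs and must own the path
echo "/mnt/mfschunk1" >> /etc/mfs/mfshdd.cfg    # register the directory with the chunkserver
# set MASTER_HOST in /etc/mfs/mfschunkserver.cfg to the master's address (e.g. 172.16.100.2)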


Installing Metaloggers

yum install moosefs-metalogger
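
The metalogger likewise has to know where the master is before it starts. A short sketch, again assuming the packaged configuration path /etc/mfs:

# in /etc/mfs/mfsmetalogger.cfg, set MASTER_HOST to the master's address, e.g.:
#   MASTER_HOST = 172.16.100.2
service moosefs-metalogger start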


Installing clients

yum install moosefs-client
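
A quick manual mount verifies the client before touching fstab. A sketch, assuming the master resolves as mfsmaster.host.name (the name used in the fstab example below):

mkdir -p /mnt/mfs                            # mount point
mfsmount /mnt/mfs -H mfsmaster.host.name     # -H selects the master host to connect to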


Automatic mount on Boot:

yum install fuse

vim /etc/fstab

mfsmount /mnt/mfs fuse defaults 0 0

mfsmaster.host.name: /mnt/mfs moosefs defaults 0 0
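
Only one of the two entries above is needed (the first is the classic fuse syntax, the second the MooseFS 3 helper syntax). After editing fstab the mount can be checked without rebooting:

mount -a            # mount everything listed in /etc/fstab
df -h /mnt/mfs      # confirm the MooseFS export is mounted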


Start the service

service moosefs-master start

service moosefs-chunkserver start
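
You will typically also want the remaining daemons running and everything enabled at boot. A sketch using the same SysV-style tools, assuming the standard service names shipped with the MooseFS packages:

service moosefs-metalogger start    # on the backup (metalogger) node
service moosefs-cgiserv start       # optional web monitoring interface on the master (port 9425)
chkconfig moosefs-master on         # start the master automatically at boot
chkconfig moosefs-chunkserver on    # likewise on each chunkserver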


Write a test file
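
A minimal test from a client with the share mounted at /mnt/mfs; mfsfileinfo and mfsgetgoal ship with the client tools (the copied file is just an arbitrary example):

cp /etc/hosts /mnt/mfs/        # write a small file into MooseFS
mfsfileinfo /mnt/mfs/hosts     # show which chunkservers hold its chunks
mfsgetgoal /mnt/mfs/hosts      # show the file's replication goal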


This article is from the "Liu" blog; please keep this source: http://caicai2009.blog.51cto.com/3678925/1942811

