Chapter 4 Distributed (network) storage systems


4.1 GlusterFS Introduction

This section does not describe in detail how GlusterFS works; readers interested in its principles and deployment options can consult the official website.


In this test environment, the underlying distributed storage system primarily serves the OpenStack services that need storage: the image service, shared storage for the compute service (to enable live migration), and block storage. Object storage is not covered here.


NFS, MFS, and GlusterFS were all deployed and evaluated; after weighing performance and environmental factors, GlusterFS was chosen as the storage system, for the following reasons:

Performance comparison: NFS is a traditional network file system with well-known shortcomings, such as a single point of failure and a performance bottleneck caused by metadata and file data sharing the same transmission path. Newer versions address this with pNFS (NFSv4.1), which separates the two and lets clients exchange data directly with the data servers; this removes the traditional NFS bottleneck and gives the system high performance and high scalability. MFS is strong at small-file reads and writes, and its metadata server is deployed separately so that it does not interfere with file data transfers, but that metadata server remains a single point of failure. GlusterFS has no single point of failure: instead of an MDS, an elastic hashing algorithm runs on every node, so there is no metadata to synchronize and no metadata disk I/O bottleneck. Its strength, however, lies in large files; it offers no advantage for small-file transfers.

Operational comparison: NFS is simple to deploy, though permissions need attention when publishing a shared path; more importantly, its single point of failure is a lurking risk, so NFS was ruled out. The MFS architecture is very robust, and its single point of failure can likewise be addressed by deploying the metadata server in master/slave mode, but the overall architecture then requires more nodes, so it was not considered here either. GlusterFS has no dedicated metadata server or fixed server/client split; with the right configuration, multiple nodes also provide replication to address redundancy, and its scalability is very good.


In practice, which distributed storage system to use as the underlying shared storage depends on the actual environment; given the resource constraints here and the absence of any secondary development requirements, GlusterFS is the simplest choice.


(Architecture topology diagram from the Red Hat website; image not reproduced here.)

Note that in this topology the compute nodes also provide the shared storage.


4.2 GlusterFS Installation and Deployment

Building on the earlier server topology, the two compute nodes also publish the shared storage. Each node has a 600 GB disk attached, which provides shared storage for the image, compute, and block storage services.


4.2.1 Formatting and Mounting the Storage

The following steps must be performed on both compute nodes; compute1 (10.0.0.31) is used as the example:

[root@compute1 ~]# fdisk /dev/sdb

# Create primary partition sdb1 and set its partition type ID to 8e (Linux LVM) so that capacity can be extended later; the specific operation is not detailed here, but a sketch follows.
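A rough sketch of the interactive fdisk session (prompts vary slightly between fdisk versions; the default first and last sectors are accepted so the partition spans the whole disk):

Command (m for help): n                       <- new partition
Select (default p): p                         <- primary
Partition number (1-4, default 1): 1
First sector: <Enter>                         <- accept default
Last sector: <Enter>                          <- accept default
Command (m for help): t                       <- change partition type
Hex code (type L to list all codes): 8e       <- Linux LVM
Command (m for help): w                       <- write table and exit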

[root@compute1 ~]# pvcreate /dev/sdb1

[root@compute1 ~]# vgcreate vg_sharing_disk /dev/sdb1

[root@compute1 ~]# lvcreate -l 100%FREE -n lv_sharing_disk vg_sharing_disk

[root@compute1 ~]# mkfs.xfs /dev/vg_sharing_disk/lv_sharing_disk

[root@compute1 ~]# mkdir /rhs

[root@compute1 ~]# vi /etc/fstab

/dev/vg_sharing_disk/lv_sharing_disk  /rhs  xfs  defaults  1 1

[root@compute1 ~]# mount /rhs
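A quick sanity check (not part of the original steps) confirms the logical volume is formatted and mounted as intended:

[root@compute1 ~]# df -hT /rhs          # should show vg_sharing_disk-lv_sharing_disk as xfs on /rhs
[root@compute1 ~]# lvs vg_sharing_disk  # lv_sharing_disk should occupy all space in the VG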


4.2.2 Configuring and Starting GlusterFS Shared Storage

1. Install the GlusterFS server and set it to start at boot:

The following steps must be performed on both compute nodes:

[root@compute1 ~]# yum -y install glusterfs-server

[root@compute1 ~]# systemctl start glusterd

[root@compute1 ~]# systemctl enable glusterd
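Optionally, verify that the daemon is running and enabled (an extra check, assuming the systemd unit name glusterd used above):

[root@compute1 ~]# systemctl is-active glusterd     # should print "active"
[root@compute1 ~]# systemctl is-enabled glusterd    # should print "enabled"
[root@compute1 ~]# glusterfs --version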


2. Set up the trust relationship and publish the shared storage service:

These steps are performed on a single compute node only:

[root@compute1 ~]# gluster peer probe compute2

peer probe: success.

[root@compute1 ~]# gluster peer status

Number of Peers: 1


Hostname: compute2

Uuid: a882ef3c-cbdf-45ed-897c-1fbb942d0b5e

State: Peer in Cluster (Connected)

Note: make sure both nodes can resolve each other's host names. In an earlier test, where the storage servers were deployed separately, a host name had been left out of name resolution, and the other nodes were consequently unable to mount the GlusterFS file system.
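For example, /etc/hosts on both nodes might contain entries like the following; compute1's address (10.0.0.31) is given above, while 10.0.0.32 for compute2 is an assumption that simply follows the same addressing scheme:

10.0.0.31   compute1
10.0.0.32   compute2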


Create the brick directories (on both nodes) and assign each volume the user and group of the OpenStack service that will use it:

[root@compute1 ~]# mkdir /rhs/glance-vol /rhs/nova-vol /rhs/cinder-vol

[root@compute1 ~]# gluster volume create glance-volume replica 2 compute1:/rhs/glance-vol/ compute2:/rhs/glance-vol/

[root@compute1 ~]# gluster volume create nova-volume replica 2 compute1:/rhs/nova-vol/ compute2:/rhs/nova-vol/

[root@compute1 ~]# gluster volume create cinder-volume replica 2 compute1:/rhs/cinder-vol/ compute2:/rhs/cinder-vol/

[root@compute1 ~]# gluster volume set glance-volume storage.owner-uid 161

[root@compute1 ~]# gluster volume set glance-volume storage.owner-gid 161

[root@compute1 ~]# gluster volume set nova-volume storage.owner-uid 162

[root@compute1 ~]# gluster volume set nova-volume storage.owner-gid 162

[root@compute1 ~]# gluster volume set cinder-volume storage.owner-uid 165

[root@compute1 ~]# gluster volume set cinder-volume storage.owner-gid 165

[root@compute1 ~]# gluster volume start glance-volume

[root@compute1 ~]# gluster volume start nova-volume

[root@compute1 ~]# gluster volume start cinder-volume
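Each volume should now report "Status: Started", which can be confirmed per volume (a check not in the original steps):

[root@compute1 ~]# gluster volume info glance-volume
[root@compute1 ~]# gluster volume status glance-volume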

Note: the owner UIDs and GIDs set above follow the default reserved user IDs for the OpenStack services:

http://docs.openstack.org/kilo/install-guide/install/yum/content/reserved_user_ids.html
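Once the corresponding OpenStack packages are installed, the service accounts can be checked against these reserved IDs (assuming the default RHEL/RDO packaging):

[root@compute1 ~]# id glance    # expect uid=161(glance) gid=161(glance)
[root@compute1 ~]# id nova      # expect uid=162(nova) gid=162(nova)
[root@compute1 ~]# id cinder    # expect uid=165(cinder) gid=165(cinder)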


At this point, the shared storage service is configured; later sections will revisit the shared storage wherever the hands-on steps require it.
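As a preview of how later sections consume these volumes, a client mounts one roughly as follows; the host name "controller" and the mount point /var/lib/glance/images (Glance's default image store path) are assumptions here, not steps from the original:

[root@controller ~]# yum -y install glusterfs-fuse
[root@controller ~]# mount -t glusterfs compute1:/glance-volume /var/lib/glance/images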


This article is from the "Technology House" blog; please retain the source: http://8497595.blog.51cto.com/8487595/1687648
