Document directory
1 Preliminary Note
2 Setting Up the GlusterFS Servers
3 Setting Up the GlusterFS Client
4 Testing
This tutorial shows how to set up high-availability storage with two storage servers (Ubuntu 9.10) that use GlusterFS. Each storage server will be a mirror of the other, and files will be replicated automatically across both storage servers.
########################################
# Network Architecture
########################################
Two servers: M1 and M2.
M1 is the GlusterFS master server, IP 192.168.1.138.
M2 is the GlusterFS hot-standby server, IP 192.168.1.139.
M1 also acts as the client.
(i) IP settings: omitted.
########################################
# Service Environment Installation
########################################
(i) F
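On top of this layout, the replicated volume between M1 and M2 would be created roughly as in the sketch below; the volume name gv0 and the brick path /data/brick are assumptions for illustration, not from the original article:
# gluster peer probe 192.168.1.139          # run on M1
# gluster volume create gv0 replica 2 192.168.1.138:/data/brick 192.168.1.139:/data/brick
# gluster volume start gv0
# mount -t glusterfs 127.0.0.1:/gv0 /mnt/gv0    # M1 doubles as the client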
GlusterFS
GlusterFS is an open-source, scale-out file system. These examples show how to let containers use GlusterFS volumes. They assume that you have already set up a GlusterFS server cluster and have a running GlusterFS volume ready to use in the container.
Prerequisite: the Kubernetes cluster has been built.
Installation of the
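As a rough sketch of how a pod consumes such a volume through the (now deprecated) in-tree glusterfs plugin: the Endpoints object glusterfs-cluster, the volume name myvol, and the nginx demo container are all assumptions, not part of the original example.
# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-demo
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: glusterfsvol
      mountPath: /usr/share/nginx/html
  volumes:
  - name: glusterfsvol
    glusterfs:
      endpoints: glusterfs-cluster    # Endpoints object listing the GlusterFS servers (assumed to exist)
      path: myvol                     # name of an existing GlusterFS volume (assumption)
      readOnly: false
EOF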
Ceph environment setup (2)
1. The layout has three hosts: node1, node2, and node3. Each host has three OSDs, as shown in the figure; osd 1 and 8 are SSD disks, and 4 is a SATA disk. Each of the three hosts runs a Monitor and an MDS. We use osd 1, 3, and 4 to create a pool named ssd with three replicas, and osd 0, 2, and 4 to build a pool named sata with erasure coding, k = 2, m = 1; that is, two OSDs store the data fragments and one OSD stores the coding fragment.
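On a current Ceph release, the two pools described above could be created roughly as follows. The PG count of 128 and the profile name sata-profile are illustrative, and the CRUSH rules that actually pin the ssd pool to the SSD OSDs and the sata pool to the SATA OSDs must still be added to the CRUSH map separately:
# ceph osd pool create ssd 128 128 replicated
# ceph osd pool set ssd size 3                            # three copies
# ceph osd erasure-code-profile set sata-profile k=2 m=1
# ceph osd pool create sata 128 128 erasure sata-profile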
1. GlusterFS Overview
GlusterFS is an open-source distributed file system with powerful scale-out capability. It supports petabytes of storage capacity and thousands of clients. GlusterFS aggregates physically distributed storage resources over TCP/IP or InfiniBand RDMA networks and manages data under a single global namespace. It is built on a stackable user-space design.
As an architect in the storage industry, I have a special liking for file systems. They are the user interface to a storage system, and although they all tend to offer a similar set of features, they can also provide notably different ones. Ceph is no exception: it offers some of the most interesting features you can find in a file system.
Ceph was initially a PhD resea
Official documentation: http://docs.ceph.com/docs/master/start/quick-start-preflight/
Chinese version: http://docs.openfans.org/ceph/
Principle: the ceph-deploy tool runs on the management node (admin-node) and controls each distributed storage node over SSH to set up the shared storage function.
[architecture diagram from the official documentation]
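A typical ceph-deploy session, run from admin-node, looks roughly like the sketch below. The host names follow the node1/node2/node3 layout used elsewhere on this page, and the OSD subcommand syntax varies between ceph-deploy versions:
# ceph-deploy new node1 node2 node3              # generate ceph.conf and the initial monitor map
# ceph-deploy install node1 node2 node3          # install Ceph packages on each node over SSH
# ceph-deploy mon create-initial                 # bring up the monitors and gather keys
# ceph-deploy osd create --data /dev/sdb node1   # repeat for each OSD disk and node
# ceph-deploy admin node1 node2 node3            # push the config file and admin keyring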
Background introduction:
The project currently uses rsync for file synchronization, and we are trying to replace it with a distributed file system. We used MooseFS first, but the results were not satisfactory. After learning about GlusterFS we decided to try it: compared with MooseFS it seems simpler to deploy, and having no metadata server means it has no single point of failure, which feels very good.
Environment introduction:
OS: CentOS 6.4 x86_64 Minimal
Servers: sc2-log1, sc2-log2, sc2-log3, sc2-log4
Client: sc2-ads15
Specific steps:
1. Install the GlusterFS packages on sc2-log{1-4}:
The code is as follows
# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# yum install -y
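After the packages are installed, the usual next steps (beyond this excerpt) are to start the daemon on every server and form the trusted pool. A sketch for this CentOS 6 environment:
# service glusterd start          # on each of sc2-log1 through sc2-log4
# chkconfig glusterd on
# gluster peer probe sc2-log2     # run on sc2-log1; repeat for sc2-log3 and sc2-log4
# gluster peer status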
Explore the Ceph file system and ecosystem
M. Tim Jones, freelance writer
Introduction: Linux® continues to expand into the scalable computing space, especially scalable storage. Ceph, a recent addition to the impressive range of file-system alternatives in Linux, is a distributed file system that adds replication and fault tolerance while maintaining POSIX compatibility. Explore Ceph's architecture and lea
Build GlusterFS in CentOS 7
Lab requirements:
Install GlusterFS on four machines to form a cluster
The client stores a Docker registry on the file system.
The disk space of the four nodes is not aggregated into one large space; instead, each node keeps a full copy of the data to ensure data safety (see the volume-creation sketch after the environment planning below).
Environment Planning
Server
Node1: 192.168.0.165 Host Name: glusterfs1
Node2: 192.168.0.157 Host Name: glust
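Since each node must hold a full copy of the data, the volume is created with a replica count of 4. A minimal sketch, assuming the brick path /data/brick on every node and the remaining host names glusterfs3 and glusterfs4 (the planning list is cut off above); the volume name registry-vol is chosen here for illustration:
# gluster volume create registry-vol replica 4 glusterfs1:/data/brick glusterfs2:/data/brick glusterfs3:/data/brick glusterfs4:/data/brick
# gluster volume start registry-vol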
Install Ceph on CentOS 7
1. Installation environment
Admin ----- | ----- Node1 (mon, osd): sda is the system disk; sdb and sdc are the OSD disks.
            | ----- Node2 (mon, osd): sda is the system disk; sdb and sdc are the OSD disks.
            | ----- Node3 (mon, osd): sda is the system disk; sdb and sdc are the OSD disks.
            | ----- Client
Ceph Monitors use port 6789 for communication by default, and OSDs use ports in the 6800-7300 range by default.
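On CentOS 7 these ports can be opened with firewalld; the commands below follow the default monitor port and OSD port range just mentioned:
# firewall-cmd --zone=public --add-port=6789/tcp --permanent        # monitor port, on the mon nodes
# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent   # OSD port range, on the osd nodes
# firewall-cmd --reload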
Overview
Docs: http://docs.ceph.com/docs
Ceph is a distributed file system that adds replication and fault tolerance while maintaining POSIX compatibility. Ceph's most distinctive feature is its distributed metadata server design, which computes file placement with CRUSH (Controlled Replication Under Scalable Hashing), a pseudo-random placement algorithm. The core of Ceph is RADOS (Reliable Autonomic Distributed Object Store
Use GlusterFS to create a shared file system between two servers
The problem: you want to keep the files in a directory consistent across the two servers. I have seen other approaches, such as synchronizing directories with rsync or sharing a directory over NFS. The following describes how to do it with GlusterFS instead. I created several CentOS 7 virtual machines locally.
Virtual Machine
Balancer: 192.168.33.60
Web1: 19
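Once a GlusterFS volume exists on the servers, the client side is just a FUSE mount. A minimal sketch, assuming a volume named shared-vol served from web1 (both names are illustrative, since the list above is cut off):
# yum install -y glusterfs glusterfs-fuse   # on the client
# mkdir -p /mnt/shared
# mount -t glusterfs web1:/shared-vol /mnt/shared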
Ceph installation and deployment in a CentOS 7 environment
Ceph Introduction
Ceph is designed to provide high performance, high scalability, and high availability on low-cost storage media, offering unified storage that covers file, block, and object storage. I recently read the relevant documentation and found it interesting. It already provides block storage for OpenStack, which fits the mainstream