Ceph is software that provides storage cluster services. It can turn a group of hosts into a storage cluster and offer distributed storage, and the Ceph service supports three storage modes: 1. block storage 2. object storage 3. file system storage
Here I'll show you how to build a storage cluster with Ceph. Environment introduction:
node1, node2, and node3 serve as the storage cluster servers, each with three 10 GB disks. node1 acts both as a storage server and as the admin host (the server that manages the storage servers), and client is the host that will access the cluster. node1, node2, and node3 need NTP configured to synchronize time from client's NTP service (a sketch of this is given right below), every host needs its yum repo configured, and node1 needs passwordless SSH to the other hosts as well as the /etc/hosts file set up.
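The NTP synchronization mentioned above is one of the steps omitted from the walkthrough below. Here is a minimal sketch of one way to do it, assuming the RHEL 7 default chronyd is used and that client (192.168.4.50) serves time to the three nodes; adjust the directives to your own environment:
[root@client ~]# vim /etc/chrony.conf
    allow 192.168.4.0/24        # let hosts on the cluster network sync from client
    local stratum 10            # keep serving time even without an upstream source
[root@client ~]# systemctl restart chronyd
[root@node1 ~]# vim /etc/chrony.conf      # repeat on node2 and node3
    server 192.168.4.50 iburst            # use client as the NTP source
[root@node1 ~]# systemctl restart chronyd
[root@node1 ~]# chronyc sources -v        # verify that client is reachable and selected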
1. Environment configuration (some of the simpler steps are omitted here):
(1) Configure the /etc/hosts file for local hostname resolution
[root@node1 ~]# cat /etc/hosts    # be careful not to delete the existing loopback entries; only append the lines below, do not remove anything
192.168.4.50 client
192.168.4.51 node1
192.168.4.52 node2
192.168.4.53 node3
Make sure the /etc/hosts files on node2 and node3 also contain these entries; one way to push the file out is shown below.
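A quick way to do that from node1 (a sketch; it assumes you are willing to type root's password for each host, since passwordless SSH is only set up in the next step):
[root@node1 ~]# for i in node2 node3; do scp /etc/hosts $i:/etc/hosts; done    # push the same hosts file to the other nodes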
(2) Configure passwordless SSH from the node1 admin host:
[root@node1 ~]# ssh-keygen -N '' -f /root/.ssh/id_rsa
[root@node1 ~]# ssh-copy-id node1
[root@node1 ~]# ssh-copy-id node2
[root@node1 ~]# ssh-copy-id node3
[root@node1 ~]# ssh-copy-id client
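To confirm the keys were distributed correctly, a simple check (each hostname should print without a password prompt):
[root@node1 ~]# for i in node1 node2 node3 client; do ssh $i hostname; done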
(3) Copy the yum repo file to the other hosts and make sure the repo on every host provides the packages that will be installed
[root@node1 ~]# scp /etc/yum.repos.d/ceph.repo node2:/etc/yum.repos.d/
[root@node1 ~]# scp /etc/yum.repos.d/ceph.repo node3:/etc/yum.repos.d/
[root@node1 ~]# scp /etc/yum.repos.d/ceph.repo client:/etc/yum.repos.d/
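Optionally spot-check that the repo is visible on every host, for example (the repo names in the output depend on your ceph.repo file):
[root@node1 ~]# for i in node2 node3 client; do ssh $i "yum repolist"; done    # the ceph repo should appear in each listing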
2. Create a working directory for the storage cluster on the node1 admin host and configure the cluster
(1) On node1, install ceph-deploy.noarch, the package that provides the cluster deployment tool
[root@node1 ~]# yum -y install ceph-deploy.noarch
(2) Create a working directory named ceph-cluster on the node1 admin host
[root@node1 ~]# mkdir ceph-cluster
[root@node1 ~]# cd ceph-cluster
(3) Declare node1, node2, and node3 as the storage cluster servers and generate the cluster configuration
[root@node1 ceph-cluster]# ceph-deploy new node1 node2 node3    # be patient here, this takes a while
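As a sanity check, the command should leave its generated files in the working directory; listing it is a quick way to confirm (the exact file names may vary by ceph-deploy version):
[root@node1 ceph-cluster]# ls    # typically ceph.conf, ceph.mon.keyring and a ceph-deploy log file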
(4) Use ceph-deploy to install the Ceph packages on all storage cluster servers
[root@node1 ceph-cluster]# ceph-deploy install node1 node2 node3
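If you want to verify that the same Ceph version landed on every node, a quick check over the passwordless SSH set up earlier:
[root@node1 ceph-cluster]# for i in node1 node2 node3; do ssh $i ceph --version; done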
(5) Initialize the monitor (mon) daemons for the whole cluster
[root@node1 ceph-cluster]# ceph-deploy mon create-initial
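At this point you can already check that the three monitors formed a quorum; note that the overall health will usually not be HEALTH_OK yet, because no OSDs exist until step (9):
[root@node1 ceph-cluster]# ceph mon stat    # all three monitors should be listed in the quorum
[root@node1 ceph-cluster]# ceph -s          # overall status; health becomes OK only after OSDs are added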
(6) On node1, node2, and node3, partition /dev/vdc into two partitions, which will serve as the journal (log) partitions for the /dev/vdb and /dev/vdd data disks
[root@node1 ceph-cluster]# parted /dev/vdc mktable gpt mkpart primary 1M 50%
[root@node1 ceph-cluster]# parted /dev/vdc mkpart primary 50% 100%
[root@node2 ~]# parted /dev/vdc mktable gpt mkpart primary 1M 50%
[root@node2 ~]# parted /dev/vdc mkpart primary 50% 100%
[root@node3 ~]# parted /dev/vdc mktable gpt mkpart primary 1M 50%
[root@node3 ~]# parted /dev/vdc mkpart primary 50% 100%
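To verify the layout on each node (shown for node1; repeat on node2 and node3):
[root@node1 ceph-cluster]# lsblk /dev/vdc        # expect vdc1 and vdc2, each roughly half of the disk
[root@node1 ceph-cluster]# parted /dev/vdc print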
(7) Set the ownership of the journal partitions so that the ceph user has permission to write to them during later storage operations
[root@node1 ceph-cluster]# chown ceph:ceph /dev/vdc*
[root@node2 ~]# chown ceph:ceph /dev/vdc*    # these settings are temporary; to make them survive a reboot, add the command to /etc/rc.local and give that file execute permission
[root@node3 ~]# chown ceph:ceph /dev/vdc*
Permanent setting, using node2 as an example:
[root@node2 ~]# echo "chown ceph:ceph /dev/vdc*" >> /etc/rc.local
[root@node2 ~]# chmod +x /etc/rc.local
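If you prefer to drive both the temporary and the permanent change from the admin host instead of logging in to every node, a sketch using the passwordless SSH configured earlier (it appends to /etc/rc.local, so only run it once per node):
[root@node1 ceph-cluster]# for i in node1 node2 node3; do
>   ssh $i "chown ceph:ceph /dev/vdc*"
>   ssh $i "echo 'chown ceph:ceph /dev/vdc*' >> /etc/rc.local"
>   ssh $i "chmod +x /etc/rc.local"
> done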
(8) Wipe (zap) all the disks that will be shared, /dev/vdb and /dev/vdd on every node; all of these commands are run on the node1 admin host
[root@node1 ceph-cluster]# ceph-deploy disk zap node1:vdb node1:vdd
[root@node1 ceph-cluster]# ceph-deploy disk zap node2:vdb node2:vdd
[root@node1 ceph-cluster]# ceph-deploy disk zap node3:vdb node3:vdd
(9) Create the OSDs; these commands are also run on the admin host
[root@node1 ceph-cluster]# ceph-deploy osd create node1:vdb:/dev/vdc1 node1:vdd:/dev/vdc2
[root@node1 ceph-cluster]# ceph-deploy osd create node2:vdb:/dev/vdc1 node2:vdd:/dev/vdc2
[root@node1 ceph-cluster]# ceph-deploy osd create node3:vdb:/dev/vdc1 node3:vdd:/dev/vdc2
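With two OSDs per node there should now be six OSDs in total; ceph osd tree is a convenient way to confirm they are all up and in:
[root@node1 ceph-cluster]# ceph osd tree    # expect osd.0 through osd.5, all reported as up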
3. The Ceph storage cluster is now set up. Check its status with the ceph -s command; only HEALTH_OK means the cluster is healthy
[root@node1 ceph-cluster]# ceph -s
    cluster e9877c9f-d0ca-40c6-9af4-19c5c9dea10c
     health HEALTH_OK
    .......
[root@node1 ceph-cluster]# ceph osd lspools    # a default storage pool is created for us automatically after the cluster is built
0 rbd,
Specific steps to build a Ceph storage cluster on RHEL 7