Ceph Luminous Installation Guide


Environment Description


Role                     Host Name   IP           OS          Hard Disks
Admin                    admin       10.0.0.230   CentOS 7.4  -
MON & OSD & MGR & MDS    node231     10.0.0.231   CentOS 7.4  /dev/vda, /dev/vdb
MON & OSD & MGR          node232     10.0.0.232   CentOS 7.4  /dev/vda, /dev/vdb
MON & OSD & MGR          node233     10.0.0.233   CentOS 7.4  /dev/vda, /dev/vdb
Client                   client      10.0.0.234   CentOS 7.4  -

Set the host name of each server as shown in the table above.

hostnamectl set-hostname <host name>
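For example, on the first storage node from the table above this would be:

hostnamectl set-hostname node231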


The following operations need to be performed on all nodes.

Stop the firewall

systemctl disable firewalld

systemctl stop firewalld


Disable SELinux.

vim /etc/selinux/config

Set SELINUX=disabled in that file.
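The same change can also be made non-interactively; the sketch below rewrites the SELINUX line and additionally turns enforcement off for the current boot:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

setenforce 0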


Configure /etc/hosts

10.0.0.230 admin

10.0.0.231 node231

10.0.0.232 node232

10.0.0.233 node233

10.0.0.234 client


Replace the default Yum repositories with the domestic Alibaba Cloud mirrors

http://blog.csdn.net/chenhaifeng2016/article/details/78864541
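The linked post covers the details; as a minimal sketch, assuming the Aliyun CentOS 7 repo file is still published at its usual location:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo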


Adding a Ceph installation source

vim /etc/yum.repos.d/ceph.repo

[ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
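After saving the repo file, refreshing the Yum metadata cache (a routine step) makes the Luminous packages visible:

yum clean all && yum makecache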


Installing NTP

yum install ntp

systemctl enable ntpd

systemctl start ntpd

View NTP status

ntpq -p


Restart all nodes

shutdown -r now (or reboot)


The following operations only need to be performed on the admin node.

Configure SSH password-free login

ssh-keygen

ssh-copy-id admin

ssh-copy-id node231

ssh-copy-id node232

ssh-copy-id node233

ssh-copy-id client
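To confirm that key-based login works, a quick check against any of the hosts is:

ssh node231 hostname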

Installing ceph-deploy

yum install ceph-deploy


Create the configuration file directory

mkdir -p /etc/ceph

cd /etc/ceph

Create a Ceph cluster

ceph-deploy new node231


Install the Ceph binary packages on all nodes

ceph-deploy install admin node231 node232 node233 client

Verify with ceph -v or ceph version.



Create a Ceph MON

ceph-deploy mon create-initial


Create a Ceph OSD

ceph-deploy disk list node231

ceph-deploy disk zap node231:vdb

ceph-deploy disk zap node232:vdb

ceph-deploy disk zap node233:vdb

ceph-deploy --overwrite-conf osd create node231:vdb

ceph-deploy --overwrite-conf osd create node232:vdb

ceph-deploy --overwrite-conf osd create node233:vdb
Edit the file /etc/ceph/ceph.conf and add the following line:

public_network = 10.0.0.0/24
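For reference, the resulting /etc/ceph/ceph.conf should look roughly like the sketch below; the fsid and the cephx lines are generated by ceph-deploy new, and the fsid value here is only a placeholder:

[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = node231
mon_host = 10.0.0.231
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 10.0.0.0/24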

Copy the configuration file to each node:

ceph-deploy admin client node231 node232 node233

Create MONs:

ceph-deploy --overwrite-conf mon create node231
ceph-deploy --overwrite-conf admin node231

ceph-deploy --overwrite-conf mon create node232
ceph-deploy --overwrite-conf admin node232

ceph-deploy --overwrite-conf mon create node233
ceph-deploy --overwrite-conf admin node233
At this point, MONs and OSDs have been established on all three nodes. Check the cluster status.
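For example:

ceph -s

The output should list the three MONs and three OSDs.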


There are 3 MONs and 3 OSDs, but the cluster status is HEALTH_WARN because there is no active MGR. Next, create the Ceph MGRs.

ceph-deploy mgr create node231

ceph-deploy mgr create node232

ceph-deploy mgr create node233


At this point the Ceph cluster installation is complete.
Next, test block storage. The following operations are performed on the client node.

Create a new storage pool instead of using the default rbd pool:

ceph osd pool create test 128

Create a block image:

rbd create --size 10G disk01 --pool test

View the RBD images
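Listing the images in the pool, for example:

rbd ls --pool test

This should show disk01.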


View the properties of the block image:

rbd info --pool test disk01


Because the kernel does not support some of the image features, they need to be disabled; only layering is kept.

rbd --pool test feature disable disk01 exclusive-lock object-map fast-diff deep-flatten


Map the block image disk01 to the local host

rbd map --pool test disk01
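The image is normally mapped to /dev/rbd0 on a fresh client; this can be confirmed with:

rbd showmapped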



Format the block device

mkfs.ext4 /dev/rbd0



Mount rbd0 to a local directory

mount /dev/rbd0 /mnt
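To verify the mount:

df -h /mnt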



Viewing the cluster status at this point still shows HEALTH_WARN.



Run ceph health detail



As prompted, run ceph osd pool application enable test rbd


The cluster status returns to HEALTH_OK.


