1. Introduction
Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability. It provides three storage interfaces in a single cluster (object storage, block storage, and file system storage), which simplifies deployment and operations while meeting the needs of different applications.
2. Installation Preparation
- Note: commands pasted from this article may pick up stray characters. If the shell reports "command not found", type the command by hand; for other errors, search online or ask in the comments.
- Operating system: CentOS 7
- Prepare 5 hosts (virtual machines or physical machines both work; the IPs below are those of the machines used in this article)
172.18.16.200 as the management node admin-node
172.18.16.201 as the monitor node mon-node1
172.18.16.202 as OSD storage node node2 (osd0)
172.18.16.203 as OSD storage node node3 (osd1)
172.18.16.204 as OSD storage node node4 (osd2)
- Modify the Hosts file for Admin-node
vi /etc/hosts
Add the following content
172.18.16.201 mon-node1
172.18.16.202 node2
172.18.16.203 node3
172.18.16.204 node4
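If preferred, the four entries can be appended in one step; a convenience sketch using the same IPs and hostnames as above:

```shell
# Append the cluster's name-resolution entries to /etc/hosts in one shot
# (same IPs/hostnames as in this article; run on admin-node)
cat <<'EOF' | sudo tee -a /etc/hosts
172.18.16.201 mon-node1
172.18.16.202 node2
172.18.16.203 node3
172.18.16.204 node4
EOF
```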
- Modify the host name for each host
hostnamectl set-hostname "new-name"
This article uses the names listed above: admin-node, mon-node1, node2, node3, node4
- Create a Ceph user on each host
Create the user
sudo useradd -d /home/ceph -m ceph
Set Password
sudo passwd ceph
Set account Permissions
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
Run visudo
and change the line "Defaults requiretty" to "Defaults:ceph !requiretty"
- Shutting down firewalls and SELinux
Firewall shutdown and disable
systemctl stop firewalld.service     # stop the firewall
systemctl disable firewalld.service  # keep it from starting at boot
Turn off SELinux
vi /etc/selinux/config and set SELINUX=disabled
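Editing the file by hand works; as a non-interactive alternative, a sketch using sed (assuming the stock CentOS 7 config file layout):

```shell
# Flip SELINUX=enforcing (or permissive) to disabled in the config file
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Also relax SELinux for the running session (permissive mode until reboot)
sudo setenforce 0
```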
- Upgrading the CentOS 7 kernel
View current kernel version
uname -r
Import the key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
Install the elrepo yum source
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Install the kernel
yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y
List the kernel boot entries (the default entry is index 0)
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
Set the default boot entry
grub2-set-default 0
Reboot the system
reboot
- Set up SSH keys on the admin-node node
Switch to the ceph user
su - ceph
Run ssh-keygen (just press Enter at every prompt)
Copy the key from the previous step to the other nodes
ssh-copy-id [email protected]
ssh-copy-id [email protected]
ssh-copy-id [email protected]
ssh-copy-id [email protected]
- Modify config file (admin-node node operation)
vi ~/.ssh/config (run ssh localhost first)
Add the following content
Host mon-node1
    Hostname 172.18.16.201
    User ceph
Host node2
    Hostname 172.18.16.202
    User ceph
Host node3
    Hostname 172.18.16.203
    User ceph
Host node4
    Hostname 172.18.16.204
    User ceph
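With the config in place, passwordless login can be verified from admin-node; a quick-check sketch over the host aliases defined above:

```shell
# Each ssh call should print the remote hostname without asking for a password
for h in mon-node1 node2 node3 node4; do
    ssh "$h" hostname
done
```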
- Installing Ceph-deploy for Admin-node nodes
Step one: Add Yum config file
sudo vim /etc/yum.repos.d/ceph.repo
Add the following content:
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
Step two: update the package index and install ceph-deploy and time-synchronization software
sudo yum update && sudo yum install ceph-deploy
sudo yum install ntp ntpdate ntp-doc
- Create a directory under the Ceph user of the Admin-node node
mkdir my-cluster
cd my-cluster
- Install Ceph for each node
Create the mon node
ceph-deploy new mon-node1
Install Ceph on every node with ceph-deploy
ceph-deploy install admin-node mon-node1 node2 node3 node4
Initialize the monitor node and collect the keyrings:
ceph-deploy mon create-initial
Allocate disk space for the OSD process on each storage node:
ssh node2
sudo mkdir /var/local/osd0
exit
ssh node3
sudo mkdir /var/local/osd1
exit
ssh node4
sudo mkdir /var/local/osd2
exit
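The three ssh/mkdir blocks above can be collapsed into one loop; a sketch with the same node-to-directory mapping (node2 gets osd0, node3 gets osd1, node4 gets osd2):

```shell
# Create one OSD data directory per storage node, driven from admin-node
i=0
for node in node2 node3 node4; do
    ssh "$node" sudo mkdir -p /var/local/osd"$i"
    i=$((i+1))
done
```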
- Prepare and activate the OSD processes on the storage nodes with ceph-deploy from the admin-node node.
ceph-deploy osd prepare node2:/var/local/osd0
ceph-deploy osd activate node2:/var/local/osd0
Repeat the same two commands for node3 (osd1) and node4 (osd2)
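The remaining OSDs on node3 and node4 follow the same prepare/activate pattern; as a sketch, all three can be handled in one loop (directory numbering as above):

```shell
# Prepare and activate osd0..osd2 on node2..node4, run from admin-node
i=0
for node in node2 node3 node4; do
    ceph-deploy osd prepare  "$node":/var/local/osd"$i"
    ceph-deploy osd activate "$node":/var/local/osd"$i"
    i=$((i+1))
done
```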
- Push the configuration file and admin keyring from the admin-node node to the other nodes:
ceph-deploy admin admin-node mon-node1 node2 node3 node4
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
Check the cluster status
ceph -s
Check the cluster health
ceph health
On success it prints: HEALTH_OK
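Right after OSD activation the cluster may briefly report a degraded state; a small polling sketch (a hypothetical helper, not one of the article's steps) that waits until ceph health reports HEALTH_OK:

```shell
# Poll 'ceph health' until the cluster settles at HEALTH_OK
until ceph health | grep -q HEALTH_OK; do
    echo "waiting for cluster to become healthy..."
    sleep 5
done
echo "cluster is healthy"
```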