Ceph Installation


1. Introduction

Ceph is a unified, distributed storage system designed for outstanding performance, reliability, and scalability. It provides object storage, block storage, and file system storage in a single system, which simplifies deployment and operations while meeting the needs of different applications.
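
Once a cluster is running (as built in the rest of this article), the object and block interfaces can be exercised with commands such as the following; the object and image names here are made up for illustration, and the file system interface additionally requires a metadata server, which this article does not deploy:

rados -p rbd put test-object /etc/hosts    # object storage: store a file as a RADOS object

rbd create test-image --size 1024          # block storage: create a 1 GB RBD image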

2. Installation Preparation

    • Note: commands copied and pasted from this page may contain garbled characters; if the shell reports that a command is not found, type it in manually. For other errors, search for a solution online or ask in the comments.
    • The operating system is CentOS 7.
    • Prepare 5 hosts (virtual machines or physical machines both work; the IP addresses below are those of the machines used in this article).

172.18.16.200 as the management node, admin-node

172.18.16.201 as the monitor node, mon-node1

172.18.16.202 as the OSD1 storage node, node2

172.18.16.203 as the OSD2 storage node, node3

172.18.16.204 as the OSD3 storage node, node4

    • Modify the hosts file on admin-node

vi /etc/hosts

Add the following content:

172.18.16.201 mon-node1

172.18.16.202 node2

172.18.16.203 node3

172.18.16.204 node4

    • Set the hostname on each host

hostnamectl set-hostname <new-name>

This article uses the names listed above: admin-node, mon-node1, node2, node3, node4, as in the example below.
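
For example, run the matching command on each host:

hostnamectl set-hostname admin-node    # on 172.18.16.200
hostnamectl set-hostname mon-node1     # on 172.18.16.201
hostnamectl set-hostname node2         # on 172.18.16.202
hostnamectl set-hostname node3         # on 172.18.16.203
hostnamectl set-hostname node4         # on 172.18.16.204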

    • Create a ceph user on each host

Create the user:

sudo adduser -d /home/ceph -m ceph

Set the password:

sudo passwd ceph

Grant the account sudo privileges:

echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph

sudo chmod 0440 /etc/sudoers.d/ceph

Run visudo and change the line "Defaults requiretty" to "Defaults:ceph !requiretty", so that the ceph user can run sudo over SSH without a terminal. A non-interactive alternative is sketched below.
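
If editing /etc/sudoers through visudo is inconvenient, the same effect can be had by appending the override to the drop-in file created above; a minimal sketch:

echo 'Defaults:ceph !requiretty' | sudo tee -a /etc/sudoers.d/ceph

sudo visudo -c -f /etc/sudoers.d/ceph    # confirm the file still parses cleanly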

    • Shut down the firewall and SELinux

Stop and disable the firewall:

systemctl stop firewalld.service       # stop the firewall
systemctl disable firewalld.service    # keep it from starting at boot

Turn off SELinux:

vi /etc/selinux/config and set SELINUX=disabled (a non-interactive sketch follows below).
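
The same change can be made without opening an editor; a minimal sketch (the config edit takes effect at the next boot, while setenforce only adjusts the running system):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

setenforce 0    # put the running system into permissive mode immediately
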
    • Upgrade the CentOS 7 kernel

View the current kernel version:

uname -r

Import the ELRepo signing key:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

Install the ELRepo yum repository:

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

Install the kernel:

yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y

List the kernel entries in the boot menu:

awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg

Set the default boot entry (0 is the first, newest entry):

grub2-set-default 0

Reboot the system:

reboot
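
After the reboot, check that the new kernel is actually running; this should report the kernel-ml version that was just installed:

uname -r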

    • Set up password-free SSH login

On the admin-node node, switch to the ceph user:

su ceph

Run ssh-keygen (press Enter at every prompt to accept the defaults).

Copy the key from the previous step to the other nodes:

ssh-copy-id ceph@mon-node1

ssh-copy-id ceph@node2

ssh-copy-id ceph@node3

ssh-copy-id ceph@node4
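
To verify password-free login before continuing, each of the following should print the remote host name without asking for a password:

ssh ceph@mon-node1 hostname

ssh ceph@node2 hostname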

    • Modify the SSH config file (on the admin-node node)

vi ~/.ssh/config (run ssh localhost once beforehand)

Add the following content:

Host mon-node1
    Hostname 172.18.16.201
    User ceph

Host node2
    Hostname 172.18.16.202
    User ceph

Host node3
    Hostname 172.18.16.203
    User ceph

Host node4
    Hostname 172.18.16.204
    User ceph
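
With this file in place, the nodes can be reached by their short names without specifying the user. Tightening the file's permissions also avoids SSH's "Bad owner or permissions" error if the file is ever group- or world-writable:

chmod 600 ~/.ssh/config

ssh node2 hostname    # should print node2 without prompting for a password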

    • Install ceph-deploy on the admin-node node

Step one: add the yum repository file

sudo vim /etc/yum.repos.d/ceph.repo

Add the following content:

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

Step two: update the package index, then install ceph-deploy and the time-synchronization software

sudo yum update && sudo yum install ceph-deploy

sudo yum install ntp ntpdate ntp-doc
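
Enabling the NTP service keeps the nodes' clocks in sync, which helps avoid clock-skew health warnings from the monitors; a minimal sketch:

sudo systemctl enable ntpd

sudo systemctl start ntpd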

    • Create a working directory under the ceph user on the admin-node node

mkdir my-cluster

cd my-cluster

    • Install Ceph on each node

Create the monitor node definition:

ceph-deploy new mon-node1
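
ceph-deploy new writes a ceph.conf and a monitor keyring into the my-cluster directory. Cluster-wide settings can optionally be added to the [global] section of that ceph.conf before installing, for example (the subnet below is an assumption based on the addresses used in this article):

public network = 172.18.16.0/24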

Install Ceph on the nodes with ceph-deploy:

ceph-deploy install admin-node mon-node1 node2 node3 node4

Initialize the monitor node and collect the keyrings:

ceph-deploy mon create-initial

Allocate a directory for the OSD daemon on storage node node2:

ssh node2

sudo mkdir /var/local/osd0

exit

Allocate a directory for the OSD daemon on storage node node3:

ssh node3

sudo mkdir /var/local/osd1

exit

Allocate a directory for the OSD daemon on storage node node4:

ssh node4

sudo mkdir /var/local/osd2

exit

    • Prepare and activate the OSDs on the storage nodes with ceph-deploy from the admin-node node.

ceph-deploy osd prepare node2:/var/local/osd0

ceph-deploy osd activate node2:/var/local/osd0

Repeat the same operations for node3 and node4, as shown below.
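
Using the directories created above, the equivalent commands are (osd prepare and osd activate accept several node:path arguments at once):

ceph-deploy osd prepare node3:/var/local/osd1 node4:/var/local/osd2

ceph-deploy osd activate node3:/var/local/osd1 node4:/var/local/osd2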

    • Synchronize the configuration file and admin keyring from the admin-node node to the other nodes:

ceph-deploy admin admin-node mon-node1 node2 node3 node4

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

    • Check the cluster status

Cluster status:

ceph -s

Cluster health:

ceph health

If the installation succeeded, this reports HEALTH_OK.
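
Two more read-only checks are useful for confirming that all three OSDs joined the cluster:

ceph osd tree    # all OSDs should be listed as "up"

ceph df          # raw and per-pool capacity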
