Installing a Ceph storage cluster under CentOS 7


Directory

    • First, prepare the machines
    • Second, Ceph node installation
    • Third, build a cluster
    • Fourth, expand the cluster

First, prepare the machines


There are 4 machines altogether: 1 management node and 3 Ceph nodes:


Hostname     IP              Role         Description
admin-node   192.168.0.130   ceph-deploy  Management node
node1        192.168.0.131   mon.node1    Ceph node, monitor node
node2        192.168.0.132   osd.0        Ceph node, OSD node
node3        192.168.0.133   osd.1        Ceph node, OSD node


Management node: admin-node

Ceph nodes: node1, node2, node3

All nodes: admin-node, node1, node2, node3


1. Modify Host Name
# vi /etc/hostname
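On CentOS 7 you can also set the hostname without editing the file by hand; a quick alternative (shown here for node1, adjust the name per machine) is hostnamectl:

$ sudo hostnamectl set-hostname node1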
2. Modify the Hosts file
# vi /etc/hosts
192.168.0.130 admin-node
192.168.0.131 node1
192.168.0.132 node2
192.168.0.133 node3
3. Ensure connectivity (Management node)

Ping the short hostnames (hostname -s) to confirm network connectivity and rule out any hostname-resolution problems.

$ ping node1
$ ping node2
$ ping node3

Second, Ceph node installation

1. Install NTP (all nodes)

We recommend installing the NTP service on all Ceph nodes (especially the Ceph Monitor nodes) to avoid failures caused by clock drift; see the Ceph clock documentation for details.

# sudo yum install ntp ntpdate ntp-doc
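Installing the packages alone does not start time synchronization; a minimal follow-up, assuming the stock ntpd service on CentOS 7, is to enable it at boot and check its peers:

$ sudo systemctl enable ntpd
$ sudo systemctl start ntpd
// verify that ntpd can reach its time sources
$ ntpq -p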
2. Install SSH (all nodes)
# sudo yum install openssh-server


3. Create a user for deploying Ceph (all nodes)



The ceph-deploy tool must log in to each Ceph node as a regular user, and that user must be able to use sudo without a password, because no passwords can be entered while it installs software and configuration files.
It is recommended that you create a dedicated user for ceph-deploy on every Ceph node in the cluster, but do not use the name "ceph".

1) Create a new user on each Ceph node

$ sudo useradd -d /home/zeng -m zeng
$ sudo passwd zeng

2) Ensure the newly created user on each Ceph node has passwordless sudo permission

$ echo "zeng ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/zeng
$ sudo chmod 0440 /etc/sudoers.d/zeng
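To confirm the sudoers rule took effect, you can switch to the new user and run a harmless sudo command; it should succeed without a password prompt (a quick check, assuming the zeng user created above):

$ su - zeng
$ sudo whoami
root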
4. Allow password-free SSH login (Management node)

Because ceph-deploy does not prompt for passwords, you must generate an SSH key on the management node and distribute its public key to each Ceph node. (ceph-deploy will also attempt to generate SSH keys for the initial monitors.)

1) Generate SSH key pair

Do not use sudo or the root user. When prompted "Enter passphrase", just press Enter to leave the passphrase empty:


// Switch to the deploy user; unless noted otherwise, subsequent steps run as this user
# su zeng

// Generate a key pair
$ ssh-keygen

Generating public/private rsa key pair.
Enter file in which to save the key (/home/zeng/.ssh/id_rsa):
Created directory '/home/zeng/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/zeng/.ssh/id_rsa.
Your public key has been saved in /home/zeng/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256: Tb0VpUOZtmh+QBRjUOE0n2Uy3WuoZVgXn6TBBb2SsGk zeng@admin-node
The key's randomart image is:
+---[RSA 2048]----+
| .+@=OO*|
| *.BB@=|
| ..O+Xo+|
| o E+O.= |
| S oo=.o |
| .. . |
| . |
| |
| |
+----[SHA256]-----+

2) Copy the public key to each Ceph node

$ ssh-copy-id zeng@node1
$ ssh-copy-id zeng@node2
$ ssh-copy-id zeng@node3

When done, under /home/zeng/.ssh/:

    • admin-node has the additional files id_rsa, id_rsa.pub, and known_hosts;
    • node1, node2, and node3 each have the additional file authorized_keys.

3) Modify the ~/.ssh/config file

Edit the ~/.ssh/config file (create it if it does not exist) so that ceph-deploy can log in to the Ceph nodes with the user you created above.

// if you create or edit this file with sudo, fix its ownership and permissions afterwards (see the issue below)
$ sudo vi ~/.ssh/config

Host admin-node
   Hostname admin-node
   User zeng
Host node1
   Hostname node1
   User zeng
Host node2
   Hostname node2
   User zeng
Host node3
   Hostname node3
   User zeng

4) Test whether passwordless SSH login works

$ ssh zeng@node1
$ exit
$ ssh zeng@node2
$ exit
$ ssh zeng@node3
$ exit
  • Issue: If "Bad owner or permissions on /home/zeng/.ssh/config" appears, run the command below to fix the file permissions (see also the ownership fix that follows).
$ sudo chmod 644 ~/.ssh/config
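If the file was created with sudo, it may also be owned by root, which triggers the same error; in that case reset the ownership as well (assuming the zeng user and home directory used above):

$ sudo chown zeng:zeng /home/zeng/.ssh/config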
5. Boot-time Networking (Ceph node)

Ceph's OSD daemons interconnect over the network and report their status to the Monitors. If networking is disabled by default, the Ceph cluster cannot come online at boot until you enable it.

$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s3

// Make sure ONBOOT is set to yes
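If ONBOOT is set to no, one way to flip it and apply the change is shown below; a sketch, assuming the interface is named enp0s3 as above (adjust the name for your system):

$ sudo sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-enp0s3
$ sudo systemctl restart network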
6. Open the required ports (Ceph node)

Ceph Monitors communicate on port 6789 by default, while OSD daemons communicate on ports in the range 6800-7300 by default. Ceph OSDs can use multiple network connections for replication and heartbeat traffic with clients, Monitors, and other OSDs.

$ sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
// or turn off the firewall
$ sudo systemctl stop firewalld
$ sudo systemctl disable firewalld
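If you keep firewalld enabled instead of disabling it, the OSD nodes also need the 6800-7300 range mentioned above, and permanent rules only take effect after a reload; a sketch:

$ sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
$ sudo firewall-cmd --reload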
7. Terminal (TTY) (Ceph node)

You may get an error when executing ceph-deploy on CentOS and RHEL. If your Ceph nodes have requiretty set by default, run:



$ sudo visudo

Find the Defaults requiretty option and change it to Defaults:zeng !requiretty (using the deploy user created earlier) or comment the line out, so that ceph-deploy can connect with that user.

When editing the /etc/sudoers configuration file, always use sudo visudo rather than opening it directly in a text editor.

8. Turn off SELinux (Ceph node)


$ sudo setenforce 0

To make the SELinux change permanent (if SELinux is indeed the source of the problem), modify its configuration file /etc/selinux/config:



$ sudo vi /etc/selinux/config

Set SELINUX=disabled.
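One way to make that change non-interactively (a sketch, assuming the default SELINUX=enforcing line is present) is:

$ sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config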

9. Configure the EPEL source (Management node)

$ sudo yum install -y yum-utils && \
  sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && \
  sudo yum install --nogpgcheck -y epel-release && \
  sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && \
  sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
10. Add the Ceph package source (Management node)

$ sudo vi /etc/yum.repos.d/ceph.repo

Paste the following content into the /etc/yum.repos.d/ceph.repo file:


[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
11. Update the software repository and install ceph-deploy (Management node)

$ sudo yum update && sudo yum install ceph-deploy
$ sudo yum install yum-plugin-priorities

This may take a while; be patient.
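Once the installation finishes, a quick sanity check is to confirm the tool is on the PATH and print its version:

$ ceph-deploy --version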


Third, build a cluster

Perform the following steps on the management node:

1. Installation preparation, creating folders

Create a directory on the management node to hold the configuration files and keys that ceph-deploy generates.


$ cd ~
$ mkdir my-cluster
$ cd my-cluster

Note: If you run into trouble installing Ceph, you can use the following commands to remove the packages and configuration and start over:


// remove the installation package
$ ceph-deploy purge admin-node node1 node2 node3

// clear the configuration
$ ceph-deploy purgedata admin-node node1 node2 node3
$ ceph-deploy forgetkeys
2. Create the cluster and monitor node

To create a cluster and initialize the monitor node(s):



$ ceph-deploy new {initial-monitor-node(s)}

Here node1 is the monitor node, so execute:



$ ceph-deploy new node1

After completion, the my-cluster directory contains three more files: ceph.conf, ceph-deploy-ceph.log, and ceph.mon.keyring.

  • Issue: If "[ceph_deploy][ERROR] RuntimeError: remote connection got closed, ensure requiretty is disabled for node1" appears, run sudo visudo and comment out the Defaults requiretty line.
3. Modify the configuration file


$ cat ceph.conf

The contents are as follows:





[global]
fsid = 89933bbb-257c-4f46-9f77-02f44f4cc95c
mon_initial_members = node1
mon_host = 192.168.0.131
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Change the default number of object replicas in the Ceph configuration from 3 to 2, so the cluster can reach an active + clean state with only two OSDs. Add osd pool default size = 2 to the [global] section:



$ sed -i '$a\osd pool default size = 2' ceph.conf

If you have more than one network interface, you can also add the public network setting to the [global] section of the Ceph configuration file:



public network = {ip-address}/{netmask}
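For the 192.168.0.x addresses used in this walkthrough, that line would look something like the following (shown only as an example; it is not required for a single-NIC setup):

public network = 192.168.0.0/24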
4. Installing Ceph

To install Ceph on all nodes:



$ ceph-deploy install admin-node node1 node2 node3
  • Issue: "[ceph_deploy][ERROR] RuntimeError: Failed to execute command: yum -y install epel-release"

Workaround:

$ sudo yum -y remove epel-release
5. Configure the initial monitor(s) and collect all keys


$ ceph-deploy mon create-initial

After this completes, these keyrings should appear in the current directory (see the example listing after this list):


{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
{cluster-name}.bootstrap-rgw.keyring
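In this walkthrough the cluster name is the default ceph, so listing the directory should show files along these lines (example output; exact names may vary slightly by release):

$ ls *.keyring
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring
ceph.client.admin.keyring   ceph.mon.keyring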
6. Add two OSDs

1) Log in to the Ceph nodes, create a directory for each OSD daemon, and set its permissions.


$ ssh node2
$ sudo mkdir /var/local/osd0
$ sudo chmod 777 /var/local/osd0/
$ exit

$ ssh node3
$ sudo mkdir /var/local/osd1
$ sudo chmod 777 /var/local/osd1/
$ exit

2) Then, from the management node, run ceph-deploy to prepare the OSDs.



$ ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

3) Finally, activate the OSDs.



$ ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
7. Copy the configuration file and admin key to the management node and the Ceph node


$ ceph-deploy admin admin-node node1 node2 node3
8. Make sure you have the right permissions for ceph.client.admin.keyring


$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
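If you also plan to run ceph commands directly from the other nodes, the same permission fix is needed there; for example (assuming the passwordless SSH and sudo setup from earlier):

$ ssh node1 sudo chmod +r /etc/ceph/ceph.client.admin.keyring
$ ssh node2 sudo chmod +r /etc/ceph/ceph.client.admin.keyring
$ ssh node3 sudo chmod +r /etc/ceph/ceph.client.admin.keyring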
9. Check the health status of the cluster and the status of the OSD node
 

zeng@admin-node$ ceph health
HEALTH_OK

zeng@admin-node$ ceph -s
    cluster a3dd419e-5c99-4387-b251-58d4eb582995
     health HEALTH_OK
     monmap e1: 1 mons at {node1=192.168.0.131:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e10: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v22: 64 pgs, 1 pools, 0 bytes data, 0 objects
            12956 MB used, 21831 MB / 34788 MB avail
                  64 active+clean
                  
zeng@admin-node$ ceph osd df
ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS 
 0 0.01659  1.00000 17394M  6478M 10915M 37.24 1.00  64 
 1 0.01659  1.00000 17394M  6478M 10915M 37.25 1.00  64 
              TOTAL 34788M 12956M 21831M 37.24          
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
Fourth, expand the cluster
1. Add an OSD

Add osd.2 on node1.

1) Create a directory



$ ssh node1
$ sudo mkdir /var/local/osd2
$ sudo chmod 777 /var/local/osd2/
$ exit

2) Prepare the OSD



$ ceph-deploy osd prepare node1:/var/local/osd2

3) Activate OSD



$ ceph-deploy osd activate node1:/var/local/osd2

4) Check the health status of the cluster and the status of the OSD node:

zeng@admin-node$ ceph -s
    cluster a3dd419e-5c99-4387-b251-58d4eb582995
     health HEALTH_OK
     monmap e1: 1 mons at {node1=192.168.0.131:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v37: 64 pgs, 1 pools, 0 bytes data, 0 objects
            19450 MB used, 32731 MB / 52182 MB avail
                  64 active+clean

zeng@admin-node$ ceph osd df
ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS 
 0 0.01659  1.00000 17394M  6478M 10915M 37.24 1.00  41 
 1 0.01659  1.00000 17394M  6478M 10915M 37.24 1.00  43 
 2 0.01659  1.00000 17394M  6494M 10899M 37.34 1.00  44 
              TOTAL 52182M 19450M 32731M 37.28          
MIN/MAX VAR: 1.00/1.00  STDDEV: 0.04


2. Add Monitors

Add monitor nodes on node2 and node3.

1) Modify the mon_initial_members, mon_host, and public network settings in ceph.conf:


[global]
fsid = a3dd419e-5c99-4387-b251-58d4eb582995
mon_initial_members = node1,node2,node3
mon_host = 192.168.0.131,192.168.0.132,192.168.0.133
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

osd pool default size = 2
public network = 192.168.0.120/24

2) Push the configuration to the other nodes:



$ ceph-deploy --overwrite-conf config push node1 node2 node3

3) Add the monitor nodes:



$ ceph-deploy mon add node2 node3
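After the new monitors join, you can also verify that they have formed a quorum (a quick check, run from a node with the admin keyring):

$ ceph quorum_status --format json-pretty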

4) View the monitor nodes:


zeng@admin-node$ ceph -s
    cluster a3dd419e-5c99-4387-b251-58d4eb582995
     health HEALTH_OK
     monmap e3: 3 mons at {node1=192.168.0.131:6789/0,node2=192.168.0.132:6789/0,node3=192.168.0.133:6789/0}
            election epoch 8, quorum 0,1,2 node1,node2,node3
     osdmap e25: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v3919: 64 pgs, 1 pools, 0 bytes data, 0 objects
            19494 MB used, 32687 MB / 52182 MB avail
                  64 active+clean

