First, pre-installation preparation
1.1 Introduction to the installation environment
To learn Ceph, it is recommended to set up a ceph-deploy management node and a three-node Ceph storage cluster, as shown in the figure below.
I installed ceph-deploy on ceph-node1.
First, three machines were prepared, named ceph-node1, ceph-node2, and ceph-node3:
Host Name | IP Address | Role | Note
ceph-node1 | 10.89.153.214 (external network), 10.0.1.101, 10.0.2.101 | ceph-deploy, mon | 10.0.1.x is the public network, 10.0.2.x is the cluster network
ceph-node2 | 10.0.1.102, 10.0.2.102 | osd |
ceph-node3 | 10.0.1.103, 10.0.2.103 | osd |
1.2 Configuring the Ceph nodes
ceph-node1:
1. Change /etc/network/interfaces (pending improvement)
cat << EOF >> /etc/network/interfaces
auto eth1
iface eth1 inet static
address 10.89.153.214
gateway 10.89.1.254
netmask 255.255.0.0
dns-nameservers 10.29.28.30
auto eth2
iface eth2 inet static
address 10.0.1.101
netmask 255.255.255.0
auto eth3
iface eth3 inet static
address 10.0.2.101
netmask 255.255.255.0
EOF
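To apply the interface settings without rebooting, you can cycle each interface by hand. This is a minimal sketch assuming the classic ifupdown tooling that /etc/network/interfaces implies (repeat for eth2 and eth3; skip ifdown for an interface that was not configured before):
sudo ifdown eth1 ; sudo ifup eth1
A reboot achieves the same result.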
2. Change hostname
sed -i "s/localhost/ceph-node1/g" /etc/hostname
3. Change the hosts file
cat << EOF >> /etc/hosts
10.0.1.101 ceph-node1
10.0.1.102 ceph-node2
10.0.1.103 ceph-node3
EOF
ceph-node2:
1. Change /etc/network/interfaces (pending improvement)
cat << EOF >> /etc/network/interfaces
auto eth1
iface eth1 inet static
address 10.0.1.102
netmask 255.255.255.0
auto eth2
iface eth2 inet static
address 10.0.2.102
netmask 255.255.255.0
EOF
2. Change hostname
sed -i "s/localhost/ceph-node2/g" /etc/hostname
3. Change the hosts file
cat << EOF >> /etc/hosts
10.0.1.101 ceph-node1
10.0.1.102 ceph-node2
10.0.1.103 ceph-node3
EOF
ceph-node3:
1. Change /etc/network/interfaces (pending improvement)
cat << EOF >> /etc/network/interfaces
auto eth1
iface eth1 inet static
address 10.0.1.103
netmask 255.255.255.0
auto eth2
iface eth2 inet static
address 10.0.2.103
netmask 255.255.255.0
EOF
2. Change hostname
sed -i "s/localhost/ceph-node3/g" /etc/hostname
3. Change the hosts file
cat << EOF >> /etc/hosts
10.0.1.101 ceph-node1
10.0.1.102 ceph-node2
10.0.1.103 ceph-node3
EOF
1.3 Installing the Ceph deployment tool
1. Add the Ceph repository on the ceph-deploy management node and install ceph-deploy. Add the release key:
wget -q -O - 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
Add the Ceph package source, replacing {ceph-stable-release} with the name of a stable Ceph release (such as cuttlefish, dumpling, emperor, firefly, and so on). For example:
echo deb http://ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
Update your repository and install ceph-deploy:
sudo apt-get update && sudo apt-get install ceph-deploy
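If you want to confirm that the tool installed correctly, a simple check (not part of the original procedure) is:
ceph-deploy --version
which should print the installed ceph-deploy version number.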
Procedure on ceph-node1. Add the release key:
wget -q -O - 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
Add the Ceph package source, replacing {ceph-stable-release} with the name of a stable Ceph release (e.g. cuttlefish, dumpling, emperor, and so on). Our cloud platform uses hammer. For example:
echo deb http://ceph.com/debian-hammer/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
Update your repository and install ceph-deploy:
sudo apt-get update && sudo apt-get install ceph-deploy
1.4 Ceph node installation:
1. Installing NTP
sudo apt-get install ntp
2. Installing the SSH server
sudo apt-get install openssh-server
3. Create a Ceph user and assign it sudo permissions, then switch to cephuser.
sudo useradd -d /home/cephuser -m cephuser
sudo passwd cephuser
echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
sudo chmod 0440 /etc/sudoers.d/cephuser
ssh cephuser@{hostname}
For example:
ssh cephuser@ceph-node1
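To confirm that the passwordless sudo rule from /etc/sudoers.d/cephuser works, a quick sanity check (not part of the original procedure) after logging in as cephuser is:
sudo whoami
which should print root without asking for a password.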
4. Configure password-free SSH login (a quick verification example follows this step)
1) Generate an SSH key pair, but do not use sudo or the root user
ssh-keygen
2) Copy the public key to each Ceph node
ssh-copy-id cephuser@ceph-node1
ssh-copy-id cephuser@ceph-node2
ssh-copy-id cephuser@ceph-node3
3) Create /home/cephuser/.ssh/config. The commands are as follows:
mkdir -p /home/cephuser/.ssh
touch /home/cephuser/.ssh/config
cat << EOF > /home/cephuser/.ssh/config
Host ceph-node1
    Hostname ceph-node1
    User cephuser
Host ceph-node2
    Hostname ceph-node2
    User cephuser
Host ceph-node3
    Hostname ceph-node3
    User cephuser
EOF
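Once the keys and the config file are in place, logging in to any node as cephuser should no longer prompt for a password. A minimal check, assuming the hosts entries above are active:
ssh ceph-node2 hostname
should print ceph-node2 and return without a password prompt.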
5. Set ONBOOT=yes in /etc/sysconfig/network-scripts/ifcfg-ethx
ceph-node1:
sudo mkdir -p /etc/sysconfig/network-scripts
sudo touch /etc/sysconfig/network-scripts/ifcfg-eth0
echo "ONBOOT=yes" | sudo tee /etc/sysconfig/network-scripts/ifcfg-eth0
sudo touch /etc/sysconfig/network-scripts/ifcfg-eth1
echo "ONBOOT=yes" | sudo tee /etc/sysconfig/network-scripts/ifcfg-eth1
sudo touch /etc/sysconfig/network-scripts/ifcfg-eth2
echo "ONBOOT=yes" | sudo tee /etc/sysconfig/network-scripts/ifcfg-eth2
ceph-node2:
sudo mkdir -p /etc/sysconfig/network-scripts
sudo touch /etc/sysconfig/network-scripts/ifcfg-eth0
echo "ONBOOT=yes" | sudo tee /etc/sysconfig/network-scripts/ifcfg-eth0
sudo touch /etc/sysconfig/network-scripts/ifcfg-eth1
echo "ONBOOT=yes" | sudo tee /etc/sysconfig/network-scripts/ifcfg-eth1
ceph-node3:
sudo mkdir -p /etc/sysconfig/network-scripts
sudo touch /etc/sysconfig/network-scripts/ifcfg-eth0
echo "ONBOOT=yes" | sudo tee /etc/sysconfig/network-scripts/ifcfg-eth0
sudo touch /etc/sysconfig/network-scripts/ifcfg-eth1
echo "ONBOOT=yes" | sudo tee /etc/sysconfig/network-scripts/ifcfg-eth1
Second, storage cluster
2.1 Creating a cluster directory
Create a directory on the management node ceph-node1 to hold the configuration files and keys that ceph-deploy generates.
ssh cephuser@ceph-node1
mkdir my-cluster
cd my-cluster
2.2 Creating a cluster
The following operations are performed on the ceph-node1 node.
1. Create a cluster
ceph-deploy new ceph-node1
2. Change the default number of replicas in the Ceph configuration file from 3 to 2 so that the cluster can reach the active + clean state with only two OSDs. Add the following line to the [global] section:
osd pool default size = 2
echo "osd pool default size = 2" | sudo tee -a ceph.conf
3. If you have multiple network cards, you can add a public network setting to the [global] section of the Ceph configuration file (see the example snippet below):
public network = {ip-address}/{netmask}
echo "public network = 10.0.1.0/24" | sudo tee -a ceph.conf
4. Installing Ceph
ceph-deploy install ceph-node1 ceph-node2 ceph-node3 --no-adjust-repos
5. Configure the initial monitor, and collect all keys
ceph-deploy mon create-initial
Check: after the above operation, these keyrings should appear in the current directory:
{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
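With the default cluster name ceph, those files are ceph.client.admin.keyring, ceph.bootstrap-osd.keyring, and ceph.bootstrap-mds.keyring. A quick way to confirm they were collected (a simple check, not from the original procedure):
ls -l *.keyring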
2.3 Adding two OSDs
For a quick installation, this quick start uses a directory rather than an entire hard drive for each OSD daemon. Refer to the ceph-deploy osd documentation for using a separate drive or partition for the OSD and its journal; an illustrative example is sketched below.
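As an illustration of that alternative, hammer-era ceph-deploy accepts a host:disk[:journal] form; sdb and /dev/ssd below are hypothetical device names, so substitute your own:
ceph-deploy osd prepare ceph-node2:sdb:/dev/ssd
ceph-deploy osd activate ceph-node2:/dev/sdb1:/dev/ssd1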
1. Log in to the Ceph nodes and create a directory for each OSD daemon.
ssh ceph-node2
sudo mkdir -p /var/local/osd0
ls -l /var/local/osd0
exit
ssh ceph-node3
sudo mkdir -p /var/local/osd1
ls -l /var/local/osd1
exit
Then, from the management node, prepare and activate the OSDs:
ceph-deploy osd prepare ceph-node2:/var/local/osd0 ceph-node3:/var/local/osd1
ceph-deploy osd activate ceph-node2:/var/local/osd0 ceph-node3:/var/local/osd1
2. Use ceph-deploy to copy the configuration file and admin key to the management node and the Ceph nodes, so that you do not have to specify the monitor address and ceph.client.admin.keyring every time you run the ceph command line.
ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
3. Ensure that the ceph.client.admin.keyring permissions are correct.
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
4. Check the cluster health status
ceph health
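On a healthy cluster, ceph health typically reports HEALTH_OK once both OSDs are up and in. For more detail, the standard status command can also be used:
ceph -s
which prints a summary of the monitors, OSD count, and placement group states.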