Ceph installation and deployment in a CentOS 7 environment
Ceph Introduction
Ceph is designed to provide high performance, high scalability, and high availability on low-cost storage media, offering unified storage: file storage, block storage, and object storage. I recently read the relevant documentation and found it interesting. Ceph already provides block storage for OpenStack, which fits the mainstream trend.
Ceph deployment
1. Host preparation
The experiment is performed in VMware virtual machines, mainly to gain a clear understanding of Ceph.
Step 1: prepare five hosts
IP address      Host name
192.168.1.110   admin-node (the management host; all subsequent ceph-deploy operations are run here)
192.168.1.111   node1 (monitor node)
192.168.1.112   node2 (osd.0 node)
192.168.1.113   node3 (osd.1 node)
192.168.1.114   client-node (the client that mounts the storage provided by the Ceph cluster for testing)
Step 2: Modify the /etc/hosts file on admin-node and add the following content:
192.168.1.111 node1
192.168.1.112 node2
192.168.1.113 node3
192.168.1.114 client-node
Note: The ceph-deploy tool communicates with the other nodes by host name. Run the following command to change a host name: hostnamectl set-hostname "new-name"
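For example, the names used in this walkthrough can be set as follows (run each command on the corresponding machine so that the names match the /etc/hosts entries above):
hostnamectl set-hostname admin-node   # on 192.168.1.110
hostnamectl set-hostname node1        # on 192.168.1.111
hostnamectl set-hostname node2        # on 192.168.1.112
hostnamectl set-hostname node3        # on 192.168.1.113
hostnamectl set-hostname client-node  # on 192.168.1.114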
Step 3:
Create a ceph user on each node (as root, or as a user with root privileges):
Create the user
sudo adduser -d /home/ceph -m ceph
Set a password
sudo passwd ceph
Grant sudo permissions
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
Run visudo to edit the sudoers file:
Change the "Defaults requiretty" line to "Defaults:ceph !requiretty"
If this is not changed, ceph-deploy will report an error when it executes commands over ssh.
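The change in the sudoers file looks like this (only the requiretty line is touched; the surrounding file is left as-is):
# original line
Defaults    requiretty
# changed so that the ceph user is exempt from the tty requirement
Defaults:ceph !requiretty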
2. Configure passwordless SSH so that the admin-node can access the other nodes as the ceph user
Step 1: run the following command on the admin-node host:
ssh-keygen
Note: for simplicity, just press Enter at each prompt to accept the defaults.
Step 2: copy the key generated in step 1 to the other nodes
ssh-copy-id ceph@node1
ssh-copy-id ceph@node2
ssh-copy-id ceph@node3
ssh-copy-id ceph@client-node
At the same time, add the following content to the ~/.ssh/config file:
Host node1
    HostName 192.168.1.111
    User ceph
Host node2
    HostName 192.168.1.112
    User ceph
Host node3
    HostName 192.168.1.113
    User ceph
Host client-node
    HostName 192.168.1.114
    User ceph
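A quick way to verify the passwordless setup from the admin-node (an extra check, not part of the original steps):
ssh node1 hostname    # should print "node1" without prompting for a password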
3. Install ceph-deploy on the admin-node
Step 1: add the yum configuration file
sudo vim /etc/yum.repos.d/ceph.repo
Add the following content:
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git?p=ceph.git;a=blob_plain;f=keys/release.asc
Step 2: update the package index, then install ceph-deploy and the time synchronization software:
sudo yum update && sudo yum install ceph-deploy
sudo yum install ntp ntpdate ntp-doc
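The commands above only install the NTP packages; a common follow-up (my addition, not listed in the original steps) is to enable the service so that the node clocks stay synchronized:
sudo systemctl enable ntpd
sudo systemctl start ntpd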
4. Disable the firewall and SELinux on all nodes (run on every node), and a few other preparation steps
sudo systemctl stop firewalld.service
sudo setenforce 0
sudo yum install yum-plugin-priorities
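Note that stopping firewalld and setenforce 0 only last until the next reboot; if the intent is to keep them off permanently (an assumption on my part, not spelled out in the original), something like the following is also needed:
sudo systemctl disable firewalld.service
# keep SELinux in permissive mode across reboots
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config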
Summary: with the steps above, all prerequisites are in place; the following steps actually deploy Ceph.
5. As the previously created ceph user, create a working directory on the admin-node:
mkdir my-cluster
cd my-cluster
6. Create a cluster
Node roles: node1 acts as the monitor node, node2 and node3 act as OSD nodes, and admin-node acts as the management node.
Step 1: run the following command to create a cluster with node1 as the monitor node.
ceph-deploy new node1
After executing this command, a ceph.conf file is generated in the current directory. Open the file and add the following line:
osd pool default size = 2
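For reference, after the edit the ceph.conf in the working directory should look roughly like the sketch below. The fsid and the other lines are generated by ceph-deploy and will differ; only the last line is added by hand:
[global]
fsid = <uuid generated by ceph-deploy>
mon_initial_members = node1
mon_host = 192.168.1.111
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2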
Step 2: Use ceph-deploy to install Ceph on the nodes
ceph-deploy install admin-node node1 node2 node3
Step 3: Initialize the monitor node and collect the keys:
ceph-deploy mon create-initial
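If this step succeeds, the working directory should now contain the collected keys next to ceph.conf; the exact file names vary a little between versions, but typically include:
ls
# ceph.conf, ceph.mon.keyring, ceph.client.admin.keyring,
# plus bootstrap keyrings such as ceph.bootstrap-osd.keyring and ceph.bootstrap-mds.keyring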
7. Allocate disk space for the OSD daemons on the storage nodes:
ssh node2
sudo mkdir /var/local/osd0
exit
ssh node3
sudo mkdir /var/local/osd1
exit
Next, prepare and activate the OSDs on those nodes with ceph-deploy from the admin-node:
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
Synchronize the configuration file and admin keyring from the admin-node to the other nodes:
ceph-deploy admin admin-node node1 node2 node3
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
Finally, run the following command to check the cluster health status:
ceph health
If the deployment succeeded, it reports: HEALTH_OK.
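For a more detailed view than ceph health, two standard commands (extra checks, not part of the original walkthrough) are:
ceph -s          # overall cluster status: monitors, OSDs, placement groups, usage
ceph osd tree    # OSD layout; osd.0 and osd.1 should both show as "up"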
Using the Ceph cluster's storage:
1. Prepare the client-node
Run the following command on the admin-node:
ceph-deploy install client-node
ceph-deploy admin client-node
2. Create a block device image:
rbd create foo --size 4096
(the --size value is in megabytes, so this creates a 4 GB image named foo)
Map the block device provided by Ceph on the client-node:
sudo rbd map foo --pool rbd --name client.admin
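To confirm the mapping worked, rbd can list the mapped images and the kernel device each one received (an extra verification step, not in the original):
sudo rbd showmapped    # shows pool "rbd", image "foo", and the /dev/rbdX device it is mapped to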
3. Create a file system
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
4. Mount the file system
sudo mkdir /mnt/test
sudo mount /dev/rbd/rbd/foo /mnt/test
cd /mnt/test
Finished!
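As a final sanity check, a small write through the mounted file system exercises the whole stack (the file name and size here are arbitrary):
sudo dd if=/dev/zero of=/mnt/test/testfile bs=1M count=100    # write 100 MB to the rbd-backed mount
df -h /mnt/test                                               # total size should be about 4 GB, matching the image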