Pre-Preparation:
Planning: 8 Machines
IP             Hostname   Role
192.168.2.20   mon        mon.mon
192.168.2.21   osd1       osd.0, mon.osd1
192.168.2.22   osd2       osd.1, mds.b (standby)
192.168.2.23   osd3       osd.2
192.168.2.24   osd4       osd.3
192.168.2.27   client     mds.a, mon.client
192.168.2.28   osd5       osd.4
192.168.2.29   osd6       osd.5
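The nodes need to resolve each other's hostnames. If you are not using DNS, a minimal /etc/hosts sketch derived from the planning table above (adjust to your own naming) looks like this on every machine:
192.168.2.20  mon
192.168.2.21  osd1
192.168.2.22  osd2
192.168.2.23  osd3
192.168.2.24  osd4
192.168.2.27  client
192.168.2.28  osd5
192.168.2.29  osd6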
Turn off SELinux
[root@admin ceph]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@admin ceph]# setenforce 0
Open the ports required by Ceph
[root@admin ceph]# firewall-cmd --zone=public --add-port=6789/tcp --permanent
[root@admin ceph]# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
[root@admin ceph]# firewall-cmd --reload
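To confirm that the rules took effect after the reload, you can list the open ports; the output should include the two entries added above:
[root@admin ceph]# firewall-cmd --zone=public --list-ports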
Install NTP to synchronize time
[root@admin ceph]# yum -y install ntp ntpdate ntp-doc
[root@admin ceph]# ntpdate 0.us.pool.ntp.org
[root@admin ceph]# hwclock --systohc
[root@admin ceph]# systemctl enable ntpd.service
[root@admin ceph]# systemctl start ntpd.service
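As an optional sanity check, you can query the NTP peers on each node; at least one remote time source should be listed as reachable:
[root@admin ceph]# ntpq -p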
SSH password-free access:
[root@admin ceph]# ssh-keygen
[root@admin ceph]# ssh-copy-id {username}@node1
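To avoid running ssh-copy-id by hand for every machine, a small loop over the hosts from the planning table can be used. This is only a sketch; it assumes you deploy as root and that the hostnames resolve:
[root@admin ceph]# for node in mon osd1 osd2 osd3 osd4 osd5 osd6 client; do ssh-copy-id root@$node; done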
Installation:
Install Ceph (this must be done on every node)
First install the dependency packages (about 20 of them). You could also install them one at a time as problems come up, but that is more trouble and not recommended.
[root@admin ceph]# yum install -y make automake autoconf boost-devel fuse-devel gcc-c++ libtool libuuid-devel libblkid-devel keyutils-libs-devel cryptopp-devel fcgi-devel libcurl-devel expat-devel gperftools-devel libedit-devel libatomic_ops-devel snappy-devel leveldb-devel libaio-devel xfsprogs-devel git libudev-devel btrfs-progs
Install Ceph using yum. Configure the yum source as described on the official website, http://ceph.com/docs/master/install/get-packages/ (a sketch of such a repo file follows).
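As a rough example, a /etc/yum.repos.d/ceph.repo along the following lines can be used. The release name (hammer here) and the exact URLs are assumptions; take the authoritative baseurl and key location from the official page linked above:
[ceph]
name=Ceph packages
baseurl=http://download.ceph.com/rpm-hammer/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-hammer/el7/noarch/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc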
[root@admin ceph]# yum install -y ceph-deploy ceph
If yum does not work, you can download the desired Ceph version from the Ceph website and install it manually.
Cluster configuration (manual, not recommended)
Mon Installation:
1. Assign a unique ID (the fsid) to the cluster
[root@mon ceph]# uuidgen
d437c823-9d58-43dc-b586-6b36cf286d4f
2. Create a Ceph configuration file. Ceph uses ceph.conf by default, where "ceph" is the cluster name.
[root@mon ceph]# sudo vi /etc/ceph/ceph.conf
Put the fsid generated above into ceph.conf:
fsid = d437c823-9d58-43dc-b586-6b36cf286d4f
3. Write the initial monitor(s) and their IP address(es) to the Ceph configuration file, separated by commas:
mon initial members = mon
mon host = 192.168.2.20
4. Create a keyring for the cluster and generate a monitor secret key.
[root@mon ceph]# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
5. Generate the administrator keyring, generate the client.admin user, and add it to the keyring.
[root@mon ceph]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
6. Add the client.admin key to ceph.mon.keyring.
[root@mon ceph]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
7. Generate a monitor map with the planned hostname, its IP address, and the fsid, and save it as /tmp/monmap.
[root@mon ceph]# monmaptool --create --add mon 192.168.2.20 --fsid d437c823-9d58-43dc-b586-6b36cf286d4f /tmp/monmap
8. Create the data directory on the monitor host.
[root@mon ceph]# mkdir /var/lib/ceph/mon/ceph-mon
9. Populate the monitor daemon with its initial data, using the monitor map and keyring.
[root@mon ceph]# ceph-mon --mkfs -i mon --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
10. Modify the Ceph configuration file; at this point it should include the following:
[global]
fsid = d437c823-9d58-43dc-b586-6b36cf286d4f
mon initial members = mon
mon host = 192.168.2.20
public network = 192.168.2.0/24
cluster network = 192.168.2.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
auth supported = none
osd journal size = 1024
#filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
[mon]
mon data = /var/lib/ceph/mon/$name
[mon.mon]
host = mon
mon addr = 192.168.2.20:6789
11. Create an empty file named "done" to indicate that the monitor has been created and can be started:
[root@mon ceph]# touch /var/lib/ceph/mon/ceph-mon/done
12. Start the Monitor
[root@mon ceph]# /etc/init.d/ceph start mon.mon
13. View Status
[root@mon ceph]# ceph -s
Add Mon
Only one monitor instance can run per host, so add the additional monitors on the other nodes.
1. Create the default directory on the new monitor host:
[root@osd1 ceph]# mkdir /var/lib/ceph/mon/ceph-{mon-id}
For example: mkdir /var/lib/ceph/mon/ceph-osd1
2. Get the monitor keyring.
[root@osd1 ceph]# ceph auth get mon. -o /tmp/ceph.mon.keyring
If this step fails, you can simply copy the keyring from an existing mon node into the same path.
3. Get the monitor map
[root@osd1 ceph]# ceph mon getmap -o /tmp/ceph.mon.map
4. Prepare the monitor data directory created in the first step. You must specify the path to the monitor map (so the new monitor can obtain the quorum members and their fsid) as well as the path to the monitor keyring.
[root@osd1 ceph]# ceph-mon -i {mon-id} --mkfs --monmap /tmp/ceph.mon.map --keyring /tmp/ceph.mon.keyring
For example: [root@osd1 ceph]# ceph-mon -i osd1 --mkfs --monmap /tmp/ceph.mon.map --keyring /tmp/ceph.mon.keyring
5. Add the new monitor to the cluster's monitor list (at runtime), so that other nodes can use it when they start.
[root@osd1 ceph]# ceph mon add <mon-id> <ip>[:<port>]
For example: [root@osd1 ceph]# ceph mon add osd1 192.168.2.21:6789
6. Start the new monitor and it will automatically join the cluster. The daemon needs to know which address to bind to; specify it either with --public-addr {ip:port} or by setting mon addr in the corresponding section of ceph.conf.
[root@osd1 ceph]# ceph-mon -i {mon-id} --public-addr {ip:port}
For example: [root@osd1 ceph]# ceph-mon -i osd1 --public-addr 192.168.2.21:6789
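Once the new monitor is running, it is worth checking that it actually joined the quorum. For example, on any node with the admin keyring:
[root@mon ceph]# ceph quorum_status --format json-pretty
The quorum_names list in the output should now include the newly added monitor.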
Delete Mon:
[root@mon ceph]# ceph mon remove node1
Add OSD
1. Modify /etc/ceph/ceph.conf on the mon node as follows:
[global]
fsid = d437c823-9d58-43dc-b586-6b36cf286d4f
mon initial members = mon
mon host = 192.168.2.20
public network = 192.168.2.0/24
cluster network = 192.168.2.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
auth supported = none
osd journal size = 1024
#filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
[mon]
mon data = /data/mon/$name
[mon.mon]
host = mon
mon addr = 192.168.2.20:6789
[osd]
osd journal size = 1024
osd journal = /data/$name/journal
osd data = /data/$name
[osd.0]
host = osd1
devs = /dev/sda2
[osd.1]
host = osd2
devs = /dev/sda2
[osd.2]
host = osd3
devs = /dev/sda2
Then copy it to the /etc/ceph directory of every node on which an OSD will be created (osd1, osd2 and osd3 in this configuration):
[root@mon ceph]# scp /etc/ceph/ceph.conf root@osd1:/etc/ceph/ceph.conf
[root@mon ceph]# scp /etc/ceph/ceph.conf root@osd2:/etc/ceph/ceph.conf
[root@mon ceph]# scp /etc/ceph/ceph.conf root@osd3:/etc/ceph/ceph.conf
2. On each OSD node, create the data directory and mount the data partition, for example:
[root@osd1 ~]# mkdir /data/osd.0
[root@osd1 ~]# mount /dev/sda2 /data/osd.0
The file system used here is XFS; see other references for how to partition the disk (a brief sketch follows).
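As a minimal sketch, assuming /dev/sda2 is a dedicated, empty partition for this OSD and that losing its current contents is acceptable, the file system can be created and added to /etc/fstab like this:
[root@osd1 ~]# mkfs.xfs -f /dev/sda2
[root@osd1 ~]# echo '/dev/sda2 /data/osd.0 xfs defaults,noatime 0 0' >> /etc/fstab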
3. Create the OSD. If no UUID is specified, one will be assigned when the OSD first starts. The following command prints the OSD number, which is needed in the subsequent steps.
[root@osd1 ~]# uuidgen
8c907505-be2b-49ce-b30e-587d992fceec
[root@osd1 ~]# ceph osd create 8c907505-be2b-49ce-b30e-587d992fceec
4. Initialize the OSD data directory
[root@osd1 ~]# ceph-osd -i 0 --mkfs --mkkey --osd-uuid 8c907505-be2b-49ce-b30e-587d992fceec
5. Register the key for this OSD.
[root@osd1 ~]# ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /data/osd.0/keyring
6. Add this node to the CRUSH map.
[root@osd1 ~]# ceph osd crush add-bucket osd1 host
7. Put this Ceph node under the default root.
[root@osd1 ~]# ceph osd crush move osd1 root=default
8. Add the OSD to the CRUSH map so that it can receive data.
[root@osd1 ~]# ceph osd crush add osd.0 1.0 host=osd1
9. Start the OSD
[root@osd1 ~]# ceph-osd -i 0
10. Check the status. After all three OSDs have been added successfully, the output looks like this:
[root@mon ~]# ceph -s
    cluster d437c823-9d58-43dc-b586-6b36cf286d4f
     health HEALTH_OK
     monmap e1: 1 mons at {mon=192.168.2.20:6789/0}, election epoch 2, quorum 0 mon
     osdmap e22: 3 osds: 3 up, 3 in
      pgmap v58: 192 pgs, 3 pools, 0 bytes data, 0 objects
            3175 MB used, 5550 GB / 5553 GB avail
                 192 active+clean
If you encounter the error "Error EINVAL: entity osd.0 exists but key does not match" when adding an OSD, execute:
[root@mon ~]# ceph auth del osd.0
and then add the OSD again.
Remove OSD
1. Mark an OSD hard disk down
[root@mon ~]# ceph osd down 0
# marks osd.0 down
2. Remove an OSD hard disk from the cluster
[root@mon ~]# ceph osd rm 0
removed osd.0
3. Remove the OSD hard disk from the cluster CRUSH map
[root@mon ~]# ceph osd crush rm osd.0
4. Remove the OSD's host node from the cluster CRUSH map
[root@mon ~]# ceph osd crush rm node1
removed item id -2 name 'node1' from crush map
5. Verify the result with ceph osd tree
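For example, run the command on any node with the admin keyring and check that the removed osd.0 and its host no longer appear in the hierarchy:
[root@mon ~]# ceph osd tree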
Add MDS server and Client Configuration
Add MDS Server
Method 1:
1. Add the MDS configuration to /etc/ceph/ceph.conf on the mon node and copy it to the other nodes.
[root@mon ceph]# vi ceph.conf
[mds.a]
host = client
[root@mon ceph]# scp /etc/ceph/ceph.conf root@client:/etc/ceph/ceph.conf
(repeat for the remaining nodes in the cluster)
2. Create a directory for the MDS metadata server
[root@client ceph]# mkdir -p /var/lib/ceph/mds/ceph-a
3. Create a key for the bootstrap-mds client
[root@client ceph]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring --gen-key -n client.bootstrap-mds
4. Create the bootstrap-mds client in the Ceph auth database, giving it permissions and adding the key created above
[root@client ceph]# ceph auth add client.bootstrap-mds mon 'allow profile bootstrap-mds' -i /var/lib/ceph/bootstrap-mds/ceph.keyring
5. Create the mds.a user in the Ceph auth database, grant its permissions, and generate its key, which is stored in the /var/lib/ceph/mds/ceph-a/keyring file
[root@client ceph]# ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.a osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-a/keyring
6. Start the MDS daemon
[root@client ceph]# ceph-mds -i a
or: [root@client ceph]# service ceph start mds.a
7. View cluster status
[root@mon ceph]# ceph -s
    cluster d437c823-9d58-43dc-b586-6b36cf286d4f
     health HEALTH_OK
     monmap e1: 1 mons at {mon=192.168.2.20:6789/0}, election epoch 2, quorum 0 mon
     mdsmap e4: 1/1/1 up {0=0=up:active}
     osdmap e22: 3 osds: 3 up, 3 in
      pgmap v60: 192 pgs, 3 pools, 1884 bytes data, objects
            3175 MB used, 5550 GB / 5553 GB avail
                 192 active+clean
  client io 2 B/s wr, 0 op/s
Method 2:
Alternatively, run the daemon directly:
[root@client ceph]# ceph-mds -i client -n mds.0 -c /etc/ceph/ceph.conf -m 192.168.2.20:6789
[root@client ceph]# ceph mds stat
8. Mount CephFS on the client (requires the MDS service)
The client must first install the ceph-fuse package
[root@client ceph]# yum install ceph-fuse -y
Create a directory
[root@client ceph]# mkdir /data/mycephfs
Mount
[root@client ceph]# ceph-fuse -m 192.168.2.20:6789 /data/mycephfs
View
[root@client ceph]# df -h
Filesystem      Size  Used  Avail Use% Mounted on
devtmpfs        940M     0   940M   0% /dev
tmpfs           948M     0   948M   0% /dev/shm
tmpfs           948M  8.5M   940M   1% /run
tmpfs           948M     0   948M   0% /sys/fs/cgroup
/dev/sda3       7.8G  1.7G   5.8G  22% /
/dev/sda1      1022M  9.8M  1013M   1% /boot/efi
/dev/sda2       1.9T   33M   1.9T   1% /data
ceph-fuse       5.5T  3.2G   5.5T   1% /data/mycephfs
Mounting via RBD (no MDS service required)
1. Create a Ceph pool
ceph osd pool create {pool-name} {pg-num} [{pgp-num}]
For example:
[root@client ceph]# ceph osd pool create rbdpool 100 100
2. Create a new image in the pool
[root@client ceph]# rbd create rbdpoolimages --size 1048576 -p rbdpool
or
[root@client ceph]# rbd create rbdpool/rbdpoolimages --size 102400
Choose the size yourself; the value is in MB (so 1048576 MB is 1 TB).
3. List the block device images in a pool
rbd ls {poolname}
[root@client ceph]# rbd ls rbdpool
4. Query information about an image within a pool
rbd --image {image-name} -p {pool-name} info
For example:
[root@client ceph]# rbd --image rbdpoolimages -p rbdpool info
5. Map the image to a block device
[root@client ceph]# rbd map rbdpoolimages -p rbdpool
(To unmap the block device later, use the command: rbd unmap /dev/rbd1)
6. View the image mappings
[root@client ceph]# rbd showmapped
7. Format the mapped block device
[root@client ceph]# mkfs.xfs /dev/rbd1
8. Mount the newly created file system
[root@client ceph]# mkdir /data/rbddir
[root@client ceph]# mount /dev/rbd1 /data/rbddir
9. Modify the /etc/fstab file to add the mount information (see the note below about mapping the image at boot).
/dev/rbd1 /data/rbddir xfs defaults 0 0
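Note that a plain fstab entry only mounts the device if the RBD image is already mapped. To map it automatically at boot, Ceph ships an rbdmap helper; the sketch below is an assumption-laden example that uses the default admin keyring path and presumes your Ceph version provides the rbdmap service.
Add a line to /etc/ceph/rbdmap:
rbdpool/rbdpoolimages id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
Then enable the service (the exact unit or init script name can differ between Ceph versions):
[root@client ceph]# systemctl enable rbdmap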
10. View
[root@client ceph]# df -Th
Object file upload method (this method is not very convenient to use and is not recommended)
1: Create a pool
# rados mkpool putdir
2: Upload: rados put {object-name} {file-path} --pool=putdir
Example:
rados put zabbix_client_install.tar.gz ./zabbix_client_install.tar.gz --pool=putdir
3: View the uploaded content:
rados -p putdir ls
zabbix_client_install.tar.gz
4: Download an object file
Download: rados get {object-name} {file-path} --pool=putdir
rados get zabbix_client_install.tar.gz /workspace/zabbix_client_install.tar.gz -p putdir