Distributed Storage Ceph Preparation:
client50, node51, node52 and node53 are virtual machines:
client50: 192.168.4.50  client machine, also acts as the NTP server; the other hosts use .50 as their NTP source
          // echo "allow 192.168.4.0/24" >> /etc/chrony.conf
node51: 192.168.4.51  add three 10G disks
node52: 192.168.4.52  add three 10G disks
node53: 192.168.4.53  add three 10G disks
node54: 192.168.4.54

Set up the yum source: the physical host shares the ISO
mount /iso/rhcs2.0-rhosp9-20161113-x86_64.iso /var/ftp/ceph
/var/ftp/ceph/rhceph-2.0-rhel-7-x86_64/MON/
/var/ftp/ceph/rhceph-2.0-rhel-7-x86_64/OSD/
/var/ftp/ceph/rhceph-2.0-rhel-7-x86_64/Tools/

cat /etc/hosts    // write the hosts file; every host needs this configuration
192.168.4.50 client50
192.168.4.51 node51
192.168.4.52 node52
192.168.4.53 node53
pscp.pssh    // optional: parallel scp tool for pushing files to all nodes

node51: passwordless ssh to client50, node51, node52 and node53
ssh-keygen -f /root/.ssh/id_rsa -N ''    // generate the key pair non-interactively
Copy the public key to the other hosts and to itself to enable passwordless ssh login:
for i in 50 51 52 53; do ssh-copy-id 192.168.4.$i ; done
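Every node also needs a yum repository pointing at the shared FTP tree above. A minimal sketch of such a repo file, assuming the physical host exports /var/ftp over FTP at 192.168.4.254 (the IP is an assumption; use your host's address) and that the file is saved as /etc/yum.repos.d/ceph.repo on client50 and node51-node53:

# /etc/yum.repos.d/ceph.repo  -- the FTP server IP is an assumption for this sketch
[ceph-mon]
name=ceph mon
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/MON
gpgcheck=0

[ceph-osd]
name=ceph osd
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/OSD
gpgcheck=0

[ceph-tools]
name=ceph tools
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/Tools
gpgcheck=0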
Distributed File System
A distributed file system (Distributed File System) is a file system whose managed physical storage resources are not necessarily attached directly to the local node, but are connected to the nodes over a computer network. Distributed file systems are designed on the client/server (C/S) model. Common distributed file systems: Lustre, Hadoop, FastDFS, Ceph, GlusterFS.
About Ceph
There are both official (paid) and open-source versions. Ceph is a distributed file system with high scalability, high availability and high performance. Ceph can provide object storage, block storage and file system storage, and can offer PB-scale storage space (PB → TB → GB). Software Defined Storage (SDS) is a major trend in the storage industry. Official documentation: http://docs.ceph.org/start/intro
Ceph components
OSDs: storage devices
Monitors: cluster monitoring components
MDSs: store the file system's metadata (object storage and block storage do not need this component)
    Metadata: information about a file such as its size and permissions, e.g.
    drwxr-xr-x 2 root root 6 Oct 11 10:37 /root/a.sh
Client: the Ceph client
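Once the cluster built in the experiment below is running, each of these components can be inspected from an admin node; a quick sketch (commands only, output omitted):

ceph -s            # overall cluster health, MON quorum, OSD and PG summary
ceph mon stat      # monitor map: which MONs exist and who is in quorum
ceph osd tree      # OSD layout: hosts and the storage devices (OSDs) on them
ceph mds stat      # MDS state (only meaningful once an MDS/CephFS is deployed)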
Experiment:
Use node51 as the deployment host.

node51:
1. Install the deployment software:
   yum -y install ceph-deploy    // after installation, use ceph-deploy --help for help
   Create a directory for the deployment tool to hold the keys and config files:
   mkdir /root/ceph-cluster
   cd /root/ceph-cluster
2. Create the Ceph cluster
   Create the Ceph cluster configuration (all nodes are MONs):
   ceph-deploy new node51 node52 node53
   Install the ceph packages on all nodes:
   ceph-deploy install node51 node52 node53
   Initialize the MON (monitoring) service on all nodes (requires /etc/hosts hostname resolution on every host):
   ceph-deploy mon create-initial
3. Create OSDs
   All nodes prepare disk partitions (node51 is used as the example; node52 and node53 are partitioned the same way):
   1) Set the partition table type:
      parted /dev/vdb mklabel gpt
   2) Create a partition from the first 50% of the disk, starting at 1M:
      parted /dev/vdb mkpart primary 1M 50%
   3) Create a partition from the last 50% of the disk:
      parted /dev/vdb mkpart primary 50% 100%
   4) Set the owner and group of both partitions to ceph, so the cluster has administrative rights over them:
      chown ceph.ceph /dev/vdb1
      chown ceph.ceph /dev/vdb2
      echo 'chown ceph.ceph /dev/vdb*' >> /etc/rc.d/rc.local
      chmod +x /etc/rc.d/rc.local
      Note: these two partitions are used as the journal (log) disks for the storage servers.
   - Initialize (zap) the disks to erase their data (administrative operation on node51 only):
     cd /root/ceph-cluster/    // must be run in this directory
     ceph-deploy disk zap node51:vdc node51:vdd
     ceph-deploy disk zap node52:vdc node52:vdd
     ceph-deploy disk zap node53:vdc node53:vdd
   - Create the OSD storage devices (administrative operation on node51 only):
     ceph-deploy osd create node51:vdc:/dev/vdb1 node51:vdd:/dev/vdb2
     >> Host node51 is now ready for osd use.
     ceph-deploy osd create node52:vdc:/dev/vdb1 node52:vdd:/dev/vdb2
     >> Host node52 is now ready for osd use.
     ceph-deploy osd create node53:vdc:/dev/vdb1 node53:vdd:/dev/vdb2
     >> Host node53 is now ready for osd use.
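Since node52 and node53 are partitioned exactly like node51, the per-node journal-disk preparation can also be scripted from node51 over the passwordless ssh set up earlier. A minimal sketch, assuming /dev/vdb is the journal disk on every node (the loop itself is not part of the original walkthrough):

# run on node51; repeats the journal-disk preparation on every storage node
for host in node51 node52 node53; do
    ssh $host "parted -s /dev/vdb mklabel gpt"
    ssh $host "parted -s /dev/vdb mkpart primary 1M 50%"
    ssh $host "parted -s /dev/vdb mkpart primary 50% 100%"
    ssh $host "chown ceph.ceph /dev/vdb1 /dev/vdb2"
    ssh $host "echo 'chown ceph.ceph /dev/vdb*' >> /etc/rc.d/rc.local"
    ssh $host "chmod +x /etc/rc.d/rc.local"
done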
Service View
node51: services related to Ceph
ceph-create-keys@node51.service  ceph-mon@node51.service  ceph-osd@<id>.service (one per OSD)
ceph-mds.target  ceph-mon.target  ceph-osd.target  ceph-radosgw.target  ceph.target
node52: services related to Ceph
ceph-create-keys@node52.service  ceph-mon@node52.service  ceph-osd@<id>.service (one per OSD)
ceph-mds.target  ceph-mon.target  ceph-osd.target  ceph-radosgw.target  ceph.target
node53: services related to Ceph
ceph-create-keys@node53.service  ceph-mon@node53.service  ceph-osd@<id>.service (one per OSD)
ceph-mds.target  ceph-mon.target  ceph-osd.target  ceph-radosgw.target  ceph.target
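These units can be checked or restarted with systemctl; a short sketch (the instance names after '@' are assumptions based on this cluster's hostnames and OSD numbering):

systemctl status ceph-mon@node51.service    # the MON daemon on node51
systemctl status ceph-osd@0.service         # one OSD instance (IDs are assigned at 'osd create' time)
systemctl restart ceph-osd@0.service        # restart a single OSD
systemctl status ceph.target                # umbrella target grouping all Ceph services on the host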
Deploying a Ceph Cluster
>> Install the deployment software ceph-deploy
>> Create the Ceph cluster
>> Create the OSD storage space
>> Check the Ceph status and verify
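For the last step (check the Ceph status and verify), a minimal sketch of the commands run on node51:

ceph -s           # expect HEALTH_OK (or HEALTH_WARN while PGs are still peering)
ceph osd tree     # should list node51, node52 and node53 with two OSDs each, all 'up'
ceph df           # raw capacity should be roughly 60G (6 x 10G OSD disks)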
Block storage
Single-machine block devices: optical discs, disks
Distributed block storage: Ceph, Cinder
A Ceph block device is also called a RADOS block device (RADOS Block Device, RBD)
The rbd driver is well integrated into the Linux kernel
RBD provides enterprise features such as snapshots and COW (Copy On Write) clones: when the source is written, the old data is first copied into the snapshot file, so when a file is deleted or its content grows or shrinks, the data as it was before the change ends up in the snapshot
RBD also supports in-memory caching, which can greatly improve performance
Block Storage Cluster
The image pool size: the base storage is 60G, the sum of the storage disks on node51, node52 and node53.
View the storage pools (there is one rbd pool by default):
ceph osd lspools
0 rbd
Create an image    ## if no storage pool is specified, the image belongs to the default rbd pool
rbd create demo-image --image-feature layering --size 10G    // the image is named demo-image; --image-feature layering selects how the image is created; it goes into the default rbd pool
rbd create rbd/image --image-feature layering --size 10G     // rbd/image: create the image explicitly in the rbd pool
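The same pool/image syntax works for any other pool; a hedged sketch that creates a new pool and an image inside it (the pool name "mypool" and the PG count 128 are illustrative, not part of this lab):

ceph osd pool create mypool 128                              # create a pool with 128 placement groups
rbd create mypool/img1 --image-feature layering --size 5G    # pool/image form, same as rbd/image above
rbd list mypool                                              # list the images in that pool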
View image
rbd info demo-image
rbd image 'demo-image':
    size 10240 MB in 2560 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.1052238e1f29
    format: 2
    features: layering
Remove an image
rbd remove rbd/image
rbd remove demo-image    // shown here only as the recovery method when a mistake is made; not actually executed

Shrink the capacity    // command explained: resize the image "image" to 1G, allowing it to shrink
rbd resize --size 1G image --allow-shrink
Expand the capacity    // expand the capacity to 2G
rbd resize --size 2G image
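Note that rbd resize only changes the block device; if a filesystem has already been created on the image (as in the mapping steps below), it has to be grown separately on the host where it is mounted. A hedged sketch, assuming an XFS filesystem made directly on the mapped device and mounted at /mnt (both assumptions are illustrative):

rbd resize --size 2G image    # grow the block device to 2G
# then, on the host where the image is mapped and mounted:
xfs_growfs /mnt               # XFS grows online; for ext4 use: resize2fs /dev/rbd0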
Access through RBD in the cluster
1. Local use on node51: map the image to a local disk
rbd map demo-image
lsblk    // view the local disks
rbd0 251:0 0 10G 0 disk
2. Partition, format (the partition is named /dev/rbd0p1) and mount it, just like a local disk (see the sketch after this list)
3. Remove the image from the local disk    // unmount it from this machine before removing the mapping
rbd unmap demo-image
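A minimal sketch of steps 2 and 3, assuming an XFS filesystem and /mnt/rbd as the mount point (both are illustrative choices):

# step 2: partition, format and mount the mapped device
parted -s /dev/rbd0 mklabel gpt
parted -s /dev/rbd0 mkpart primary 1M 100%
mkfs.xfs /dev/rbd0p1
mkdir -p /mnt/rbd
mount /dev/rbd0p1 /mnt/rbd

# step 3: unmount first, then remove the image from the local disk
umount /mnt/rbd
rbd unmap demo-image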
Out-of-cluster client client50: access via RBD
1. Install the ceph-common package:
   yum -y install ceph-common.x86_64
2. Copy the configuration file (tells the client where the storage cluster is):
   scp 192.168.4.51:/etc/ceph/ceph.conf /etc/ceph/
3. Copy the connection keyring (grants the permission to connect to and use the cluster):
   scp 192.168.4.51:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
4. View the cluster's images:
   rbd list
   demo-image
   image
5. Map an image to a local disk:
   rbd map image
   lsblk
6. Show the local mappings:
   rbd showmapped
   id pool image snap device
   0  rbd  image -    /dev/rbd0
7. Partition, format (the partition is named /dev/rbd0p1) and mount it, just like a local disk.
8. Undo the disk mapping and remove the image from the local disk    // unmount it from this machine first
   rbd unmap image
   rbd unmap /dev/rbd/rbd/image    // two equivalent ways
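If the client's mapping should survive a reboot, the ceph-common package also ships an rbdmap helper; a hedged sketch (the /etc/ceph/rbdmap entry format and the rbdmap.service unit name are assumptions based on common Ceph documentation, not something set up in this walkthrough):

# append an entry for the image; format: pool/image followed by rbd options
cat >> /etc/ceph/rbdmap <<'EOF'
rbd/image id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
EOF
systemctl enable rbdmap.service    # maps the listed images automatically at boot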
Create an image snapshot
Snapshots use COW technology, so snapshotting even large amounts of data is fast. With COW, when the source is written, the old data is copied into the snapshot file. A snapshot preserves all the information of a given moment for later recovery. When first created it takes no disk space; whenever the source changes, the data as it was at snapshot time is written into the snapshot, and the snapshot then starts to consume disk space equal to the total size of the changed data.
Node51:
View an existing image
rbd list
View the image's snapshots:
rbd snap ls image    // no output yet (there are no snapshots)
Create Snapshot (SNAP)
rbd snap create image --snap image-snap1
// command explained: rbd snap create <image name> --snap <snapshot name>
View the image's snapshots again
rbd snap ls image
SNAPID NAME        SIZE
     4 image-snap1
Recovering data using snapshots
rbd snap rollback image --snap image-snap1
// the client unmounts the image and mounts it again, and the data is restored
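Putting the comment above into commands: a sketch of the full restore sequence, assuming the image is mapped and mounted at /mnt on client50 (the mount point is illustrative):

# on client50: stop using the image before rolling back
umount /mnt
rbd unmap image

# on node51: roll the image back to the snapshot
rbd snap rollback image --snap image-snap1

# on client50: map and mount again; the data is back to the snapshot state
rbd map image
mount /dev/rbd0p1 /mnt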
Deleting a snapshot
rbd snap rm image --snap image-snap1
Snapshot clones
- If you want to restore a snapshot into a new image, you can clone it.
- Note that the snapshot must be protected before it can be cloned.
- A protected snapshot cannot be deleted; unprotect it first.
Snapshot protection: rbd snap protect image --snap image-snap1
Clone the snapshot:
rbd clone image --snap image-snap1 image-clone --image-feature layering
View the clone
rbd info image-clone
rbd image 'image-clone':
    size 2048 MB in 512 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.108a2ae8944a
    format: 2
    features: layering
    flags:
    parent: rbd/image@image-snap1
    overlap: 2048 MB
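The clone behaves like any other image; a short sketch that maps and mounts it to check that it carries the snapshot's data (the mount point and device name are illustrative, and the host must have ceph configured, e.g. node51 or client50):

rbd map image-clone              # the clone is an ordinary RBD image
lsblk | grep rbd                 # find the new rbd device (e.g. /dev/rbd1)
mkdir -p /mnt/clone
# if the parent XFS image is mounted on the same host, add -o nouuid to avoid a UUID clash
mount /dev/rbd1p1 /mnt/clone     # assumes the original image carried one partition
ls /mnt/clone                    # files as they were when image-snap1 was taken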
Recovering data with a cloned image
rbd flatten image-clone    // copies the parent's data into the clone so the clone no longer depends on the snapshot
Cancel Protection:
rbd snap unprotect image --snap image-snap1
Client: undo the disk mapping
1. Unmount the mount point
2. View RBD Disk Mappings
rbd showmapped
id pool image snap device
0  rbd  image -    /dev/rbd0
3. Undo Disk Mappings
rbd unmap image
rbd unmap /dev/rbd/rbd/image    // two equivalent ways