Install the Ceph client on each of the K8s cluster nodes above:
ceph-deploy install <k8s-node-ip>
Create a k8s access user:
ceph auth add client.k8s mon 'allow rwx' osd 'allow rwx'
ceph auth get client.k8s -o /etc/ceph/ceph.client.k8s.keyring  # export the new user's keyring, then copy it to /etc/ceph/ on each k8s node
ceph auth list  # view permissions
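The exported keyring is a small INI-style file. A minimal sketch of pulling the bare secret out of it and base64-encoding it the way a Kubernetes secret expects; the path and file contents below are a fabricated example, with the key value reused from the base64 sample that appears later in this walkthrough:

```shell
#!/bin/sh
# Fabricated example keyring in the format `ceph auth get` writes out;
# the key value matches the base64 sample used later in this walkthrough.
cat > /tmp/ceph.client.k8s.keyring <<'EOF'
[client.k8s]
	key = AQAmnRlX2zrqDRAAOiuOs2sIItGhAP6tNDa3Vg==
	caps mon = "allow rwx"
	caps osd = "allow rwx"
EOF

# $NF is the last whitespace-separated field on the "key = ..." line,
# i.e. the bare secret; base64 re-encodes it for use in a k8s Secret.
grep key /tmp/ceph.client.k8s.keyring | awk '{printf "%s", $NF}' | base64
```

This prints the same QVFB… string used in the secret-creation step below.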
Create a pool and an RBD image:
1. Create a pool named k8spool
ceph osd pool create k8spool 2024
ceph osd pool ls  # list the pools
2. Create a block image named k8stest
rbd create --size 1024 k8spool/k8stest  # size is in MB
rbd ls --pool k8spool  # list the images in the pool
3. Disable image features not supported by the CentOS 7 kernel
rbd feature disable k8spool/k8stest exclusive-lock object-map fast-diff deep-flatten
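A side note on the 2024 passed to `ceph osd pool create`: that argument is the placement-group count, and Ceph's usual rule of thumb is (OSD count × 100) / replica size, rounded up to a power of two — so 2048 rather than 2024 for roughly 60 OSDs at 3 replicas. A sketch of that calculation; the function name and example numbers are made up:

```shell
#!/bin/sh
# Rule-of-thumb pg_num estimate: (OSDs * 100) / replicas,
# rounded up to the next power of two. Illustrative only.
pg_num_estimate() {
    osds=$1
    replicas=$2
    target=$(( osds * 100 / replicas ))
    pg=1
    while [ "$pg" -lt "$target" ]; do
        pg=$(( pg * 2 ))
    done
    echo "$pg"
}

pg_num_estimate 60 3   # ~60 OSDs, 3 replicas -> 2048
```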
K8s Operation:
1. Create the secret key (base64-encode the Ceph admin key):
grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
QVFBbW5SbFgyenJxRFJBQU9pdU9zMnNJSXRHaEFQNnRORGEzVmc9PQ==
2. Create K8S-PV
3. Create K8S-PVC
4. Create a test pod to verify that the Ceph volume can be mounted
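Steps 1–4 can be sketched as manifests for the legacy in-tree `rbd` volume plugin. Treat this as an assumption-heavy illustration: the monitor address, secret/PV/PVC names, sizes, and container image are placeholders; only the pool (k8spool), image (k8stest), pod name (ceph-rbd-pv-pod1), and the base64 key from step 1 come from this walkthrough. Step 1 encodes the admin key; to mount as the client.k8s user created earlier, encode that user's keyring instead and set `user: k8s`.

```yaml
# Hypothetical manifests; monitor IP, names, and sizes are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: kubernetes.io/rbd
data:
  key: QVFBbW5SbFgyenJxRFJBQU9pdU9zMnNJSXRHaEFQNnRORGEzVmc9PQ==
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.0.10:6789   # placeholder monitor address
    pool: k8spool
    image: k8stest
    user: admin             # matches the admin keyring encoded in step 1
    secretRef:
      name: ceph-secret
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-pv-pod1
spec:
  containers:
    - name: test
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: ceph-vol
          mountPath: /mnt/ceph-test
  volumes:
    - name: ceph-vol
      persistentVolumeClaim:
        claimName: ceph-rbd-pvc
```

Apply with `kubectl apply -f <file>`, then check that the PVC binds with `kubectl get pv,pvc`.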
Verify success:
kubectl describe pod/ceph-rbd-pv-pod1  # check whether the mount succeeded