How to Mount Ceph RBD and CephFS in Kubernetes


[TOC]

k8s Mount Ceph RBD

There are two ways to mount Ceph RBD in k8s. One is the traditional PV & PVC approach: the administrator pre-creates the PV, a PVC is created to claim it, and the corresponding Deployment or ReplicationController then mounts the PVC. Since Kubernetes 1.4 there is a more convenient way to create PVs dynamically: the StorageClass. With a StorageClass you do not need to create fixed-size PVs in advance and wait for consumers to claim them; instead, a PVC is created directly and the volume is provisioned on demand.

Note that for the k8s nodes to execute the commands needed to map Ceph RBD images, the ceph-common package must be installed on every node. It can be installed directly with yum.
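A minimal install command to run on each node (assuming a yum-based distribution with a Ceph repository already configured) is:

yum install -y ceph-common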

PV & PVC Way

Create secret

# Get the admin key and base64-encode it
ceph auth get-key client.admin | base64

Create a ceph-secret.yml file with the following content:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  # Please note this value is base64 encoded.
  # echo "keystring" | base64
  key: QVFDaWtERlpzODcwQWhBQTdxMWRGODBWOFZxMWNGNnZtNmJHVGc9PQo=
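The secret is then created from this file (the original omits this step; it mirrors the kubectl create commands used for the other resources below):

kubectl create -f ceph-secret.yml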
Create PV

Create a test.pv.yml file with the following content:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    # Ceph monitor nodes
    monitors:
      - 10.5.10.117:6789
      - 10.5.10.236:6789
      - 10.5.10.227:6789
    # Name of the Ceph pool
    pool: data
    # Name of the image created in the pool
    image: data
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
kubectl create -f test.pv.yml
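The PV references an RBD image that must already exist in the pool. If it has not been created yet, it can be prepared with rbd, for instance (a sketch matching the pool and image names above; the feature disable step is only needed when the node kernel does not support the newer image features):

# Create a 2Gi image named "data" in pool "data"
rbd create data/data --size 2048
# Optionally disable features unsupported by older kernel RBD clients
rbd feature disable data/data object-map fast-diff deep-flatten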
Create PVC

Create a test.pvc.yml file with the following content:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
kubectl create -f test.pvc.yml
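After the PVC is created, it should bind to the PV created above; this can be verified with kubectl:

kubectl get pv test-pv
kubectl get pvc test-pvc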
Create a Deployment That Mounts the PVC

Create a test.dm.yml file with the following content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: dk-reg.op.douyuyuba.com/op-base/openresty:1.9.15
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/data"
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-pvc
kubectl create -f test.dm.yml
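Once the Deployment's pod is running, the RBD-backed volume should be visible at /data inside the container. A quick check (the pod name is whatever kubectl reports for the Deployment) is:

kubectl get pods -l app=test
kubectl exec <pod-name> -- df -h /data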
StorageClass Way

Create secret

Because the kubernetes.io/rbd provisioner requires the Ceph secret to be of type kubernetes.io/rbd, the secret created in the PV & PVC approach above cannot be reused and needs to be recreated, as follows:

# The key here is the raw Ceph key, not re-encoded with base64
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" --from-literal=key='AQCikDFZs870AhAA7q1dF80V8Vq1cF6vm6bGTg==' --namespace=kube-system
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" --from-literal=key='AQCikDFZs870AhAA7q1dF80V8Vq1cF6vm6bGTg==' --namespace=default
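If the raw key is not at hand, it can be read from the cluster, and the resulting secrets can be checked afterwards (assuming the client.admin user, as elsewhere in this article):

ceph auth get-key client.admin
kubectl get secret ceph-secret --namespace=kube-system
kubectl get secret ceph-secret --namespace=default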
Create StorageClass

Create a test.sc.yml file with the following content:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-storageclass
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.1.11:6789,192.168.1.12:6789,192.168.1.13:6789
  # Ceph client user ID (not a k8s user)
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: data
  userId: admin
  userSecretName: ceph-secret
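The StorageClass is created the same way as the other resources (the original omits this step):

kubectl create -f test.sc.yml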
Create PVC

Create a test.pvc.yml file with the following content:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-sc-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: test-storageclass
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
kubectl create -f test.pvc.yml
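With dynamic provisioning, the provisioner creates the RBD image and a matching PV automatically, so the claim should report a Bound status shortly after creation:

kubectl get pvc test-sc-pvc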

Mounting works the same way as in the PV & PVC approach, so the steps are not repeated here; see the snippet below for the only part that changes.
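For reference, a minimal sketch of the volume stanza in the Deployment, which simply references the dynamically provisioned claim instead of test-pvc:

      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-sc-pvc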

k8s Mount CephFS

The sections above cover mounting a Ceph RBD block device in k8s. Mounting a CephFS file system is even simpler.

The secret created above can be reused directly, so there is no need to create a new one, and no PV or PVC is required either. The file system is mounted directly in the Deployment, as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: dk-reg.op.douyuyuba.com/op-base/openresty:1.9.15
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/data"
          name: data
      volumes:
      - name: data
        cephfs:
          monitors:
            - 10.5.10.117:6789
            - 10.5.10.236:6789
            - 10.5.10.227:6789
          path: /data
          user: admin
          secretRef:
            name: ceph-secret
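Assuming the manifest is saved as test.cephfs.dm.yml (a filename not given in the original), it is applied the same way as the other manifests. Note that path: /data is a directory relative to the CephFS root and should already exist inside the file system:

kubectl create -f test.cephfs.dm.yml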
