Glusterfs + Heketi to implement Kubernetes shared storage

Source: Internet
Author: User
Tags base64 glusterfs k8s gluster

[TOC]

Environment
Host name     System        IP address        Role
ops-k8s-175   ubuntu16.04   192.168.75.175    k8s-master, glusterfs, heketi
ops-k8s-176   ubuntu16.04   192.168.75.176    k8s-node, glusterfs
ops-k8s-177   ubuntu16.04   192.168.75.177    k8s-node, glusterfs
ops-k8s-178   ubuntu16.04   192.168.75.178    k8s-node, glusterfs
GlusterFS Installation and Configuration
# Run on all nodes:
apt-get install glusterfs-server glusterfs-common glusterfs-client fuse
systemctl start glusterfs-server
systemctl enable glusterfs-server

# Run on 175 only:
gluster peer probe 192.168.75.176
gluster peer probe 192.168.75.177
gluster peer probe 192.168.75.178
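To confirm that the peers joined the trusted pool, check the peer status on any node (a quick sanity check, not part of the original steps; the exact output varies with the GlusterFS version):

gluster peer status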
Test

Create a test volume

# Create the volume
gluster volume create test-volume replica 2 192.168.75.175:/home/glusterfs/data 192.168.75.176:/home/glusterfs/data force

# Start the volume
gluster volume start test-volume

# Mount it (the mount point must exist first)
mkdir -p /mnt/mytest
mount -t glusterfs 192.168.75.175:/test-volume /mnt/mytest
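To verify that replication works, you can write a file through the mount and check that it appears in the brick directory on both nodes (a simple sanity check, assuming the brick paths used above):

echo "hello gluster" > /mnt/mytest/hello.txt
# on 192.168.75.175 and 192.168.75.176:
ls /home/glusterfs/data/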

Expand the test volume

# Add bricks to the volume
gluster volume add-brick test-volume 192.168.75.177:/home/glusterfs/data 192.168.75.178:/home/glusterfs/data force
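After adding bricks, existing data is not redistributed automatically; a rebalance is normally started so that data spreads onto the new bricks (standard GlusterFS procedure, not part of the original walkthrough):

gluster volume rebalance test-volume start
gluster volume rebalance test-volume status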

Delete a test volume

gluster volume stop test-volume
gluster volume delete test-volume
Heketi: Introduction, Configuration and Deployment

Heketi mainly provides a standard REST API on top of GlusterFS and is typically used for integration with Kubernetes.

Heketi project address: https://github.com/heketi/heketi

Download the Heketi packages:
https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-client-v5.0.1.linux.amd64.tar.gz
https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-v5.0.1.linux.amd64.tar.gz

Modifying the Heketi configuration file

Modify the Heketi configuration file /etc/heketi/heketi.json as follows:

......
# Change the port to avoid port conflicts
  "port": "18080",
......
# Enable authentication
  "use_auth": true,
......
# Change the admin user's key to adminkey
      "key": "adminkey"
......
# Change the executor to ssh and configure the SSH credentials it needs.
# Heketi must be able to log in to every GlusterFS node via passwordless SSH;
# use ssh-copy-id to copy the public key to each GlusterFS server.
    "executor": "ssh",
    "sshexec": {
      "keyfile": "/root/.ssh/id_rsa",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },
......
# Location of the Heketi database file
    "db": "/var/lib/heketi/heketi.db"
......
# Adjust the log output level
    "loglevel" : "warning"

It should be noted that Heketi has three kinds of executors: mock, ssh and kubernetes. mock is recommended for test environments and ssh for production; the kubernetes executor is used when GlusterFS itself runs as containers on Kubernetes. Since we deploy GlusterFS and Heketi independently here, we use ssh.

Configuring SSH Keys

Since we configured Heketi above to use the ssh executor, the Heketi server must be able to reach every GlusterFS node over SSH with key-based authentication in order to manage them, so we first need to generate an SSH key. Make sure the keyfile configured in heketi.json points to the private key that is generated here.

ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ''
chmod 600 /etc/heketi/heketi_key.pub

# Copy the SSH public key to each GlusterFS node; only one node is shown as an example
ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.75.176

# Verify that the GlusterFS node can be reached using the SSH key
ssh -i /etc/heketi/heketi_key root@192.168.75.176
Start Heketi
nohup heketi -config=/etc/heketi/heketi.json &
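Once Heketi is up, a quick way to check that the REST API answers is its hello endpoint (a standard Heketi check, not part of the original text):

curl http://192.168.75.175:18080/hello
# Expected response: Hello from Heketi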
Production case

In my actual production environment, Heketi is managed with docker-compose instead of being started directly. An example docker-compose configuration is given below:

version: "2"services:  heketi:    container_name: heketi    image: dk-reg.op.douyuyuba.com/library/heketi:5    volumes:      - "/etc/heketi:/etc/heketi"      - "/var/lib/heketi:/var/lib/heketi"      - "/etc/localtime:/etc/localtime"    network_mode: host
Adding GlusterFS to Heketi

Create a cluster
heketi-cli --user admin --server http://192.168.75.175:18080 --secret adminkey --json cluster create

{"id":"d102a74079dd79aceb3c70d6a7e8b7c4","nodes":[],"volumes":[]}
Add the 4 GlusterFS nodes to the cluster

Since Heketi authentication is enabled, every heketi-cli invocation would need to carry the authentication options, which is tedious, so I create an alias to avoid repeating them:

alias heketi-cli='heketi-cli --server "http://192.168.75.175:18080" --user "admin" --secret "adminkey"'

Now add the nodes:

heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.175 --storage-host-name 192.168.75.175 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.176 --storage-host-name 192.168.75.176 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.177 --storage-host-name 192.168.75.177 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.178 --storage-host-name 192.168.75.178 --zone 1
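Each of these commands returns the id of the newly created node. To verify what Heketi now knows about, you can list the nodes and inspect the cluster (plain heketi-cli queries; output omitted):

heketi-cli node list
heketi-cli cluster info d102a74079dd79aceb3c70d6a7e8b7c4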

Some documents note that when deploying on CentOS you must comment out "Defaults requiretty" in /etc/sudoers on every GlusterFS node, otherwise adding the second node fails; after raising the log level, the logs show sudo complaining that a TTY is required. Since I deployed directly on Ubuntu, none of these issues came up. If you run into this problem, apply that change.

Add Device

Note that Heketi only supports raw partitions or raw disks as devices; devices that already contain a filesystem are not supported.

# The id given to --node is the one generated when the node was created in the previous step.
# Only one example is shown here; in a real setup, add every storage disk of every node.
heketi-cli --json device add --name="/dev/vda2" --node "c3638f57b5c5302c6f7cd5136c8fdc5e"
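To confirm that the device was registered, the node can be inspected; the node id below is the same example id used above:

heketi-cli node info c3638f57b5c5302c6f7cd5136c8fdc5e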
Actual production configuration

The above shows how to manually create the cluster, add nodes to it, and add devices. In an actual production setup this can all be done in one step through a topology file.

Create a /etc/heketi/topology-sample.json file with the following content:

{"Clusters": [{"Nodes": [{"Node": {"ho                            Stnames ": {" Manage ": [" 192.168.75.175 "                            ], "Storage": ["192.168.75.175"                        ]}, "zone": 1}, "Devices": [                        "/dev/vda2"}, {"Node": {                            "Hostnames": {"Manage": ["192.168.75.176"                            ], "Storage": ["192.168.75.176" ]}, "zone": 1}, "De               Vices ": [         "/dev/vda2"}, {"Node": {                            "Hostnames": {"Manage": ["192.168.75.177"                            ], "Storage": ["192.168.75.177"  ]}, "zone": 1}, "Devices":                         ["/dev/vda2"]}, {"Node": { "Hostnames": {"Manage": ["192.168.75.1                            "Storage": ["192.168.75.178"]                    ]}, "zone": 1},     "Devices": [                   "/dev/vda2"]}]}]} 

Load the topology:

heketi-cli  topology load --json topology-sample.json
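After loading, the resulting cluster, node and device layout can be reviewed with:

heketi-cli topology info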
Add Volume

This is just for testing; in practice, volumes are created automatically by Kubernetes when a PVC is submitted.

If the volume you create is small, it may fail with a "No Space" message. To solve this, add "brick_min_size_gb": 1 to heketi.json (the value 1 means 1 GB):

......    "brick_min_size_gb" : 1,    "db": "/var/lib/heketi/heketi.db"......

The requested size must be larger than brick_min_size_gb; if it is set to 1, creation may still be rejected by the min brick limit. The replica count must be greater than 1.

heketi-cli --json  volume create  --size 3 --replica 2

The following exception was thrown when the creation was performed:

Error: /usr/sbin/thin_check: execvp failed: No such file or directory
  WARNING: Integrity check of metadata for pool vg_d9fb2bec56cfdf73e21d612b1b3c1feb/tp_e94d763a9b687bfc8769ac43b57fa41e failed.
  /usr/sbin/thin_check: execvp failed: No such file or directory
  Check of pool vg_d9fb2bec56cfdf73e21d612b1b3c1feb/tp_e94d763a9b687bfc8769ac43b57fa41e failed (status:2). Manual repair required!
  Failed to activate thin pool vg_d9fb2bec56cfdf73e21d612b1b3c1feb/tp_e94d763a9b687bfc8769ac43b57fa41e.

This means that the thin-provisioning-tools package must be installed on all GlusterFS node machines:

apt-get -y install thin-provisioning-tools

On success, the output of the volume creation looks like this:

heketi-cli --json volume create --size 3 --replica 2

{
  "size": 3,
  "name": "vol_7fc61913851227ca2c1237b4c4d51997",
  "durability": {
    "type": "replicate",
    "replicate": { "replica": 2 },
    "disperse": { "data": 4, "redundancy": 2 }
  },
  "snapshot": { "enable": false, "factor": 1 },
  "id": "7fc61913851227ca2c1237b4c4d51997",
  "cluster": "dae1ab512dfad0001c3911850cecbd61",
  "mount": {
    "glusterfs": {
      "hosts": ["10.1.61.175", "10.1.61.178"],
      "device": "10.1.61.175:vol_7fc61913851227ca2c1237b4c4d51997",
      "options": { "backup-volfile-servers": "10.1.61.178" }
    }
  },
  "bricks": [
    {
      "id": "004f34fd4eb9e04ca3e1ca7cc1a2dd2c",
      "path": "/var/lib/heketi/mounts/vg_d9fb2bec56cfdf73e21d612b1b3c1feb/brick_004f34fd4eb9e04ca3e1ca7cc1a2dd2c/brick",
      "device": "d9fb2bec56cfdf73e21d612b1b3c1feb",
      "node": "20d14c78691d9caef050b5dc78079947",
      "volume": "7fc61913851227ca2c1237b4c4d51997",
      "size": 3145728
    },
    {
      "id": "2876e9a7574b0381dc0479aaa2b64d46",
      "path": "/var/lib/heketi/mounts/vg_b7fd866d3ba90759d0226e26a790d71f/brick_2876e9a7574b0381dc0479aaa2b64d46/brick",
      "device": "b7fd866d3ba90759d0226e26a790d71f",
      "node": "9cddf0ac7899676c86cb135be16649f5",
      "volume": "7fc61913851227ca2c1237b4c4d51997",
      "size": 3145728
    }
  ]
}
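The new volume also shows up in Heketi's own listing, and its details can be queried by id:

heketi-cli volume list
heketi-cli volume info 7fc61913851227ca2c1237b4c4d51997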
Configure Kubernetes to use Glusterfs

Reference https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims

Create a StorageClass

Add a storageclass-glusterfs.yaml file with the following content:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.75.175:18080"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "adminkey"
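Assuming the file name above, the StorageClass is created and verified with:

kubectl create -f storageclass-glusterfs.yaml
kubectl get storageclass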

The above writes the user key in plain text in the StorageClass definition. The official recommendation is to store the key in a Secret instead. For example:

# glusterfs-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: TFRTTkd6TlZJOEpjUndZNg==
type: kubernetes.io/glusterfs

# storageclass-glusterfs.yaml, modified as follows:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.1.61.175:18080"
  clusterid: "dae1ab512dfad0001c3911850cecbd61"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  #restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"
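The key field of the Secret is the base64 encoding of the Heketi admin key. It can be generated and both objects created as follows (file names as above; substitute your real admin key):

echo -n "adminkey" | base64
kubectl create -f glusterfs-secret.yaml
kubectl create -f storageclass-glusterfs.yaml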

For more detailed usage reference: https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs

Create PVC

glusterfs-pvc.yaml content is as follows:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-mysql1
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi

kubectl create -f glusterfs-pvc.yaml
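After a short while the claim should be bound to a dynamically provisioned volume, which can be checked with:

kubectl get pvc glusterfs-mysql1
kubectl get pv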
Create a Pod that uses the PVC

mysql-deployment.yaml content is as follows:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mysql
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root123456
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: glusterfs-mysql-data
          mountPath: "/var/lib/mysql"
      volumes:
      - name: glusterfs-mysql-data
        persistentVolumeClaim:
          claimName: glusterfs-mysql1

kubectl create -f /etc/kubernetes/mysql-deployment.yaml
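Once the Deployment is running, you can confirm that /var/lib/mysql is actually backed by the GlusterFS volume (the pod name below is illustrative; use the real name from kubectl get pods):

kubectl get pods -l name=mysql
kubectl exec <mysql-pod-name> -- df -h /var/lib/mysql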

Note that I use dynamic provisioning (via PVC) here to create the GlusterFS-backed volume. There is also a way to create the PV manually; see: http://rdc.hundsun.com/portal/article/826.html
