CentOS7 configuration glusterfs for kubernetes use

Tags glusterfs k8s gluster


[TOC]

1. Environment

System: CentOS 7, with /data mounted on a non-system partition
docker: 1.13.1
kubernetes: 1.11.1
glusterfs: 4.1.2

2. Glusterfs Deployment

Two nodes: 192.168.105.97 and 192.168.105.98

Installing with Yum

yum install -y centos-release-gluster
yum install -y glusterfs glusterfs-fuse glusterfs-server

The first command installs the CentOS-Gluster-4.1.repo repository file.

Start the service and enable it at boot:

systemctl start glusterd
systemctl enable glusterd

GlusterFS nodes communicate with each other over port 24007, so the firewall must leave this port open.
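With firewalld, for example, this could look like the sketch below. The brick port range is an assumption: since GlusterFS 3.4 each brick listens on its own port starting at 49152, so widen the range to match the number of bricks you actually use.

```shell
# Open the GlusterFS management port (24007/tcp)
firewall-cmd --permanent --add-port=24007/tcp
# Assumption: brick ports start at 49152, one port per brick; widen as needed
firewall-cmd --permanent --add-port=49152-49162/tcp
# Apply the new rules
firewall-cmd --reload
```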

/etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# k8s
192.168.105.92 lab1  # master1
192.168.105.93 lab2  # master2
192.168.105.94 lab3  # master3
192.168.105.95 lab4  # node4
192.168.105.96 lab5  # node5
# glusterfs
192.168.105.98 glu1  # glusterfs1
192.168.105.97 harbor1  # harbor1

Execute on the host glu1:

# Add nodes to the cluster; the machine running the command does not need to probe itself
gluster peer probe harbor1

View cluster status (nodes see each other's information)

gluster peer status
Number of Peers: 1

Hostname: harbor1
Uuid: ebedc57b-7c71-4ecb-b92e-a7529b2fee31
State: Peer in Cluster (Connected)

GlusterFS volume modes description:
More intuitive explanations: https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/

    1. Default mode, DHT, i.e. a distributed volume: files are hashed to a single server node for storage.
      Command format: gluster volume create test-volume server1:/exp1 server2:/exp2
    2. Stripe mode, i.e. a striped volume, created with stripe x: splits each file into chunks spread over x nodes (deprecated in recent GlusterFS releases).
      Command format: gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2
    3. Copy mode, i.e. AFR, created with replica x: copies each file to x nodes. A 3-node arbiter replicated volume is now recommended, since 2 nodes are prone to split-brain.
      Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
      gluster volume create test-volume replica 3 arbiter 1 transport tcp server1:/exp1 server2:/exp2 server3:/exp3
    4. Distributed replicated mode, at least 4 nodes.
      Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
    5. Dispersed mode, requires at least 3 nodes.
      Command format: gluster volume create test-volume disperse 3 server{1..3}:/bricks/test-volume
    6. Distributed dispersed mode: creates a distributed dispersed volume; the disperse keyword and <count> are mandatory, and the number of bricks specified on the command line must be a multiple of the disperse count.
      Command format: gluster volume create <volname> disperse 3 server1:/brick{1..6}
gluster volume create k8s_volume 192.168.105.98:/data/glusterfs/dev/k8s_volume
gluster volume start k8s_volume
gluster volume status
gluster volume info

Some GlusterFS tuning options:

# Enable quota on the specified volume
gluster volume quota k8s_volume enable
# Limit the quota of the specified volume
gluster volume quota k8s_volume limit-usage / 1TB
# Set the cache size, default 32MB
gluster volume set k8s_volume performance.cache-size 4GB
# Set the io thread count; too large a value can crash the process
gluster volume set k8s_volume performance.io-thread-count 16
# Set the network ping timeout, default 42s
gluster volume set k8s_volume network.ping-timeout 10
# Set the write-behind buffer size, default 1MB
gluster volume set k8s_volume performance.write-behind-window-size 1024MB
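To verify that the options took effect, the current settings can be queried; a sketch, assuming the k8s_volume created above:

```shell
# Reconfigured options are listed at the end of the volume info output
gluster volume info k8s_volume
# Show current quota limits (only meaningful after quota has been enabled)
gluster volume quota k8s_volume list
```
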
3. Clients using GlusterFS

3.1 Physical machine using a GlusterFS volume

yum install -y centos-release-gluster
yum install -y glusterfs glusterfs-fuse fuse fuse-libs openib libibverbs
mkdir -p /tmp/test
mount -t glusterfs 192.168.105.98:k8s_volume /tmp/test  # usage is similar to an NFS mount
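To make the mount survive a reboot, an /etc/fstab entry can be added; a sketch assuming the volume and mount point above (_netdev delays mounting until the network is up):

```
192.168.105.98:/k8s_volume  /tmp/test  glusterfs  defaults,_netdev  0 0
```
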
3.2 Kubernetes using Glusterfs

The following operations are performed on the Kubernetes master node.

3.2.1 Creating the GlusterFS Endpoints Definition

vim /etc/kubernetes/glusterfs/glusterfs-endpoints.json

{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "192.168.105.98"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "192.168.105.97"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }
  ]
}

Attention:
The subsets field should contain the addresses of the nodes in the GlusterFS cluster. Any valid value (from 1 to 65535) can be used in the port field.

kubectl apply -f /etc/kubernetes/glusterfs/glusterfs-endpoints.json
kubectl get endpoints
NAME                ENDPOINTS                           AGE
glusterfs-cluster   192.168.105.97:1,192.168.105.98:1
3.2.2 Configuring the Service

We also need to create a service for these endpoints so that they persist. The service is created without a selector, which tells Kubernetes that we will manage its endpoints manually.

vim glusterfs-service.json

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}
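Like the endpoints definition, this service has to be applied before it takes effect:

```shell
kubectl apply -f glusterfs-service.json
# The service should now exist alongside the manually managed endpoints
kubectl get service glusterfs-cluster
```
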
3.2.3 Configuring the PersistentVolume

Create the glusterfs-pv.yaml file, specifying the storage capacity and access modes:

vim glusterfs-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s_volume"
    readOnly: false
kubectl apply -f glusterfs-pv.yaml
kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
pv001     10Gi       RWX            Retain           Available                                      21s
3.2.4 Configuring the PersistentVolumeClaim

Create the glusterfs-pvc.yaml file, specifying the requested resource size:

vim glusterfs-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
kubectl apply -f glusterfs-pvc.yaml
kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc001    Bound     pv001     10Gi       RWX                           44s
3.2.5 Deploying an application that mounts the PVC

As an example, create an Nginx deployment and mount the PVC at /usr/share/nginx/html inside the container:

vim glusterfs-nginx-deployment.yaml

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-dm
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
          - containerPort: 80
        volumeMounts:
          - name: storage001
            mountPath: "/usr/share/nginx/html"
      volumes:
        - name: storage001
          persistentVolumeClaim:
            claimName: pvc001
kubectl create -f glusterfs-nginx-deployment.yaml
# Check whether the deployment succeeded
kubectl get pod | grep nginx-dm
nginx-dm-c8c895d96-hfdsz            1/1       Running   0          36s
nginx-dm-c8c895d96-jrfbx            1/1       Running   0          36s

Validation results:

# Check the mounts
# kubectl exec -it nginx-dm-c8c895d96-5h649 -- df -h | grep nginx
192.168.105.97:k8s_volume 1000G   11G  990G   2% /usr/share/nginx/html
# kubectl exec -it nginx-dm-c8c895d96-zf6ch -- df -h | grep nginx
192.168.105.97:k8s_volume 1000G   11G  990G   2% /usr/share/nginx/html
# Write a file in one pod and list the directory from both pods
# kubectl exec -it nginx-dm-c8c895d96-5h649 -- touch /usr/share/nginx/html/ygqygq2
# kubectl exec -it nginx-dm-c8c895d96-5h649 -- ls -lt /usr/share/nginx/html/
total 1
-rw-r--r--. 1 root root 4 Aug 13 09:43 ygqygq2
-rw-r--r--. 1 root root 5 Aug 13 09:34 ygqygq2.txt
# kubectl exec -it nginx-dm-c8c895d96-zf6ch -- ls -lt /usr/share/nginx/html/
total 1
-rw-r--r--. 1 root root 4 Aug 13 09:43 ygqygq2
-rw-r--r--. 1 root root 5 Aug 13 09:34 ygqygq2.txt

The deployment is complete.

4. Summary

In this article GlusterFS is installed on physical systems, not inside Kubernetes, so everything has to be maintained manually; a follow-up article will cover installing and using GlusterFS inside Kubernetes. The GlusterFS volume mode should be chosen flexibly according to the workload. Note that with distributed volumes, a file in the pod's mounted directory may reside on any node of the volume, so it may not be directly visible with df -h on a particular node.

References:
[1] https://kubernetes.io/docs/concepts/storage/persistent-volumes/
[2] https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
[3] https://www.kubernetes.org.cn/4069.html
[4] https://www.gluster.org/
[5] https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
[6] https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Clients/
[7] https://github.com/kubernetes/examples/blob/master/staging/volumes/glusterfs/README.md

