Kubernetes using Glusterfs for storage persistence

Source: Internet
Author: User
Tags: glusterfs, k8s, gluster

GlusterFS

GlusterFS is an open-source, scale-out file system. These examples show how to allow containers to use GlusterFS volumes.

The example assumes that you have already set up a GlusterFS server cluster and have a running GlusterFS volume ready for use in containers.

Prerequisite
A Kubernetes cluster has already been built.

Installation of the GlusterFS cluster
Environment introduction
OS: CentOS 7.x
GlusterFS: two nodes, 192.168.22.21 and 192.168.22.22

1. Install GlusterFS

We install with yum directly on the physical machines. If you prefer to run GlusterFS on Kubernetes instead, refer to:
https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md

# Install the gluster repository first
$ yum install centos-release-gluster -y
# Install the glusterfs components
$ yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma glusterfs-geo-replication glusterfs-devel
# Create the glusterfs directory
$ mkdir /opt/glusterd
# Change the glusterd working directory
$ sed -i 's/var\/lib/opt/g' /etc/glusterfs/glusterd.vol
# Start glusterfs
$ systemctl start glusterd.service
# Enable start on boot
$ systemctl enable glusterd.service
# Check the status
$ systemctl status glusterd.service
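As a quick optional sanity check that the packages installed correctly, both the management CLI and the client report their versions:

$ gluster --version
$ glusterfs --version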
2. Configure GlusterFS

$ vi /etc/hosts
192.168.22.21   k8s-glusterfs-01
192.168.22.22   k8s-glusterfs-02

# If the firewall is enabled, open the glusterd port
$ iptables -I INPUT -p tcp --dport 24007 -j ACCEPT
Create a storage directory

$ mkdir /opt/gfs_data

Add the node to the cluster (the machine where the command is run does not need to probe itself):

$ gluster peer probe k8s-glusterfs-02

View cluster status

$ gluster peer status
Number of Peers: 1

Hostname: k8s-glusterfs-02
Uuid: b80f012b-cbb6-469f-b302-0722c058ad45
State: Peer in Cluster (Connected)

3. **Configure the volume**

GlusterFS volume modes:

1) Default mode, DHT, also called distributed volume: files are hashed and distributed randomly across the server nodes. Command format: gluster volume create test-volume server1:/exp1 server2:/exp2
2) Replicated mode, AFR: creating the volume with replica x copies each file to x replica nodes. Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
3) Striped mode: creating the volume with stripe x cuts files into chunks stored across x stripe nodes (similar to RAID 0). Command format: gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2
4) Distributed striped mode (combined), requiring a minimum of 4 servers. Creating the volume with stripe 2 over 4 server nodes combines DHT and striping. Command format: gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
5) Distributed replicated mode (combined), requiring a minimum of 4 servers. Creating the volume with replica 2 over 4 server nodes combines DHT and AFR. Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
6) Striped replicated volume mode (combined), requiring a minimum of 4 servers. Creating the volume with stripe 2 replica 2 over 4 server nodes combines striping and AFR. Command format: gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
7) All three modes mixed, requiring a minimum of 8 servers. With stripe 2 replica 2, every 4 nodes form a group. Command format: gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8

# Create the GlusterFS volume:
$ gluster volume create k8s-volume replica 2 k8s-glusterfs-01:/opt/gfs_data k8s-glusterfs-02:/opt/gfs_data force
volume create: k8s-volume: success: please start the volume to access data

View volume status

$ gluster volume info

Volume Name: k8s-volume
Type: Replicate
Volume ID: 340d94ee-7c3d-451d-92c9-ad0e19d24b7d
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: k8s-glusterfs-01:/opt/gfs_data
Brick2: k8s-glusterfs-02:/opt/gfs_data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
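Note that the volume is still in the Created state. As the create output above says, it must be started before any client can access the data:

$ gluster volume start k8s-volume
volume start: k8s-volume: success

After this, gluster volume info should report Status: Started.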

4. **GlusterFS tuning**

Enable quota on a specified volume:

$ gluster volume quota k8s-volume enable

Limit the quota on a specified volume:

$ gluster volume quota k8s-volume limit-usage / 1TB
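To confirm the limit took effect, the quota list subcommand prints the configured limits and current usage (a generic check, not output captured in the original walkthrough):

$ gluster volume quota k8s-volume list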

Set the cache size (default 32MB):

$ gluster volume set k8s-volume performance.cache-size 4GB

Set the number of IO threads (setting this too high can cause the process to crash):

$ gluster volume set k8s-volume performance.io-thread-count 16

Set the network ping timeout (default 42s):

$ gluster volume set k8s-volume network.ping-timeout 10

Set the write-behind buffer size (default 1MB):

$ gluster volume set k8s-volume performance.write-behind-window-size 1024MB
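On recent GlusterFS releases (3.7 and later), the effective options can be dumped with volume get, which is a convenient way to review the tuning above (a generic check, not part of the original walkthrough):

$ gluster volume get k8s-volume all | grep -E 'cache-size|io-thread-count|ping-timeout|write-behind-window'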

# Client use of GlusterFS

Use the Gluster volume on a physical machine:

$ yum install -y glusterfs glusterfs-fuse
$ mkdir -p /opt/gfsmnt
$ mount -t glusterfs k8s-glusterfs-01:k8s-volume /opt/gfsmnt/

## Check the mount status with df:

$ df -h | grep k8s-volume
k8s-glusterfs-01:k8s-volume   46G  1.6G   44G   4%   /opt/gfsmnt
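To make the mount survive reboots, it can also be added to /etc/fstab; the _netdev option is commonly used so mounting waits for the network (an illustrative entry, adjust to your setup):

k8s-glusterfs-01:/k8s-volume  /opt/gfsmnt  glusterfs  defaults,_netdev  0 0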

# Kubernetes configuration using GlusterFS

The official documentation describes the configuration process: https://github.com/kubernetes/examples/blob/master/staging/volumes/glusterfs/README.md

Note: the following operations can be performed on any master in the Kubernetes cluster where kubectl can be executed.

1. **Create a GlusterFS endpoints definition in Kubernetes**

This is a snippet of glusterfs-endpoints.json:

"{
"Kind": "Endpoints",
"Apiversion": "V1",
"Metadata": {
"Name": "Glusterfs-cluster"
},
"Subsets": [
{
"Addresses": [
{
"IP": "192.168.22.21"
}
],
"Ports": [
{
"Port": 20
}
]
},
{
"Addresses": [
{
"IP": "192.168.22.22"
}
],
"Ports": [
{
"Port": 20
}
]
}
]
}

Note: the subsets field should be populated with the addresses of the nodes in the GlusterFS cluster. Any valid value (from 1 to 65535) may be provided in the port field.

## Create the endpoints:
# kubectl create -f glusterfs-endpoints.json

## Verify that the endpoints were created successfully
# kubectl get ep | grep glusterfs-cluster
glusterfs-cluster   192.168.22.21:20,192.168.22.22:20

2. **Configure the service**

We also need to create a service for these endpoints so that they persist. We add this service without a selector to tell Kubernetes that we want to add its endpoints manually:

# cat glusterfs-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 20}
    ]
  }
}

## Create the service

# kubectl create -f glusterfs-service.json

## Check the service
# kubectl get service | grep glusterfs-cluster
glusterfs-cluster   ClusterIP   10.68.114.26   <none>   20/TCP   6m

3. **Configure a PersistentVolume (PV)**

Create the glusterfs-pv.yaml file, specifying the storage capacity and read/write attributes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s-volume"
    readOnly: false

Then execute:

# kubectl create -f glusterfs-pv.yaml
# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM     STORAGECLASS   REASON    AGE
pv001     10Gi       RWX            Retain           Bound
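If the PV does not reach the expected status, kubectl describe shows the full definition and any binding events (a generic troubleshooting step, not part of the original walkthrough):

# kubectl describe pv pv001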
4. Configuring the PersistentVolumeClaim (PVC for short)

Create a glusterfs-pvc.yaml file that specifies the requested resource size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi

Execute:

# kubectl create -f glusterfs-pvc.yaml
# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc001    Bound     pv001     10Gi       RWX                           1h
5. Deploy an application that mounts the PVC

As an example, create an Nginx deployment and mount the PVC at /usr/share/nginx/html in the container.

The nginx_deployment.yaml file is as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-dm
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: storage001
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: storage001
          persistentVolumeClaim:
            claimName: pvc001

Execute:

# kubectl create -f nginx_deployment.yaml

Check whether nginx deployed successfully:

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
nginx-dm-5fbdb54795-77f7v   1/1       Running   0          1h
nginx-dm-5fbdb54795-rnqwd   1/1       Running   0          1h

Check the mount:

# kubectl exec -it nginx-dm-5fbdb54795-77f7v -- df -h | grep k8s-volume
192.168.22.21:k8s-volume   46G  1.6G   44G   4% /usr/share/nginx/html

Create a file:

# kubectl exec -it nginx-dm-5fbdb54795-77f7v -- touch /usr/share/nginx/html/123.txt

Check the file attributes:

# kubectl exec -it nginx-dm-5fbdb54795-77f7v -- ls -lt /usr/share/nginx/html/123.txt
-rw-r--r-- 1 root root 0 Jul  9 06:25 /usr/share/nginx/html/123.txt

Go back to the GlusterFS servers' data directory /opt/gfs_data and check whether the 123.txt file is present.

## Check on 192.168.22.21:
$ ls -lt /opt/gfs_data/
total 0
-rw-r--r-- 2 root root 0 Jul  9 14:25 123.txt

## Check on 192.168.22.22:
$ ls -lt /opt/gfs_data/
total 0
-rw-r--r-- 2 root root 0 Jul  9 14:25 123.txt

The deployment is complete.
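If you later want to tear the example down, the objects can be deleted in reverse order of creation (a generic cleanup sketch, not part of the original walkthrough):

# kubectl delete -f nginx_deployment.yaml
# kubectl delete -f glusterfs-pvc.yaml
# kubectl delete -f glusterfs-pv.yaml
# kubectl delete -f glusterfs-service.json
# kubectl delete -f glusterfs-endpoints.json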
