Kubernetes Data Persistence Scheme

Source: Internet
Author: User
Tags: sendfile, docker registry, k8s

Before we introduce k8s persistent storage, it is important to understand the mechanisms and uses of k8s emptyDir, hostPath, ConfigMap, and Secret.

1, emptyDir
An emptyDir is an empty directory whose lifecycle is exactly the same as that of the pod it belongs to. Its main role is to share files produced at runtime between different containers within the same pod. If a pod is configured with an emptyDir volume, the emptyDir is created when the pod is assigned to a node, and it exists as long as the pod is running on that node (a container crash does not cause the emptyDir to lose its data). However, if the pod is removed from the node (because the pod is deleted or migrated), the emptyDir is deleted and its data is permanently lost.

# cat emptydir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: registry.fjhb.cn/busybox
    imagePullPolicy: IfNotPresent
    command:
    - sleep
    - "3600"
    volumeMounts:
    - mountPath: /busybox-data
      name: data
  volumes:
  - name: data
    emptyDir: {}


2, hostPath
hostPath mounts a specified path on the host into the container; if the pod is rebuilt on another host, the contents of the volume cannot be guaranteed. This type of volume is generally used in conjunction with a DaemonSet. hostPath allows the node's file system to be mounted inside the pod, so if a pod needs to use something on the node, it can use hostPath; however, this is not recommended, because in theory a pod should not be aware of the node it runs on.

# cat hostpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: registry.fjhb.cn/busybox
    imagePullPolicy: IfNotPresent
    command:
    - sleep
    - "3600"
    volumeMounts:
    - mountPath: /busybox-data
      name: data
  volumes:
  - name: data
    hostPath:
      path: /tmp


In many scenarios, emptyDir and hostPath cannot meet persistence requirements, because the data cannot follow the pod when it is migrated; this requires the support of a distributed file system.

3, ConfigMap
When working with images, it is often necessary to use configuration files, startup scripts, and so on to influence how containers run. If there are only a few settings, we can pass them through environment variables; for more complex configurations, however, Kubernetes provides ConfigMap.
The ConfigMap API resource stores key/value configuration data that can be consumed in pods.
ConfigMap is similar to Secret, but ConfigMap is more convenient for handling strings that do not contain sensitive information.
When a ConfigMap is mounted into a pod as a data volume and the ConfigMap is updated (or deleted and recreated), the configuration mounted in the pod is hot-updated. You can add a script that watches the configuration file for changes and then reloads the corresponding service.
The ConfigMap API is conceptually simple: from a data point of view, a ConfigMap is just a set of key-value pairs. Applications can be configured with it from different angles. There are roughly three ways to use a ConfigMap in a pod:
1. Command-line parameters
2. Environment variables
3. Data volume files
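As a sketch of the environment-variable approach (the ConfigMap name `example-config` and key `LOG_LEVEL` here are illustrative, not from the original article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  containers:
  - name: demo
    image: registry.fjhb.cn/busybox
    command: ["sh", "-c", "echo $LOG_LEVEL && sleep 3600"]
    env:
    - name: LOG_LEVEL            # environment variable seen by the container
      valueFrom:
        configMapKeyRef:
          name: example-config   # hypothetical ConfigMap name
          key: LOG_LEVEL         # key inside that ConfigMap
```

Unlike the volume approach, values injected as environment variables are read once at container start and are not hot-updated when the ConfigMap changes.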

Turning a variable into a ConfigMap
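A minimal sketch of this step (the ConfigMap name and key/value are illustrative): a single variable can be turned into a ConfigMap with --from-literal.

```shell
# Create a ConfigMap holding one key/value pair (names are illustrative)
kubectl create configmap myvars --from-literal=LOG_LEVEL=debug
# Inspect the result
kubectl get configmap myvars -o yaml
```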

Turning the nginx configuration file into a ConfigMap

# cat nginx.conf
user nginx;
worker_processes auto;
error_log /etc/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/readme.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    server_tokens off;
    access_log /usr/share/nginx/html/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;
        include /etc/nginx/default.d/*.conf;

        location / {
        }
        error_page 404 /404.html;
        location = /40x.html {
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
# kubectl create configmap nginxconfig --from-file nginx.conf
# kubectl get configmap
# kubectl get configmap -o yaml



Using the ConfigMap in an RC configuration file

# cat nginx-rc-configmap.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/nginx
        volumeMounts:
        - name: nginx-etc
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        ports:
        - containerPort: 80
      volumes:
      - name: nginx-etc
        configMap:
          name: nginxconfig
          items:
          - key: nginx.conf
            path: nginx.conf



ConfigMap data is actually stored in etcd; you can use kubectl edit configmap xxx to modify a ConfigMap.

# etcdctl ls /registry/configmaps/default
# etcdctl get /registry/configmaps/default/nginxconfig


4, Secret
Kubernetes provides Secret to handle sensitive data such as passwords, tokens, and keys. Compared with configuring sensitive data directly in the pod definition or in the image, Secret provides a safer mechanism (base64 encoding) to reduce the risk of data exposure. A Secret is created independently of the pod and mounted into the pod as a data volume; the secret data is saved as files, and the container can obtain the data it needs by reading those files.
There are currently three types of Secret:
Opaque (default): arbitrary strings
kubernetes.io/service-account-token: used by ServiceAccounts
kubernetes.io/dockercfg: used for Docker registry authentication when pulling images
The specific configuration of Secret was introduced in the previous article on ServiceAccount, so this article will not repeat it.
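Note that base64 is an encoding, not encryption: anyone who can read the Secret can decode the value without any key. This can be seen with the base64 tool itself (the value shown is illustrative):

```shell
# Encode a value the way a Secret stores it (-n: no trailing newline)
echo -n 'mypassword' | base64
# prints: bXlwYXNzd29yZA==

# Decoding it back requires no key at all
echo 'bXlwYXNzd29yZA==' | base64 -d
# prints: mypassword
```

For this reason, access to Secrets should still be restricted via RBAC, and etcd encryption at rest is worth considering for truly sensitive data.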

Now let's introduce the persistent storage schemes of k8s. The storage schemes currently supported by k8s are as follows:
Distributed file systems: NFS/GlusterFS/CephFS
Public cloud storage: AWS/GCE/Azure

NFS Storage Scheme
NFS is the abbreviation for Network File System. Kubernetes can mount NFS into a pod with simple configuration; data in NFS can be persisted, and NFS supports simultaneous write operations.

1. First install NFS

# yum -y install nfs-util*
# cat /etc/exports
/home 192.168.115.0/24(rw,sync,no_root_squash)
# systemctl start rpcbind
# systemctl start nfs
# showmount -e 127.0.0.1
Export list for 127.0.0.1:
/home 192.168.115.0/24


2. Mount NFS directly from a Pod
Make sure that all nodes in the cluster can mount the NFS export.
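A quick way to check this on each node (a sketch; assumes the NFS server from the example, 192.168.115.6, and requires the NFS client utilities installed on the node):

```shell
# On every node: verify the export is visible, then try a test mount
showmount -e 192.168.115.6
mount -t nfs 192.168.115.6:/home /mnt
umount /mnt
```

If the mount fails on any node, pods scheduled there will fail to start with a volume mount error.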

# cat nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: registry.fjhb.cn/busybox
    imagePullPolicy: IfNotPresent
    command:
    - sleep
    - "3600"
    volumeMounts:
    - mountPath: /busybox-nfsdata
      name: nfsdata
  volumes:
  - name: nfsdata
    nfs:
      server: 192.168.115.6
      path: /home



3, Using PV and PVC
In practice, we usually carve storage into PVs and then bind them to pods via PVCs.
PV: PersistentVolume
PVC: PersistentVolumeClaim

The lifecycle of PV and PVC:
Provisioning: storage persistence is provided by a storage system outside the cluster or by a public cloud storage solution.
Static provisioning: the administrator manually creates a number of PVs for PVCs to use.
Dynamic provisioning: a PV matching a specific PVC is created dynamically and bound to it.

Binding: the user creates a PVC and specifies the required resources and access mode. The PVC remains unbound until a matching PV is found.

Using: the user can use the PVC in a pod just like a volume.

Releasing: the user deletes the PVC to reclaim the storage resource, and the PV enters the "Released" state. Because the previous data is still retained, it must be handled according to the reclaim policy; otherwise the storage resource cannot be used by other PVCs.

Reclaiming: a PV can be configured with one of three reclaim policies: Retain, Recycle, and Delete.
Retain: allows the persisted data to be handled manually.
Delete: the PV and the associated external storage resource are removed; requires plugin support.
Recycle: a cleanup operation is performed, after which the PV can be used by a new PVC; requires plugin support.
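The reclaim policy is set on the PV itself via persistentVolumeReclaimPolicy; a sketch based on the NFS PV used in this article:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-001
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # keep data for manual handling after release
  nfs:
    path: /home
    server: 192.168.115.6
```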

PV volume phases:
Available: the resource has not yet been bound by a PVC
Bound: the volume has been bound to a PVC
Released: the PVC has been deleted and the PV has been released, but it has not yet been reclaimed by the cluster
Failed: automatic reclamation of the PV failed

Access modes of PV volumes:
ReadWriteOnce: read-write by a single node
ReadOnlyMany: read-only by many nodes
ReadWriteMany: read-write by many nodes

Creating PV and PVC

# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-001
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /home
    server: 192.168.115.6

# cat nfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi


When a PVC binds to a PV, the binding is usually decided by two conditions: the storage size and the access mode.
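After creating both objects, you can confirm the binding (a sketch; the exact output columns depend on your kubectl version):

```shell
# The PV should show STATUS "Bound" with CLAIM default/nfs-data
kubectl get pv pv-nfs-001
# The PVC should show STATUS "Bound" with VOLUME pv-nfs-001
kubectl get pvc nfs-data
```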

Using the PVC in the RC file

# cat nginx-rc-configmap.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/nginx
        volumeMounts:
        - name: nginx-data
          mountPath: /usr/share/nginx/html
        - name: nginx-etc
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        ports:
        - containerPort: 80
      volumes:
      - name: nginx-data
        persistentVolumeClaim:
          claimName: nfs-data
      - name: nginx-etc
        configMap:
          name: nginxconfig
          items:
          - key: nginx.conf
            path: nginx.conf

