Kubernetes 1.5 stateful container via Ceph


In the previous blog post, we completed the Sonarqube deployment with a Kubernetes Deployment and Service. It seems to work, but there is still a big problem: a database like MySQL needs to keep its data and not lose it, whereas a container loses all of its data the moment it exits. Once our mysql-sonar container is restarted, any settings we have made in Sonarqube are lost. So we have to find a way to persist the MySQL data inside the mysql-sonar container. Kubernetes offers a variety of options for persistent data, including hostPath, NFS, Flocker, GlusterFS, RBD and more. Here we use the RBD block storage provided by Ceph to implement persistent storage for Kubernetes.

To use Ceph as storage, you first have to install Ceph. Here is a brief walkthrough of installing Ceph with ceph-deploy. First, the environment:

server-117: admin-node, mon-node, client-node
server-236: osd-node, mon-node
server-227: osd-node, mon-node

Configure the yum repositories on all machines:


wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0


Then configure NTP time synchronization on all machines; the details are not covered here.
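For reference, a minimal NTP setup on CentOS 7 might look like the following; ntp1.aliyun.com is just an example time server, any NTP source reachable from your environment works the same way:

yum install -y ntp ntpdate
ntpdate ntp1.aliyun.com                           # one-off synchronization
systemctl enable ntpd && systemctl start ntpd     # keep the clocks in sync afterwards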

Configure passwordless SSH from the admin-node to the other nodes. Here I simply use root, but the official installation guide requires a regular account, and this account cannot be named ceph, because ceph is the account the Ceph daemons run under by default. For example:


useradd cephadmin
echo "cephadmin" | passwd --stdin cephadmin

The cephadmin account needs passwordless sudo on every node:

echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
sudo chmod 0440 /etc/sudoers.d/cephadmin
vim /etc/sudoers
Defaults:cephadmin !requiretty

Then use this account to set up passwordless SSH between the nodes (a minimal sketch follows below), and finally edit ~/.ssh/config on the admin-node, for example:

Host server-117
    Hostname server-117
    User cephadmin
Host server-236
    Hostname server-236
    User cephadmin
Host server-227
    Hostname server-227
    User cephadmin
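For the passwordless SSH step mentioned above, a minimal sketch run as cephadmin on the admin-node (assuming password login is still enabled on the other nodes) could be:

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # generate a key pair without a passphrase
ssh-copy-id cephadmin@server-117
ssh-copy-id cephadmin@server-236
ssh-copy-id cephadmin@server-227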


Deploy Ceph. All of the following operations are performed on the admin-node:


# Install ceph-deploy
yum install -y ceph-deploy
mkdir ceph-cluster     # create the deployment directory; the necessary configuration files will be generated in it
cd ceph-cluster

If you have previously installed Ceph, the official recommendation is to use the following commands to get a clean environment:

ceph-deploy purgedata server-236 server-227
ceph-deploy forgetkeys
ceph-deploy purge server-236 server-227

Create a Ceph cluster:

ceph-deploy new server-117 server-236 server-227    # server-117 is the mon-node; more than one mon-node can be specified

After the command completes, some auxiliary files are generated in the current directory. The default content of ceph.conf is as follows:

[global]
fsid = 23078e5b-3f38-4276-b2ef-7514a7fc09ff
mon_initial_members = server-117
mon_host = 10.5.10.117
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

I add the following lines:

public_network = 10.5.10.0/24    # define the public network the nodes use to talk to each other
mon_clock_drift_allowed = 2      # allow up to 2s of clock drift between the mon nodes
osd_pool_default_size = 2        # allow a minimum of two OSDs; the default is 3 and need not be changed if there are enough nodes
# The following three lines work around the "error: osd init failed: File name too long" error
# that appears when the data node's storage disk uses the ext4 file system. Ceph officially
# recommends XFS for storage disks, but in some specific cases we can only use ext4.
osd_max_object_name_len = 256
osd_max_object_namespace_len = 64
filestore_xattr_use_omap = true


To perform the installation of Ceph:

ceph-deploy install server-117 server-236 server-227    # on each node this essentially runs "yum install -y ceph ceph-radosgw"

Initialize Mon-node:

ceph-deploy mon create-initial

Upon completion, several *.keyring files appear in the current directory; these are required for secure access between Ceph components.

On each node you can check the ceph-mon process with ps -ef | grep ceph:

ceph 31180 1 0 16:11 ? 00:00:04 /usr/bin/ceph-mon -f --cluster ceph --id server-117 --setuser ceph --setgroup ceph

Initialize Osd-node:

Bringing up an osd-node involves two steps: prepare and activate. The osd-nodes are the nodes that actually store the data, so we need to provide independent storage space for ceph-osd. Usually this is a separate disk, but a directory can be used instead.

Create a separate directory for storage on the two osd-nodes:


ssh server-236
mkdir /data/osd0
exit
ssh server-227
mkdir /data/osd1
exit


Perform the following actions:

# The prepare step creates, in the two directories above, the files later needed by activate and by the running OSD
ceph-deploy osd prepare server-236:/data/osd0 server-227:/data/osd1
# Activate the osd-nodes and start them
ceph-deploy osd activate server-236:/data/osd0 server-227:/data/osd1

After execution, an error similar to the following is typically thrown:

[WARNIN] 2016-11-04 14:25:40.325075 7fd1aa73f800 -1  ** ERROR: error creating empty object store in /var/local/osd0: (13) Permission denied
[ERROR] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init upstart --mount /data/osd0

This is because Ceph's daemons run as the ceph user by default, and that user does not have access to directories created by the cephadmin account or by root.

So we need to change the ownership of these directories on the osd-nodes:

server-236: chown ceph.ceph -R /data/osd0
server-227: chown ceph.ceph -R /data/osd1

To synchronize the configuration files to each node:

ceph-deploy admin server-117 server-236 server-227

Note that whenever the configuration file is modified, the synchronization needs to be re-executed, followed by re-running the activate step.
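Assuming ceph.conf was edited in the ceph-cluster deployment directory, the re-sync and re-activation would look roughly like this:

ceph-deploy --overwrite-conf admin server-117 server-236 server-227
ceph-deploy osd activate server-236:/data/osd0 server-227:/data/osd1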

Check the cluster status with the following commands:

ceph -s
ceph osd tree

With the Ceph cluster installed, we create the RBD block device that Kubernetes will use for storage. Before creating a block device, you need a storage pool; Ceph provides a default pool named rbd. Here we create a dedicated pool named kube for the block devices used by Kubernetes. The following operations are performed on the client-node:

ceph osd pool create kube 100 100    # the two 100s are pg-num and pgp-num respectively
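To double-check the new pool, the following standard commands can be used:

ceph osd lspools                  # the kube pool should appear in the list
ceph osd pool get kube pg_num     # confirm the pg_num value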

Create an image named mysql-sonar in the kube pool, 5GB in size:

rbd create kube/mysql-sonar --size 5120 --image-format 2 --image-feature layering

Note that in my environment the above command leads to the following error if --image-feature layering is not specified:

rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address

This is because my current CentOS 7.2 kernel does not support some of the new features of Ceph.
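One common workaround (not used here) is to disable the unsupported features on the image instead of recreating it; the exact feature list depends on your kernel, so treat this as an example only:

rbd feature disable kube/mysql-sonar exclusive-lock object-map fast-diff deep-flatten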

Map the image file created above as a block device:

rbd map kube/mysql-sonar --name client.admin
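If you want to verify the mapping before handing the image over to Kubernetes, something like the following works; /dev/rbd0 is only an example device name, check the actual one in the showmapped output:

rbd showmapped                    # lists mapped images and their /dev/rbdX devices
rbd unmap /dev/rbd0               # unmap again when done checking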

At this point, the operation on Ceph is complete.

Next we look at how to use the block device created by Ceph above on Kubernetes.

We can find the relevant sample files in the examples/volumes/rbd directory of the Kubernetes source tree:


# ll /data/software/kubernetes/examples/volumes/rbd/
total 12
-rw-r-----. 1 root root  962 Mar  8 08:26 rbd.json
-rw-r-----. 1 root root  985 Mar  8 08:26 rbd-with-secret.json
-rw-r-----. 1 root root 2628 Mar  8 08:26 README.md
drwxr-x---. 2 root root      Mar  8 08:26 secret

# ll /data/software/kubernetes/examples/volumes/rbd/secret/
total 4
-rw-r-----. 1 root root 156 Mar  8 08:26 ceph-secret.yaml


rbd.json is a sample that mounts an RBD device directly as a Kubernetes volume; rbd-with-secret.json is a sample that mounts the Ceph RBD using a secret; ceph-secret.yaml is a sample secret.

Let's look at the ceph-secret.yaml file first:


apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFCMTZWMVZvRjVtRXhBQTVrQ1FzN2JCajhWVUxSdzI2Qzg0SEE9PQ==


We only need to change the key value on the last line. The value is base64-encoded. The raw value can be obtained from Ceph with the following command:


ceph auth get client.admin    # the same key is also stored in /etc/ceph/ceph.client.admin.keyring
key = aqdrvl9yvy7vixaa7rko5s8owh6aidnu22oifw==


We take this key and base64-encode it:

# echo "aqdrvl9yvy7vixaa7rko5s8owh6aidnu22oifw==" | base64
qvfeunzmovl2wtd2sxhbqtdsa081uzhpv0g2qwlkbnuymk9prnc9pqo=
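Note that echo without -n appends a newline, which also ends up in the encoded value. If you want to skip the manual copy-and-paste, the key can be piped straight into base64 without that extra newline:

ceph auth get-key client.admin | base64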

So our revised ceph-secret.yaml content is as follows:


apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFEUnZMOVl2WTd2SXhBQTdSa081uzhpv0g2qwlkbnuymk9prnc9pqo=


Create a secret:

kubectl create -f ceph-secret.yaml
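To confirm the secret was created (the key itself is not displayed):

kubectl get secret ceph-secret
kubectl describe secret ceph-secret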

When the RBD device is mounted directly as a volume, the data volume's life cycle follows the pod: it is released when the pod is released. So rather than mounting it directly as a volume, it is recommended to mount it through a PV.

Let's start by creating a PV file:


apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-sonar-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 10.5.10.117:6789
      - 10.5.10.236:6789
      - 10.5.10.227:6789
    pool: kube
    image: mysql-sonar
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle


Create a 5GB-sized PV:

kubectl create -f mysql-sonar-pv.yml

Then create a PVC file:


kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-sonar-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi


Create a PVC:

kubectl create -f mysql-sonar-pvc.yml
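At this point the PVC should have bound to the PV; a quick check:

kubectl get pv mysql-sonar-pv     # STATUS should be Bound
kubectl get pvc mysql-sonar-pvc   # STATUS should be Bound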

Finally, we modify the mysql-sonar-dm.yml file created in the previous blog post so that it reads as follows:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql-sonar
spec:
  replicas: 1
#  selector:
#    app: mysql-sonar
  template:
    metadata:
      labels:
        app: mysql-sonar
    spec:
      containers:
      - name: mysql-sonar
        image: myhub.fdccloud.com/library/mysql-yd:5.6
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "mysoft"
        - name: MYSQL_DATABASE
          value: sonardb
        - name: MYSQL_USER
          value: sonar
        - name: MYSQL_PASSWORD
          value: sonar
        volumeMounts:
        - name: mysql-sonar
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-sonar
        persistentVolumeClaim:
          claimName: mysql-sonar-pvc


To create a MySQL pod:

kubectl create -f mysql-sonar-dm.yml

This gives us a pod with persistent data. We can test it by writing some data to the database, deleting the pod, recreating it, and checking whether the data is still there.
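A rough sketch of such a test, assuming the mysql client is available inside the container and using the credentials from mysql-sonar-dm.yml:

POD=$(kubectl get pods -l app=mysql-sonar -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD" -- mysql -usonar -psonar sonardb \
  -e "CREATE TABLE IF NOT EXISTS persist_test (id INT); INSERT INTO persist_test VALUES (1);"
kubectl delete pod "$POD"         # the Deployment recreates the pod automatically
# once the new pod is Running, check that the data survived:
POD=$(kubectl get pods -l app=mysql-sonar -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD" -- mysql -usonar -psonar sonardb -e "SELECT * FROM persist_test;"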

Note that if the RBD device has not been created in advance when the container is created, or if during testing we delete the current pod and start a new one before the old one has been completely removed, the new pod will stay in the ContainerCreating state and the kubelet log will contain related errors. For details, refer to: http://tonybai.com/2016/11/07/integrate-kubernetes-with-ceph-rbd/

Also note that the ceph-common package must be installed on all Kubernetes nodes; otherwise an error similar to the following appears when the container starts:

MountVolume.SetUp failed for volume "kubernetes.io/rbd/da0deff5-0bef-11e7-bf41-00155d0a2521-mysql-sonar-pv" (spec.Name: "mysql-sonar-pv") pod "da0deff5-0bef-11e7-bf41-00155d0a2521" (UID: "da0deff5-0bef-11e7-bf41-00155d0a2521") with: rbd: map failed executable file not found in $PATH
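Installing the package on every node fixes this, since ceph-common provides the rbd client binary that kubelet calls:

yum install -y ceph-common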



This article is from the "My Sky" blog, so be sure to keep this source http://sky66.blog.51cto.com/2439074/1934000

