Kuberize Ceph RBD API Service

In the article "Using Ceph RBD to provide storage volumes for a Kubernetes cluster" we mentioned that, with Kubernetes and Ceph integrated, Kubernetes can use Ceph RBD to provide Persistent Volumes for pods in the cluster. In that setup, however, the RBD images had to be created and deleted by hand. We have since implemented programmatic management of RBD images on top of go-ceph, and our ultimate goal is to run this RBD image management service as a Kubernetes Service inside the cluster, which is exactly what the title says: kuberize the Ceph RBD API Service.

1. Dockerize the Ceph RBD API Service

To kuberize the Ceph RBD API, we first need to dockerize the Ceph RBD API Service, i.e. containerize it. Since go-ceph is a Go library, our rbd-rest-api service is also developed in Go. A well-known benefit of developing in Go is that the program can be compiled into a single static binary that runs without depending on any external libraries, which makes it a natural fit for containers. However, go-ceph is a Go binding for librados and librbd: it links against and calls these C libraries through cgo. If we want a fully static link, we therefore have to provide the .a (archive) files of every third-party library that librados and librbd depend on. If you simply run the following compile command, you will get error output on the order of tens of thousands of lines:

$ go build --ldflags '-extldflags "-static"' .

From the error messages we can work out which third-party libraries a static build of rbd-rest-api depends on, including the Boost libraries (libboost-all-dev), libssl (libssl-dev) and NSS (libnss3-dev).
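On Ubuntu 14.04 they can be installed roughly as follows (the package names are an assumption based on the usual Ubuntu naming and may differ on other releases):

# package names assumed for Ubuntu 14.04; adjust for your release
$ sudo apt-get update
$ sudo apt-get install -y libboost-all-dev libssl-dev libnss3-dev

With the libraries in place, extend the link flags so the linker can find the additional archives; this brings the error output down to fewer than a hundred lines: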

# go build --ldflags '-extldflags "-static -L /usr/lib/x86_64-linux-gnu -lboost_system -lboost_thread -lboost_iostreams -lboost_random -lcrypto -ldl -lpthread -lm -lz  -lc -L /usr/lib/gcc/x86_64-linux-gnu/4.8/ -lstdc++"' .

However, you will still get many errors:

... ...
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../lib/librados.a(Crypto.o): In function `CryptoAESKeyHandler::init(ceph::buffer::ptr const&, std::basic_ostringstream<char, std::char_traits<char>, std::allocator<char> >&)':
/build/ceph-10.2.3/src/auth/Crypto.cc:280: undefined reference to `PK11_GetBestSlot'
/build/ceph-10.2.3/src/auth/Crypto.cc:291: undefined reference to `PK11_ImportSymKey'
/build/ceph-10.2.3/src/auth/Crypto.cc:304: undefined reference to `PK11_ParamFromIV'
/build/ceph-10.2.3/src/auth/Crypto.cc:282: undefined reference to `PR_GetError'
/build/ceph-10.2.3/src/auth/Crypto.cc:293: undefined reference to `PR_GetError'
... ...

These "undefined reference" points are in the Libnss3-dev library, but because the Libnss3-dev installation does not contain libnss3.a files, the LIBNSS3 is explicitly placed in the link parameter list, for example: "- Lnss3″ also cannot link success:

/usr/bin/ld: cannot find -lnss3

The NSS library is really no pushover. After some back and forth we found that to get a static archive of NSS you have to compile it by hand; the code is available at https://github.com/nss-dev/nss, and that repository describes the manual build procedure for NSS.

As you can see, a fully static build of rbd-rest-api is very cumbersome, so we opt for the default dynamic linking instead. In that case we only need to install the two dependencies, librados and librbd, inside the Docker image. The first draft of the rbd-rest-api Dockerfile then looks like this:

FROM ubuntu:14.04
MAINTAINER Tony Bai

# use aliyun source for ubuntu
# before building image, make sure copy /etc/apt/sources.list here
# COPY sources.list /etc/apt/

RUN apt-get update && apt-get install -y --no-install-recommends librados-dev librbd-dev \
    && rm -rf /var/lib/apt/lists/*

RUN mkdir -p /root/rbd-rest-api
COPY rbd-rest-api /root/rbd-rest-api
COPY conf /root/rbd-rest-api/conf
RUN chmod +x /root/rbd-rest-api/rbd-rest-api

EXPOSE 8080
WORKDIR /root/rbd-rest-api
ENTRYPOINT ["/root/rbd-rest-api/rbd-rest-api"]
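As a quick sanity check before building the image, you can run ldd on the freshly built binary on the build host to confirm that librados and librbd are the shared libraries it expects. This check is our own suggestion, not part of the original build steps, and the binary path is assumed:

# illustrative check on the build host; binary path assumed to be ./rbd-rest-api
$ ldd ./rbd-rest-api | grep -E 'librados|librbd'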

We have been running all of our tests on Ubuntu 14.04.x, so we naturally chose ubuntu:14.04 as the base image. Now build the image:

# docker build -t "test/rbd-rest-api" .
... ...
Setting up librados-dev (0.80.11-0ubuntu1.14.04.1) ...
Setting up librbd-dev (0.80.11-0ubuntu1.14.04.1) ...
Processing triggers for libc-bin (2.19-0ubuntu6.9) ...
 ---> c987abc7a24d
Removing intermediate container 5257ac37392a
Step 5 : RUN mkdir -p /root/rbd-rest-api
 ---> Running in dcabdb990c60
 ---> ce0db2a027aa
Removing intermediate container dcabdb990c60
Step 6 : COPY rbd-rest-api /root/rbd-rest-api
 ---> 453fd4b9a27a
Removing intermediate container 8b07b5de7537
Step 7 : COPY conf /root/rbd-rest-api/conf
 ---> e956add07d60
Removing intermediate container 6eaf6e4cf334
Step 8 : RUN chmod +x /root/rbd-rest-api/rbd-rest-api
 ---> Running in cb278d1919c7
 ---> 1e7b86072011
Removing intermediate container cb278d1919c7
Step 9 : EXPOSE 8080
 ---> Running in 6a3f457eefca
 ---> e60cefb50f77
Removing intermediate container 6a3f457eefca
Step 10 : WORKDIR /root/rbd-rest-api
 ---> Running in 703baf8c5564
 ---> 6f1a5e5e145c
Removing intermediate container 703baf8c5564
Step 11 : ENTRYPOINT /root/rbd-rest-api/rbd-rest-api
 ---> Running in 16dd4e7e3995
 ---> 43f885b958c7
Removing intermediate container 16dd4e7e3995
Successfully built 43f885b958c7

# docker images
REPOSITORY          TAG       IMAGE ID       CREATED       SIZE
test/rbd-rest-api   latest    43f885b958c7   seconds ago   298 MB

Test-start the image; note that we mount the local path /etc/ceph read-only into the container:

# docker run --name rbd-rest-api --rm -p 8080:8080 -v /etc/ceph/:/etc/ceph/:ro test/rbd-rest-api
2016/11/14 14:58:17 [I] [asm_amd64.s:2086] http server Running on http://:8080

Let's test the rbd-rest-api service running inside this container:

# curl -v http://localhost:8080/api/v1/pools/
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /api/v1/pools/ HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:8080
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 130
< Content-Type: application/json; charset=utf-8
* Server beegoServer:1.7.1 is not blacklisted
< Server: beegoServer:1.7.1
< Date: Mon, 14 Nov 2016 14:59:29 GMT
<
{
  "Kind": "PoolList",
  "APIVersion": "v1",
  "Items": [
    {
      "name": "rbd"
    },
    {
      "name": "rbd1"
    }
  ]
* Connection #0 to host localhost left intact
}

Test OK.

One thing worth mentioning here: if you mount only /etc/ceph/ceph.conf, then when the rbd-rest-api service receives a request it returns:

Errcode=300, errmsg:error rados: No such file or directory

This happens because the rbd-rest-api inside the container cannot see ceph.client.admin.keyring, so authentication fails when it logs in to the Ceph monitor. Alternatively, you can skip mapping the local directory altogether and bake /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring into the image; that approach is not described in detail here. The hint librados gives is really poor: it is actually a credentials problem, yet the error claims a file cannot be found.
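If you would rather not expose the whole /etc/ceph directory to the container, a minimal alternative (a sketch of our own, not the original article's method) is to bind-mount just the two files the client library needs, read-only:

# sketch: mount only ceph.conf and the admin keyring instead of the whole directory
# docker run --name rbd-rest-api --rm -p 8080:8080 \
    -v /etc/ceph/ceph.conf:/etc/ceph/ceph.conf:ro \
    -v /etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:ro \
    test/rbd-rest-api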

2. Kuberize the Ceph RBD API Service

The containerized test was successful; the next step is to kuberize the Ceph RBD API. Given the design of the Docker image above, any node that hosts a Ceph RBD API Service pod must have the Ceph client installed, including /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring. How to selectively schedule the Ceph RBD API Service onto the Kubernetes nodes that have the Ceph client installed is therefore the problem this section must solve.

Our idea is to have Kubernetes schedule the rbd-rest-api pods only onto nodes carrying a specific label. We therefore label the cluster nodes on which the Ceph client is installed with zone=ceph:

# kubectl label nodes 10.46.181.146 zone=ceph
# kubectl label nodes 10.47.136.60 zone=ceph

# kubectl get nodes --show-labels
NAME            STATUS    AGE       LABELS
10.46.181.146   Ready     32d       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=10.46.181.146,zone=ceph
10.47.136.60    Ready     32d       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=10.47.136.60,zone=ceph
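As a quick verification (our own addition, not in the original article), a label selector should now return exactly these two nodes:

# our own quick check: list only the nodes carrying the new label
# kubectl get nodes -l zone=ceph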

The next step is to set the pod scheduling policy in the rbd-rest-api service YAML:

//rbd-rest-api.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-rest-api
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: rbd-rest-api
    spec:
      containers:
      - name: rbd-rest-api
        image: registry.cn-hangzhou.aliyuncs.com/xxxx/rbd-rest-api:latest
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /etc/ceph
          name: ceph-default-config-volume
      volumes:
      - name: ceph-default-config-volume
        hostPath:
          path: /etc/ceph
      nodeSelector:
        zone: ceph
      imagePullSecrets:
      - name: rbd-rest-api-default-secret
---
apiVersion: v1
kind: Service
metadata:
  name: rbd-rest-api
  labels:
    app: rbd-rest-api
spec:
  ports:
  - port: 8080
  selector:
    app: rbd-rest-api

As you can see, the deployment spec contains a nodeSelector, which tells the Kubernetes scheduler to consider only nodes with the zone=ceph label when placing this service's pods. For the imagePullSecrets setting, refer to the article "Kubernetes: how to pull a container image from a private registry".
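Finally, a short sketch (standard kubectl usage, not verbatim from the original article) of creating the resources and confirming where the pods land:

# sketch: create the Deployment and Service, then confirm pod placement
# kubectl create -f rbd-rest-api.yaml
# kubectl get pods -l app=rbd-rest-api -o wide
# kubectl get svc rbd-rest-api

The -o wide output should show the pods running only on the two nodes labeled zone=ceph.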

© Bigwhite. All rights reserved.
