Kubernetes: creating a container that mounts shared storage

Source: Internet
Author: User
Tags: glusterfs

Original link: https://www.58jb.com/html/135.html

The previous MySQL container used a host-directory mount. That is convenient, but not safe enough; it is better to store the data on a remote server using something like NFS, GlusterFS, or Ceph. Ceph and GlusterFS are currently the mainstream choices.

This experiment uses NFS, the simplest option, to configure an Nginx container that mounts shared storage.

Two machines:

kubernetes: 10.0.10.135 [CentOS 7.2]

nfs: 10.0.10.31 [CentOS 6.5]

Since the Kubernetes side is the same experimental machine as before, only the NFS server needs to be prepared.

On the NFS server:

    yum install rpcbind nfs-utils -y
    mkdir -p /data/www-data

Add the shared directory configuration:

    cat > /etc/exports <<-EOF
    /data/www-data 10.0.10.0/24(rw,sync)
    EOF
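One /etc/exports pitfall is worth guarding against: whitespace before the option list changes its meaning. `host (rw)` exports read-write to the whole world and gives the named host only the defaults, whereas `host(rw)` is what is intended here. The sketch below writes the line to a temporary file (not the real /etc/exports, so it is safe to run anywhere) and checks for that mistake:

```shell
# Write the intended exports line to a temp file and scan it for the
# classic error of a space before the '(' of the option list.
exports_file=$(mktemp)
printf '%s\n' '/data/www-data 10.0.10.0/24(rw,sync)' > "$exports_file"
if grep -Eq '[[:space:]]\(' "$exports_file"; then
  status="bad: space before option list"
else
  status="ok"
fi
echo "exports check: $status"
rm -f "$exports_file"
```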

Enable both services at boot and start them (rpcbind must be running before NFS):

    chkconfig rpcbind on
    chkconfig nfs on
    service rpcbind start
    service nfs start

Check the configuration:

    [email protected] ~]# exportfs
    /data/www-data 10.0.10.0/24

Go back to the Kubernetes machine and install the NFS client package:

    yum install nfs-utils -y

Once it is installed, you can list the NFS server's exports:

    [email protected] ~]# showmount -e 10.0.10.31
    Export list for 10.0.10.31:
    /data/www-data 10.0.10.0/24

Try to mount it:

    [email protected] ~]# mount 10.0.10.31:/data/www-data /mnt
    [email protected] ~]# ls /mnt
    css  fonts  img  index.html  js

Some files were placed in the share ahead of time so the result is visible; the listing above confirms that the host can mount the export successfully.

Create an RC that defines two replicas, using the following configuration file:

    cat > nginx_pod_volume_nfs.yaml <<-EOF
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nginx
    spec:
      replicas: 2
      selector:
        app: web01
      template:
        metadata:
          name: nginx
          labels:
            app: web01
        spec:
          containers:
          - name: nginx
            image: reg.docker.tb/harbor/nginx
            ports:
            - containerPort: 80
            volumeMounts:
            - mountPath: /usr/share/nginx/html
              readOnly: false
              name: nginx-data
          volumes:
          - name: nginx-data
            nfs:
              server: 10.0.10.31
              path: "/data/www-data"
    EOF
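One detail worth double-checking in an RC manifest: `.spec.selector` must match the pod template's labels, otherwise the API server rejects the manifest because the controller could never adopt the pods it creates. A rough textual sanity check (a sketch that compares the two `app:` values in an excerpt of the manifest; real validation is the API server's job):

```shell
# Write an excerpt of the RC manifest (selector + template labels) to a
# temp file, then compare the two "app:" values textually.
frag=$(mktemp)
cat > "$frag" <<EOF
selector:
  app: web01
template:
  metadata:
    labels:
      app: web01
EOF
# awk prints the value after every "app:" key; there are exactly two.
set -- $(awk '/app:/ {print $2}' "$frag")
[ "$1" = "$2" ] && match="yes" || match="no"
echo "selector matches template labels: $match"
rm -f "$frag"
```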
Create the containers:

    [email protected] test_418]# kubectl create -f nginx_pod_volume_nfs.yaml
    replicationcontroller "nginx" created

Check whether the pods are running:

    [email protected] test_418]# kubectl get pods
    NAME          READY     STATUS    RESTARTS   AGE
    nginx-64zrd   1/1       Running   0          15s
    nginx-f0z39   1/1       Running   0          15s
    [email protected] test_418]# kubectl get rc
    NAME      DESIRED   CURRENT   READY     AGE
    nginx     2         2         1         8s
At this point the two containers are running, but they cannot be reached from outside the cluster yet; a Service is added next.

Create a Service that provides external access and load balancing:

    cat > nginx_service.yaml <<-EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      externalIPs:
      - 10.0.10.135
      ports:
      - port: 8000
        targetPort: 80
        protocol: TCP
      selector:
        app: web01
    EOF
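To make the port mapping explicit: clients hit port 8000 on the external IP, and kube-proxy forwards the traffic to targetPort 80 inside the pods. A small sketch that pulls both numbers out of a copy of the `ports` section:

```shell
# Extract port and targetPort from a copy of the Service's ports block
# to show which side of the mapping each number belongs to.
svc=$(mktemp)
cat > "$svc" <<EOF
ports:
- port: 8000
  targetPort: 80
  protocol: TCP
EOF
port=$(awk '$2 == "port:" {print $3}' "$svc")       # list item: "- port: 8000"
target=$(awk '$1 == "targetPort:" {print $2}' "$svc")
echo "external :$port -> container :$target"
rm -f "$svc"
```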

Check that the Service was created successfully:

    [email protected] test_418]# kubectl get svc
    NAME            CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    kubernetes      10.254.0.1       <none>        443/TCP    17d
    my-mysql        10.254.93.211    10.0.10.135   3306/TCP   7d
    nginx-service   10.254.155.182   10.0.10.135   8000/TCP   4s

Open your browser and visit:

http://10.0.10.135:8000/

What you see is served by a small cluster of two containers. Even if one is deleted, the site stays reachable: because the RC specifies replicas: 2, deleting any pod simply causes a replacement to be created, and best of all the new pod is added back to the cluster automatically, with no manual intervention.

Note: under the hood this works much like a local mount. kubelet mounts the remote directory on the Kubernetes host and then binds it into the container, which can be observed on the node:

    [email protected] test_418]# mount | grep "10.0.10.31"
    10.0.10.31:/data/www-data on /var/lib/kubelet/pods/65f7cd9e-23ec-11e7-b0e2-000c29d4cebd/volumes/kubernetes.io~nfs/nginx-data type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.0.10.135,local_lock=none,addr=10.0.10.31)
    10.0.10.31:/data/www-data on /var/lib/kubelet/pods/65f7db49-23ec-11e7-b0e2-000c29d4cebd/volumes/kubernetes.io~nfs/nginx-data type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.0.10.135,local_lock=none,addr=10.0.10.31)

Clearly both pods mount exactly the same directory: the remote NFS export, surfaced once per pod under /var/lib/kubelet/pods.
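Each pod gets its own kubelet-managed mount point for the same NFS export, keyed by the pod's UID. A small sketch that extracts that UID from one of the mount lines (the line is copied from the output above; only the path layout matters):

```shell
# A mount line as printed by `mount` on the node (trimmed for brevity).
line='10.0.10.31:/data/www-data on /var/lib/kubelet/pods/65f7cd9e-23ec-11e7-b0e2-000c29d4cebd/volumes/kubernetes.io~nfs/nginx-data type nfs4'
# The pod UID is the path component between /pods/ and /volumes/.
pod_uid=$(printf '%s\n' "$line" | sed -n 's#.*/pods/\([^/]*\)/volumes/.*#\1#p')
echo "pod UID: $pod_uid"
```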

