Use Kubernetes to Manage Containers on CentOS 7

1. Preface

The previous article described the Kubernetes system architecture and gave you a preliminary understanding of Kubernetes, but you may still not know how to actually use it. This article describes how to deploy and configure the network environment of a Kubernetes cluster locally and demonstrates cross-machine service communication through examples. The main content of this article is as follows:

  • Deployment Environment
  • Logical architecture of Kubernetes clusters
  • Deploy Open vSwitch, Kubernetes, and Etcd Components
  • Demonstrate Kubernetes Container Management
2. Deployment Environment
  • VMware Workstation: 10.0.3
  • VMware Workstation Network Mode: NAT
  • Operating System Information: CentOS 7 64-bit
  • Open vSwitch version: 2.3.0
  • Kubernetes version: 0.5.2
  • Etcd version: 0.4.6
  • Docker version: 1.3.1
  • Server Information:

            | Role      | Hostname   | IP Address    |
            |:---------:|:----------:|:-------------:|
            | APIServer | kubernetes | 192.168.230.3 |
            | Minion    | minion1    | 192.168.230.4 |
            | Minion    | minion2    | 192.168.230.5 |
3. Logical Architecture of Kubernetes Clusters

Before deploying the Kubernetes cluster in detail, let us first look at its logical architecture. As the figure below shows, the whole system is divided into two parts: the first part is the Kubernetes APIServer, the core of the entire system, which manages all containers in the cluster; the second part is the minions, which run the container daemon and are where all containers reside. Each minion also runs Open vSwitch, which is responsible, through GRE tunnels, for the network communication between pods on different minions.

(Figure: logical architecture of the Kubernetes cluster)

4. Deploy Open vSwitch, Kubernetes, and Etcd Components

4.1 Install Open vSwitch and Configure GRE

To allow Pods on different minions to communicate, we install Open vSwitch on each minion and use GRE or VxLAN tunnels to connect them. This article uses GRE; VxLAN is usually used in large-scale networks that require isolation. For detailed Open vSwitch installation steps, refer to the blog post listed in the references; we will not repeat them here. After Open vSwitch is installed, we establish a tunnel between minion1 and minion2. First, create an OVS bridge obr0 on both minion1 and minion2:

[root@minion1 ~]# ovs-vsctl add-br obr0

Next, create a GRE port gre0 and add it to obr0. Execute the following command on minion1:

[root@minion1 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.230.5

Then run the following on minion2:

[root@minion2 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.230.4
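
To confirm that the bridge and the GRE port were created as intended, you can inspect the Open vSwitch configuration on either minion (ovs-vsctl show is a standard OVS command; the exact output layout varies with the OVS version):

[root@minion1 ~]# ovs-vsctl show

The output should list obr0 with a port gre0 of type gre and the remote_ip option set to the other minion's address.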

At this point, the tunnel between minion1 and minion2 has been established. Next we create a Linux bridge kbr0 on minion1 and minion2 to replace Docker's default docker0 (we assume Docker is already installed on both minions), set the kbr0 address of minion1 to 172.17.1.1/24 and the kbr0 address of minion2 to 172.17.2.1/24, and add obr0 as an interface of kbr0. The following commands are executed on both minion1 and minion2 (shown here for minion1).

[root@minion1 ~]# brctl addbr kbr0              // create the Linux bridge kbr0
[root@minion1 ~]# brctl addif kbr0 obr0         // add obr0 as an interface of kbr0
[root@minion1 ~]# ip link set dev docker0 down  // bring docker0 down
[root@minion1 ~]# ip link del dev docker0       // delete docker0

To make the newly created kbr0 bridge persist across reboots, we create an ifcfg-kbr0 file for minion1 in the /etc/sysconfig/network-scripts/ directory as follows:

DEVICE=kbr0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.17.1.1
NETMASK=255.255.255.0
GATEWAY=172.17.1.0
USERCTL=no
TYPE=Bridge
IPV6INIT=no
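
The corresponding file on minion2 differs only in the address fields, as described in the next paragraph; a sketch of /etc/sysconfig/network-scripts/ifcfg-kbr0 on minion2 (values taken from the addresses chosen above):

DEVICE=kbr0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.17.2.1
NETMASK=255.255.255.0
GATEWAY=172.17.2.0
USERCTL=no
TYPE=Bridge
IPV6INIT=no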

Create the ifcfg-kbr0 on minion2 as sketched above (IPADDR changed to 172.17.2.1 and GATEWAY to 172.17.2.0), and then run systemctl restart network to restart the network service. On minion1 and minion2 you will now see that kbr0 has the corresponding IP address. To check whether the tunnel we created can carry traffic, ping the other side's kbr0 address from minion1 and from minion2. The following results show that the communication fails; this is because the route to 172.17.2.1 is missing on minion1 and the route to 172.17.1.1 is missing on minion2, so we need to add routes to enable communication between them.

[root@minion1 network-scripts]# ping 172.17.2.1
PING 172.17.2.1 (172.17.2.1) 56(84) bytes of data.
^C
--- 172.17.2.1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1000ms
[root@minion2 ~]# ping 172.17.1.1
PING 172.17.1.1 (172.17.1.1) 56(84) bytes of data.
^C
--- 172.17.1.1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1000ms

Because routes added with ip route add are lost after the next reboot, we instead create a route-eth0 file under the /etc/sysconfig/network-scripts directory to store the routes persistently. Note that the interface suffix of the route-<interface> file must match that of the corresponding ifcfg-<interface> file (the highlighted part in the original), otherwise it will not take effect; routes stored this way survive a reboot. To let the kbr0 bridges of the two minions reach each other, add the route 172.17.2.0/24 via 192.168.230.5 dev eno16777736 to the route-eth0 of minion1 (eno16777736 is minion1's NIC), and add 172.17.1.0/24 via 192.168.230.4 dev eno16777736 to the route-eth0 of minion2, as sketched below.
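
For illustration, each route file contains a single line (a sketch assuming the one-route-per-line format of the network-scripts route files; the NIC name eno16777736 is specific to this environment):

route-eth0 on minion1:
172.17.2.0/24 via 192.168.230.5 dev eno16777736

route-eth0 on minion2:
172.17.1.0/24 via 192.168.230.4 dev eno16777736

After restarting the network service, ping the other side's kbr0 address again to verify; this time the ping succeeds, for example: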

[root@minion2 network-scripts]# ping 172.17.1.1
PING 172.17.1.1 (172.17.1.1) 56(84) bytes of data.
64 bytes from 172.17.1.1: icmp_seq=1 ttl=64 time=2.49 ms
64 bytes from 172.17.1.1: icmp_seq=2 ttl=64 time=0.512 ms
^C
--- 172.17.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.512/1.505/2.498/0.993 ms

The tunnel between the two minions is now established and working correctly. The following sections describe how to install the Kubernetes APIServer, kubelet, kube-proxy, and other services.

4.2 Install Kubernetes APIServer

Before installing the APIServer, we first download Kubernetes and Etcd and make some preparations. The specific operations on the kubernetes host are as follows:

[root@kubernetes ~]# mkdir /tmp/kubernetes
[root@kubernetes ~]# cd /tmp/kubernetes/
[root@kubernetes kubernetes]# wget https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.5.2/kubernetes.tar.gz
[root@kubernetes kubernetes]# wget https://github.com/coreos/etcd/releases/download/v0.4.6/etcd-v0.4.6-linux-amd64.tar.gz

Decompress the downloaded kubernetes and etcd packages, and create the /opt/kubernetes/bin directory on kubernetes, minion1, and minion2:

[root@kubernetes kubernetes]# mkdir -p /opt/kubernetes/bin
[root@kubernetes kubernetes]# tar xf kubernetes.tar.gz
[root@kubernetes kubernetes]# tar xf etcd-v0.4.6-linux-amd64.tar.gz
[root@kubernetes kubernetes]# cd kubernetes/server
[root@kubernetes server]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@kubernetes server]# cd /tmp/kubernetes/kubernetes/server/kubernetes/server/bin

Copy kube-apiserver, kube-controller-manager, kube-scheduler, and kubecfg to the /opt/kubernetes/bin directory on kubernetes, and copy kubelet and kube-proxy to /opt/kubernetes/bin on minion1 and minion2; all of them must be executable.

[root@kubernetes amd64]# cp kube-apiserver kube-controller-manager kubecfg kube-scheduler /opt/kubernetes/bin
[root@kubernetes amd64]# scp kube-proxy kubelet root@192.168.230.4:/opt/kubernetes/bin
[root@kubernetes amd64]# scp kube-proxy kubelet root@192.168.230.5:/opt/kubernetes/bin
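
cp and scp normally preserve the executable bit from the extracted tarball, but if the copied binaries end up non-executable you can set the bit explicitly; a minimal example:

[root@kubernetes amd64]# chmod +x /opt/kubernetes/bin/*
[root@minion1 ~]# chmod +x /opt/kubernetes/bin/*
[root@minion2 ~]# chmod +x /opt/kubernetes/bin/*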

In this article we deploy only a single etcd server, running on the same machine as the Kubernetes APIServer; if you need an etcd cluster, refer to the official documentation. The etcd binary is also placed in /opt/kubernetes/bin, and etcdctl goes into the same directory as etcd.

[root@kubernetes kubernetes]# cd /tmp/kubernetes/etcd-v0.4.6-linux-amd64
[root@kubernetes etcd-v0.4.6-linux-amd64]# cp etcd etcdctl /opt/kubernetes/bin

Note that the files in the /opt/kubernetes/bin directory on the kubernetes host and the minions must be executable (see the chmod example above). The preparation work is now essentially done, so we start configuring the systemd unit files for apiserver, controller-manager, scheduler, and etcd. First, we use the following script, etcd.sh, to configure the etcd unit file:

#!/bin/sh

ETCD_PEER_ADDR=192.168.230.3:7001
ETCD_ADDR=192.168.230.3:4001
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=kubernetes

! test -d $ETCD_DATA_DIR && mkdir -p $ETCD_DATA_DIR

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server

[Service]
ExecStart=/opt/kubernetes/bin/etcd \\
    -peer-addr=$ETCD_PEER_ADDR \\
    -addr=$ETCD_ADDR \\
    -data-dir=$ETCD_DATA_DIR \\
    -name=$ETCD_NAME \\
    -bind-addr=0.0.0.0

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
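
Once the script has run, a quick sanity check is to look at the unit status and query etcd directly (the /version endpoint is the one served by etcd 0.4.x and may differ in newer releases):

[root@kubernetes ~]# systemctl status etcd
[root@kubernetes ~]# curl -L http://192.168.230.3:4001/version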

The scripts that configure the unit files for the remaining apiserver, controller-manager, and scheduler can be found in GetStartingKubernetes on GitHub and are not listed here. After running these scripts, the etcd, APIServer, controller-manager, and scheduler services on the apiserver host run normally.
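
After those scripts have run, you can likewise confirm that the control-plane components are up by checking their units and querying the APIServer over the v1beta1 API used by this release (the unit names below are assumptions based on the binary names; use whatever names your scripts create):

[root@kubernetes ~]# systemctl status kube-apiserver kube-controller-manager kube-scheduler
[root@kubernetes ~]# curl http://192.168.230.3:8080/api/v1beta1/pods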

4.3 Install Kubernetes Kubelet and Proxy

According to the Kubernetes architecture, docker, kubelet, and kube-proxy must run on each minion. When deploying the APIServer in section 4.2 we already distributed kubelet and kube-proxy to the two minions, so we only need to configure the unit files for docker, kubelet, and kube-proxy and then start the services. For the detailed configuration, see GetStartingKubernetes.
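
As a rough sketch of this step (the unit names are assumptions based on the binary names and the GetStartingKubernetes scripts; note in particular that, since docker0 was deleted in section 4.1, the Docker unit normally has to point Docker at the kbr0 bridge, for example with Docker's -b/--bridge option), the services on each minion are reloaded, enabled, and started with systemd:

[root@minion1 ~]# systemctl daemon-reload
[root@minion1 ~]# systemctl enable docker kubelet kube-proxy
[root@minion1 ~]# systemctl start docker kubelet kube-proxy

Once the kubelets are running, the two minions should register with the APIServer, which can be checked from the kubernetes host (assuming this release's kubecfg supports listing minions):

[root@kubernetes ~]# kubecfg -h http://192.168.230.3:8080 list minions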

5. Demonstration of Kubernetes Container Management

For convenience, we use the Guestbook example provided by Kubernetes to demonstrate how Kubernetes manages containers running across machines. We create the containers and services following the Guestbook steps. During this process, if it is the first run, there may be some waiting time while the Pod status stays Pending, because the images have to be downloaded first.

5.1 Create the redis-master Pod and redis-master Service
[root@kubernetes ~]# cd /tmp/kubernetes/kubernetes/examples/guestbook
[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 -c redis-master.json create pods
[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 -c redis-master-service.json create services

After completing the above operations, we can see that the following redis-master Pod is scheduled to 192.168.230.4.

[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 list pods
Name                                   Image(s)                   Host                Labels                                       Status
----------                             ----------                 ----------          ----------                                   ----------
redis-master                           dockerfile/redis           192.168.230.4/      name=redis-master                            Running

In addition to the redis-master service, there are two default services, kubernetes and kubernetes-ro, in the system. We can also see that each service has a service IP address and a corresponding port. The service IP address is a virtual address, chosen from the CIDR range set by the portal_net option of the apiserver, which is 10.10.10.0/24 in our cluster. Each time a new service is created, the apiserver randomly selects an unused IP address from this range as the service IP, while the port is specified in advance. For the redis-master service, the service address is 10.10.10.206 and the port is 6379.

[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 list services
Name                Labels              Selector                                  IP                  Port
----------          ----------          ----------                                ----------          ----------
kubernetes-ro                           component=apiserver,provider=kubernetes   10.10.10.207        80
redis-master        name=redis-master   name=redis-master                         10.10.10.206        6379
kubernetes                              component=apiserver,provider=kubernetes   10.10.10.161        443
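
For reference, this virtual address range is what the portal_net option of the apiserver controls; in the apiserver unit-file script, the corresponding part of the ExecStart line would look roughly like the sketch below (other flags omitted; the exact flag spelling may vary between Kubernetes releases):

ExecStart=/opt/kubernetes/bin/kube-apiserver \
    ... \
    -portal_net=10.10.10.0/24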
5.2 Create the redis-slave Pod and redis-slave Service
[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 -c redis-slave-controller.json create replicationControllers
[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 -c redis-slave-service.json create services

The list command shows that the newly created redis-slave Pods are scheduled to the two minions by the scheduling algorithm. The service IP address is 10.10.10.92 and the port is 6379.

[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 list pods
Name                                   Image(s)                   Host                Labels                                       Status
----------                             ----------                 ----------          ----------                                   ----------
redis-master                           dockerfile/redis           192.168.230.4/      name=redis-master                            Running
8c0ddbda-728c-11e4-8233-000c297db206   brendanburns/redis-slave   192.168.230.5/      name=redisslave,uses=redis-master            Running
8c0e1430-728c-11e4-8233-000c297db206   brendanburns/redis-slave   192.168.230.4/      name=redisslave,uses=redis-master            Running
[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 list services
Name                Labels              Selector                                  IP                  Port
----------          ----------          ----------                                ----------          ----------
redisslave          name=redisslave     name=redisslave                           10.10.10.92         6379
kubernetes                              component=apiserver,provider=kubernetes   10.10.10.161        443
kubernetes-ro                           component=apiserver,provider=kubernetes   10.10.10.207        80
redis-master        name=redis-master   name=redis-master                         10.10.10.206        6379
5.3 Create the Frontend Pod and Frontend Service

Before creating the frontend, modify the number of replicas in frontend-controller.json to 2, because our cluster has only two minions. If the default value of 3 were kept, two Pods would be scheduled to the same minion and their host ports would conflict, so one Pod would remain in the Pending state and could never be scheduled.
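
The fragment of frontend-controller.json that controls this is the replica count under desiredState; roughly as follows (a sketch of the v1beta1 ReplicationController layout, trimmed to the relevant fields — consult the file in your own checkout for the full structure):

{
  "id": "frontendController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 2,
    "replicaSelector": {"name": "frontend"}
  }
}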

[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 -c frontend-controller.json create replicationControllers
[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 -c frontend-service.json create services

The frontend Pods are also scheduled to the two minions. The service IP address is 10.10.10.220 and the port is 80.

[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 list pods
Name                                   Image(s)                   Host                Labels                                       Status
----------                             ----------                 ----------          ----------                                   ----------
redis-master                           dockerfile/redis           192.168.230.4/      name=redis-master                            Running
8c0ddbda-728c-11e4-8233-000c297db206   brendanburns/redis-slave   192.168.230.5/      name=redisslave,uses=redis-master            Running
8c0e1430-728c-11e4-8233-000c297db206   brendanburns/redis-slave   192.168.230.4/      name=redisslave,uses=redis-master            Running
a880b119-7295-11e4-8233-000c297db206   brendanburns/php-redis     192.168.230.4/      name=frontend,uses=redisslave,redis-master   Running
a881674d-7295-11e4-8233-000c297db206   brendanburns/php-redis     192.168.230.5/      name=frontend,uses=redisslave,redis-master   Running
[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 list services
Name                Labels              Selector                                  IP                  Port
----------          ----------          ----------                                ----------          ----------
kubernetes-ro                           component=apiserver,provider=kubernetes   10.10.10.207        80
redis-master        name=redis-master   name=redis-master                         10.10.10.206        6379
redisslave          name=redisslave     name=redisslave                           10.10.10.92         6379
frontend            name=frontend       name=frontend                             10.10.10.220        80
kubernetes                              component=apiserver,provider=kubernetes   10.10.10.161        443

In addition, you can delete Pods, Services, and ReplicationControllers; for example, to delete the frontend Service:

[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 delete services/frontend
Status
----------
Success

You can also update the number of replicas of a ReplicationController. The following shows the ReplicationController information before resizing.

[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 list replicationControllers
Name                   Image(s)                   Selector            Replicas
----------             ----------                 ----------          ----------
redisSlaveController   brendanburns/redis-slave   name=redisslave     2
frontendController     brendanburns/php-redis     name=frontend       2

Now we resize the frontendController's replicas to 1 and then list the ReplicationControllers again with the following commands; the Replicas count has indeed changed to 1.

[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 resize frontendController 1
[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 list replicationControllers
Name                   Image(s)                   Selector            Replicas
----------             ----------                 ----------          ----------
redisSlaveController   brendanburns/redis-slave   name=redisslave     2
frontendController     brendanburns/php-redis     name=frontend       1
5.4 Demonstrate Cross-Machine Service Communication

After completing the preceding operations, we can list the Pods currently running in the Kubernetes cluster.

[root@kubernetes guestbook]# kubecfg -h http://192.168.230.3:8080 list pods
Name                                   Image(s)                   Host                Labels                                       Status
----------                             ----------                 ----------          ----------                                   ----------
a881674d-7295-11e4-8233-000c297db206   brendanburns/php-redis     192.168.230.5/      name=frontend,uses=redisslave,redis-master   Running
redis-master                           dockerfile/redis           192.168.230.4/      name=redis-master                            Running
8c0ddbda-728c-11e4-8233-000c297db206   brendanburns/redis-slave   192.168.230.5/      name=redisslave,uses=redis-master            Running
8c0e1430-728c-11e4-8233-000c297db206   brendanburns/redis-slave   192.168.230.4/      name=redisslave,uses=redis-master            Running

From the above results we can see that the PHP Pod and the Redis master Pod (which provides the data storage service) run on 192.168.230.5 and 192.168.230.4 respectively, that is, on different hosts. The Redis slaves also run on two different hosts and synchronize from the Redis master the data that the front end writes to it. Next we verify from two aspects that Kubernetes provides cross-machine container communication:

  • Open http://${IP_Address}:8000 in a browser, where IP_Address is the IP address of the minion that runs the PHP container and 8000 is the exposed port; here IP_Address is 192.168.230.5. The Guestbook page is displayed when the browser opens it.

    You can enter and submit messages, such as "Hello Kubernetes" and "Container"; the messages you enter are displayed below the Submit button.

    Because the front-end PHP container and the back-end Redis master container run on two different minions, PHP must communicate across machines when it accesses the redis-master service. This shows that Kubernetes avoids the limitation of Docker links, which only allow communication between containers on the same host. How Kubernetes implements cross-machine communication will be described in detail in a later article.


  • From the above results, we can see that cross-machine communication is working. Now we verify the communication between containers on different machines from the back-end data layer. As shown above, the Redis slaves and the Redis master are scheduled to two different minions. On host 192.168.230.4, run docker exec -ti c000011cc8971 /bin/sh, where c000011cc8971 is the container ID of the Redis master; after entering the container, run the redis-cli command to view the messages entered from the browser (see the command sketch after this list).

    If the Redis slave container running on 192.168.230.5 contains the same messages as the Redis master container, then data synchronization between the Redis master and the Redis slave is working. Querying the Redis slave container on 192.168.230.5 in the same way does indeed return the same messages.

    This shows that data synchronization between the Redis master and the Redis slave works correctly, and that the OVS GRE tunnel enables containers on different machines to communicate normally.
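
For reference, a minimal sketch of the commands used in the verification above (the key under which the guestbook stores its messages depends on the PHP implementation, so KEYS * is used to discover it instead of assuming a name; the container ID is the one from this environment):

[root@minion1 ~]# docker exec -ti c000011cc8971 /bin/sh
# redis-cli
127.0.0.1:6379> KEYS *
127.0.0.1:6379> GET <key>

The first command enters the Redis master container on 192.168.230.4; redis-cli then connects to the local Redis instance, KEYS * lists the stored keys, and GET <key> (or LRANGE <key> 0 -1 if the value is a list) shows the stored messages. Running the same commands in the Redis slave container on 192.168.230.5 should return the same data.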

6. Conclusion

This article described how to deploy a Kubernetes cluster in a local environment, how to use Kubernetes to manage containers running in the cluster, and how to use OVS to handle network communication between Pods on different minions. The next article will analyze the source code of each Kubernetes component and describe how Kubernetes works.

7. Personal Profile

Yang Zhangxian, currently working at Cisco, is mainly engaged in WebEx SaaS service operations and system performance analysis. He pays special attention to cloud computing, automated operations, and deployment technologies, especially Go, Open vSwitch, Docker and its ecosystem, such as Kubernetes, Flocker, and other Docker-related open-source projects. Email: yangzhangxian@gmail.com

8. References
  1. https://n40lab.wordpress.com/2014/09/04/openvswitch-2-3-0-lts-and-centos-7/
  2. https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook
