In-Depth Analysis: Building a Docker Cluster Management Platform with Kubernetes


Preface

Kubernetes is Google's open-source container cluster management system. Built on top of Docker, it provides a container scheduling service along with a kit of functions such as resource scheduling, load balancing and disaster tolerance, service registration, and dynamic scaling; the latest version at the time of writing is 0.6.2. This article describes how to build a Kubernetes platform on CentOS 7.0. Before the formal walkthrough, it is worth understanding several core Kubernetes concepts and the roles they play. The following is the architectural design diagram of Kubernetes:

(Figure: Kubernetes architecture diagram)



1. Pods

In the Kubernetes system, the smallest scheduling unit is not a bare container but an abstraction called a pod. A pod is the minimal unit that can be created, destroyed, scheduled, and managed, and it may consist of a single container or a group of containers.

2. Replication Controllers

The replication controller is one of the most useful features of the Kubernetes system: it maintains multiple replicas of a pod. An application often needs several pods to back it, and the replication controller guarantees that the configured number of replicas stays constant; even if the host machine a replica was scheduled onto fails, it starts an equivalent number of pods on other hosts. A replication controller can create pod replicas from a template, or replicate existing pods directly; in the latter case the pods must be associated with it through a label selector.

3. Services

A service is the outermost unit of Kubernetes: through a virtual access IP and a service port, it gives access to the pod resources we define. The current version implements this with iptables NAT forwarding, where the forwarding target port is a random port generated by kube-proxy. Upstream access scheduling is currently provided only on Google Cloud platforms such as GCE. What if we want to integrate with our own platform? Watch for the follow-up article "Kubernetes and HECD architecture integration".

4. Labels

Labels are the key/value pairs used to distinguish pods, services, and replication controllers. They express only the association relationships among pods, services, and replication controllers; when operating on the units themselves you still have to use their name identifiers.
5. Proxy

The proxy not only solves the problem of conflicting service ports on the same host machine, it also provides the capability of forwarding a service port to the pods behind it; for its backends the proxy uses random and round-robin load-balancing algorithms.

A personal view: Kubernetes currently keeps up a rhythm of one minor release a week and one major release a month. The iteration speed is very fast, but it also brings operational differences between versions, and the official documentation lags behind and is incomplete, which creates some challenges for beginners. The official focus for the upstream access layer is on integration with GCE (Google Compute Engine); no practical access solution for private clouds has been offered yet. Version 0.5 introduced the service proxy forwarding mechanism, implemented through iptables, whose performance under high concurrency is worrying. Even so, the author remains optimistic about the future of Kubernetes: no competing platform with as sound a design and as good an ecosystem has yet appeared, and I believe that by v1.0 it will be capable of supporting production environments.

I. Environment Deployment

1. Platform version description
1) CentOS 7.0 OS
2) Kubernetes v0.6.2
3) etcd v0.4.6
4) Docker v1.3.2

2. Platform environment description

Role     IP               Components
etcd     192.168.1.10     etcd
master   192.168.1.200    kube-apiserver, kube-controller-manager, kube-scheduler
minion   192.168.1.201    kubelet, kube-proxy, docker
minion   192.168.1.202    kubelet, kube-proxy, docker
3. Environment installation

1) System initialization (all hosts)
OS installation: choose [Minimal Install].

# yum -y install wget ntpdate bind-utils
# wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/epel-release-7-2.noarch.rpm
# rpm -ivh epel-release-7-2.noarch.rpm
# yum update

CentOS 7.0 uses firewalld as its firewall by default; here it is replaced with iptables (more familiar to the author; this step is not strictly necessary).
1.1 Stop firewalld:


# systemctl stop firewalld.service       # stop firewalld
# systemctl disable firewalld.service    # keep firewalld from starting at boot

1.2 Install the iptables firewall:


# yum install iptables-services          # install
# systemctl start iptables.service       # start the firewall so the configuration takes effect
# systemctl enable iptables.service      # start the firewall at boot

2) Install etcd (on the 192.168.1.10 host)


# mkdir -p /home/install && cd /home/install
# wget https://github.com/coreos/etcd/releases/download/v0.4.6/etcd-v0.4.6-linux-amd64.tar.gz
# tar -zxvf etcd-v0.4.6-linux-amd64.tar.gz
# cd etcd-v0.4.6-linux-amd64
# cp etcd* /bin/
# /bin/etcd -version
etcd version 0.4.6

Start the etcd service. If third-party management tools need access, add the "-cors='*'" parameter to the startup arguments.


# mkdir /data/etcd
# /bin/etcd -name etcdserver -peer-addr 192.168.1.10:7001 -addr 192.168.1.10:4001 -data-dir /data/etcd -peer-bind-addr 0.0.0.0:7001 -bind-addr 0.0.0.0:4001 &

Configure the firewall for the etcd service; port 4001 is the client service port and port 7001 handles cluster data exchange.


# iptables -I INPUT -s 192.168.1.0/24 -p tcp --dport 4001 -j ACCEPT
# iptables -I INPUT -s 192.168.1.0/24 -p tcp --dport 7001 -j ACCEPT
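Since the iptables service was enabled above, the rules should also be saved so they survive a restart, and etcd's HTTP API gives a quick way to confirm the service is answering. A minimal check (a sketch, assuming the save script shipped with iptables-services and etcd's /version endpoint):

# service iptables save                      # persist the rules added above
# curl -L http://192.168.1.10:4001/version   # expect output like "etcd v0.4.6"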


3) Install Kubernetes (all master and minion hosts)
Installing from the yum repository pulls in the related etcd, Docker, and cAdvisor packages by default.


# curl https://copr.fedoraproject.org/coprs/eparis/kubernetes-epel-7/repo/epel-7/eparis-kubernetes-epel-7-epel-7.repo -o /etc/yum.repos.d/eparis-kubernetes-epel-7-epel-7.repo
# yum -y install kubernetes

Upgrade to v0.6.2 by overwriting the binaries, as follows:


# mkdir -p /home/install && cd /home/install
# wget https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.6.2/kubernetes.tar.gz
# tar -zxvf kubernetes.tar.gz
# tar -zxvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
# cp kubernetes/server/bin/kube* /usr/bin

Verify the installation; output like the following indicates everything is normal.


[root@sn2014-12-200 bin]# /usr/bin/kubectl version
Client Version: version.Info{Major:"0", Minor:"6+", GitVersion:"v0.6.2", GitCommit:"729fde276613eedcd99ecf5b93f095b8deb64eb4", GitTreeState:"clean"}
Server Version: &version.Info{Major:"0", Minor:"6+", GitVersion:"v0.6.2", GitCommit:"729fde276613eedcd99ecf5b93f095b8deb64eb4", GitTreeState:"clean"}

4) Kubernetes configuration (master host only)

The master runs three components: apiserver, scheduler, and controller-manager. The related configuration covers only these three.
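Each component is started by a systemd unit that pulls its flags from the /etc/kubernetes files below via EnvironmentFile directives. To see which files a given daemon reads, a check along these lines should work (an assumption about the copr packaging: the unit files live under /usr/lib/systemd/system):

# grep EnvironmentFile /usr/lib/systemd/system/kube-apiserver.service   # lists the /etc/kubernetes files this unit loads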

4.1, "/etc/kubernetes/config"


# Comma seperated List of nodes in the ETCD cluster
Kube_etcd_servers= "--etcd_servers=http://192.168.1.10:4001"

# Logging to stderr means we are in the SYSTEMD journal
Kube_logtostderr= "--logtostderr=true"

# Journal message level, 0 is debug
Kube_log_level= "--v=0"

# Should This cluster is allowed to run privleged Docker containers
kube_allow_priv= "--allow_privileged=false"

4.2, "/etc/kubernetes/apiserver"

# the ' address ' on the ' local ' server to listen to.
Kube_api_address= "--address=0.0.0.0"

# The port on the ' local ' server to listen on.
Kube_api_port= "--port=8080"

# How the replication controller and scheduler find the Kube-apiserver
Kube_master= "--master=192.168.1.200:8080"

# Port Minions Listen on
Kubelet_port= "--kubelet_port=10250"

# address range to use for services
Kube_service_addresses= "--PORTAL_NET=10.254.0.0/16"

# ADD You own!
Kube_api_args= ""

4.3, "/etc/kubernetes/controller-manager"

# Comma seperated List of Minions
kubelet_addresses= "--machines= 192.168.1.201,192.168.1.202"

# ADD You own!
Kube_controller_manager_args= ""

4.4, "/etc/kubernetes/scheduler"

# ADD Your own!
Kube_scheduler_args= ""

Start the master-side services:


# systemctl daemon-reload
# systemctl start kube-apiserver.service kube-controller-manager.service kube-scheduler.service
# systemctl enable kube-apiserver.service kube-controller-manager.service kube-scheduler.service
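Before configuring the minions, it is worth verifying that the apiserver came up and is serving its API on port 8080. A minimal smoke test (a sketch; /healthz and /version are standard apiserver endpoints):

# systemctl status kube-apiserver.service
# curl -s -L http://192.168.1.200:8080/healthz   # expect "ok"
# curl -s -L http://192.168.1.200:8080/version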

5) Kubernetes configuration (minion hosts only)

A minion runs two components, kubelet and proxy; the related configuration covers only these two.
Update the Docker startup options:
# vi /etc/sysconfig/docker
Add "-H tcp://0.0.0.0:2375" so the remote API can be used for maintenance later; the final configuration is as follows:
OPTIONS=--selinux-enabled -H tcp://0.0.0.0:2375 -H fd://
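Once the Docker service has been (re)started to pick up the new options, the remote API can be verified from the master host (a sketch; it assumes port 2375 is also allowed through the minion's iptables):

# systemctl restart docker.service            # on the minion, to apply the new OPTIONS
# docker -H tcp://192.168.1.201:2375 info     # from the master; should print the minion's Docker info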

Adjust the minion firewall configuration; when the master cannot find a minion host, it is usually because this port is unreachable.
# iptables -I INPUT -s 192.168.1.200 -p tcp --dport 10250 -j ACCEPT

Modify the Kubernetes minion configuration, taking the 192.168.1.201 host as an example; the other minion hosts are configured the same way.

5.1, "/etc/kubernetes/config"


# Comma seperated List of nodes in the ETCD cluster
Kube_etcd_servers= "--etcd_servers=http://192.168.1.10:4001"

# Logging to stderr means we are in the SYSTEMD journal
Kube_logtostderr= "--logtostderr=true"

# Journal message level, 0 is debug
Kube_log_level= "--v=0"

# Should This cluster is allowed to run privleged Docker containers
kube_allow_priv= "--allow_privileged=false"

5.2, "/etc/kubernetes/kubelet"


###
# kubernetes Kubelet (Minion) config

# The address for the ' Info Server to serve ' (set to 0.0.0.0 or ' for all interfaces)
Kubelet_address= "--address=0.0.0.0"

# The port for the info server to serve on
Kubelet_port= "--port=10250"

# You may leave this blank to use the actual hostname
Kubelet_hostname= "--hostname_override=192.168.1.201"

# ADD Your own!
Kubelet_args= ""

5.3, "/etc/kubernetes/proxy"
View Plainprint?
Kube_proxy_args= ""

Start the Kubernetes services:

# systemctl daemon-reload
# systemctl enable docker.service kubelet.service kube-proxy.service
# systemctl start docker.service kubelet.service kube-proxy.service
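At this point it is worth confirming that each minion has registered with the master; check the services on the minion and list the minions from the master:

# systemctl status kubelet.service kube-proxy.service   # on the minion; both should be active
# kubectl get minions                                   # on the master; the new minion should be listed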

4. Verify the installation (operate on the master host, or from a client host that can reach the master's port-8080 API)
1) Common Kubernetes commands


# kubectl get minions                  # list minion hosts
# kubectl get pods                     # list pods
# kubectl get services                 # list services (add -o json for JSON output)
# kubectl get replicationControllers   # list replication controllers
# for i in `kubectl get pods | tail -n +2 | awk '{print $1}'`; do kubectl delete pod $i; done   # delete all pods

Or query the server's REST API directly (recommended; it is more timely):


# curl -s -L http://192.168.1.200:8080/api/v1beta1/version | python -mjson.tool                  # kubernetes version
# curl -s -L http://192.168.1.200:8080/api/v1beta1/pods | python -mjson.tool                     # pod list
# curl -s -L http://192.168.1.200:8080/api/v1beta1/replicationControllers | python -mjson.tool   # replication controller list
# curl -s -L http://192.168.1.200:8080/api/v1beta1/minions | python -mjson.tool                  # minion host list
# curl -s -L http://192.168.1.200:8080/api/v1beta1/services | python -mjson.tool                 # service list

Note: in newer Kubernetes releases, all operation commands have been consolidated into kubectl, replacing kubecfg, kubectl.sh, kubecfg.sh, and so on.
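The same API accepts writes as well: a pod definition can be submitted with a plain POST instead of kubectl create (a sketch, using the apache-pod.json file built in the next step):

# curl -s -L -X POST -H "Content-Type: application/json" -d @apache-pod.json http://192.168.1.200:8080/api/v1beta1/pods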

2) Create a test pod unit


# mkdir -p /home/kubermange/pods && cd /home/kubermange/pods


# vi apache-pod.json


{
  "id": "fedoraapache",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "fedoraapache",
      "containers": [{
        "name": "fedoraapache",
        "image": "fedora/apache",
        "ports": [{
          "containerPort": 80,
          "hostPort": 8080
        }]
      }]
    }
  },
  "labels": {
    "name": "fedoraapache"
  }
}

# kubectl create -f apache-pod.json
# kubectl get pods

NAME           IMAGE(S)        HOST             LABELS              STATUS
fedoraapache   fedora/apache   192.168.1.202/   name=fedoraapache   Running

Open http://192.168.1.202:8080/ in a browser; remember that the corresponding service port must first be allowed through iptables. The screenshot below shows the result:

(Figure: browser access to the Apache pod)
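If no browser is handy, the same check works from the command line (a sketch; it assumes the fedora/apache image serves the default Apache test page on the pod's hostPort):

# curl -I http://192.168.1.202:8080/   # expect an HTTP response with an Apache "Server:" header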



Next, observe the data storage structure Kubernetes maintains in etcd, and the storage structure of a single pod, which is kept in JSON format.

(Figures: the Kubernetes key tree in etcd, and a single pod's JSON record)
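Both views are easy to reproduce by hand with etcd's HTTP API, since Kubernetes keeps its state under the /registry key (a sketch against etcd 0.4's v2 keys API; the exact key layout can differ between Kubernetes versions):

# curl -s -L 'http://192.168.1.10:4001/v2/keys/registry?recursive=true' | python -mjson.tool   # the whole key tree
# curl -s -L http://192.168.1.10:4001/v2/keys/registry/pods | python -mjson.tool               # the pod records, stored as JSON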


II. Hands-On Practice

Task: use Kubernetes to create an LNMP-architecture service cluster and observe its load balancing. The image involved, "yorko/webserver", has already been pushed to registry.hub.docker.com and can be fetched with "docker pull yorko/webserver".


# mkdir -p /home/kubermange/replication && mkdir -p /home/kubermange/service
# cd /home/kubermange/replication

1. Create a replication controller. This example creates both the pods and the replication controller directly from the replication template; you can also create pods independently and then let the replication controller replicate them.




"Replication/lnmp-replication.json"





{


"id": "Webservercontroller",


"Kind": "Replicationcontroller",


"Apiversion": "V1beta1",


' Labels ': {' name ': ' Webserver '},


"Desiredstate": {


"Replicas": 2,


' Replicaselector ': {' name ': ' Webserver_pod '},


"Podtemplate": {


"Desiredstate": {


"Manifest": {


"Version": "V1beta1",


"id": "webserver",


"Volumes": [


{' name ': ' httpconf ', ' source ': {' hostdir ': {' path ': '/etc/httpd/conf '}}},


{' name ': ' httpconfd ', ' source ': {' hostdir ': {' path ': '/etc/httpd/conf.d '}}},


{' name ': ' Httproot ', ' source ': {' hostdir ': {' path ': '/data '}}}


],


"Containers": [{


"Name": "Webserver",


"Image": "Yorko/webserver",


"Command": ["/bin/sh", "-C", "/usr/bin/supervisord-c/etc/supervisord.conf"],


"Volumemounts": [


{"Name": "Httpconf", "Mountpath": "/etc/httpd/conf"},


{"Name": "HTTPCONFD", "Mountpath": "/ETC/HTTPD/CONF.D"},


{' name ': ' Httproot ', ' mountpath ': '/data '}


],


"CPU": 100,


"Memory": 50000000,


"Ports": [{


"Containerport": 80,


},{


"Containerport": 22,


}]


}]


}


},


' Labels ': {' name ': ' Webserver_pod '},


},


}


}

Execute the create command:

# kubectl create -f lnmp-replication.json

Observe the generated pod replicas:
[root@sn2014-12-200 replication]# kubectl get pods

NAME                                   IMAGE(S)          HOST             LABELS               STATUS
84150ab7-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.202/   name=webserver_pod   Running
84154ed5-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.201/   name=webserver_pod   Running
840beb1b-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.202/   name=webserver_pod   Running
84152d93-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.202/   name=webserver_pod   Running
840db120-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.201/   name=webserver_pod   Running
8413b4f3-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.201/   name=webserver_pod   Running

2. Create a service; the selector "name": "webserver_pod" associates it with the pods.


"Service/lnmp-service.json"
View Plainprint?
{
"id": "webserver",
"Kind": "Service",
"Apiversion": "V1beta1",
"Selector": {
"Name": "Webserver_pod",
},
"Protocol": "TCP",
"Containerport": 80,
"Port": 8080
}

Execute the create command:
# kubectl create -f lnmp-service.json


Log on to a minion host (192.168.1.201) and inspect the iptables NAT forwarding rules generated on the host machine (see the last rule):
# iptables -nvL -t nat




Chain KUBE-PROXY (2 references)
pkts bytes target    prot opt in  out  source      destination
   2       REDIRECT  tcp  --  *   *    0.0.0.0/0   10.254.102.162   /* kubernetes */ tcp dpt:443 redir ports 47700
   1       REDIRECT  tcp  --  *   *    0.0.0.0/0   10.254.28.74     /* kubernetes-ro */ tcp dpt:80 redir ports 60099
   0     0 REDIRECT  tcp  --  *   *    0.0.0.0/0   10.254.216.51    /* webserver */ tcp dpt:8080 redir ports 40689

Access test: open http://192.168.1.201:40689/info.php and refresh the browser; you will see the proxied backend change, defaulting to random/round-robin selection.
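The balancing can also be watched from the command line by requesting the page in a loop and pulling out the serving host (a sketch; it assumes info.php is a phpinfo() page, whose "System" row carries the backend's hostname):

# for i in 1 2 3 4 5; do curl -s http://192.168.1.201:40689/info.php | grep -o 'Linux [^<]*' | head -1; done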




III. Testing


1. Pod automatic replication and destruction test: observe how Kubernetes keeps the number of replicas constant (6).
Delete one replica managed by the replication controller fedoraapache:
[root@sn2014-12-200 pods]# kubectl delete pods fedoraapache
I1219 23:59:39.305730    9516 restclient.go:133] Waiting for completion of operation 142530
fedoraapache

[root@sn2014-12-200 pods]# kubectl get pods

NAME                                   IMAGE(S)        HOST             LABELS              STATUS
5d70892e-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/   name=fedoraapache   Running
5d715e56-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/   name=fedoraapache   Running
5d717f8d-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/   name=fedoraapache   Running
5d71c584-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/   name=fedoraapache   Running
5d71a494-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/   name=fedoraapache   Running

# A replacement replica is generated automatically, keeping the total at 6:

[root@sn2014-12-200 pods]# kubectl get pods

NAME                                   IMAGE(S)        HOST             LABELS              STATUS
5d717f8d-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/   name=fedoraapache   Running
5d71c584-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/   name=fedoraapache   Running
5d71a494-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/   name=fedoraapache   Running
2a8fb993-8798-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/   name=fedoraapache   Running
5d70892e-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/   name=fedoraapache   Running
5d715e56-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/   name=fedoraapache   Running

2. Test hostPort behavior in the different role templates

    1) With hostPort empty in the pod and a port specified in the replicationController: exception. With a port assigned on both sides, whether the same or different: exception. With hostPort specified in the pod and left empty in the replicationController: normal. With hostPort empty on both sides: normal. Conclusion: hostPort cannot be specified in the replicationControllers scenario, otherwise an exception occurs; testing continues.
    2) Conclusion: in replicationcontrollers.json, "replicaSelector": {"name": "webserver_pod"} must be consistent with "labels": {"name": "webserver_pod"} and with "selector": {"name": "webserver_pod"} in the service definition, as the quick check below illustrates.
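A quick way to catch selector mismatches before creating anything is to confirm that the same key/value pair appears in all three places (a minimal check over the example files above):

# grep -n 'webserver_pod' /home/kubermange/replication/lnmp-replication.json /home/kubermange/service/lnmp-service.json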
