Before the formal introduction, it is necessary to understand several core concepts in Kubernetes and the roles they play. The following is an architectural design diagram for Kubernetes:
1. Pods
In the Kubernetes system, the smallest unit of scheduling is not a bare container but an abstraction called a pod. A pod is the minimal deployment unit that can be created, destroyed, scheduled, and managed, and it consists of a single container or a group of containers.
2. Replication Controllers
The Replication Controller is one of the most useful features in the Kubernetes system: it maintains multiple pod replicas. An application often needs several pods to support it, and the replica count can be scaled as needed; even if the host a replica was scheduled to fails, the Replication Controller starts an equivalent number of pods on other hosts. A Replication Controller can create multiple pod replicas from its template, or replicate existing pods directly; the pods must be associated with it through a label selector.
3. Services
A Service is the outermost unit of Kubernetes. Through a virtual access IP and service port, it exposes the pod resources we define. The current version implements this with iptables NAT forwarding, where the forwarding target port is a random port generated by kube-proxy. Access scheduling is currently only provided on Google Cloud, such as GCE; if you are integrating with your own platform, see the next article, "Kubernetes and HECD architecture integration".
4. Labels
Labels are key/value pairs used to distinguish pods, services, and Replication Controllers. They are used only to express the relationships between pods, services, and Replication Controllers; when operating on the units themselves, you still address them by name.
5. Proxy
The proxy not only solves the service port conflicts that would otherwise occur between services on the same host, but also provides the ability to forward service ports to pods; the proxy backend uses random and round-robin load-balancing algorithms.
A personal view: Kubernetes currently keeps a rhythm of a minor version per week and a major version per month. The iteration speed is very fast, but it also brings differences in how the various versions are operated, and the official documentation lags behind and is incomplete, which poses some challenges for beginners. The official focus on the upstream access layer is also on integration and optimization for GCE (Google Compute Engine); no viable access solution for private clouds has been launched yet. In v0.5 a service proxy forwarding mechanism was introduced, implemented through iptables, and its performance under high concurrency is worrisome. Still, I remain optimistic about the future of Kubernetes: at least for now there is no other platform with as complete a system and as good an ecosystem, and I believe that by v1.0 it will be capable of supporting production environments.
I. Environmental deployment
1. Platform Version Description
CentOS 7.0 OS
Kubernetes v0.6.2
etcd v0.4.6
Docker v1.3.2
2. Platform Environment Description
3. Environment Installation
1) System initialization (all hosts)
System installation: select [Minimal Install].
# yum -y install wget ntpdate bind-utils
# wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/epel-release-7-2.noarch.rpm
# rpm -ivh epel-release-7-2.noarch.rpm    # install the EPEL repository just downloaded
# yum update
CentOS 7.0 uses firewalld as its firewall by default; here it is changed to an iptables firewall (more familiar; this step is not strictly necessary).
1.1 Stop firewalld:
# systemctl stop firewalld.service    # stop firewalld
# systemctl disable firewalld.service    # prevent firewalld from starting at boot
1.2 Install the iptables firewall:
# yum install iptables-services    # install
# systemctl start iptables.service    # finally, restart the firewall so the configuration takes effect
# systemctl enable iptables.service    # start the firewall at boot
2) Install etcd (on the 192.168.1.10 host)
# mkdir -p /home/install && cd /home/install
# wget https://github.com/coreos/etcd/releases/download/v0.4.6/etcd-v0.4.6-linux-amd64.tar.gz
# tar -zxvf etcd-v0.4.6-linux-amd64.tar.gz
# cd etcd-v0.4.6-linux-amd64
# cp etcd* /bin/
# /bin/etcd -version
etcd version 0.4.6
Start the etcd service; if third-party management tools need access, add the "-cors='*'" parameter to the startup options.
# mkdir /data/etcd
# /bin/etcd -name etcdserver -peer-addr 192.168.1.10:7001 -addr 192.168.1.10:4001 -data-dir /data/etcd -peer-bind-addr 0.0.0.0:7001 -bind-addr 0.0.0.0:4001 &
Configure the firewall for the etcd service; 4001 is the client service port and 7001 is the port for cluster data interaction.
# iptables -I INPUT -s 192.168.1.0/24 -p tcp --dport 4001 -j ACCEPT
# iptables -I INPUT -s 192.168.1.0/24 -p tcp --dport 7001 -j ACCEPT
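A quick way to confirm that etcd is up and reachable (a minimal check, not part of the original procedure, assuming the standard HTTP API of etcd v0.4):
# curl -L http://192.168.1.10:4001/version    # should report the etcd version, e.g. 0.4.6
# curl -L http://192.168.1.10:4001/v2/keys/    # list the top-level keyspace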
3) Install Kubernetes (all master and minion hosts)
Install from the yum repository; the related etcd, Docker, and cAdvisor packages will be installed by default.
# curl https://copr.fedoraproject.org/coprs/eparis/kubernetes-epel-7/repo/epel-7/eparis-kubernetes-epel-7-epel-7.repo -o /etc/yum.repos.d/eparis-kubernetes-epel-7-epel-7.repo
# yum -y install kubernetes
Upgrade to v0.6.2 by overwriting the bin files, as follows:
# mkdir -p /home/install && cd /home/install
# wget https://github.com/googlecloudplatform/kubernetes/releases/download/v0.6.2/kubernetes.tar.gz
# tar -zxvf kubernetes.tar.gz
# tar -zxvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
# cp kubernetes/server/bin/kube* /usr/bin
Verify the installation; output like the following indicates that the installation is normal.
[root@sn2014-12-200 bin]# /usr/bin/kubectl version
Client Version: version.Info{Major:"0", Minor:"6+", GitVersion:"v0.6.2", GitCommit:"729fde276613eedcd99ecf5b93f095b8deb64eb4", GitTreeState:"clean"}
Server Version: &version.Info{Major:"0", Minor:"6+", GitVersion:"v0.6.2", GitCommit:"729fde276613eedcd99ecf5b93f095b8deb64eb4", GitTreeState:"clean"}
4) Kubernetes configuration (master host only)
The master runs three components: apiserver, scheduler, and controller-manager. The related configuration covers only these three.
4.1 "/etc/kubernetes/config"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.1.10:4001"
# Logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# Journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
4.2 "/etc/kubernetes/apiserver"
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=192.168.1.200:8080"
# Port minions listen on
KUBELET_PORT="--kubelet_port=10250"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
4.3 "/etc/kubernetes/controller-manager"
# Comma separated list of minions
KUBELET_ADDRESSES="--machines=192.168.1.201,192.168.1.202"
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
4.4 "/etc/kubernetes/scheduler"
# Add your own!
KUBE_SCHEDULER_ARGS=""
Start the master-side services:
# systemctl daemon-reload
# systemctl start kube-apiserver.service kube-controller-manager.service kube-scheduler.service
# systemctl enable kube-apiserver.service kube-controller-manager.service kube-scheduler.service
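Before moving on, it is worth confirming that all three components came up (a hedged check, not part of the original procedure; the version endpoint used here is the same one queried later in this article):
# systemctl status kube-apiserver.service kube-controller-manager.service kube-scheduler.service
# curl -s http://192.168.1.200:8080/api/v1beta1/version    # the apiserver should answer on port 8080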
5) Kubernetes configuration (minion hosts only)
A minion runs two components, kubelet and proxy; the related configuration involves only these two.
Update the Docker startup script:
# vi /etc/sysconfig/docker
Add "-H tcp://0.0.0.0:2375" so that the remote API can be used for maintenance later; the final configuration is as follows:
OPTIONS=--selinux-enabled -H tcp://0.0.0.0:2375 -H fd://
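Once docker.service has been started (it is enabled and started in the final step of this section), the remote API can be probed to confirm the -H tcp://0.0.0.0:2375 option took effect (a minimal sketch; /version is a standard Docker remote API endpoint):
# curl -s http://127.0.0.1:2375/version    # should return Docker version information as JSON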
Modify the minion firewall configuration; when the master cannot find a minion host, it is usually because the port is unreachable.
# iptables -I INPUT -s 192.168.1.200 -p tcp --dport 10250 -j ACCEPT
Modify the Kubernetes minion configuration, taking the 192.168.1.201 host as an example; the other minion hosts are the same.
5.1 "/etc/kubernetes/config"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.1.10:4001"
# Logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# Journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
5.2 "/etc/kubernetes/kubelet"
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=192.168.1.201"
# Add your own!
KUBELET_ARGS=""
5.3 "/etc/kubernetes/proxy"
KUBE_PROXY_ARGS=""
Start the Kubernetes services:
# systemctl daemon-reload
# systemctl enable docker.service kubelet.service kube-proxy.service
# systemctl start docker.service kubelet.service kube-proxy.service
4. Verify the installation (run on the master host, or on any client host with access to the master's port 8080 API)
1) Common Kubernetes commands
# kubectl get minions    # list minion hosts
# kubectl get pods    # list pods
# kubectl get services (or kubectl get services -o json)    # list services
# kubectl get replicationControllers    # list replicationControllers
# for i in `kubectl get pod | tail -n +2 | awk '{print $1}'`; do kubectl delete pod $i; done    # delete all pods
Or query the server's REST API directly (recommended; it is more immediate):
# curl -s -L http://192.168.1.200:8080/api/v1beta1/version | python -mjson.tool    # view the kubernetes version
# curl -s -L http://192.168.1.200:8080/api/v1beta1/pods | python -mjson.tool    # list pods
# curl -s -L http://192.168.1.200:8080/api/v1beta1/replicationControllers | python -mjson.tool    # list replicationControllers
# curl -s -L http://192.168.1.200:8080/api/v1beta1/minions | python -m json.tool    # list minion hosts
# curl -s -L http://192.168.1.200:8080/api/v1beta1/services | python -m json.tool    # list services
Note: in newer Kubernetes releases, all operation commands are consolidated into kubectl, replacing kubecfg, kubectl.sh, kubecfg.sh, and so on.
2) Create a test pod unit
# mkdir -p /home/kubermange/pods && cd /home/kubermange/pods
# vi apache-pod.json
{
  "id": "fedoraapache",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "fedoraapache",
      "containers": [{
        "name": "fedoraapache",
        "image": "fedora/apache",
        "ports": [{
          "containerPort": 80,
          "hostPort": 8080
        }]
      }]
    }
  },
  "labels": {
    "name": "fedoraapache"
  }
}
# kubectl create -f apache-pod.json
# kubectl get pod
NAME           IMAGE(S)        HOST             LABELS              STATUS
fedoraapache   fedora/apache   192.168.1.202/   name=fedoraapache   Running
Open http://192.168.1.202:8080/ in a browser; remember that the corresponding host port must first be allowed in iptables, for example as shown below. The result looks like this:
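A sketch of such a rule, following the same pattern as the etcd rules earlier (adjust the source range to your environment):
# iptables -I INPUT -s 192.168.1.0/24 -p tcp --dport 8080 -j ACCEPT    # allow access to the pod's hostPort on the minion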
Observe the data storage structure of Kubernetes in etcd.
Observe the storage structure of a single pod, which is stored in JSON format.
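One way to browse that structure is through the etcd keys API (an illustrative sketch; Kubernetes of this era stores its state under the /registry prefix in etcd, though the exact sub-paths may differ by version):
# curl -s -L http://192.168.1.10:4001/v2/keys/registry | python -mjson.tool    # top-level kubernetes keyspace
# curl -s -L http://192.168.1.10:4001/v2/keys/registry/pods | python -mjson.tool    # per-pod records in JSON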
II. Actual Operation
Task: use Kubernetes to create an LNMP-architecture service cluster and observe its load balancing. The image involved, "yorko/webserver", has been pushed to registry.hub.docker.com and can be downloaded with "docker pull yorko/webserver".
# mkdir -p /home/kubermange/replication && mkdir -p /home/kubermange/service
# cd /home/kubermange/replication
1. Create a replication controller. In this example the pods are created and replicated directly from the replication controller template; alternatively, you can create a pod separately and then replicate it through the replication controller.
"Replication/lnmp-replication.json"
{
  "id": "webserverController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "labels": {"name": "webserver"},
  "desiredState": {
    "replicas": 6,
    "replicaSelector": {"name": "webserver_pod"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "webserver",
          "volumes": [
            {"name": "httpconf", "source": {"hostDir": {"path": "/etc/httpd/conf"}}},
            {"name": "httpconfd", "source": {"hostDir": {"path": "/etc/httpd/conf.d"}}},
            {"name": "httproot", "source": {"hostDir": {"path": "/data"}}}
          ],
          "containers": [{
            "name": "webserver",
            "image": "yorko/webserver",
            "command": ["/bin/sh", "-c", "/usr/bin/supervisord -c /etc/supervisord.conf"],
            "volumeMounts": [
              {"name": "httpconf", "mountPath": "/etc/httpd/conf"},
              {"name": "httpconfd", "mountPath": "/etc/httpd/conf.d"},
              {"name": "httproot", "mountPath": "/data"}
            ],
            "cpu": 100,
            "memory": 50000000,
            "ports": [{
              "containerPort": 80
            }, {
              "containerPort": 22
            }]
          }]
        }
      },
      "labels": {"name": "webserver_pod"}
    }
  }
}
Execute the create command:
# kubectl create -f lnmp-replication.json
Observe the generated pod replica list:
[root@sn2014-12-200 replication]# kubectl get pod
NAME                                   IMAGE(S)          HOST             LABELS               STATUS
84150ab7-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.202/   name=webserver_pod   Running
84154ed5-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.201/   name=webserver_pod   Running
840beb1b-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.202/   name=webserver_pod   Running
84152d93-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.202/   name=webserver_pod   Running
840db120-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.201/   name=webserver_pod   Running
8413b4f3-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.201/   name=webserver_pod   Running
2. Create a service whose selector specifies "name": "webserver_pod" to associate it with the pods.
"Service/lnmp-service.json"
{
  "id": "webserver",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "selector": {
    "name": "webserver_pod"
  },
  "protocol": "TCP",
  "containerPort": 80,
  "port": 8080
}
Execute the create command:
# kubectl create -f lnmp-service.json
Log on to a minion host (192.168.1.201) and query the iptables forwarding rules generated on the host (see the last line):
# iptables -nvL -t nat
Chain KUBE-PROXY (2 references)
 pkts bytes target     prot opt in   out   source      destination
    2       REDIRECT   tcp  --  *    *     0.0.0.0/0   10.254.102.162   /* kubernetes */ tcp dpt:443 redir ports 47700
    1       REDIRECT   tcp  --  *    *     0.0.0.0/0   10.254.28.74     /* kubernetes-ro */ tcp dpt:80 redir ports 60099
    0     0 REDIRECT   tcp  --  *    *     0.0.0.0/0   10.254.216.51    /* webserver */ tcp dpt:8080 redir ports 40689
Access test: open http://192.168.1.201:40689/info.php and refresh the browser; you will see the proxy backend change, using the default random round-robin algorithm.
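Instead of refreshing by hand, a small loop makes the backend rotation visible (a sketch; it assumes info.php prints something host-specific, such as phpinfo()'s SERVER_ADDR row):
# for i in `seq 1 6`; do curl -s http://192.168.1.201:40689/info.php | grep -i 'SERVER_ADDR'; done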
III. Testing Process
1. Automatic pod replication and destruction test: observe whether Kubernetes automatically maintains the replica count (6).
Delete one replica from the fedoraapache replicationController:
[root@sn2014-12-200 pods]# kubectl delete pods fedoraapache
I1219 23:59:39.305730 9516 restclient.go:133] Waiting for completion of operation 142530
fedoraapache
[root@sn2014-12-200 pods]# kubectl get pods
NAME                                   IMAGE(S)        HOST             LABELS              STATUS
5d70892e-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/   name=fedoraapache   Running
5d715e56-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/   name=fedoraapache   Running
5d717f8d-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/   name=fedoraapache   Running
5d71c584-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/   name=fedoraapache   Running
5d71a494-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/   name=fedoraapache   Running
# A replica is automatically regenerated, keeping the total at 6:
[root@sn2014-12-200 pods]# kubectl get pods
NAME                                   IMAGE(S)        HOST             LABELS              STATUS
5d717f8d-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/   name=fedoraapache   Running
5d71c584-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/   name=fedoraapache   Running
5d71a494-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/   name=fedoraapache   Running
2a8fb993-8798-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/   name=fedoraapache   Running
5d70892e-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/   name=fedoraapache   Running
5d715e56-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/   name=fedoraapache   Running
2. Test hostPort behavior across the different roles
1) If the pod's hostPort is empty while the replicationController specifies a port, an exception occurs; if both sides specify a port, whether equal or not, an exception also occurs. If the pod specifies hostPort and the replicationController leaves it empty, the result is normal; if both are empty, the result is also normal. The conclusion: hostPort cannot be specified in the replicationController scenario (note that lnmp-replication.json above declares only containerPort), otherwise an exception occurs; this remains to be tested further.
2) Conclusion: in replicationControllers.json, "replicaSelector": {"name": "webserver_pod"} must be consistent with "labels": {"name": "webserver_pod"} and with the service's "selector": {"name": "webserver_pod"}.
In the next article, Liu Tians plans to introduce "Kubernetes and HECD architecture integration"; stay tuned!
References:
Kubernetes fedora_manual_config.md
Kubernetes/design.md
Introduction to Kubernetes system architecture