Kubernetes Multi-node Deployment Explained






Note: The following operations were performed on CentOS 7.


Installing Ansible


Ansible can be installed via yum or pip. Because kubernetes-ansible logs in with a password, sshpass also needs to be installed:


pip install ansible
wget http://sourceforge.net/projects/sshpass/files/latest/download
tar zxvf download
cd sshpass-1.05
./configure && make && make install

Configure Kubernetes-ansible
# git clone https://github.com/eparis/kubernetes-ansible.git
# cd kubernetes-ansible
# # Configure the user as root in group_vars/all.yml
# cat group_vars/all.yml | grep ssh
ansible_ssh_user: root
# # Each kubernetes service gets its own IP address. These are not real IPs.
# # You need only select a range of IPs which are not in use elsewhere in your
# # environment. This must be done even if you do not use the network setup
# # provided by the ansible scripts.
# cat group_vars/all.yml | grep kube_service_addresses
kube_service_addresses: 10.254.0.0/16
# # Configure the root password
# echo "password" > ~/rootpassword



Configure the IP addresses for the master, etcd, and minions:



# cat inventory
[masters]
192.168.0.7

[etcd]
192.168.0.7

[minions]
# kube_ip_addr is the address pool for Pods on this minion; the default mask is /24
192.168.0.3  kube_ip_addr=10.0.1.1
192.168.0.6  kube_ip_addr=10.0.2.1



Test connectivity to each machine and configure the SSH keys:



# ansible-playbook -i inventory ping.yml   # this command prints some error messages, which can be ignored
# ansible-playbook -i inventory keys.yml



At present kubernetes-ansible does not handle dependencies very thoroughly, so a few things need to be configured manually first:



# # Install iptables
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'yum -y install iptables-services'
# # Add the kubernetes repository for CentOS 7
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'curl https://copr.fedoraproject.org/coprs/eparis/kubernetes-epel-7/repo/epel-7/eparis-kubernetes-epel-7-epel-7.repo -o /etc/yum.repos.d/eparis-kubernetes-epel-7-epel-7.repo'
# # Configure ssh to prevent ssh connection timeouts
# sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config'
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/sshd_config'
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'systemctl restart sshd'



Configuring the Docker network essentially means creating a kbr0 bridge on each minion, assigning an IP to the bridge, and setting up routes between the minions:



# ansible-playbook -i inventory hack-network.yml

PLAY [minions] ****************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.0.6]
ok: [192.168.0.3]

TASK: [network-hack-bridge | Create kubernetes bridge interface] **************
changed: [192.168.0.3]
changed: [192.168.0.6]

TASK: [network-hack-bridge | Configure docker to use the bridge interface] ****
changed: [192.168.0.6]
changed: [192.168.0.3]

PLAY [minions] ****************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.0.6]
ok: [192.168.0.3]

TASK: [network-hack-routes | stat path=/etc/sysconfig/network-scripts/ifcfg-{{ ansible_default_ipv4.interface }}] ***
ok: [192.168.0.6]
ok: [192.168.0.3]

TASK: [network-hack-routes | Set up a network config file] ********************
skipping: [192.168.0.3]
skipping: [192.168.0.6]

TASK: [network-hack-routes | Set up a static routing table] *******************
changed: [192.168.0.3]
changed: [192.168.0.6]

NOTIFIED: [network-hack-routes | apply changes] *******************************
changed: [192.168.0.6]
changed: [192.168.0.3]

NOTIFIED: [network-hack-routes | upload script] *******************************
changed: [192.168.0.6]
changed: [192.168.0.3]

NOTIFIED: [network-hack-routes | run script] **********************************
changed: [192.168.0.3]
changed: [192.168.0.6]

NOTIFIED: [network-hack-routes | remove script] *******************************
changed: [192.168.0.3]
changed: [192.168.0.6]

PLAY RECAP ********************************************************************
192.168.0.3                : ok=10   changed=7    unreachable=0    failed=0
192.168.0.6                : ok=10   changed=7    unreachable=0    failed=0
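For reference, what hack-network.yml does on each minion is roughly equivalent to the following manual steps (a sketch based on the kube_ip_addr pools from the inventory above, shown here for 192.168.0.3; the exact commands and file names used by the playbook may differ):

# # create the kubernetes bridge and give it this minion's pod-pool address
# brctl addbr kbr0
# ip addr add 10.0.1.1/24 dev kbr0
# ip link set kbr0 up
# # tell docker to use kbr0 instead of docker0 (CentOS 7 sysconfig style)
# echo 'OPTIONS="-b=kbr0"' >> /etc/sysconfig/docker
# systemctl restart docker
# # route the other minion's pod subnet via that minion's host address
# ip route add 10.0.2.0/24 via 192.168.0.6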


Finally, install and configure kubernetes on all nodes:



ansible-playbook -i inventory setup.yml



After it finishes, you can see that the kube-related services are running:



# # service run status
# ansible all -i inventory -k -a 'bash -c "systemctl | grep -i kube"'
SSH password:
192.168.0.3 | success | rc=0 >>
kube-proxy.service        loaded active running   Kubernetes Kube-Proxy Server
kubelet.service           loaded active running   Kubernetes Kubelet Server

192.168.0.7 | success | rc=0 >>
kube-apiserver.service            loaded active running   Kubernetes API Server
kube-controller-manager.service   loaded active running   Kubernetes Controller Manager
kube-scheduler.service            loaded active running   Kubernetes Scheduler Plugin

192.168.0.6 | success | rc=0 >>
kube-proxy.service        loaded active running   Kubernetes Kube-Proxy Server
kubelet.service           loaded active running   Kubernetes Kubelet Server

# # port listening status
# ansible all -i inventory -k -a 'bash -c "netstat -tulnp | grep -E \"(kube)|(etcd)\""'
SSH password:
192.168.0.7 | success | rc=0 >>
tcp        0      0 192.168.0.7:7080        0.0.0.0:*               LISTEN      14486/kube-apiserve
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      14544/kube-schedule
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      14515/kube-controll
tcp6       0      0 :::7001                 :::*                    LISTEN      13986/etcd
tcp6       0      0 :::4001                 :::*                    LISTEN      13986/etcd
tcp6       0      0 :::8080                 :::*                    LISTEN      14486/kube-apiserve

192.168.0.3 | success | rc=0 >>
tcp        0      0 192.168.0.3:10250       0.0.0.0:*               LISTEN      9500/kubelet
tcp6       0      0 :::46309                :::*                    LISTEN      9524/kube-proxy
tcp6       0      0 :::48500                :::*                    LISTEN      9524/kube-proxy
tcp6       0      0 :::38712                :::*                    LISTEN      9524/kube-proxy

192.168.0.6 | success | rc=0 >>
tcp        0      0 192.168.0.6:10250       0.0.0.0:*               LISTEN      9474/kubelet
tcp6       0      0 :::52870                :::*                    LISTEN      9498/kube-proxy
tcp6       0      0 :::57961                :::*                    LISTEN      9498/kube-proxy
tcp6       0      0 :::40720                :::*                    LISTEN      9498/kube-proxy








Execute the following commands to check whether the services are working properly:



# curl -s -L http://192.168.0.7:4001/version    # check etcd
etcd 0.4.6
# curl -s -L http://192.168.0.7:8080/api/v1beta1/pods | python -m json.tool    # check apiserver
{
    "apiVersion": "v1beta1",
    "creationTimestamp": null,
    "items": [],
    "kind": "PodList",
    "resourceVersion": 8,
    "selfLink": "/api/v1beta1/pods"
}
# curl -s -L http://192.168.0.7:8080/api/v1beta1/minions | python -m json.tool     # check apiserver
# curl -s -L http://192.168.0.7:8080/api/v1beta1/services | python -m json.tool    # check apiserver
# kubectl get minions
NAME
192.168.0.3
192.168.0.6

Deploying Apache Services


First create a pod:



# cat ~/apache.json
{
  "id": "fedoraapache",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "fedoraapache",
      "containers": [{
        "name": "fedoraapache",
        "image": "fedora/apache",
        "ports": [{
          "containerPort": 80,
          "hostPort": 80
        }]
      }]
    }
  },
  "labels": {
    "name": "fedoraapache"
  }
}
# kubectl create -f apache.json
# kubectl get pod fedoraapache
NAME                IMAGE(S)            HOST                LABELS              STATUS
fedoraapache        fedora/apache       192.168.0.6/        name=fedoraapache   Waiting
# # Because the image download is slow, the Waiting state lasts a while; once the image is pulled, the pod comes up quickly.
# kubectl get pod fedoraapache
NAME                IMAGE(S)            HOST                LABELS              STATUS
fedoraapache        fedora/apache       192.168.0.6/        name=fedoraapache   Running
# # On the 192.168.0.6 machine, look at the container status
# docker ps
CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS                NAMES
77dd7fe1b24f        fedora/apache:latest      "/run-apache.sh"    ... minutes ago     Up ... minutes                           k8s_fedoraapache.f14c9521_fedoraapache.default.etcd_1416396375_4114a4d0
1455249f2c7d        kubernetes/pause:latest   "/pause"            About an hour ago   Up About an hour    0.0.0.0:80->80/tcp   k8s_net.e9a68336_fedoraapache.default.etcd_1416396375_11274cd2
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
fedora/apache       latest              2e11d8fd18b3        7 weeks ago         554.1 MB
kubernetes/pause    latest              6c4579af347b        4 months ago        239.8 kB
# iptables-save | grep 2.2
-A DOCKER ! -i kbr0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.2.2:80
-A FORWARD -d 10.0.2.2/32 ! -i kbr0 -o kbr0 -p tcp -m tcp --dport 80 -j ACCEPT
# curl localhost    # shows that the pod started OK and the port mapping works
Apache
Replication Controllers


A replication controller guarantees that a sufficient number of containers are running, in order to balance load and keep the service highly available:



A replication controller combines a template for pod creation (a "cookie-cutter" if you will) and a number of desired replicas, into a single API object. The replication controller also contains a label selector that identifies the set of objects managed by the replication controller. The replication controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.

# cat replica.json
{
  "id": "apacheController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "labels": {"name": "fedoraapache"},
  "desiredState": {
    "replicas": 3,
    "replicaSelector": {"name": "fedoraapache"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "fedoraapache",
          "containers": [{
            "name": "fedoraapache",
            "image": "fedora/apache",
            "ports": [{
              "containerPort": 80
            }]
          }]
        }
      },
      "labels": {"name": "fedoraapache"}
    }
  }
}
# kubectl create -f replica.json
apacheController
# kubectl get replicationController
NAME                IMAGE(S)            SELECTOR            REPLICAS
apacheController    fedora/apache       name=fedoraapache   3
# kubectl get pod
NAME                                   IMAGE(S)            HOST                LABELS              STATUS
fedoraapache                           fedora/apache       192.168.0.6/        name=fedoraapache   Running
cf6726ae-6fed-11e4-8a06-fa163e3873e1   fedora/apache       192.168.0.3/        name=fedoraapache   Running
cf679152-6fed-11e4-8a06-fa163e3873e1   fedora/apache       192.168.0.3/        name=fedoraapache   Running


As you can see, there are now three containers running.
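To see the controller at work, you can delete one of the pods listed above and watch the replication controller bring the set back to the desired size (a sketch; the pod ID is one of those shown in the listing, and the exact kubectl subcommand may vary slightly between early releases):

# kubectl delete pod cf6726ae-6fed-11e4-8a06-fa163e3873e1
# kubectl get pod
# # after a moment the listing again shows three pods labelled name=fedoraapache,
# # with a newly generated ID in place of the deleted one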



Services


Multiple pods are now running through the replication controller, but each pod is assigned a different IP, and those IPs are likely to change as the system runs. So how do you access the service from outside? That is what services are for.





A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them (sometimes called a micro-service). The goal of services is to provide a bridge for non-Kubernetes-native applications to access backends without the need to write code that is specific to Kubernetes. A service offers clients an IP and port pair which, when accessed, redirects to the appropriate backends. The set of pods targeted is determined by a label selector.





As an example, consider an image-processing backend which is running with 3 live replicas. Those replicas are fungible: frontends do not care which backend they use. While the actual pods that comprise the set may change, the frontend clients do not need to know that. The service abstraction enables this decoupling.





Unlike pod IP addresses, which actually route to a fixed destination, service IPs are not actually answered by a single host. Instead, we use iptables (packet processing logic on Linux) to define "virtual" IP addresses which are transparently redirected as needed. We call the tuple of the service IP and the service port the portal. When clients connect to the portal, their traffic is automatically transported to an appropriate endpoint. The environment variables for services are actually populated in terms of the portal IP and port. We will be adding DNS support for services, too.



# cat service.json
{
  "id": "fedoraapache",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "selector": {
    "name": "fedoraapache"
  },
  "protocol": "TCP",
  "containerPort": 80,
  "port": 8987
}
# kubectl create -f service.json
fedoraapache
# kubectl get service
NAME                LABELS                                    SELECTOR                                  IP                  PORT
kubernetes-ro                                                 component=apiserver,provider=kubernetes   10.254.0.2          80
kubernetes                                                    component=apiserver,provider=kubernetes   10.254.0.1          443
fedoraapache                                                  name=fedoraapache                         10.254.0.3          8987
# # Switch to a minion
# curl 10.254.0.3:8987
Apache
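You can also observe the portal mechanism described earlier on the minions themselves: kube-proxy installs iptables rules for the portal IP, so connections to 10.254.0.3:8987 are redirected to the local proxy and from there to one of the apache pods. A quick way to check (a sketch; the exact chain names and rule format depend on the kube-proxy version):

# # look for the rules kube-proxy added for the portal IP
# iptables-save | grep 10.254.0.3
# # expect a REDIRECT (or DNAT) rule for 10.254.0.3 port 8987 that hands the
# # traffic to a local port owned by the kube-proxy process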



You can also configure a public IP for the service, provided that you configure a cloud provider. The currently supported cloud providers are GCE, AWS, OpenStack, oVirt, Vagrant, and more.



For some parts of your application (e.g. your frontend) you may want to expose a service on an external (publicly visible) IP address. To achieve this, you can set the createExternalLoadBalancer flag on the service. This sets up a cloud-provider-specific load balancer (assuming that it is supported by your cloud provider) and also sets up iptables rules on each host that map packets from the specified external IP address to the service proxy in the same manner as internal service IP addresses.
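For illustration, the fedoraapache service above could be extended roughly as follows (a sketch only; it assumes the v1beta1 createExternalLoadBalancer field and a configured cloud provider, and the file name service-external.json is hypothetical, not part of the original setup):

# cat service-external.json
{
  "id": "fedoraapache",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "selector": {
    "name": "fedoraapache"
  },
  "protocol": "TCP",
  "containerPort": 80,
  "port": 8987,
  "createExternalLoadBalancer": true
}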

Note: OpenStack support is implemented using Rackspace's open source github.com/rackspace/gophercloud.

Health Check


Currently, there are three types of application health checks that you can choose from:
* HTTP health checks - the Kubelet will call a web hook. If it returns a status between 200 and 399, it is considered success, failure otherwise.
* Container exec - the Kubelet will execute a command inside your container. If it returns "OK" it will be considered a success.
* TCP socket - the Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy; if it can't, it is considered a failure.
In all cases, if the Kubelet discovers a failure, the container is restarted.





The container health checks are configured in the "livenessProbe" section of your container config. There you can also specify an "initialDelaySeconds" that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.





Here's an example config for a pod with an HTTP health check:





kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: php
    containers:
      - name: nginx
        image: dockerfile/nginx
        ports:
          - containerPort: 80
        # defines the health checking
        livenessProbe:
          # turn on application health checking
          enabled: true
          type: http
          # length of time to wait for a pod to initialize
          # after pod startup, before applying health checking
          initialDelaySeconds: 30
          # an http probe
          httpGet:
            path: /_status/healthz
            port: 8080
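The other probe types follow the same structure; for example, a TCP socket check could look roughly like this (a sketch that assumes the analogous v1beta1 fields type: tcp and tcpSocket.port; consult the API docs for the exact spelling in your release):

        livenessProbe:
          # turn on application health checking
          enabled: true
          type: tcp
          initialDelaySeconds: 30
          # a tcp probe: the kubelet tries to open a socket to this port
          tcpSocket:
            port: 80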



References
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/fedora/fedora_ansible_config.md
https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/walkthrough
https://cloud.google.com/container-engine/docs/services/
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md
https://github.com/rackspace/gophercloud
http://wiki.mikejung.biz/index.php?title=Kubernetes


