Please keep the environment consistent across all nodes.
To download the required system packages during installation, make sure every node is connected to the Internet.
Cluster node information for this installation
Lab environment: VMware virtual machines
| IP Address | Host name | CPU | Memory |
| --- | --- | --- | --- |
| 192.168.77.133 | k8s-m1 | 6 cores | 6 GB |
| 192.168.77.134 | k8s-m2 | 6 cores | 6 GB |
| 192.168.77.135 | k8s-m3 | 6 cores | 6 GB |
| 192.168.77.136 | k8s-n1 | 6 cores | 6 GB |
| 192.168.77.137 | k8s-n2 | 6 cores | 6 GB |
| 192.168.77.138 | k8s-n3 | 6 cores | 6 GB |
In addition, all master nodes share a virtual IP (VIP): 192.168.77.140.
The cluster topology diagram for this installation:

![cluster topology](image.png)

The roles used this time:
- Ansible role, system environment: `repo-epel` (EPEL repository settings)
- Ansible role, system environment: `hostnames`
- Ansible role, container: `docker`
- Ansible role, container: `kubernetes`
For how to use these Ansible roles, please read the following article:
- Ansible Role: How to use?
Cluster installation method

This installation deploys a highly available (HA) Kubernetes cluster in static Pod mode.
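As background, "static Pod mode" means the control-plane components run as Pods whose manifests the kubelet reads directly from disk (commonly `/etc/kubernetes/manifests`) rather than being created through the API server. A minimal sketch of such a manifest follows; the image tag and flags here are illustrative, not the role's actual file:

```yaml
# Illustrative static Pod manifest: the kubelet starts any Pod whose manifest
# is placed in its staticPodPath (commonly /etc/kubernetes/manifests).
# The image tag and command flags below are examples, not the role's settings.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver-amd64:v1.10.3
    command:
    - kube-apiserver
    - --etcd-servers=https://127.0.0.1:2379
```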
Operations on the Ansible control node

OS: CentOS Linux release 7.4.1708 (Core)
Ansible: 2.5.3
Installing Ansible
```
# yum -y install ansible
# ansible --version
ansible 2.5.3
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
```
Configure Ansible
This uncomments the default `host_key_checking = False` entry, so Ansible will not stop at SSH host key prompts when connecting to new hosts:

```
# sed -i 's|#host_key_checking|host_key_checking|g' /etc/ansible/ansible.cfg
```
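What that sed edit does can be sketched on a throwaway copy of the config; the /tmp path and sample contents below are illustrative, the real target is /etc/ansible/ansible.cfg:

```shell
# Sample of the relevant ansible.cfg fragment (illustrative copy in /tmp)
cat > /tmp/ansible.cfg <<'EOF'
[defaults]
#host_key_checking = False
EOF

# Same substitution as above: strip the leading '#' from the setting
sed -i 's|#host_key_checking|host_key_checking|g' /tmp/ansible.cfg

# The line is now active
grep host_key_checking /tmp/ansible.cfg
```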
Download role
```
# yum -y install git
# git clone https://github.com/kuailemy123/Ansible-roles.git /etc/ansible/roles
Cloning into '/etc/ansible/roles'...
remote: Counting objects: 1767, done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 1767 (delta 5), reused 24 (delta 4), pack-reused 1738
Receiving objects: 100% (1767/1767), 427.96 KiB | 277.00 KiB/s, done.
Resolving deltas: 100% (639/639), done.
```
Download the kubernetes-files.zip file

Because the required Google container images cannot be pulled directly from inside China, they have been exported into this archive for everyone's convenience.
File download Link: https://pan.baidu.com/s/1BNMJLEVzCE8pvegtT7xjyQ
Password:qm4k
```
# yum -y install unzip
# unzip kubernetes-files.zip -d /etc/ansible/roles/kubernetes/files/
```
Configuring host Information
```
# cat /etc/ansible/hosts
[k8s-master]
192.168.77.133
192.168.77.134
192.168.77.135
[k8s-node]
192.168.77.136
192.168.77.137
192.168.77.138
[k8s-cluster:children]
k8s-master
k8s-node
[k8s-cluster:vars]
ansible_ssh_pass=123456
```
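The group structure above can be inspected mechanically; as a sketch, this awk one-liner lists the members of one INI group (the /tmp path and trimmed sample inventory are illustrative):

```shell
# Trimmed sample of the inventory shown above (illustrative /tmp copy)
cat > /tmp/hosts.ini <<'EOF'
[k8s-master]
192.168.77.133
192.168.77.134
192.168.77.135
[k8s-node]
192.168.77.136
192.168.77.137
192.168.77.138
EOF

# Print the hosts between the named [group] header and the next header
list_group() {
  awk -v g="[$1]" '$0 == g {f=1; next} /^\[/ {f=0} f && NF' /tmp/hosts.ini
}

list_group k8s-master
```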
The k8s-master group contains all master hosts. The k8s-node group contains all node hosts. The k8s-cluster group contains the hosts of both the k8s-master and k8s-node groups.

Note that host names must be in lowercase letters; uppercase letters cause a "host not found" problem.
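A simple pre-flight check for this pitfall can be sketched in shell (`check_hostname` is a hypothetical helper, not part of the roles):

```shell
# Flag any host name containing uppercase letters, which would trigger
# the "host not found" problem noted above.
check_hostname() {
  case "$1" in
    *[A-Z]*) echo "invalid: $1" ;;
    *)       echo "ok: $1" ;;
  esac
}

check_hostname "k8s-m1"
check_hostname "K8s-m1"
```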
Configure Playbook
```
# cat /etc/ansible/k8s.yml
---
# Initialize the cluster
- hosts: k8s-cluster
  serial: "100%"
  any_errors_fatal: true
  vars:
    ipnames:
      '192.168.77.133': 'k8s-m1'
      '192.168.77.134': 'k8s-m2'
      '192.168.77.135': 'k8s-m3'
      '192.168.77.136': 'k8s-n1'
      '192.168.77.137': 'k8s-n2'
      '192.168.77.138': 'k8s-n3'
  roles:
    - hostnames
    - repo-epel
    - docker

# Install the master nodes
- hosts: k8s-master
  any_errors_fatal: true
  vars:
    kubernetes_master: true
    kubernetes_apiserver_vip: 192.168.77.140
  roles:
    - kubernetes

# Install the worker nodes
- hosts: k8s-node
  any_errors_fatal: true
  vars:
    kubernetes_node: true
    kubernetes_apiserver_vip: 192.168.77.140
  roles:
    - kubernetes

# Install the addons
- hosts: k8s-master
  any_errors_fatal: true
  vars:
    kubernetes_addons: true
    kubernetes_ingress_controller: nginx
    kubernetes_apiserver_vip: 192.168.77.140
  roles:
    - kubernetes
```
`kubernetes_ingress_controller` can also be set to `traefik`.
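If traefik is chosen, only that variable changes in the addons play; a sketch mirroring the play above:

```yaml
# Addons play with the Traefik ingress controller instead of nginx;
# everything else is unchanged from the playbook above.
- hosts: k8s-master
  any_errors_fatal: true
  vars:
    kubernetes_addons: true
    kubernetes_ingress_controller: traefik
    kubernetes_apiserver_vip: 192.168.77.140
  roles:
    - kubernetes
```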
Execute the Playbook

```
# time ansible-playbook /etc/ansible/k8s.yml
......
real    26m44.153s
user    1m53.698s
sys     0m55.509s
```
Verifying the cluster version
```
# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
```
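A quick sanity check that client and server report the same GitVersion can be sketched against saved output (the /tmp path and trimmed sample are illustrative):

```shell
# Trimmed sample of the `kubectl version` output shown above
cat > /tmp/version.txt <<'EOF'
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", ...}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", ...}
EOF

# Pull the GitVersion field from each line and compare
client=$(grep -o 'GitVersion:"[^"]*"' /tmp/version.txt | sed -n 1p)
server=$(grep -o 'GitVersion:"[^"]*"' /tmp/version.txt | sed -n 2p)
[ "$client" = "$server" ] && echo "versions match: $client"
```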
Verifying cluster status
```
... 'monitoring|heapster|influxdb'
kubectl -n ingress-nginx get pods
kubectl -n kube-system get po -l app=helm
kubectl -n kube-system logs -f kube-scheduler-k8s-m2
helm version
```

The output is not shown here.
View Addons Access Information
On the first master server
```
# kubectl cluster-info
Kubernetes master is running at https://192.168.77.140:6443
Elasticsearch is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Heapster is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/heapster/proxy
Kibana is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
KubeDNS is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb:http/proxy
```
```
# cat ~/k8s_addons_access
```
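If the cluster-info output has been saved to a file, the addon proxy URLs can be pulled out with grep; a sketch over a trimmed sample (the /tmp path is illustrative):

```shell
# Trimmed sample of the cluster-info output shown above
cat > /tmp/cluster-info.txt <<'EOF'
Kubernetes master is running at https://192.168.77.140:6443
Heapster is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/heapster/proxy
monitoring-grafana is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
EOF

# Keep only the apiserver-proxied addon URLs (lines ending in /proxy)
grep -o 'https://[^ ]*/proxy' /tmp/cluster-info.txt
```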
After the cluster deployment is complete, it is recommended that all nodes of the cluster be restarted.
Author: lework
Link: https://www.jianshu.com/p/265cfb0811b2
Source: Jianshu
The copyright belongs to the author. For any form of reprint, please contact the author for authorization and credit the source.
Use Ansible for One-click Deployment of a Highly Available Kubernetes 1.10.3 Cluster