Kubernetes 1.9 Installation Deployment

Source: Internet
Author: User
Tags: vars, ansible, template, grafana, haproxy, influxdb, etcd, k8s, kubernetes, docker

Reference Address: https://github.com/gjmzj/kubeasz

Introduction

This project provides tools for quickly deploying a highly available k8s cluster. Deployment uses binaries and is automated with Ansible playbooks; it offers a one-click installation script as well as a step-by-step installation of the individual components, explaining the main configuration parameters and considerations for each step.

Characteristics

Cluster features: mutual TLS authentication, RBAC authorization, multi-master high availability, and Network Policy support.

Prerequisites

You need basic knowledge of Kubernetes, Docker, and the Linux shell. For Ansible, reading a quick-start introduction is enough.

Environment Preparation: Node Information
Host name       Intranet IP      Installed software
master1         192.168.16.8     ansible + calico + api-server + scheduler + controller-manager + etcd
master2         192.168.16.9     calico + api-server + scheduler + controller-manager + etcd
master3         192.168.16.15    calico + api-server + scheduler + controller-manager + etcd
node1           192.168.16.10    calico + kubelet + kube-proxy
node2           192.168.16.11    calico + kubelet + kube-proxy
node3           192.168.16.12    calico + kubelet + kube-proxy
Load Balancer   192.168.16.16    - (cloud load balancing service)
Harbor host     192.168.16.3     Harbor
    • To conserve resources, the master nodes also act as the etcd machines in this deployment.
    • A load balancer is created in the VPC; its LB-IP:port serves as the kube-apiserver address, i.e. the internal API endpoint of the multi-master cluster.
    • Harbor is deployed on the Harbor host before it is used.
    • The creation of the individual cloud hosts is not covered here.
Compute Node Specifications

This is a test environment: all 6 nodes use 2 vCPUs, 4 GB of memory, and the default 40 GB disk.

Architecture diagram

Load Balancer Information

Public clouds already provide a load balancing service that replaces the haproxy + keepalived solution. The load balancer and related information are as follows:

Virtual Server Group information:

Listener rule information:

Operations on the Master1 Node

Since the installation is driven by Ansible, it can be run from any of the 6 nodes; in this article it is run on the master1 node.

Installing Dependencies and Ansible

Ansible 2.4 or later is recommended; older versions will report unrecognized modules.

apt-get update && apt-get upgrade -y
apt-get install python2.7 git python-pip
pip install pip --upgrade
pip install ansible
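
A quick check that the installed version meets the 2.4+ recommendation:

ansible --version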
Installing the Python encryption module
pip install cryptography --upgrade
# Reinstall pyopenssl, otherwise errors will be reported
pip uninstall pyopenssl
pip install pyopenssl
Configuring the Ansible SSH key
ssh-keygen -t rsa -b 2048   # press Enter 3 times
ssh-copy-id $IP             # $IP is each node's intranet address; enter yes and the root password when prompted, sending the key to every node (including this machine's intranet IP)
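
If you would rather not run ssh-copy-id by hand for each node, a minimal loop over the intranet IPs from the node table above (illustrative; you are still prompted for each root password) looks like this:

for IP in 192.168.16.8 192.168.16.9 192.168.16.15 192.168.16.10 192.168.16.11 192.168.16.12; do
  ssh-copy-id root@$IP   # push the key generated above to each node
done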
Base Software Installation on Each Node
apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y
apt-get purge ufw lxd lxd-client lxcfs lxc-common -y
DNS settings

To prevent the DNS settings in /etc/resolv.conf from being overwritten, write the DNS addresses to /etc/resolvconf/resolv.conf.d/base:

cat << EOF >> /etc/resolvconf/resolv.conf.d/base
nameserver 103.224.222.222
nameserver 103.224.222.223
nameserver 8.8.8.8
EOF
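
To apply the change immediately (assuming the resolvconf package manages /etc/resolv.conf, as on a stock Ubuntu install), regenerate the file and verify the result:

resolvconf -u
cat /etc/resolv.conf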
Modify hostname
There are three kinds of hostname:
    • pretty hostname: a free-form "pretty" name meant for humans, e.g. "Zhjwpku's Laptop"
    • static hostname: used to initialize the kernel hostname at boot, stored in /etc/hostname
    • transient hostname: a temporary hostname assigned while the system is running, e.g. node1 as set with hostname node1

All three hostnames can be set with hostnamectl; if no type is specified, both the static and transient hostnames are set.

Set the corresponding hostname on each node:

# $HostName is the hostname corresponding to each node
hostnamectl set-hostname $HostName
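
For example, on the first master (illustrative), set the name and confirm that the static hostname was written to /etc/hostname:

hostnamectl set-hostname master1
hostnamectl status     # shows the static (and, if set, pretty/transient) hostname
cat /etc/hostname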
Modify the /etc/hosts file

On all nodes, add the node information to /etc/hosts:

cat <<EOF >> /etc/hosts
192.168.16.8 master1
192.168.16.9 master2
192.168.16.15 master3
192.168.16.10 node1
192.168.16.11 node2
192.168.16.12 node3
192.168.16.3 harbor.jdpoc.com
EOF
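
A quick check that the new entries resolve as expected (illustrative):

getent hosts master1 node1 harbor.jdpoc.com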
Configure Ansible

The installation and deployment are performed on the master1 node and driven by the Ansible scripts with a single command.

Download the Ansible templates and the Kubernetes 1.9.6 binaries, then extract them:

cd ~
wget http://chengchen.oss.cn-north-1.jcloudcs.com/ansible.tar.gz
wget http://chengchen.oss.cn-north-1.jcloudcs.com/k8s.196.tar.gz
tar zxf k8s.196.tar.gz
tar zxf ansible.tar.gz
# Move the files in the bin directory to ansible/bin
mv bin/* ansible/bin/
# Move the ansible directory to /etc
mv ansible /etc/

Edit the Ansible hosts configuration file:

cd /etc/ansible
cp example/hosts.m-masters.example hosts
vi hosts

Adjust it to the actual environment; this deployment is configured as follows:

# Deployment node: the node that runs this copy of the Ansible scripts
# actual modification
[deploy]
192.168.16.8

# etcd cluster: provide the NODE_NAME and NODE_IP variables below; note that an etcd cluster must have an odd number of nodes (1, 3, 5, 7, ...)
# actual modification
[etcd]
192.168.16.8 NODE_NAME=etcd1 NODE_IP="192.168.16.8"
192.168.16.9 NODE_NAME=etcd2 NODE_IP="192.168.16.9"
192.168.16.15 NODE_NAME=etcd3 NODE_IP="192.168.16.15"

[kube-master]
# actual modification
192.168.16.8 NODE_IP="192.168.16.8"
192.168.16.9 NODE_IP="192.168.16.9"
192.168.16.15 NODE_IP="192.168.16.15"

#################### In a public cloud environment with a load balancing service, the lb role does not need to be deployed ####################
# Load balancer: at least two nodes, installing haproxy+keepalived
#[lb]
#192.168.1.1 LB_IF="eth0" LB_ROLE=backup    # note: set the LB_IF variable according to the network interface actually used
#192.168.1.2 LB_IF="eth0" LB_ROLE=master
#[lb:vars]
#MASTER1="192.168.1.1:6443"                 # set according to the actual number of master nodes
#MASTER2="192.168.1.2:6443"                 # must be kept in sync with roles/lb/templates/haproxy.cfg.j2
#MASTER3="192.168.1.x:6443"
#ROUTER_ID=57                               # value in the range 0-255, distinguishes the VRRP multicast of multiple instances; must not repeat within the same network segment
#MASTER_PORT="8443"                         # service port of the api-server VIP address
###################################################################################

# actual modification
[kube-node]
192.168.16.10 NODE_IP="192.168.16.10"
192.168.16.11 NODE_IP="192.168.16.11"
192.168.16.12 NODE_IP="192.168.16.12"

# If Harbor is enabled, configure the harbor-related parameters below; if Harbor already exists, leave this commented out
[harbor]
#192.168.1.8 NODE_IP="192.168.1.8"

# Reserved group for adding master nodes later; if not needed, leave it commented out
[new-master]
#192.168.1.5 NODE_IP="192.168.1.5"

# Reserved group for adding worker nodes later; if not needed, leave it commented out
[new-node]
#192.168.1.xx NODE_IP="192.168.1.xx"

[all:vars]
# ---------main cluster parameters---------------
# Cluster deployment mode: allinone, single-master, multi-master
# Choose according to the actual situation: single machine, single master, or multiple masters
DEPLOY_MODE=multi-master

# The cluster MASTER_IP is the VIP address of the LB node; together with the LB node's MASTER_PORT it forms KUBE_APISERVER
# Based on the load balancer created earlier, fill in its intranet IP; the port can be customized
MASTER_IP="192.168.16.16"
KUBE_APISERVER="https://192.168.16.16:8443"

# Token used by TLS bootstrapping, generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
# Run the command above on the system and replace the variable below with the result
BOOTSTRAP_TOKEN="A7383DE6FDF9A8CB661757C7B763FEB6"

# Cluster network plug-in, currently supports calico and flannel
# This deployment uses calico
CLUSTER_NETWORK="calico"

# Some calico-related configuration; a fuller configuration can be customized in roles/calico/templates/calico.yaml.j2
# Setting CALICO_IPV4POOL_IPIP="off" can improve network performance; its preconditions are described in doc 05 (installing the calico network component)
CALICO_IPV4POOL_IPIP="always"
# Host IP used by calico-node; BGP neighbors are established on this address; you can specify the interface manually ("interface=eth0") or use the auto-detection below
# The public cloud default works
IP_AUTODETECTION_METHOD="can-reach=223.5.5.5"

# Some flannel configuration, see roles/flannel/templates/kube-flannel.yaml.j2
FLANNEL_BACKEND="vxlan"

# Service CIDR: unreachable before deployment; reachable inside the cluster via IP:Port after deployment
SERVICE_CIDR="10.68.0.0/16"

# Pod segment (Cluster CIDR): unreachable before deployment; **routable after deployment**
CLUSTER_CIDR="172.21.0.0/16"

# Service port range (NodePort range)
NODE_PORT_RANGE="20000-40000"

# Kubernetes service IP (pre-allocated, typically the first IP of SERVICE_CIDR)
CLUSTER_KUBERNETES_SVC_IP="10.69.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
CLUSTER_DNS_SVC_IP="10.69.0.2"

# Cluster DNS domain name
CLUSTER_DNS_DOMAIN="cluster.local."

# etcd IPs and ports for communication within the cluster, **set according to the actual etcd cluster members**
ETCD_NODES="etcd1=https://192.168.16.8:2380,etcd2=https://192.168.16.9:2380,etcd3=https://192.168.16.15:2380"

# etcd cluster service address list, **set according to the actual etcd cluster members**
ETCD_ENDPOINTS="https://192.168.16.8:2379,https://192.168.16.9:2379,https://192.168.16.15:2379"

# Username and password used by the cluster's basic auth
BASIC_AUTH_USER="admin"
BASIC_AUTH_PASS="jdtest1234"

# ---------additional parameters--------------------
# Default binary file directory
bin_dir="/root/local/bin"

# Certificate directory
ca_dir="/etc/kubernetes/ssl"

# Deployment directory, i.e. the Ansible working directory; it is recommended not to modify it
base_dir="/etc/ansible"

# Private registry: Harbor server (domain name or IP); if Harbor already exists, leave this commented out
#HARBOR_IP="192.168.16.3"
#HARBOR_DOMAIN="harbor.jdpoc.com"
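
Before running any playbook, it is worth confirming that Ansible can reach every host in the inventory (a minimal check, assuming the file above is saved as /etc/ansible/hosts):

cd /etc/ansible
ansible all -m ping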
Quick Install Kubernetes 1.9

Of the following two installation modes, choose one.

Step-by-Step Installation

Execute playbooks 01 to 06 sequentially:

cd /etc/ansible
ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml
ansible-playbook 05.kube-node.yml
ansible-playbook 06.network.yml
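
Once the playbooks have finished, a quick sanity check (assuming the playbooks configured kubectl on this node, as the kubeasz scripts normally do):

kubectl get node -o wide
kubectl get componentstatus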

If an error occurs, check the configuration in the /etc/ansible/hosts file against the actual error message.

One-Step installation
cd /etc/ansible
ansible-playbook 90.setup.yml
Clean

If the deployment goes wrong, you can clean up everything that was installed:

cd /etc/ansible
ansible-playbook 99.clean.yml
Deploying cluster DNS

DNS is the first add-on the k8s cluster needs: the other pods in the cluster rely on it for name resolution, mainly resolving cluster service names (svc) and pod hostnames. Kubernetes v1.9+ offers two choices, kube-dns and coredns, and you can deploy either one. This deployment uses kubedns.

kubectl create -f /etc/ansible/manifests/kubedns
    • Cluster pods inherit the DNS configuration of their node by default; changing the kubelet startup parameter --resolv-conf="" alters this behavior, see the kubelet startup parameters.
    • If you install the DNS component immediately after installing the Calico network component, the following bug may occur because Calico assigns the pod the first address of the network segment (the network address). The temporary workaround is to delete the pod manually; it is recreated and receives a subsequent IP address.
# Symptom of the bug
$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                       READY     STATUS             RESTARTS   AGE       IP              NODE
default       busy-5cc98488d4-s894w                      1/1       Running            0          28m       172.20.24.193   192.168.97.24
kube-system   calico-kube-controllers-6597d9c664-nq9hn   1/1       Running            0          1h        192.168.97.24   192.168.97.24
kube-system   calico-node-f8gnf                          2/2       Running            0          1h        192.168.97.24   192.168.97.24
kube-system   kube-dns-69bf9d5cc9-c68mw                  0/3       CrashLoopBackOff   27         31m       172.20.24.192   192.168.97.24
# Workaround: delete the pod; it is rebuilt automatically
$ kubectl delete pod -n kube-system kube-dns-69bf9d5cc9-c68mw
Verifying the DNS Service

Create a new test Nginx service

kubectl run nginx --image=nginx --expose --port=80

Confirm Nginx Service:

root@master1:/etc/ansible/manifests/kubedns# kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
nginx-7587c6fdb6-vjnss   1/1       Running   0          30m
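
The --expose flag in the command above also created a ClusterIP service named nginx; you can confirm it too (illustrative):

kubectl get svc nginx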

Test pod BusyBox:

root@master1:/etc/ansible/manifests/kubedns# kubectl run busybox --rm -it --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
nameserver 10.69.0.2
search default.svc.cluster.local. svc.cluster.local. cluster.local.
options ndots:5
/ # nslookup nginx
Server:    10.69.0.2
Address 1: 10.69.0.2 kube-dns.kube-system.svc.cluster.local
Name:      nginx
Address 1: 10.69.152.34 nginx.default.svc.cluster.local
/ # nslookup www.baidu.com
Server:    10.69.0.2
Address 1: 10.69.0.2 kube-dns.kube-system.svc.cluster.local
Name:      www.baidu.com
Address 1: 220.181.112.244
Address 2: 220.181.111.188

If both the nginx service and external domain names resolve successfully, DNS is deployed correctly. If resolution fails, kube-dns has a problem: use kubectl get pod --all-namespaces -o wide to find the node running kube-dns, then inspect the detailed logs there with docker logs; in most cases it is the bug mentioned above.
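
A sketch of that troubleshooting flow (illustrative; pod names and container IDs will differ in your cluster):

kubectl get pod --all-namespaces -o wide | grep kube-dns   # note the NODE column
# then, on the node that runs kube-dns:
docker ps | grep kube-dns
docker logs <container-id>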

Deploying Dashboard

Deployment:

kubectl create -f /etc/ansible/manifests/dashboard/kubernetes-dashboard.yaml
# Optional: deploy the basic password authentication configuration; the password file is located at /etc/kubernetes/ssl/basic-auth.csv
kubectl create clusterrolebinding login-on-dashboard-with-cluster-admin --clusterrole=cluster-admin --user=admin
kubectl create -f /etc/ansible/manifests/dashboard/admin-user-sa-rbac.yaml
kubectl create -f /etc/ansible/manifests/dashboard/ui-admin-rbac.yaml

Verify:

# Check the pod status
kubectl get pod -n kube-system | grep dashboard
kubernetes-dashboard-7c74685c48-9qdpn   1/1       Running   0          22s
# Check the dashboard service
kubectl get svc -n kube-system | grep dashboard
kubernetes-dashboard   NodePort    10.68.219.38   <none>        443:24108/TCP                   53s
# Check the cluster services to get the access URL
root@master1:~# kubectl cluster-info
Kubernetes master is running at https://192.168.16.16:8443
# The line below is the dashboard URL we need.
kubernetes-dashboard is running at https://192.168.16.16:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
# Check the pod logs and watch for errors
kubectl logs kubernetes-dashboard-7c74685c48-9qdpn -n kube-system
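
Because the dashboard service is of type NodePort, you can also read the assigned node port directly and reach the dashboard on https://<node-ip>:<node-port> instead of going through the apiserver proxy (illustrative):

kubectl get svc kubernetes-dashboard -n kube-system -o jsonpath='{.spec.ports[0].nodePort}'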

To access the page, replace the IP address in the URL with the load balancer's public (extranet) IP address; the following page is displayed:

Get access token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

The new dashboard version shows its own login page when opened; choose the "Token" login method, paste the token obtained above, and click Sign in:

Deploying Heapster

Heapster monitors the resource usage of the whole cluster as follows: cAdvisor, built into each kubelet, collects the container resource usage on its node; Heapster then pulls the node and container resource usage from the kubelet API; finally, Heapster persists the data to InfluxDB (other storage backends, such as Google Cloud Monitoring, are also supported).

Grafana displays the monitoring information through a data source that points to the InfluxDB above.

Deployment

The deployment is simple; execute the following command:

kubectl create -f /etc/ansible/manifests/heapster/
Verify
root@master1:~# kubectl get pods -n kube-system | grep -E "heapster|monitoring"
heapster-7f8bf9bc46-w6xbr                  1/1       Running   0          2d
monitoring-grafana-59c998c7fc-gks5j        1/1       Running   0          2d
monitoring-influxdb-565ff5f9b6-xth2x       1/1       Running   0          2d

To view logs:

kubectl logs heapster-7f8bf9bc46-w6xbr -n kube-system
kubectl logs monitoring-grafana-59c998c7fc-gks5j -n kube-system
kubectl logs monitoring-influxdb-565ff5f9b6-xth2x -n kube-system
Visit Grafana
# Get the grafana URL
root@master1:~# kubectl cluster-info | grep grafana
monitoring-grafana is running at https://192.168.16.16:8443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy

Open the link:

You can see CPU, memory, load, and other utilization graphs for each node and pod. If the utilization graphs are not visible on the dashboard, restart the dashboard pod with the following commands:

    • First scale it down: kubectl scale deploy kubernetes-dashboard --replicas=0 -n kube-system
    • Then scale it back up: kubectl scale deploy kubernetes-dashboard --replicas=1 -n kube-system

After deploying Heapster, you can also view resource usage directly with the kubectl client tool:

# View node resource usage
$ kubectl top node
# View resource usage of each pod
$ kubectl top pod --all-namespaces
