Package Address: http://sealyun.com/pro/products/
- On the master: `cd shell && sh init.sh && sh master.sh`
- On each node: `cd shell && sh init.sh`
- Then run the join command printed by the master on each node (if you have lost that command, regenerate it on the master with `kubeadm token create --print-join-command`)
This release adds crictl, without which kubeadm cannot install the cluster.
New in this release: IPVS mode. To enable it:

Enabling IPVS in Kubernetes

First make sure the kernel has the IPVS modules loaded:
```
# lsmod | grep ip_vs
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  16
ip_vs                 141092  23 ip_vs_rr,ip_vs_sh,xt_ipvs,ip_vs_wrr
nf_conntrack          133387  9 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  3 ip_vs,nf_nat,nf_conntrack
```
If the modules are not loaded, load them:
```
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
```
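These `modprobe` calls do not survive a reboot. A minimal sketch for making them persistent (assumption: a systemd distribution, where `systemd-modules-load` reads `/etc/modules-load.d/` at boot). The demo writes to `/tmp` so it can run anywhere; on a real node, install the file as root at `/etc/modules-load.d/ipvs.conf`:

```shell
# Generate a modules-load fragment listing the IPVS modules, one per line.
# Demo target is /tmp/ipvs.conf; on a real node write it (as root) to
# /etc/modules-load.d/ipvs.conf so the modules are loaded at every boot.
modules="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"
for mod in $modules; do
  echo "$mod"
done > /tmp/ipvs.conf
```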
On Kubernetes 1.10 or later installed with kubeadm, simply edit the kube-proxy ConfigMap:

```
kubectl edit configmap kube-proxy -n kube-system
```

```yaml
ipvs:
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"    # add this
nodePortAddresses: null
```
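Editing the ConfigMap alone does not change pods that are already running. A minimal sketch for forcing a reload (assumption: the pods carry the default kubeadm label `k8s-app=kube-proxy`): delete them and let the DaemonSet recreate them with the new mode.

```shell
# Recreate the kube-proxy pods so they pick up mode: "ipvs".
# Assumption: kube-proxy pods carry the default kubeadm label
# k8s-app=kube-proxy; the DaemonSet reschedules them immediately.
restart_kube_proxy() {
  kubectl -n kube-system delete pod -l k8s-app=kube-proxy
}
```

Run `restart_kube_proxy` (or the `kubectl delete` line directly) on the master after saving the ConfigMap.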
If the kube-proxy pod logs contain the following, IPVS mode is active:
```
# kubectl logs kube-proxy-72lg9 -n kube-system
I0530 03:38:11.455609       1 feature_gate.go:226] feature gates: &{{} map[]}
I0530 03:38:11.490470       1 server_others.go:183] Using ipvs Proxier.
W0530 03:38:11.503868       1 proxier.go:304] IPVS scheduler not specified, use rr by default
I0530 03:38:11.504109       1 server_others.go:209] Tearing down inactive rules.
I0530 03:38:11.552587       1 server.go:444] Version: v1.10.3
```
Install the ipvsadm tool:

```
yum install -y ipvsadm
```
Check the IPVS rules for the services:
```
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.31.244.239:32000 rr
  -> 192.168.77.9:8443            Masq    1      0          0
TCP  172.31.244.239:32001 rr
  -> 192.168.77.8:3000            Masq    1      0          0
TCP  10.96.0.1:443 rr persistent 10800
  -> 172.31.244.239:6443          Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 192.168.77.7:53              Masq    1      0          0
  -> 192.168.77.10:53             Masq    1      0          0
TCP  10.96.82.0:80 rr
  -> 192.168.77.8:3000            Masq    1      0          0
TCP  10.96.152.25:8086 rr
  -> 192.168.77.12:8086           Masq    1      0          0
TCP  10.96.232.136:6666 rr
```
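Each `TCP ...` line in that output is an IPVS virtual server (one per Service port or NodePort), and each indented `->` line is a real-server backend. As a quick sanity check you can count the virtual servers; the sketch below embeds a two-entry excerpt of the output so the parsing is self-contained, but on a real node you would pipe in `ipvsadm -ln` instead:

```shell
# Count IPVS virtual servers in ipvsadm -ln output.
# Sample data is a two-entry excerpt of the output shown above;
# on a real node use: ipvs_output=$(ipvsadm -ln)
ipvs_output='TCP  172.31.244.239:32000 rr
  -> 192.168.77.9:8443            Masq    1      0          0
TCP  10.96.0.1:443 rr persistent 10800
  -> 172.31.244.239:6443          Masq    1      0          0'
# Virtual-server lines start with the protocol; backends start with "->".
printf '%s\n' "$ipvs_output" | awk '/^(TCP|UDP)/ {n++} END {print n, "virtual servers"}'
```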
You can see that the dashboard and DNS services have been configured. Verify the dashboard with:
```
# wget https://172.31.244.239:32000 --no-check-certificate
--2018-05-30 16:17:15--  https://172.31.244.239:32000/
Connecting to 172.31.244.239:32000... connected.
WARNING: cannot verify 172.31.244.239's certificate, issued by '/CN=.': Self-signed certificate encountered.
WARNING: certificate common name '.' doesn't match requested host name '172.31.244.239'.
HTTP request sent, awaiting response... 200 OK
Length: 990 [text/html]
Saving to: 'index.html'

100%[==========================================================>] 990  --.-K/s   in 0s

2018-05-30 16:17:15 (16.3 MB/s) - 'index.html' saved [990/990]
```
Yes, it's all OK.
IPVS mode is highly recommended. When iptables mode breaks it is hard to debug, its performance drops badly once there are many rules, and we have even seen rules get lost; IPVS is far more stable.
Kubernetes 1.11.0 installation tutorial: opening the IPVS era.