First, environment preparation
1. Install and configure Docker
Kubernetes v1.11.0 is validated with Docker v1.11, v1.12, v1.13, and 17.03; it may not work properly with later Docker releases, so Docker 17.03 is used here.
# Remove any previously installed Docker and install the specified version
[root@docker-5 ~]# yum remove -y docker-ce docker-ce-selinux container-selinux
[root@docker-5 ~]# rm -rf /var/lib/docker
[root@docker-5 ~]# yum install -y --setopt=obsoletes=0 docker-ce-17.03.1.ce-1.el7.centos docker-ce-selinux-17.03.1.ce-1.el7.centos
[root@docker-5 ~]# systemctl enable docker && systemctl restart docker
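Before continuing, it is worth confirming that yum really pinned the intended release; docker version should report 17.03.1-ce and rpm -q the matching package build:

# Quick optional sanity check on the installed Docker version
docker version --format '{{.Server.Version}}'
rpm -q docker-ce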
2. Configure the Alibaba Cloud yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
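To verify the new repo is actually usable before installing anything, rebuilding the metadata cache and listing enabled repos is a quick optional check:

# Rebuild the yum metadata cache and confirm the kubernetes repo shows up
yum makecache fast
yum repolist enabled | grep -i kubernetes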
3. Install kubeadm, kubelet, and kubectl
[root@docker-5 ~]# yum install -y kubelet kubeadm kubectl
[root@docker-5 ~]# systemctl enable kubelet && systemctl start kubelet
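Note that an unpinned install pulls the newest 1.11.x packages from the repo (the kubectl output later in this walkthrough reports v1.11.1 even though the images are v1.11.0). If you want the packages to match v1.11.0 exactly, yum accepts a version-pinned form; a sketch, assuming those versions are still published in the Aliyun repo:

# Pinned install so kubelet/kubeadm/kubectl match the v1.11.0 images
yum install -y kubelet-1.11.0 kubeadm-1.11.0 kubectl-1.11.0
systemctl enable kubelet && systemctl start kubelet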
4. Configure system-related parameters
# Disable SELinux
[root@docker-5 ~]# setenforce 0
# Disable swap
[root@docker-5 ~]# swapoff -a
[root@docker-5 ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
# Stop the firewall
[root@docker-5 ~]# systemctl stop firewalld
# Configure the required kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
[root@docker-5 ~]# sysctl --system
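The net.bridge.bridge-nf-call-* keys only exist once the br_netfilter module is loaded, so if sysctl --system complains about unknown keys, loading the module first is a reasonable fix; a quick optional check:

# Load the bridge netfilter module and verify the two keys took effect
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables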
Second, master node configuration
1. Because Google's image registry (k8s.gcr.io) is not reachable from mainland China, pull the images from a mirror previously pushed to Alibaba Cloud
[root@docker-5 ~]# docker login --username=du11589 registry.cn-shenzhen.aliyuncs.com
Password: 
Login Succeeded
[root@docker-5 ~]# ./kube.sh
[root@docker-5 ~]# cat kube.sh 
#!/bin/bash
# Images needed on the master, mirrored under registry.cn-shenzhen.aliyuncs.com/duyj
images=(kube-proxy-amd64:v1.11.0 kube-scheduler-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 kube-apiserver-amd64:v1.11.0 etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.8 k8s-dns-kube-dns-amd64:1.14.8 k8s-dns-dnsmasq-nanny-amd64:1.14.8)
for imageName in ${images[@]} ; do
    # Pull from the mirror, retag under the k8s.gcr.io name kubeadm expects, then drop the mirror tag
    docker pull registry.cn-shenzhen.aliyuncs.com/duyj/$imageName
    docker tag registry.cn-shenzhen.aliyuncs.com/duyj/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-shenzhen.aliyuncs.com/duyj/$imageName
done
docker pull quay.io/coreos/flannel:v0.10.0-amd64
# Retag the pause image (ID da86e6ba6ca1) under the plain k8s.gcr.io/pause name
docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1
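kubeadm looks for these images under the k8s.gcr.io name, which is why the script retags everything after pulling from the mirror. To cross-check the list against what your kubeadm build actually expects, the kubeadm config images subcommand (the init log below mentions it) should work, assuming your build accepts the version flag:

# Print the image list kubeadm wants for this release
kubeadm config images list --kubernetes-version v1.11.0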
2. View the downloaded images
[root@docker-5 ~]# docker image ls
REPOSITORY                                 TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-controller-manager-amd64   v1.11.0         55b70b420785   4 weeks ago    155 MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.0         0e4a34a3b0e6   4 weeks ago    56.8 MB
k8s.gcr.io/kube-proxy-amd64                v1.11.0         1d3d7afd77d1   4 weeks ago    97.8 MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.0         214c48e87f58   4 weeks ago    187 MB
k8s.gcr.io/coredns                         1.1.3           b3b94275d97c   2 months ago   45.6 MB
k8s.gcr.io/etcd-amd64                      3.2.18          b8df3b177be2   3 months ago   219 MB
k8s.gcr.io/kubernetes-dashboard-amd64      v1.8.3          0c60bcf89900   5 months ago   102 MB
k8s.gcr.io/k8s-dns-sidecar-amd64           1.14.8          9d10ba894459   5 months ago   42.2 MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     1.14.8          ac4746d72dc4   5 months ago   40.9 MB
k8s.gcr.io/k8s-dns-kube-dns-amd64          1.14.8          6ceab6c8330d   5 months ago   50.5 MB
quay.io/coreos/flannel                     v0.10.0-amd64   f0fad859c909   6 months ago   44.6 MB
k8s.gcr.io/pause-amd64                     3.1             da86e6ba6ca1   7 months ago   742 kB
k8s.gcr.io/pause                           3.1             da86e6ba6ca1   7 months ago   742 kB
3. Initialize the master node
[root@docker-5 ~]# kubeadm init --kubernetes-version=v1.11.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=20.0.30.105
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0726 17:41:23.621027   65735 kernel_validator.go:81] Validating kernel version
I0726 17:41:23.621099   65735 kernel_validator.go:96] Validating kernel config
	[WARNING Hostname]: hostname "docker-5" could not be reached
	[WARNING Hostname]: hostname "docker-5" lookup docker-5 on 8.8.8.8:53: no such host
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [docker-5 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 20.0.30.105]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [docker-5 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [docker-5 localhost] and IPs [20.0.30.105 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 39.001159 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node docker-5 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node docker-5 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "docker-5" as an annotation
[bootstraptoken] using token: g80a49.qghzuffg3z58ykmv
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] applied essential addon: CoreDNS
[addons] applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 20.0.30.105:6443 --token g80a49.qghzuffg3z58ykmv --discovery-token-ca-cert-hash sha256:8ae3e31892f930ba48eb33e96a2d86c0daf2a13847f8dc009e25e200a9cee6f6
[root@docker-5 ~]#
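The bootstrap token printed above is only valid for a limited time (24 hours by default in this kubeadm generation), so nodes added later may need a fresh one. A minimal sketch, run on the master:

# Create a new token and print the complete matching join command
kubeadm token create --print-join-command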
4. Check the initialization status
[root@docker-5 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf 
[root@docker-5 ~]# kubectl get nodes
NAME       STATUS     ROLES     AGE       VERSION
docker-5   NotReady   master    35m       v1.11.1
[root@docker-5 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-99kct           0/1       Pending   0          35m
kube-system   coredns-78fcdf6894-wsf4g           0/1       Pending   0          35m
kube-system   etcd-docker-5                      1/1       Running   0          34m
kube-system   kube-apiserver-docker-5            1/1       Running   0          35m
kube-system   kube-controller-manager-docker-5   1/1       Running   0          35m
kube-system   kube-proxy-ktks6                   1/1       Running   0          35m
kube-system   kube-scheduler-docker-5            1/1       Running   0          35m
The node reports NotReady and the CoreDNS pods stay Pending because no pod network add-on has been deployed yet; the next step installs flannel.
5. Configure the master network
[root@docker-5 ~]# wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
[root@docker-5 ~]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
[root@docker-5 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-99kct           1/1       Running   0          41m
kube-system   coredns-78fcdf6894-wsf4g           1/1       Running   0          41m
kube-system   etcd-docker-5                      1/1       Running   0          40m
kube-system   kube-apiserver-docker-5            1/1       Running   0          40m
kube-system   kube-controller-manager-docker-5   1/1       Running   0          40m
kube-system   kube-flannel-ds-fmd97              1/1       Running   0          37s
kube-system   kube-proxy-ktks6                   1/1       Running   0          41m
kube-system   kube-scheduler-docker-5            1/1       Running   0          40m
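To confirm the flannel DaemonSet has finished rolling out everywhere, rather than scanning the pod list by eye, the standard rollout check works on DaemonSets too; a quick optional check:

# Wait until the flannel DaemonSet reports all pods ready
kubectl -n kube-system rollout status daemonset/kube-flannel-ds
kubectl get nodes   # docker-5 should now report Ready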
Third, add worker nodes
1. Before a node joins the cluster, complete the environment preparation from part one on that node
2. Download the images
[root@docker-2 ~]# docker login --username=du11589 registry.cn-shenzhen.aliyuncs.com
Password: 
Login Succeeded
[root@docker-2 ~]# ./nodekube.sh
[root@docker-2 ~]# cat nodekube.sh 
#!/bin/bash
# Images needed on a worker node, mirrored under registry.cn-shenzhen.aliyuncs.com/duyj
images=(kube-proxy-amd64:v1.11.0 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 heapster-influxdb-amd64:v1.3.3 heapster-grafana-amd64:v4.4.3 heapster-amd64:v1.4.2)
for imageName in ${images[@]} ; do
    docker pull registry.cn-shenzhen.aliyuncs.com/duyj/$imageName
    docker tag registry.cn-shenzhen.aliyuncs.com/duyj/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-shenzhen.aliyuncs.com/duyj/$imageName
done
docker pull quay.io/coreos/flannel:v0.10.0-amd64
# Retag the pause image under the plain k8s.gcr.io/pause name
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
3. View the downloaded images
[root@docker-2 ~]# docker image ls
REPOSITORY                              TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy-amd64             v1.11.0         1d3d7afd77d1   4 weeks ago    97.8 MB
k8s.gcr.io/kubernetes-dashboard-amd64   v1.8.3          0c60bcf89900   5 months ago   102 MB
quay.io/coreos/flannel                  v0.10.0-amd64   f0fad859c909   6 months ago   44.6 MB
k8s.gcr.io/pause-amd64                  3.1             da86e6ba6ca1   7 months ago   742 kB
k8s.gcr.io/pause                        3.1             da86e6ba6ca1   7 months ago   742 kB
k8s.gcr.io/heapster-influxdb-amd64      v1.3.3          577260d221db   months ago     12.5 MB
k8s.gcr.io/heapster-grafana-amd64       v4.4.3          8cb3de219af7   months ago     MB
k8s.gcr.io/heapster-amd64               v1.4.2          d4e02f5922ca   months ago     73.4 MB
4. Join the node to the cluster
[root@docker-2 ~]# kubeadm join 20.0.30.105:6443 --token g80a49.qghzuffg3z58ykmv --discovery-token-ca-cert-hash sha256:8ae3e31892f930ba48eb33e96a2d86c0daf2a13847f8dc009e25e200a9cee6f6
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
I0726 19:17:28.277627   36641 kernel_validator.go:81] Validating kernel version
I0726 19:17:28.277705   36641 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "20.0.30.105:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://20.0.30.105:6443"
[discovery] Requesting info from "https://20.0.30.105:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "20.0.30.105:6443"
[discovery] Successfully established connection with API Server "20.0.30.105:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "docker-2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
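The IPVS warning above is harmless here (kube-proxy simply falls back to iptables mode), but loading the modules the preflight check names before running kubeadm join silences it; a minimal sketch:

# Load the IPVS-related kernel modules the preflight check looks for
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe $mod
done
lsmod | grep -E 'ip_vs|nf_conntrack_ipv4'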
5. View node join status on master
[root@docker-5 ~]# kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
docker-2   Ready     <none>    3m        v1.11.1
docker-5   Ready     master    4h        v1.11.1
[root@docker-5 ~]# kubectl get pods -n kube-system -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP            NODE
coredns-78fcdf6894-99kct           1/1       Running   0          4h        10.244.0.2    docker-5
coredns-78fcdf6894-wsf4g           1/1       Running   0          4h        10.244.0.3    docker-5
etcd-docker-5                      1/1       Running   0          4h        20.0.30.105   docker-5
kube-apiserver-docker-5            1/1       Running   0          4h        20.0.30.105   docker-5
kube-controller-manager-docker-5   1/1       Running   0          4h        20.0.30.105   docker-5
kube-flannel-ds-c7rb4              1/1       Running   0          7m        20.0.30.102   docker-2
kube-flannel-ds-fmd97              1/1       Running   0          3h        20.0.30.105   docker-5
kube-proxy-7tmtg                   1/1       Running   0          7m        20.0.30.102   docker-2
kube-proxy-ktks6                   1/1       Running   0          4h        20.0.30.105   docker-5
kube-scheduler-docker-5            1/1       Running   0          4h        20.0.30.105   docker-5
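As a final smoke test (optional; the deployment name here is made up for illustration), scheduling a couple of pods confirms the new node accepts workloads and gets flannel addresses:

# In v1.11, `kubectl run` still creates a Deployment by default
kubectl run nginx-test --image=nginx --replicas=2
kubectl get pods -o wide      # expect pods spread across docker-2 and docker-5
kubectl delete deployment nginx-test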