Kubernetes worker nodes run the following components:
- kubelet
- kube-proxy
- docker-ce
- flanneld
Install and configure docker-ce
Uninstall old versions
yum remove docker docker-common docker-selinux docker-engine -y
ansible k8s-node -a 'yum remove docker docker-common docker-selinux docker-engine -y'
Install Docker CE
# install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
ansible k8s-node -a 'yum install -y yum-utils device-mapper-persistent-data lvm2'
# set up the stable repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
ansible k8s-node -a 'yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo'
# switch the mirror to Aliyun
sed -i 's@https://download.docker.com/@https://mirrors.aliyun.com/docker-ce/@g' /etc/yum.repos.d/docker-ce.repo
ansible k8s-node -a "sed -i 's@https://download.docker.com/@https://mirrors.aliyun.com/docker-ce/@g' /etc/yum.repos.d/docker-ce.repo"
# install docker-ce
yum install docker-ce -y
ansible k8s-node -a 'yum install docker-ce -y'
Some custom Docker configuration
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["http://1bdb58cb.m.daocloud.io"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
# batch configuration
ansible k8s-node -a 'mkdir -p /etc/docker'
tee daemon.json <<-'EOF'
{
  "registry-mirrors": ["http://1bdb58cb.m.daocloud.io"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
ansible k8s-node -m copy -a 'src=/root/daemon.json dest=/etc/docker/daemon.json'
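A malformed daemon.json will prevent the Docker daemon from starting, so it is worth validating the JSON syntax before distributing the file. A minimal sketch, assuming python3 is available on the control host (the /tmp path is only for illustration):

```shell
# write the daemon.json used above to a scratch path (illustrative)
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["http://1bdb58cb.m.daocloud.io"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
# json.tool exits non-zero on malformed JSON, so this only
# prints OK when the file parses cleanly
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
# → daemon.json OK
```

Run this check before the `ansible ... -m copy` step so a typo never reaches the nodes.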
Start the Docker service
ansible k8s-node -m systemd -a 'daemon-reload=yes enabled=yes name=docker state=started'
Deploying kubelet
kubelet official documentation
Kubelet TLS bootstrapping kubeconfig
RBAC authorization
When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. Before this can succeed, the kubelet-bootstrap user from the bootstrap token file must be bound to the system:node-bootstrapper cluster role, which gives kubelet permission to create certificate signing requests (certificatesigningrequests).
The following two commands can be executed anywhere kubectl is configured.
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
--user=kubelet-bootstrap is the user name specified in the file /etc/kubernetes/token.csv, and it is also written into the file /etc/kubernetes/bootstrap.kubeconfig
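For reference, a bootstrap token file follows the standard kubelet bootstrapping format of `token,user,uid,"groups"`, with the user name matching --user above. A sketch of generating one (the /tmp path and the uid/group values are the conventional ones from this style of deployment, not taken from this cluster):

```shell
# generate a random 32-hex-char bootstrap token
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# write the token file; in this deployment it would live at
# /etc/kubernetes/token.csv on the masters
cat > /tmp/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cat /tmp/token.csv
```

The second field is exactly the --user=kubelet-bootstrap name bound to system:node-bootstrapper above.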
There is also a need to create an RBAC authorization rule for node requests:
kubectl create clusterrolebinding kubelet-nodes --clusterrole=system:node --group=system:nodes
Distributing Kubelet binary files
ansible k8s-node -m copy -a 'src=/usr/local/src/kubernetes/server/bin/kubelet dest=/usr/local/kubernetes/bin/kubelet mode=0755'
Create a kubelet systemd unit file
Create a working directory
mkdir /var/lib/kubelet
mkdir /var/log/kubernetes/kubelet -p
ansible k8s-node -m file -a 'path=/var/lib/kubelet state=directory'
ansible k8s-node -m file -a 'path=/var/log/kubernetes/kubelet state=directory'
Installing Conntrack
ansible k8s-node -a 'yum install conntrack -y'
Hosts
ansible k8s -m copy -a 'src=/etc/hosts dest=/etc/hosts'
systemd unit file
If you start kubelet on a master, change node-role.kubernetes.io/k8s-node=true to node-role.kubernetes.io/k8s-master=true
cat > /root/k8s-node/systemd/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/kubernetes/bin/kubelet \
  --address=192.168.16.238 \
  --hostname-override=k8s-n1-16-238 \
  --node-labels=node-role.kubernetes.io/k8s-node=true \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --cluster-dns=10.254.0.2 \
  --cluster-domain=dns.kubernetes \
  --hairpin-mode promiscuous-bridge \
  --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true \
  --fail-swap-on=false \
  --cgroup-driver=cgroupfs \
  --allow-privileged=true \
  --pod-infra-container-image=clouding/pause-amd64:3.0 \
  --serialize-image-pulls=false \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes/kubelet/ \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
- --address cannot be set to 127.0.0.1, otherwise Pods will later fail to access the kubelet API, because for a Pod 127.0.0.1 points to the Pod itself rather than to the kubelet
- If the --hostname-override option is set, kube-proxy must set the same option, otherwise the Node will not be found
- --bootstrap-kubeconfig points to the bootstrap kubeconfig file; kubelet uses the user name and token in this file to send the TLS bootstrapping request to kube-apiserver
- After the administrator approves the CSR request, kubelet automatically creates the certificate and private key files (kubelet-client.crt and kubelet-client.key) in the --cert-dir directory and then writes the file specified by --kubeconfig (that kubeconfig is created automatically)
- It is recommended to specify the kube-apiserver address in the --kubeconfig configuration file. If the --api-servers option is not specified, the --require-kubeconfig option must be set (this option was deprecated in 1.10+) so that the kube-apiserver address is read from the configuration file; otherwise kubelet will not find kube-apiserver after startup ("no API Server found" in the log) and kubectl get nodes will not return the corresponding Node information
- --cluster-dns specifies the service IP of kube-dns (it can be assigned in advance and is then used when the kube-dns service is created), and --cluster-domain specifies the domain name suffix; they only take effect when both parameters are specified
Distribute the unit file, then adjust the IP and hostname on each node individually
ansible k8s-node -m copy -a 'src=/root/k8s-node/systemd/kubelet.service dest=/usr/lib/systemd/system/kubelet.service'
Start the Kubelet service
ansible k8s-node -m systemd -a 'daemon-reload=yes enabled=yes name=kubelet state=started'
Approving kubelet TLS certificate requests
On first boot, kubelet sends a certificate signing request to kube-apiserver; the request must be approved before the Node can join the cluster.
View the unapproved CSR requests:
> kubectl get csr
NAME                                                   AGE  REQUESTOR          CONDITION
node-csr--vERPmYzSaAZqezwWDKoeyyXjK6KvVHAf5e1SQdHPZo   42s  kubelet-bootstrap  Pending
node-csr-1nFaIXpMrQ8TS_jAZFrCz86-lRsiYYWVbvynKsq6ebg   10m  kubelet-bootstrap  Pending
node-csr-6clvNX325wgtNd5UPjq8yMAImKp4Qa8XeSypVRK2bqU   40s  kubelet-bootstrap  Pending
node-csr-Ff-BmDSdgIF0Riyk0krAT0Bll_u5P4TLNbRU7HZ3T3M   41s  kubelet-bootstrap  Pending
node-csr-WWh63mfVRUOflQAnGPIfnTFro2hkswOL3RGy9P9vaVU   1m   kubelet-bootstrap  Pending
node-csr-ZAKQ_kY84ORptLMMIJPHu12BraxOLBMFJ33wj_mLM9Q   10m  kubelet-bootstrap  Pending
node-csr-vKRJanqdwG9TPXtY1x5e6KP0DJ5XvCWbr7e1tQb0-10   41s  kubelet-bootstrap  Pending
Approve a CSR request:
kubectl certificate approve node-csr--vERPmYzSaAZqezwWDKoeyyXjK6KvVHAf5e1SQdHPZo
certificatesigningrequest.certificates.k8s.io "node-csr-XeGvv-LBiJ_Q-WXtSCQV3nTIMP6B_L6o69EOIH2utY0" approved
All pending requests can be approved with one command: kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
Clear all approved requests: kubectl get csr | grep Approved | awk '{print $1}' | xargs kubectl delete csr
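The field extraction in the one-liners above can be sanity-checked locally: the awk '{print $1}' step keeps only the CSR names before handing them to xargs. A quick demonstration against simulated kubectl get csr output (the CSR names below are made up for the example):

```shell
# simulate two lines of `kubectl get csr` output (illustrative names),
# keep only Pending rows, then extract the first column (the CSR name)
printf 'node-csr-aaa  42s  kubelet-bootstrap  Pending\nnode-csr-bbb  10m  kubelet-bootstrap  Approved\n' \
  | grep Pending \
  | awk '{print $1}'
# → node-csr-aaa
```

Only the Pending row survives the grep, and awk strips everything but the name, which is exactly what xargs passes on to kubectl certificate approve.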
The kubelet kubeconfig file and the public/private key files are generated automatically:
ls /etc/kubernetes/kubelet.kubeconfig
ls /etc/kubernetes/ssl
ca-key.pem  ca.pem  kubelet-client.crt  kubelet-client.key  kubelet.crt  kubelet.key
View the nodes:
> kubectl get node
NAME            STATUS  ROLES     AGE  VERSION
k8s-n1-16-238   Ready   k8s-node  12m  v1.10.3
k8s-n2-16-239   Ready   k8s-node  7m   v1.10.3
k8s-n3-16-240   Ready   k8s-node  12m  v1.10.3
k8s-n4-16-241   Ready   k8s-node  12m  v1.10.3
k8s-n5-16-242   Ready   k8s-node  12m  v1.10.3
k8s-n6-16-243   Ready   k8s-node  7m   v1.10.3
k8s-n7-16-244   Ready   k8s-node  7m   v1.10.3
Deploying kube-proxy + ipvs
Install the related packages
yum install conntrack-tools ipvsadm -y
ansible k8s-node -a 'yum install conntrack-tools ipvsadm -y'
Distributing the kube-proxy binary
ansible k8s-node -m copy -a 'src=/usr/local/src/kubernetes/server/bin/kube-proxy dest=/usr/local/kubernetes/bin/kube-proxy mode=0755'
Create a kube-proxy systemd unit file
Create a working directory
mkdir -p /var/lib/kube-proxy
ansible k8s-node -m file -a 'path=/var/lib/kube-proxy state=directory'
Create log directory
mkdir /var/log/kubernetes/kube-proxy
ansible k8s-node -m file -a 'path=/var/log/kubernetes/kube-proxy state=directory'
systemd unit
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/kubernetes/bin/kube-proxy \
  --bind-address=192.168.16.238 \
  --hostname-override=k8s-n1-16-238 \
  --cluster-cidr=10.254.0.0/16 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes/kube-proxy/ \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
- The --hostname-override value must match kubelet's, otherwise kube-proxy will not find the Node after startup and will not create any iptables rules
- --cluster-cidr must be consistent with kube-apiserver's --service-cluster-ip-range option value
- kube-proxy classifies traffic as internal or external to the cluster according to --cluster-cidr; once --cluster-cidr or --masquerade-all is specified, kube-proxy applies SNAT to requests that access a Service IP
- The configuration file specified by --kubeconfig embeds the kube-apiserver address, user name, certificate, key, and other request and authentication information
- A predefined ClusterRoleBinding binds the user system:kube-proxy to the ClusterRole system:node-proxier, which grants permission to call kube-apiserver's proxy-related APIs
Distribute
ansible k8s-node -m copy -a 'src=/root/kube-proxy.service dest=/usr/lib/systemd/system/kube-proxy.service' # remember to adjust the IP and hostname on each node one by one
Start Kube-proxy
ansible k8s-node -m systemd -a 'daemon-reload=yes enabled=yes name=kube-proxy state=started'
Check ipvs
# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr persistent 10800
  -> 192.168.16.235:6443          Masq    1      0          0
  -> 192.168.16.236:6443          Masq    1      0          0
  -> 192.168.16.237:6443          Masq    1      0          0
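With the rr scheduler, new connections to 10.254.0.1:443 are rotated across the apiserver backends, so each master should appear as one "->" real-server line. A quick sanity check is to count those lines; the sketch below parses the sample output shown above (on a node you would pipe ipvsadm -L -n instead of the here-string):

```shell
# sample ipvsadm listing for the kubernetes service (from above)
sample='TCP  10.254.0.1:443 rr persistent 10800
  -> 192.168.16.235:6443          Masq    1      0          0
  -> 192.168.16.236:6443          Masq    1      0          0
  -> 192.168.16.237:6443          Masq    1      0          0'
# count real-server lines; expect one per kube-apiserver
echo "$sample" | grep -c -- '->'
# → 3
```

If the count is lower than the number of masters, one of the apiservers has dropped out of the virtual service.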
[k8s Cluster Series-06] Kubernetes Node Deployment