Kubernetes (k8s) Installation and Deployment Process (VI) -- Node Deployment

Source: Internet
Author: User
Tags: k8s

Hi everybody, I'm back. After the flannel installation article this series went quiet for a while, and quite a few readers even added me on QQ to ask whether I would keep updating.

The reason is that while deploying a 1.9.1 node I ran into all sorts of problems that kept the node from ever showing up on the master. Roughly, the problems were:

1. Time was not synchronized between the virtual machines, which caused etcd resource creation to fail.

2. The node could not automatically create kubelet.kubeconfig. This was the most serious problem: the config file had not been copied into the node's /etc/kubernetes folder. When kubelet starts, its service file references the config file (see the kubelet service file configuration below), and kubelet.kubeconfig is then generated automatically. If it is not generated, check all configuration parameters and the error logs, especially the config and kubelet files.

3. The config file copied from the master cannot be used as-is; you must change the master address inside it. The apiserver's startup parameters bind the secure address to 10.10.90.105:6443 and the insecure address to 127.0.0.1:8080. Put simply, 6443 is the secure port, but it only listens on the master's IP 10.10.90.105, so in the node's config file the master address must be changed to 10.10.90.105:6443.

If your master machine also acts as a node (I tested this setup), then on that machine the config file can only use 127.0.0.1:8080; 6443 does not work. In other words, local access and remote access to the apiserver use different addresses; otherwise the logs fill with errors about being unable to connect to the API. Also note that if the master is reused as a node, you need to restart the scheduler and controller-manager services.
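As an illustration, the master-address line in the node's /etc/kubernetes/config could look like the sketch below. 10.10.90.105 is this article's master IP, KUBE_MASTER follows the standard config layout used earlier in the series, and whether the https:// scheme prefix is required depends on your setup:

```
# On a pure node: point at the master's secure port
KUBE_MASTER="--master=https://10.10.90.105:6443"

# On a machine that is both master and node: only the local insecure port works
#KUBE_MASTER="--master=http://127.0.0.1:8080"
```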

4. "Failed at step CHDIR spawning /usr/local/bin/kubelet: No such file or directory" means the /var/lib/kubelet folder has not been created.

5. During configuration the firewall and SELinux must be disabled; make sure these services do not start again automatically when the virtual machine reboots.

6. Since 1.8, the kubelet configuration file no longer needs the --api-servers parameter; please comment it out!

7. Comment out the swap partition in /etc/fstab, then restart the virtual machine and all services.
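Point 7 in practice usually amounts to the commands below. This is a sketch: run as root, and double-check that the sed pattern actually matches the swap line in your own /etc/fstab before using it.

```
swapoff -a                             # disable swap immediately
sed -i '/ swap / s/^/#/' /etc/fstab    # comment out the swap entry so it stays off after reboot
free -m                                # the Swap row should now read 0
```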

8. Node deployment involves modifying the Docker service file, which is error-prone; I cover it below.

Notes on the master articles:

The earlier master articles had configuration problems (a different apiVersion was used, and the node authentication method was missing), which caused many issues. I have since corrected those articles, so please review the corrections and restart the apiserver service. Also, some kubelet startup parameters changed slightly between 1.8 and 1.9; if you find an "unknown flag" error in /var/log/messages, a parameter is not being recognized, and you can compare against my articles to see which parameters differ.

OK, the above are just the pitfalls I still remember; for anything else, check the log files for errors.

Without further ado, let's configure the node. Once again, thank you all for your support and patience!

1. First, check that the configuration files and SSL certificates on both nodes are complete.

Note that several kubelet files under the SSL directory are generated automatically.

2. Configure the Docker service file

Because Docker needs to work together with flannel, the Docker systemd service file must be modified. We installed the flannel plugin earlier via yum, so the modification is as follows:

In the Docker service file /usr/lib/systemd/system/docker.service, add an environment-file line: EnvironmentFile=-/run/flannel/docker

Also add the startup parameter --exec-opt native.cgroupdriver=systemd. The systemd here must match the --cgroup-driver setting in the kubelet configuration file, otherwise kubelet fails to start.
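Putting the two changes together, the [Service] section of /usr/lib/systemd/system/docker.service ends up looking roughly like this sketch; your other ExecStart flags may differ, and $DOCKER_NETWORK_OPTIONS is the variable the yum-installed flannel typically exports in /run/flannel/docker:

```
[Service]
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS --exec-opt native.cgroupdriver=systemd
```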

After modifying the configuration parameters, restart the Docker service:

systemctl restart docker

3. Install and configure kubelet

kubelet is the tool installed on the node. It can be found in the bin directory of the server package we downloaded earlier; kube-proxy is also needed. Upload both binaries to /usr/local/bin and give them executable permissions.

Note: the swap partition must be commented out and the server restarted.

Before configuring the node, run the following on the master node to create the authentication role:

cd /etc/kubernetes
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

After it is created successfully, we return to the node:

With the binaries in place, we start configuring the corresponding service files.

Add the kubelet configuration file:

cd /etc/kubernetes
cat > kubelet <<EOF
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=10.10.90.106"

# The port for the info server to serve on
#KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=10.10.90.106"

# location of the api-server
# COMMENT THIS ON KUBERNETES 1.8+
#KUBELET_API_SERVER="--api-servers=http://172.20.0.113:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=pause-amd64:3.0"

# Add your own!
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
EOF

Description: the IP addresses above are the node's own IP; adjust them on each node accordingly. Note that KUBELET_API_SERVER is no longer used since 1.8, so comment it out.
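Since each node's config differs only by its IP, one way to avoid copy-paste mistakes is to generate the per-node values from a single variable. A small sketch (it writes to /tmp so it can be tried anywhere; on a real node the target would be /etc/kubernetes/kubelet):

```shell
# Generate the node-specific kubelet settings from one variable
NODE_IP=10.10.90.106   # this node's IP (example value from this article)
cat > /tmp/kubelet.env <<EOF
KUBELET_ADDRESS="--address=${NODE_IP}"
KUBELET_HOSTNAME="--hostname-override=${NODE_IP}"
EOF
cat /tmp/kubelet.env
```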

KUBELET_POD_INFRA_CONTAINER is the base image every pod runs on, and it must exist. Here I point it at a local image; you can pull it with:

docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

and then tag it locally for convenient use. You can also use any other public pod base image with an online address, as long as it is not blocked:

docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 pause-amd64:3.0

Add the kubelet service file /usr/lib/systemd/system/kubelet.service with the following content:

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Create the working directory (kubelet fails to start without it):

mkdir /var/lib/kubelet

Start kubelet:

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

4. Approve the node's request

After startup, kubelet automatically sends a certificate-signing request to the master node. We then operate on the master:

kubectl get csr                          # lists all requests; those in Pending state need approval
kubectl certificate approve <node-name>  # approves a request

Mine were already approved, showing the Approved,Issued status, which is normal.

Command extensions:

kubectl delete csr <node-name>    # delete a single node's request
kubectl delete csr --all          # delete all node requests
kubectl delete nodes <node-name>  # delete a joined node
kubectl delete nodes --all        # delete all nodes

5. Configure Kube-proxy Service

First install the conntrack tool (honestly I'm not sure exactly what it does here):

yum install -y conntrack-tools

Create the kube-proxy service file at /usr/lib/systemd/system/kube-proxy.service with this content:

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Add the configuration file /etc/kubernetes/proxy with this content:

# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=10.10.90.106 --hostname-override=10.10.90.106 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

Change the IPs to the local node's IP.

Precautions:

The --hostname-override value must be the same as kubelet's, otherwise kube-proxy will not find the Node after starting and no iptables rules will be created.
kube-proxy uses --cluster-cidr to distinguish traffic inside the cluster from traffic outside it; requests to Service IPs are SNATed only when --cluster-cidr or --masquerade-all is set.
The kubeconfig specified by --kubeconfig embeds the kube-apiserver address, username, certificate, key, and other request and authentication information.
A predefined RoleBinding binds the user system:kube-proxy to the role system:node-proxier, granting it permission to call the proxy-related kube-apiserver APIs.
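A quick sanity-check sketch for the first precaution: extract --hostname-override from both env files and compare them. Sample contents are inlined here so the snippet is self-contained; on a real node you would read /etc/kubernetes/kubelet and /etc/kubernetes/proxy instead.

```shell
# Sample contents standing in for the two real config files
kubelet_args='KUBELET_ARGS="--hostname-override=10.10.90.106 --cgroup-driver=systemd"'
proxy_args='KUBE_PROXY_ARGS="--hostname-override=10.10.90.106 --cluster-cidr=10.254.0.0/16"'
# Pull out the hostname-override value from each
a=$(echo "$kubelet_args" | grep -o 'hostname-override=[^ "]*')
b=$(echo "$proxy_args"   | grep -o 'hostname-override=[^ "]*')
# They must be identical, or the node gets no iptables rules
[ "$a" = "$b" ] && echo "hostname-override values match" || echo "MISMATCH: $a vs $b"
```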

Start the kube-proxy service:

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

6. Verification Test

We can create an Nginx deployment to verify that the cluster is healthy:

# delete cleans up all unused pods, services, and deployments -- do not run these unless you really want everything removed
kubectl delete pods --all
kubectl delete service --all
kubectl delete deployment --all

Here we test an nginx cluster deployment:

# 1. Define the deployment and start it
kubectl run nginx --replicas=3 --labels="run=load-balancer-example" --image=nginx --port=80

# 2. Define the cluster service
kubectl expose deployment nginx --type=NodePort --name=example-service

# 3. View service information
kubectl describe svc example-service

# 4. Check pod status; all should be Running, otherwise use
#    kubectl describe pods <problem-pod-name> to see the specific error
kubectl get pods

From other machines, the service is reachable via any node's IP plus the NodePort; other service types are also available to choose from when creating the service.
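Concretely, the check from another machine looks like the sketch below. The NodePort 31234 is purely illustrative; read the real one from the 80:3xxxx/TCP column of kubectl get svc:

```
kubectl get svc example-service   # note the mapping, e.g. 80:31234/TCP
curl http://10.10.90.106:31234    # any node IP + the NodePort returns the nginx welcome page
```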

