Kubernetes Master Deployment: Controller Manager Deployment (4)

The controller manager is the management control center of the cluster. It is responsible for managing cluster resources, including nodes, Pods, namespaces, and resource quotas. For example, when a node goes down unexpectedly, the controller manager promptly detects the failure and performs an automated repair.

First, deploy the k8s Controller Manager

Make sure controller-manager-key.pem and controller-manager.pem exist; the relevant keys and certificates were created in the previous article. Perform the following actions:

cd /etc/kubernetes
export KUBE_APISERVER="https://192.168.15.200:6443"
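Before generating the kubeconfig, a quick sanity check that the certificates from the previous article are in place (an optional step, assuming they live under /etc/kubernetes/ssl as in the rest of this series):

ls -l /etc/kubernetes/ssl/controller-manager.pem /etc/kubernetes/ssl/controller-manager-key.pem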

Configure cluster

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=controller-manager.conf

Configure credentials

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/ssl/controller-manager.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/controller-manager-key.pem \
  --kubeconfig=controller-manager.conf

Configure context

kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=controller-manager.conf

Configure default context

kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=controller-manager.conf
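To confirm the result, you can print the generated kubeconfig; this is an optional check, not part of the original steps:

kubectl config view --kubeconfig=controller-manager.conf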

After the controller-manager.conf file is generated, distribute it to the /etc/kubernetes directory of each master node:

scp controller-manager.conf k8s-master02:/etc/kubernetes/
scp controller-manager.conf k8s-master03:/etc/kubernetes/

Create the kube-controller-manager systemd service unit file as follows:

export KUBE_APISERVER="https://192.168.15.200:6443"
cat > /usr/lib/systemd/system/kube-controller-manager.service <<'EOF'
[Unit]
Description=kube-controller-manager
After=network.target
After=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
  --logtostderr=true \
  --v=0 \
  --master=https://192.168.15.200:6443 \
  --kubeconfig=/etc/kubernetes/controller-manager.conf \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --insecure-experimental-approve-all-kubelet-csrs-for-group=system:bootstrappers \
  --use-service-account-credentials=true \
  --service-cluster-ip-range=10.96.0.0/12 \
  --cluster-cidr=10.244.0.0/16 \
  --allocate-node-cidrs=true \
  --leader-elect=true \
  --controllers=*,bootstrapsigner,tokencleaner
Restart=on-failure
Type=simple
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
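As an optional extra step (not part of the original procedure), you can ask systemd to verify that the unit file parses cleanly before starting it:

systemd-analyze verify /usr/lib/systemd/system/kube-controller-manager.service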

Distribute to other hosts:

scp /usr/lib/systemd/system/kube-controller-manager.service k8s-master02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/kube-controller-manager.service k8s-master03:/usr/lib/systemd/system/

Start the service on each node:

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

Check the status of the service to confirm that it started normally.
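If the service does not start cleanly, the logs usually show why. Two optional checks (the second assumes kubectl on this master is already configured with admin access, as in the earlier articles):

journalctl -u kube-controller-manager -f
kubectl get componentstatuses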

Tip: Be sure to include the VIP in the certificate-authorized IP addresses defined in apiserver-csr.json. If it is missing, regenerate the certificate with cfssl, copy it into /etc/kubernetes/pki on each master node, and restart the kube-apiserver.service service. Finally, some points about kube-apiserver high availability are worth summarizing here. We made the apiserver service highly available through the Corosync + Pacemaker software; this works because the apiserver is stateless, so instances can run on different nodes at the same time. The scheduler and controller-manager, however, may only be active on one host in the cluster at a time. By simply adding the --leader-elect=true parameter, controller-manager and scheduler can be started on all masters simultaneously, and the system will automatically elect a leader (you can check which node currently holds the lease with the sketch after the list below). Concretely, if you have multiple master nodes, each running the scheduler, controller-manager, and apiserver:

    • For the scheduler service, only one master node is active at a time;
    • For the controller-manager, only one master node is active at a time;
    • For the apiserver, instances run on all master nodes at the same time, with the VIP directing client traffic to one of them.
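A minimal sketch for checking which master currently holds the controller-manager lease (this assumes the default leader-election backend of this era, which records the leader in an annotation on an Endpoints object in kube-system):

kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep control-plane.alpha.kubernetes.io/leader

The holderIdentity field in the annotation names the current leader; the same check works for kube-scheduler by substituting the endpoint name.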

Looking at the whole environment, a new problem emerges: who guarantees the availability of Pacemaker + Corosync itself? A high-availability cluster can suffer split-brain or administrator misoperation; once split-brain occurs, multiple masters may preempt the VIP and start the scheduler or controller-manager on several nodes at the same time, which can cause a system failure.
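On the Pacemaker side, the usual way to reduce the split-brain risk is to enable fencing (STONITH) and to stop resources when quorum is lost. A minimal sketch with the pcs tooling, assuming pcs manages the cluster built in the earlier articles and that a fence device appropriate to your hardware is configured:

pcs property set stonith-enabled=true
pcs property set no-quorum-policy=stop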
