"calico-node" createdclusterrolebinding "calico-node" created
Configure Calico

$ vim calico.yaml

data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://10.3.1.15:2379,https://10.3.1.16:2379,https://10.3.1.17:2379"
  # If you're using TLS-enabled etcd, uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/
selected does not respond, the iptables proxy cannot automatically retry another pod, so it depends on having a working readiness probe.
6. Kube-dns Introduction

Kube-dns assigns DNS names to Kubernetes services so that they can be accessed by name within the cluster. Typically kube-dns gives each service an A record of the form "service-name.namespace.svc.cluster.local", which resolves to the service's ClusterIP.
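As a sketch of that record format, here is how the fully qualified name is composed; the service name "web" and namespace "prod" are made-up examples, while the ".svc.cluster.local" suffix is the standard default cluster domain:

```shell
# Hypothetical service and namespace; the record pattern itself is standard.
svc=web
ns=prod
fqdn="${svc}.${ns}.svc.cluster.local"
echo "$fqdn"
# Inside a pod, this name resolves to the service's ClusterIP, e.g.:
#   nslookup "$fqdn"
```

Thanks to the search domains kube-dns places in each pod's /etc/resolv.conf, the short forms "web.prod" and (from pods in the same namespace) just "web" also resolve.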
Kube-dns components:
Kube-apiserver: the entry point for the entire system; it provides its interfaces as REST API services.
Kube-controller-manager: executes background tasks throughout the system, including tracking node status, the number of Pods, and the associations between Pods and services.
Kube-scheduler (schedules Pods onto Nodes): responsible for Node resource management. It accepts Pod-creation tasks from kube-apiserver and assigns each Pod to a Node.
Etcd: responsible for service discovery and configuration sharing.
the entry point of the Kubernetes system: it encapsulates the create, read, update, and delete operations on core objects, exposing them through a RESTful interface to external clients and internal components. The REST objects it maintains are persisted to etcd, a distributed, strongly consistent key/value store.
Scheduler: responsible for cluster resource scheduling, assigning a machine to each newly created Pod.
address the operational automation of the production environment, followed by the container build problem (i.e., CI/CD). Our network choice is flannel over a 10-gigabit network; although flannel has some performance loss, it more than meets our actual needs. For storage we use Ceph's RBD; after more than a year, the RBD setup has proven very stable. We have tried CephFS, but it has not been put into formal use due to limited team capacity and potential risks.
Highly Available Infrastructure

Container
Deploying the DNS Service in a Kubernetes Cluster

In Kubernetes, each service is assigned a virtual IP which, under normal circumstances, does not change for a long time; compared with the ephemeral IPs of pods, this gives applications in the cluster relative stability. However, service information is currently injected into pods via environment variables, which depends heavily on the creation order of the pod (RC) and the service.
The concepts in Kubernetes such as Node, Pod, Replication Controller, and Service can all be considered "resource objects". Almost all resource objects can be created, deleted, modified, and queried through the kubectl tool (API calls), and they are saved in etcd for persistent storage. From this point of view, Kubernetes is actually a highly automated resource control system.
Functional Components

As shown in the cluster architecture diagram in the official documentation (http://resource.docker.cn/architecture.png), this is a typical master/slave model. The Master runs three components:
Apiserver: the entry point of the Kubernetes system; it encapsulates the create, read, update, and delete operations on core objects, exposing them to external clients and internal components through a RESTful interface.
Kubernetes cluster configuration notes
This article describes how to configure a Kubernetes cluster. A Kubernetes cluster consists of a master node and slave nodes.
Run the following services on the Master node:
Etcd (the etcd service can also run independently; it does not have to be on the Master node)
Kube-apiserver
Kube-
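As a sketch, those master-side services can be enabled in one loop. The list below assumes the usual remaining master components (kube-controller-manager and kube-scheduler), which are a completion on my part, not stated above; the loop only prints the commands so you can review them before running:

```shell
# Assumed master-side systemd units; adjust names to match your packages.
master_services="etcd kube-apiserver kube-controller-manager kube-scheduler"
for svc in $master_services; do
  echo "systemctl enable --now $svc"   # print first; pipe to sh to execute
done
```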
# Back up etcd data
etcdctl backup --data-dir /var/lib/etcd/default.etcd --backup-dir /root/etcd71

The etcd backup uses the etcdctl command; the script is as follows:

#!/bin/bash
date_time=$(date +%y%m%d)
etcdctl backup --data-dir /var/lib/
This article describes how to quickly deploy a set of kubernetes clusters, so let's get started quickly!
Preparatory Work

# Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service

# Disable SELinux: edit /etc/selinux/config and set
SELINUX=disabled

Machine Deployment Planning
Host                  IP               Components Deployed
Master (master node)  192.168.199.206  ETCD, Kube
apiserver

KUBE_MASTER="--master=http://192.168.5.221:8080"

Five. Disable the Firewall

systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld

Six. Configuring the Kubernetes Services on the Master Node

Modify the configuration file /etc/etcd/etcd.conf to make sure etcd listens on all addresses. Modify the following:

ETCD_NAME=default
ETCD_D
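As a sketch, the relevant etcd.conf lines might end up looking like this. The data directory and port 2379 are the standard defaults; the advertise IP reuses the master address from above and is otherwise an assumption:

```shell
# /etc/etcd/etcd.conf (illustrative values)
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"          # listen on all addresses
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.5.221:2379"
```

After editing, restart the service with: systemctl restart etcd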
-subnet isolation, network auditing/firewalls and security groups
Next we look at the details of each function point.

Deep Customization of the Kubernetes Kernel

Based on deep customization of the Kubernetes kernel, an Ecos-Kubernetes platform cluster includes three roles: master, etcd, and node.
MASTER role: as the host node of the cluster, it runs a collection of three
Refer to my previous article, which introduces a key issue in an etcd cluster environment:
Which of the three etcd nodes should a client access?
(1) Read operations can be performed against any of the three nodes, even one that is not the leader.
(2) For write operations, it appears that data can only be written through the leader.
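To make that concrete, here is a sketch using the etcd v2 HTTP API. Only the 4001 port appears in the original text; the 4002 and 4003 client ports are assumptions to round out the three-node example:

```shell
# Join the member client URLs into a single --endpoints value.
members="http://127.0.0.1:4001 http://127.0.0.1:4002 http://127.0.0.1:4003"
endpoints=$(echo "$members" | tr ' ' ',')
echo "$endpoints"
# Reads may be served by any member, leader or not:
#   curl http://127.0.0.1:4002/v2/keys/foo
# Writes are committed through the leader; passing every endpoint lets
# etcdctl locate (or be redirected to) the leader:
#   etcdctl --endpoints "$endpoints" set foo bar
```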
I have a cluster composed of three nodes (127.0.0.1:4001, 127.0.0.1:
I bought a new laptop and reinstalled etcd, so I am recording the steps here.
All three systems are CentOS 7.3 virtual machines; the IP addresses are 192.168.23.128-130.
Here is a somewhat clumsy approach, suitable for beginners.
Installation steps:
1) yum install -y etcd
2) Modify the 9 parameters in the configuration file
Node1 node:
[root@bxhvm01 ~]# grep -v "^#" /etc/etcd/etcd.conf
ETCD_NAME="etcd01"
ETCD_DAT
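For reference, a common set of nine parameters for a static three-node cluster might look like this on node1. The IPs follow the 192.168.23.128-130 plan above, while the member names beyond etcd01, the ports, and the cluster token are illustrative assumptions:

```shell
# /etc/etcd/etcd.conf on node1 (sketch)
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.23.128:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.23.128:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.23.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.23.128:2379"
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.23.128:2380,etcd02=http://192.168.23.129:2380,etcd03=http://192.168.23.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
ETCD_INITIAL_CLUSTER_STATE="new"
```

node2 and node3 would differ only in ETCD_NAME and in using their own IPs in the listen/advertise URLs.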
arranging a host for it and writing the information to etcd. Of course, what happens in this process is far from simple: many decision factors must be considered, such as allocating the pods of the same replication controller to different hosts to prevent a host outage from severely impacting the business, and how to balance resources to improve the resource utilization of the whole cluster.

Scheduling Process
what Keystone does. The created object is then stored in etcd; in OpenStack it would go into the database. Then the scheduler dispatches the object to a machine, the equivalent of what nova-scheduler does. Then the kubelet on each machine does the real work: it discovers it has been scheduled to create a container on its own machine, the equivalent of nova-compute. To create a container, kubelet first downloads the container image, and nova-compute