Kubernetes architecture and component introduction of open-source container Cluster Management System


This article is based on an InfoQ article (see the references) and has been revised in difficult areas according to the author's own understanding.

Together we will ensure that Kubernetes is a strong and open container management framework for any application and in any environment, whether in a private, public or hybrid cloud. --Urs Hölzle, Google

As an important member of the Docker ecosystem, Kubernetes is the open-source version of the large-scale container management technology Google has used internally for many years, distilled from Google's production best practices. As Urs Hölzle said, whether in a public, private, or even hybrid cloud, Kubernetes aims to be the container management framework for any application in any environment. Because of this, it is favored by giants and startups alike: Microsoft, VMware, Red Hat, CoreOS, Mesos, and others have joined the Kubernetes community to contribute code. As the community and the major vendors continue to improve and develop it, Kubernetes is positioned to become the leader in the container management field.

Next, we will explore what Kubernetes is, what it can do, and how it does it.

1. What is Kubernetes

Kubernetes is an open-source container cluster management system developed by Google. It provides mechanisms for application deployment, maintenance, and scaling, among other functions. Kubernetes lets you conveniently manage containerized applications running across machines. Its main functions are as follows:

  1. Uses Docker to package, instantiate, and run applications.
  2. Runs and manages containers across machines in a cluster.
  3. Solves the problem of communication between Docker containers on different machines.
  4. Uses a self-healing mechanism to keep the container cluster running in the desired state.

Kubernetes supports GCE, vSphere, CoreOS, OpenShift, Azure, and other platforms. It can also run directly on physical machines.

The official website provides a complete architecture diagram.

2. Main concepts of Kubernetes

2.1 Pods

In the Kubernetes system, the smallest scheduling granularity is not a single container but a Pod. A Pod is the smallest unit that can be created, destroyed, scheduled, and managed. One or more containers make up a Pod, and the containers in a Pod generally run the same application. A Pod groups containers running on the same Minion (host) into one management unit; they share the same volumes and the same network namespace (IP and port space).
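
As a toy illustration (plain Python with invented names, not the real Kubernetes API), a pod can be modeled as a group of containers that share one set of volumes and one network identity:

```python
from dataclasses import dataclass, field

# Hypothetical model for illustration only -- not the real Kubernetes API.
@dataclass
class Container:
    name: str
    image: str

@dataclass
class Pod:
    name: str
    containers: list                                 # one or more containers
    volumes: list = field(default_factory=list)      # shared by every container
    ip: str = ""                                     # one IP/port space per pod

web = Pod(
    name="frontend",
    containers=[Container("app", "nginx"), Container("logger", "busybox")],
    volumes=["shared-logs"],
    ip="10.244.1.5",
)
# Every container in this pod sees the same volume list and the same pod IP;
# the scheduler places and manages the whole group as one unit.
```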

2.2 Services

A Service is also a basic operating unit of Kubernetes: an abstraction over real application services. Each Service is backed by a number of containers, and the proxy port together with the service selector determines which backend container a request is passed to. To the outside, a Service appears as a single access address, and external clients do not need to know how the backend runs, which greatly eases scaling and maintenance of the backend.

The official services.md documentation on GitHub explains this very clearly.

2.3 Replication Controllers

A Replication Controller is a higher-level construct over pods. It ensures that a specified number of pod replicas is running in the Kubernetes cluster at any time: if there are too few replicas, the Replication Controller starts new containers; conversely, it kills redundant ones to keep the number constant. The Replication Controller creates pods from a pre-defined pod template. Once pods have been created, the template is no longer associated with them: you can modify the template without affecting pods already created, and you can update pods created through the Replication Controller directly. The Replication Controller associates pods with itself through a label selector, so by modifying a pod's labels you can remove that pod from the controller's management. A Replication Controller has the following uses:

Rescheduling
As described above, the Replication Controller ensures that the specified number of pod replicas keeps running in the Kubernetes cluster, even when node failures occur.

Scaling
You can scale the running pods horizontally, up or down, by modifying the Replication Controller's replica count.

Rolling updates
The Replication Controller is designed so that you can perform a rolling update of a service by replacing its pods one by one.

Multiple release tracks
If you need to run multiple releases of a service at once, the Replication Controller can use labels to distinguish the release tracks.
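
The core behavior above amounts to a reconciliation loop. A minimal sketch in plain Python (illustrative only, not Kubernetes code): compare the observed replica count with the desired count, then start or kill pods to close the gap.

```python
def reconcile(running, desired, template):
    """One reconciliation pass: return the pod list adjusted to `desired`.

    running  -- list of currently running pods (dicts created from the template)
    desired  -- the replica count the controller must maintain
    template -- a pre-defined pod template used to create new pods
    """
    pods = list(running)
    while len(pods) < desired:        # too few replicas: start new pods
        pods.append(dict(template))
    while len(pods) > desired:        # too many replicas: kill the extras
        pods.pop()
    return pods

template = {"image": "nginx", "labels": {"app": "backend"}}
pods = reconcile([], desired=3, template=template)    # scale up from zero
pods = reconcile(pods, desired=2, template=template)  # scale down to two
```

Note that the loop never touches the template after a pod is created, mirroring the decoupling of template and pods described above.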

The three concepts above are all REST objects, which Kubernetes manipulates through its RESTful API.

2.4 Labels

Service and replicationController are only abstractions built on top of pods, and they ultimately act on pods. How are they related to pods? This is where labels come in. A label is easy to understand: it is a set of key/value tags attached to a pod that can be used for searching or association. Services and replicationControllers are associated with pods through labels. To forward access requests to the multiple backend containers that provide a service, the Service picks the right containers by their labels; likewise, the Replication Controller uses labels to manage the group of containers created from its pod template, which lets it manage multiple containers easily and conveniently.

As shown in the figure, there are three pods with the label "app=backend". If you specify the same selector, "app=backend", when creating a service and a replicationController, the label selector mechanism associates both with those three pods. When other frontend pods then access the service, the request is automatically forwarded to one of the backend pods.
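
The matching rule itself is simple: a pod matches a selector when its labels contain every key/value pair of the selector. A minimal sketch in plain Python (illustrative, with invented pod names):

```python
def matches(labels, selector):
    """A pod matches when its labels contain every key/value pair of the selector."""
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "backend-1", "labels": {"app": "backend"}},
    {"name": "backend-2", "labels": {"app": "backend"}},
    {"name": "backend-3", "labels": {"app": "backend"}},
    {"name": "frontend-1", "labels": {"app": "frontend"}},
]

selector = {"app": "backend"}    # the "app=backend" selector from the example
backends = [p for p in pods if matches(p["labels"], selector)]
# A service and a replicationController created with this selector would both
# be associated with the same three backend pods.
```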

3. Kubernetes Components

The overall Kubernetes framework includes kubecfg, the Master API Server, the Kubelet, the Minion (host), and the Proxy.

3.1 Master

The Master holds the main declarations of the Kubernetes cluster Master/API Server, including the Pod Registry, Controller Registry, Service Registry, Endpoint Registry, Minion Registry, Binding Registry, RESTStorage, and Client. It is the entry point through which the client (kubecfg) calls the Kubernetes API to manage Kubernetes's main components, such as pods, services, Minions, and containers. The Master node consists of the API Server, the Scheduler, and the Registry. The Master workflow consists of the following steps:

  1. Kubecfg sends a specific request, such as creating a Pod, to the Kubernetes Client.
  2. The Kubernetes Client sends the request to the API Server.
  3. The API Server selects a REST storage API according to the request type; for example, when creating a Pod, the storage type is pods. It then processes the request accordingly.
  4. The REST Storage API processes the request.
  5. The processing result is saved to etcd, a highly available key-value storage system.
  6. After the API Server responds to the kubecfg request, the Scheduler obtains the running Pod and Minion information in the cluster through the Kubernetes Client.
  7. Based on the information obtained, the Scheduler distributes undistributed pods to available Minion nodes.
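
Steps 3-5 can be sketched as a tiny dispatch table (an illustrative Python sketch with invented names; the real API Server persists to etcd): the request type selects a storage backend, and the processed result is saved.

```python
# In-memory stand-ins for the REST storage backends; the real
# API Server would persist each write to etcd (step 5).
registries = {"pods": {}, "services": {}, "controllers": {}}

def handle_create(kind, name, obj):
    store = registries[kind]    # step 3: select REST storage by request type
    store[name] = obj           # step 4-5: process the request and save the result
    return obj

handle_create("pods", "web-1", {"image": "nginx"})
```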


The following describes the main components of the Master.

3.1.1 Minion Registry

The Minion Registry is responsible for tracking the Minions (hosts) in the Kubernetes cluster. Kubernetes encapsulates the Minion Registry to implement part of the Kubernetes API Server's RESTful API. Through these APIs we can perform Create, Get, List, and Delete operations on the Minion Registry; because a Minion can only be created or deleted, the Update operation is not supported. The configuration information of each Minion is stored in etcd. In addition, the scheduling algorithm decides whether to place new pods on a Minion node according to that Minion's resource capacity.

You can run `curl http://{master-apiserver-ip}:4001/v2/keys/registry/minions/` to verify the content stored in etcd.

3.1.2 Pod Registry

The Pod Registry is responsible for tracking how many pods are running in the Kubernetes cluster and how these pods are mapped to Minions. It encapsulates the Pod Registry, Cloud Provider information, and other related data into a RESTful API for the Kubernetes API Server. Through these APIs we can Create, Get, List, Update, and Delete pods, with Pod information stored in etcd; we can also use the Watch interface to monitor Pod changes, such as when a Pod is created, deleted, or updated.

3.1.3 Service Registry

The Service Registry is responsible for tracking all services running in the Kubernetes cluster. Based on the provided Cloud Provider and Minion Registry information, it is encapsulated into the RESTful API required by the Kubernetes API Server. Through these interfaces we can Create, Get, List, Update, and Delete services, use Watch operations to monitor Service changes, and store Service information in etcd.

3.1.4 Controller Registry

The Controller Registry is responsible for tracking all Replication Controllers in the Kubernetes cluster. A Replication Controller maintains a specified number of pod replicas: if one of the containers dies, the Replication Controller automatically starts a new one; if a dead container recovers, it kills the extra container so that the specified replica count stays constant. By encapsulating the Controller Registry into the Kubernetes API Server's RESTful API, we can Create, Get, List, Update, and Delete Replication Controllers, use Watch operations to monitor Replication Controller changes, and store Replication Controller information in etcd.

3.1.5 Endpoints Registry

The Endpoints Registry is responsible for collecting Service endpoints, for example Name: "mysql", Endpoints: ["10.10.1.1:1909", "10.10.2.2:8834"]. Like the Pod Registry, the Endpoints Registry implements the RESTful API interface of the Kubernetes API Server and supports Create, Get, List, Update, Delete, and Watch operations.

3.1.6 Binding Registry

A Binding includes the ID of a Pod that needs to be bound and the host it is bound to. After the Scheduler writes to the Binding Registry, the Pod is bound to a host. The Binding Registry also implements the RESTful API interface of the Kubernetes API Server, but it is a write-only object that supports only the Create operation; any other operation results in an error.

3.1.7 Scheduler

The Scheduler collects and analyzes the resource load (memory, CPU) of all Minion nodes in the Kubernetes cluster and distributes newly created pods to available nodes accordingly. Once a Minion node's resources are allocated to a Pod, those resources cannot be given to other pods unless the pods are deleted or exit. Kubernetes therefore must track the resource usage of all Minions in the cluster to ensure that the distributed workload does not exceed the available resources of any Minion node. Specifically, the Scheduler does the following:

  1. Monitors undistributed pods in the Kubernetes cluster in real time.
  2. Monitors all running pods in the Kubernetes cluster in real time; the Scheduler needs this information to safely distribute undistributed pods to Minion nodes according to the resources those pods consume.
  3. Monitors Minion node information; because Minion nodes are looked up frequently, the Scheduler caches the latest information locally.
  4. Finally, after distributing a Pod to a Minion node, writes the Pod's Binding information back to the API Server.

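
A first-fit version of this resource check might look like the following sketch (plain Python with invented names; the real scheduler is considerably more involved):

```python
def schedule(pod, minions):
    """Pick the first minion whose free CPU and memory can hold the pod."""
    for name, free in minions.items():
        if free["cpu"] >= pod["cpu"] and free["mem"] >= pod["mem"]:
            free["cpu"] -= pod["cpu"]   # reserved until the pod is deleted/exits
            free["mem"] -= pod["mem"]
            return name                 # the binding written back to the API Server
    return None                         # no capacity: pod stays undistributed

# Free capacity per minion (invented units: CPU cores, GB of memory).
minions = {"minion-1": {"cpu": 2, "mem": 4}, "minion-2": {"cpu": 8, "mem": 16}}
node = schedule({"cpu": 4, "mem": 8}, minions)   # too big for minion-1
```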
3.2 Kubelet


The Kubelet is the connection point between each Minion and the Master API Server in the Kubernetes cluster. It runs on every Minion and serves as a bridge between the Master API Server and the Minion: it receives the commands and work assigned to it by the Master API Server, and it interacts with the persistent key-value store etcd as well as files, servers, and HTTP to read configuration information. The Kubelet mainly manages the lifecycle of pods and their containers, and its components include the Docker Client, Root Directory, Pod Workers, Etcd Client, cAdvisor Client, and Health Checker. Its specific work is as follows:

  1. Asynchronously run a specific action for a Pod using a Worker.
  2. Set container environment variables.
  3. Bind volumes to containers.
  4. Bind ports to containers.
  5. Run a single container for a specified Pod.
  6. Kill containers.
  7. Create a network container for a specified Pod.
  8. Delete all pods.
  9. Synchronize Pod status.
  10. Obtain container info, pod info, root info, and machine info from cAdvisor.
  11. Detect the health status of a Pod's containers.
  12. Run commands inside a container.

3.3 Proxy

The Proxy exists so that external networks can access the application services provided by containers across the machine cluster; it runs on each Minion. The Proxy provides a TCP/UDP socket proxy. Each time a Service is created, the Proxy obtains the configuration of the Service and its Endpoints from etcd (or from a file), then starts a proxy process on the Minion based on that configuration and listens on the corresponding service port. When an external request arrives, the Proxy distributes it to the correct backend container according to the load balancer.

The Proxy thus not only resolves conflicts between identical service ports on the same host, but also exposes the service port to the outside through transparent forwarding. The Proxy backend uses random and round-robin load-balancing algorithms. For more information, see the Kubernetes code walk-through of the Minion node component kube-proxy.
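
The round-robin strategy amounts to cycling through the endpoint list for each incoming request. A minimal sketch in plain Python (the endpoint values reuse the invented examples from the Endpoints Registry section):

```python
import itertools

# Endpoints the proxy would have read from etcd for one service.
endpoints = ["10.10.1.1:1909", "10.10.2.2:8834"]

round_robin = itertools.cycle(endpoints)

def pick_backend():
    """Return the backend for the next incoming request (round-robin)."""
    return next(round_robin)

# Four consecutive requests alternate between the two backends.
chosen = [pick_backend() for _ in range(4)]
```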

4. etcd

Etcd was mentioned several times in the preceding architecture, but it is not part of Kubernetes. It is a project initiated by the CoreOS team for managing configuration information and service discovery, with the goal of building a highly available distributed key-value database. Like Kubernetes and Docker, etcd is still a product under rapid iterative development and is not yet as mature as ZooKeeper. I may introduce it in another article.

Reference

  • Kubernetes System Architecture
  • An Introduction to Kubernetes
  • Kubernetes Design Overview
  • How to build a container Cluster

