K8s Core Concepts Explained

Kubernetes (commonly referred to as k8s) is a container orchestration tool for managing containerized applications.

Kubernetes not only provides everything you need to run complex container applications, it is also one of the most convenient development and operations frameworks available.

Kubernetes works by grouping containers to split an application into multiple logical units for ease of management and discovery. It is particularly useful for microservices applications made up of small, independent services.

Although Kubernetes runs on Linux, it is actually platform-agnostic and can run on bare metal, virtual machines, cloud instances, or OpenStack.

Cluster
A Cluster is a collection of compute, storage, and network resources that Kubernetes uses to run a variety of container-based applications.

Master
The Master is the brain of the Cluster; its primary responsibility is scheduling, that is, deciding where applications should run. The Master runs Linux and can be either a physical or a virtual machine. To achieve high availability, you can run multiple Masters.

Node
A Node's job is to run container applications. Nodes are managed by the Master: they monitor and report the status of containers and manage container lifecycles according to the Master's instructions. A Node runs Linux and can be either a physical or a virtual machine.

In the previous interactive tutorial we created a Cluster with only one host, host01, which acts as both Master and Node.

Pod
The Pod is the smallest unit of work in Kubernetes. Each Pod contains one or more containers, and the containers in a Pod are scheduled by the Master onto a Node as a whole.

Kubernetes introduced the Pod for the following two purposes:

Manageability.
Some containers inherently need to work closely together. The Pod provides a higher level of abstraction than the container, encapsulating such containers in a single deployment unit. Kubernetes schedules, scales, shares resources, and manages lifecycles with the Pod as the smallest unit.

Communication and resource sharing.
All containers in a Pod use the same network namespace, that is, the same IP address and port space, so they can communicate directly over localhost. Similarly, these containers can share storage: when Kubernetes mounts a volume into a Pod, it essentially mounts that volume into each container in the Pod.

There are two ways to use Pods:

Run a single container.
One-container-per-Pod is the most common Kubernetes model; in this case the Pod simply wraps a single container. Even with only one container, Kubernetes manages the Pod rather than managing the container directly.

Run multiple containers.
The question then is: which containers should be placed in the same Pod?
The answer is: the containers must be very tightly coupled and need to share resources directly.
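
As a concrete illustration of the multi-container case, below is a minimal sketch using the official Kubernetes Python client (the kubernetes package). It assumes a Cluster reachable through the default kubeconfig; the Pod name, images, and the shared emptyDir volume are illustrative. The two containers share the Pod's network namespace and the mounted volume.

    # Sketch: a two-container Pod whose containers share an emptyDir volume.
    from kubernetes import client, config

    config.load_kube_config()  # connect via the default kubeconfig

    shared = client.V1Volume(name="shared-data",
                             empty_dir=client.V1EmptyDirVolumeSource())

    web = client.V1Container(
        name="web",
        image="nginx:1.25",  # illustrative image
        volume_mounts=[client.V1VolumeMount(
            name="shared-data", mount_path="/usr/share/nginx/html")])

    helper = client.V1Container(
        name="content-helper",
        image="busybox:1.36",  # illustrative sidecar
        command=["sh", "-c",
                 "echo hello > /pod-data/index.html && sleep 3600"],
        volume_mounts=[client.V1VolumeMount(
            name="shared-data", mount_path="/pod-data")])

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="two-container-pod"),
        spec=client.V1PodSpec(containers=[web, helper], volumes=[shared]))

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

Both containers see the same Pod IP, and the helper writes a file that the web container serves from the shared volume.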

Controller
You typically do not create Pods directly in Kubernetes; instead you use controllers to manage them. A controller defines the deployment characteristics of a Pod, such as how many replicas there are and which Nodes they run on. To cover different business scenarios, Kubernetes provides a variety of controllers, including Deployment, ReplicaSet, DaemonSet, StatefulSet, Job, and so on, which we discuss one by one.

Deployment is the most commonly used controller; in the previous online tutorial, for example, we deployed the application by creating a Deployment. A Deployment can manage multiple replicas of a Pod and ensure that the Pods run in the desired state.
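
For example, a Deployment with three replicas could be created with the Python client roughly as follows. This is only a sketch: the names, labels, and image are illustrative, and a Cluster reachable through the default kubeconfig is assumed.

    # Sketch: a Deployment that keeps three httpd replicas running.
    from kubernetes import client, config

    config.load_kube_config()

    labels = {"app": "httpd"}  # illustrative label used by the selector
    pod_spec = client.V1PodSpec(
        containers=[client.V1Container(name="httpd", image="httpd:2.4")])

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="httpd-deployment"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # desired number of Pod replicas
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=pod_spec)))

    client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                    body=deployment)

If one of the replicas dies, the Deployment (through its ReplicaSet, described next) recreates it to restore the desired state.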

ReplicaSet manages multiple replicas of a Pod. A ReplicaSet is created automatically when you use a Deployment; in other words, a Deployment manages Pod replicas through a ReplicaSet, so we do not usually need to use ReplicaSets directly.

DaemonSet is used for scenarios where each Node runs at most one replica of a Pod. As its name suggests, a DaemonSet is typically used to run daemons.

StatefulSet ensures that each Pod replica keeps a stable name throughout its lifecycle. Other controllers do not provide this feature: when one of their Pods fails and is deleted and restarted, the Pod's name changes. A StatefulSet also ensures that replicas are started, updated, and deleted in a fixed order.

Job is used for applications that are deleted once their run is complete, whereas Pods managed by the other controllers usually run continuously over the long term.
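
A run-to-completion workload could be submitted as a Job roughly like this (again a sketch with the Python client; the command, image, and names are illustrative):

    # Sketch: a Job whose Pod runs once to completion and then stops.
    from kubernetes import client, config

    config.load_kube_config()

    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="one-off-task"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",  # Job Pods run to completion
                    containers=[client.V1Container(
                        name="task",
                        image="busybox:1.36",
                        command=["sh", "-c", "echo doing work && sleep 5"])]))))

    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)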

Service
A Deployment can deploy multiple replicas, and each Pod has its own IP. How does the outside world access these replicas?

Through the Pods' IPs?
Keep in mind that Pods are likely to be destroyed and restarted frequently, so their IPs change; accessing them by IP is not realistic.

The answer is the Service.
A Kubernetes Service defines how the outside world accesses a specific set of Pods. The Service has its own IP and port, and it load balances across the Pods.

In Kubernetes, running containers (Pods) and accessing containers (Pods) are handled by Controllers and Services, respectively.
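
To give the httpd Pods from the earlier Deployment sketch a stable address, a Service selecting them by label could be created roughly as follows (a sketch with the Python client; the names, labels, and ports are illustrative):

    # Sketch: a Service that load balances across Pods labeled app=httpd.
    from kubernetes import client, config

    config.load_kube_config()

    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="httpd-service"),
        spec=client.V1ServiceSpec(
            selector={"app": "httpd"},  # must match the Pods' labels
            ports=[client.V1ServicePort(port=80, target_port=80)]))

    client.CoreV1Api().create_namespaced_service(namespace="default",
                                                 body=service)

The Service keeps the same cluster IP even as the Pods behind it are destroyed and recreated.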

Namespace

If you have multiple users or project groups using the same Kubernetes Cluster, how do you separate the controllers, pods, and other resources that they create?

The answer is the Namespace.
A Namespace logically divides a physical Cluster into multiple virtual Clusters, each of which is a Namespace. Resources in different Namespaces are completely isolated.

Kubernetes creates two Namespaces by default.

default - if you do not specify a Namespace when creating a resource, it is placed in this Namespace.

kube-system - system resources created by Kubernetes are placed in this Namespace.
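
Creating an additional Namespace for a project group and listing the existing ones might look like this with the Python client (a sketch; the Namespace name team-a is illustrative):

    # Sketch: create a Namespace and list all Namespaces in the Cluster.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    core.create_namespace(client.V1Namespace(
        metadata=client.V1ObjectMeta(name="team-a")))

    for ns in core.list_namespace().items:
        print(ns.metadata.name)  # e.g. default, kube-system, team-a

Resources created with namespace="team-a" are then isolated from those in default.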

API Server (Kube-apiserver)

The API Server exposes an HTTP/HTTPS RESTful API, the Kubernetes API. The API Server is the front-end interface of the Kubernetes Cluster: various client tools (CLI or UI) and the other Kubernetes components manage the Cluster's resources through it.
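
Every call made by such client tools is ultimately a REST request to the API Server. As a rough illustration with the Python client (assuming a Cluster reachable through the default kubeconfig), the calls below translate into HTTPS requests such as GET /api/v1/nodes and GET /api/v1/pods served by kube-apiserver:

    # Sketch: list Nodes and Pods through the Kubernetes API.
    from kubernetes import client, config

    config.load_kube_config()  # reads the API Server address and credentials
    core = client.CoreV1Api()

    for node in core.list_node().items:
        print("node:", node.metadata.name)

    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        print("pod:", pod.metadata.namespace, pod.metadata.name)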

Scheduler (Kube-scheduler)

The Scheduler decides which Node a Pod runs on. When making this decision, it takes into account the Cluster topology, the current load on each Node, and the application's requirements for high availability, performance, and data affinity.
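
One common way to influence the Scheduler's placement decision is to constrain a Pod to Nodes carrying a particular label via nodeSelector. The sketch below uses the Python client; the disktype=ssd label and all names are assumptions for illustration only.

    # Sketch: a Pod the Scheduler may only place on Nodes labeled disktype=ssd.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="ssd-pod"),
        spec=client.V1PodSpec(
            node_selector={"disktype": "ssd"},  # illustrative Node label
            containers=[client.V1Container(name="app", image="nginx:1.25")]))

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)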

Controller Manager (Kube-controller-manager)

The Controller Manager manages the Cluster's resources and ensures that they are in the expected state. It consists of a variety of controllers, including the replication controller, endpoints controller, namespace controller, serviceaccounts controller, and so on.

Different controllers manage different resources. For example, the deployment, statefulset, and daemonset controllers manage the lifecycles of Deployments, StatefulSets, and DaemonSets, while the namespace controller manages Namespace resources.

Etcd

etcd stores the configuration of the Kubernetes Cluster and the state of its various resources. When data changes, etcd quickly notifies the relevant Kubernetes components.

Pod Network

For Pods to be able to communicate with each other, a Pod network must be deployed in the Kubernetes Cluster; flannel is one of the available options.

Kubelet
The kubelet is the Node's agent. When the Scheduler decides to run a Pod on a particular Node, the Pod's configuration (image, volumes, and so on) is sent to that Node's kubelet, which creates and runs the containers based on this information and reports their running status to the Master.

Kube-proxy

A Service logically represents multiple backend Pods, and the outside world accesses those Pods through the Service. How are the requests a Service receives forwarded to its Pods? That is the job of kube-proxy.

Each Node runs the kube-proxy service, which forwards TCP/UDP traffic addressed to a Service to the backend containers. If there are multiple replicas, kube-proxy load balances across them.
