An Introduction to the Kubernetes Container Orchestration System


This article is an original piece by Turboxu, first published on the Tencent Cloud community; please credit the source when reprinting.
Original link: https://www.qcloud.com/community/article/152

Kubernetes, an important member of the container orchestration ecosystem, is the open source descendant of Borg, Google's large-scale container management system, and draws on Google's decade of experience and lessons running containers in production. Kubernetes provides mechanisms for application deployment, maintenance, and scaling, and makes it easy to manage containerized applications that run across machines. Kubernetes currently supports GCE, vSphere, CoreOS, OpenShift, Azure, and other platforms, and it can also run directly on physical machines. As an open container scheduling and management platform, Kubernetes is not tied to any single programming language: applications written in Java, C++, Go, Python, and so on are all supported.
Kubernetes is a complete distributed system support platform. It provides multi-layered security protection and admission mechanisms, multi-tenant application support, transparent service registration and service discovery, built-in load balancing, strong fault detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduling mechanism, and multi-granularity resource quota management. Combined with a complete set of management tools covering development, testing, deployment, and operations monitoring, it is a one-stop platform for building and running distributed systems.

I. System Architecture

By node function, a Kubernetes cluster is composed of two kinds of nodes: master and node.

Master
Master, as the control node, schedules and manages the entire system with the following components:
API Server: as the portal of the Kubernetes system, it wraps the add, delete, modify, and query operations on the core objects and exposes them as a RESTful interface to external clients and internal components. The REST objects it maintains are persisted in etcd.
Scheduler: responsible for cluster resource scheduling, i.e., assigning a machine to each newly created pod. Splitting this work into its own component makes it easy to swap in a different scheduler.
Controller Manager: responsible for running the various controllers. There are currently two types:

    1. Endpoint Controller: periodically associates services with pods (the association is maintained by Endpoint objects), ensuring that the service-to-pod mappings are always up to date.
    2. Replication Controller: periodically associates replication controllers with pods, ensuring that the number of replicas defined by each ReplicationController always matches the number of pods actually running.

Node
A node is a worker machine that runs the business containers. It contains the following components:
Kubelet: responsible for controlling the Docker containers, e.g., starting and stopping them and monitoring their running state. It periodically fetches the pods assigned to its own node from etcd and starts or stops containers according to the pod information. It also accepts HTTP requests from the API Server to report pod status.
Kube-proxy: responsible for providing a proxy for pods. It periodically fetches all services from etcd and creates proxies according to the service information. When a client pod accesses another pod, the access request is forwarded through the local proxy.
[Figure (borrowed from the web): the relationship between the functional components of master and node]

II. Basic Concepts

Node

A node is a working host in a Kubernetes cluster, as opposed to the master; in earlier versions it was called a minion. A node can be a physical host or a virtual machine (VM). Each node runs kubelet, the service that starts and manages pods, and can be managed by the master. The service processes running on a node are kubelet, kube-proxy, and the Docker daemon.
A node includes the following information:
Node address: the IP address of the host, or the node ID.
Node running state: pending, running, or terminated.
Node condition: describes the condition of a node in the running state. Currently there is only one condition, Ready, which indicates that the node is healthy and can receive instructions from the master to create pods.
Node system capacity: describes the system resources the node can provide, including CPU, memory, the maximum number of schedulable pods, and so on.

Pod

The pod is the most basic operating unit of Kubernetes. It contains one or more closely related containers; a pod can be thought of as the "logical host" of the containerized environment at the application layer. The container applications in a pod are usually tightly coupled. Pods are created, started, and destroyed on nodes.
Why does Kubernetes wrap a pod layer around containers? One important reason is that communication between Docker containers is limited by the Docker network mechanism: in the Docker world, a container must be linked to another container to access the service (port) it provides, and linking large numbers of containers is very heavy work. The pod concept combines multiple containers inside a virtual "host", so the containers can communicate with one another simply through localhost.
The application containers in a pod share the same set of resources:
PID namespace: the different applications in a pod can see each other's process IDs.
Network namespace: the containers in a pod share the same IP address and port range.
IPC namespace: the containers in a pod can communicate with each other using System V IPC or POSIX message queues.
UTS namespace: the containers in a pod share one hostname.
Volumes (shared storage volumes): each container in a pod can access the volumes defined at the pod level.
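To make the pod concept concrete, here is a minimal illustrative sketch of a pod manifest, in the same JSON style as the service example later in this article (the names web-with-logger and log-collector, the images, and the command loop are all made up). The two containers share the pod's network namespace, so the second container reaches the web server simply via localhost:

{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "web-with-logger",
        "labels": {
            "app": "web"
        }
    },
    "spec": {
        "containers": [
            {
                "name": "web",
                "image": "nginx",
                "ports": [{"containerPort": 80}]
            },
            {
                "name": "log-collector",
                "image": "busybox",
                "command": ["sh", "-c", "while true; do wget -q -O /dev/null http://localhost:80/; sleep 60; done"]
            }
        ]
    }
}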

Label

Label is a core concept in the Kubernetes system. Labels are attached as key/value pairs to all kinds of objects, such as pods, services, RCs, and nodes. A label defines identifying attributes of an object and is used to manage and select it. A label can be attached to an object when it is created, or managed through the API after the object exists.
Once an object carries labels, other objects can use a label selector to select it.
A label selector consists of multiple comma-separated match conditions, while the label definition itself is simply a set of key/value pairs:
"Label": {
"Key1": "Value1",
"Key2": "value2"
}
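When another object selects pods by these labels, the label selector in its spec mirrors the same key/value pairs. A minimal equality-based sketch (reusing the illustrative keys above):

"selector": {
    "key1": "value1",
    "key2": "value2"
}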

Replication Controller (RC)

The Replication Controller (RC) is a core concept in the Kubernetes system. It defines the number of pod replicas; through the RC definition, the Controller Manager process on the master creates, monitors, and stops pods.
Based on the replication controller definition, Kubernetes ensures that the user-specified number of pod "replicas" is running at any given time. If too many replicas of a pod are running, the system stops some pods; if too few are running, it starts more. In short, through the RC definition, Kubernetes always keeps the number of replicas the user expects running in the cluster.
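As an illustrative sketch (reusing the made-up "app=web" label and nginx image from the pod example above), an RC that keeps three replicas running could look like this:

{
    "kind": "ReplicationController",
    "apiVersion": "v1",
    "metadata": {
        "name": "web-rc"
    },
    "spec": {
        "replicas": 3,
        "selector": {
            "app": "web"
        },
        "template": {
            "metadata": {
                "labels": {"app": "web"}
            },
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "nginx",
                        "ports": [{"containerPort": 80}]
                    }
                ]
            }
        }
    }
}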

Service (services)

In the Kubernetes world, each pod is assigned its own IP address, but that address disappears when the pod is destroyed. This raises a question: how do you access the group of pods that together provide a service?
Kubernetes's service is the core concept that solves this problem. A service can be seen as the external access interface of a group of pods that provide the same service; which pods a service covers is defined by a label selector.
A pod's IP address is allocated by the Docker daemon from the address segment of the docker0 bridge, but a service's cluster IP is a virtual IP inside the Kubernetes system, dynamically allocated by the system. Compared with a pod's IP, a service's cluster IP is relatively stable: it is assigned when the service is created and does not change until the service is destroyed.
Because the IP assigned to a service comes from the cluster IP range pool, it can only be accessed from inside the cluster: other pods can reach it without obstruction. But if the service is to act as a front end serving clients outside the cluster, it needs a public IP.
Kubernetes supports two types of externally reachable service: NodePort and LoadBalancer.
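As a sketch of the NodePort variant, only the spec changes; the nodePort value below is illustrative and must fall within the port range configured for the cluster:

"spec": {
    "type": "NodePort",
    "selector": {"app": "web"},
    "ports": [
        {"protocol": "TCP", "port": 80, "targetPort": 80, "nodePort": 30080}
    ]
}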

Volume (storage volume)

A volume is a shared directory that can be accessed by multiple containers in a pod. Kubernetes's volume concept is similar to Docker's, but not identical. A volume in Kubernetes has the same lifecycle as its pod, independent of the lifecycles of the individual containers: when a container terminates or restarts, the data in the volume is not lost. Kubernetes supports many types of volume, and one pod can use any number of volumes at the same time.
(1) emptyDir: an emptyDir volume is created when the pod is assigned to a node. As the name suggests, its initial content is empty. All containers in the same pod can read and write the same files in the emptyDir volume. When the pod is removed from the node, the data in the emptyDir is deleted permanently.
(2) hostPath: mounts a file or directory from the host into the pod. It is typically used when:
log files produced by a container application need to be saved permanently, using the host's high-speed file system;
a container application needs to access the internal data structures of the Docker engine on the host; defining a hostPath as the host's /var/lib/docker directory lets the application inside the container access the Docker file system directly.
(3) gcePersistentDisk: a volume of this type represents a file on a persistent disk (PD) on Google Compute Engine (GCE). Unlike emptyDir, the content on a PD is saved permanently; when the pod is deleted, the PD is merely unmounted, not deleted. Note that you must create the persistent disk before you can use a gcePersistentDisk volume.
(4) awsElasticBlockStore: similar to the GCE type, this type of volume uses an Amazon Web Services (AWS) EBS volume, which can be mounted into the pod. Note that you must create the EBS volume before you can use an awsElasticBlockStore volume.
(5) nfs: mounts a shared directory provided by NFS (Network File System) into the pod. A running NFS server is required in the system.
(6) iscsi: mounts a directory on an iSCSI storage device into the pod.
(7) glusterfs: mounts a directory of the open source GlusterFS network file system into the pod.
(8) rbd: mounts Linux block device shared storage (Rados Block Device) into the pod.
(9) gitRepo: mounts an empty directory and clones a git repository into it for the pod to use.
(10) secret: a secret volume supplies sensitive information to the pod; secrets defined in Kubernetes can be mounted directly as files for the pod to access. Secret volumes are backed by tmpfs (a memory file system), so this type of volume is never persisted to disk.
(11) persistentVolumeClaim: requests storage space from a PV (PersistentVolume). PVs are typically networked storage, such as gcePersistentDisk, awsElasticBlockStore, NFS, or iSCSI volumes.
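As a sketch of the simplest case, the pod spec fragment below shares an emptyDir volume between two containers (all names and commands are illustrative); whatever the first container writes under /data, the second sees under the same path:

"spec": {
    "containers": [
        {
            "name": "writer",
            "image": "busybox",
            "command": ["sh", "-c", "echo hello > /data/msg && sleep 3600"],
            "volumeMounts": [{"name": "shared-data", "mountPath": "/data"}]
        },
        {
            "name": "reader",
            "image": "busybox",
            "command": ["sh", "-c", "sleep 5 && cat /data/msg && sleep 3600"],
            "volumeMounts": [{"name": "shared-data", "mountPath": "/data", "readOnly": true}]
        }
    ],
    "volumes": [
        {"name": "shared-data", "emptyDir": {}}
    ]
}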

Namespace (namespace)

Namespace is another very important concept in the Kubernetes system. By assigning the objects inside the system to different namespaces, they form logically grouped projects, groups, or user groups, so that the different groupings can be managed separately while still sharing the resources of the whole cluster.
After a Kubernetes cluster starts, a namespace named "default" is created. From then on, unless a namespace is explicitly specified, the pods, RCs, and services that users create are placed by the system into the "default" namespace.
Using namespaces to organize the various Kubernetes objects enables grouping of users, that is, "multi-tenancy" management. Separate resource quotas can also be configured and managed for each tenant, making the configuration of the whole cluster very flexible and convenient.
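Creating a namespace is itself just an API object (the name team-a is made up for illustration); other objects are then placed into it through the namespace field of their metadata:

{
    "kind": "Namespace",
    "apiVersion": "v1",
    "metadata": {
        "name": "team-a"
    }
}

A pod created with "metadata": {"name": "web", "namespace": "team-a"} then belongs to that namespace rather than to "default".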

Annotation (note)

Annotations are similar to labels and are also defined as key/value pairs. But labels have strict naming rules; they define identifying metadata for Kubernetes objects and are used by label selectors. Annotations, by contrast, are "additional" user-defined information of arbitrary form, there to be found by external tools.
Information recorded with annotations includes:
build information, release information, and Docker image information, such as timestamps, release IDs, PR numbers, image hashes, and Docker registry addresses.
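For illustration, such information simply sits under metadata.annotations; the keys and values below are made up:

"metadata": {
    "name": "web",
    "annotations": {
        "build/timestamp": "2016-01-01T00:00:00Z",
        "release/id": "v1.0.3",
        "image/hash": "sha256:9f3a0e1c"
    }
}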

Typical Process

Taking the creation of a pod as an example, the typical Kubernetes workflow is shown below: [Figure: the typical pod-creation flow]

III. Components

Replication Controller

To distinguish the Replication Controller inside the Controller Manager from the resource object of the same name, we abbreviate the resource object as RC and use "Replication Controller" to refer specifically to the replica controller.
The core role of the Replication Controller is to ensure that, at any time, the number of healthy replicas of a pod associated with an RC in the cluster matches the preset replica count. If there are too many replicas of the pod, the Replication Controller destroys some of them; if there are too few, it adds replicas until the count reaches the preset number. It is best not to bypass RCs and create pods directly, because the Replication Controller manages pod replicas through RCs, automatically creating, supplementing, replacing, and deleting them, which improves the system's fault tolerance and reduces the losses caused by unexpected events such as node crashes. Even if an application needs only a single pod replica, it is strongly recommended to define it with an RC.

Because the Replication Controller manages pods, its operation is closely tied to pod status and restart policy.
Common usage patterns for the replica controller:
(1) Rescheduling: whether you want to run 1 replica or 1000, the replica controller ensures that the specified number of pod replicas exists in the cluster; if a node fails or a replica terminates unexpectedly, pods are rescheduled until the expected number of replicas is up and running.
(2) Elastic scaling: scaling the number of replicas up or down is as easy as modifying the RC's spec.replicas field, either manually or through an automatic scaling agent.
(3) Rolling updates: the replica controller is designed to assist rolling updates of a service by replacing pods one by one. The recommended approach is to create a new RC with just one replica, then repeatedly increase the new RC's replica count by 1 and decrease the old RC's by 1 until the old RC's replica count reaches zero, and finally delete the old RC.
While discussing rolling updates we saw that an application may have multiple release versions live during an update. In fact, it is normal for several released versions of an application to coexist in a production environment. Through the RC's label selector, we can easily track the multiple release versions of an application.

Node Controller

The Node Controller is responsible for discovering, managing, and monitoring the nodes in the cluster. The kubelet registers node information with the API Server at startup and keeps sending node information to it periodically; on receipt, the API Server writes the information into etcd. The node information stored in etcd includes the node health state, node resources, node name, node addresses, operating system version, Docker version, kubelet version, and so on. The node health state takes one of three values: Ready (True), Not Ready (False), or Unknown.

(1) If the Controller Manager was started with the --cluster-cidr parameter, it generates a CIDR address for every node whose spec.podCIDR is not set and sets the node's spec.podCIDR attribute with that CIDR, preventing the CIDR addresses of different nodes from conflicting.
(2) Read the node information one by one and compare it against the node state kept in the Node Controller's nodeStatusMap. If no information has yet been received from the node's kubelet, if this is the first time information from the node's kubelet has been received, or if the node's state has become non-"healthy" during this processing, then save the node's state in nodeStatusMap, using the Node Controller's own system time as the probe time and as the time of the node state transition.
If no state information has been received from a node for a certain period of time, set the node state to "Unknown" and save the node state through the API Server.
(3) Read the node information one by one again. If a node's state has become non-Ready, add the node to the queue of nodes to be deleted; otherwise remove it from that queue. If a node is in a non-Ready state and the system has a cloud provider configured, the Node Controller checks the node through the cloud provider; if the underlying machine has failed, it deletes the node object from etcd, along with the information about resources, such as pods, associated with that node.

ResourceQuota Controller

As the management platform for container clusters, Kubernetes also offers advanced resource quota management. Resource quota management ensures that a given object never occupies excessive system resources at any point in time, which prevents a flaw in the design or implementation of some business process from rendering the whole system unresponsive, or even causing unplanned outages. It therefore plays a very important role in the smooth operation and stability of the entire cluster.
Kubernetes currently supports three levels of resource quota management:
(1) Container level: CPU and memory quotas can be managed.
(2) Pod level: the resources available to all containers within a pod can be limited.
(3) Namespace level: resource limits at the namespace (usable for multi-tenancy) level, including the number of pods, the number of replication controllers, the number of services, the number of ResourceQuotas, the number of secrets, and the number of persistent volumes (PVs) that may be held.
Kubernetes quota management is implemented through the admission control mechanism. The two admission controllers related to quotas are LimitRanger and ResourceQuota: LimitRanger acts on pods and containers, while ResourceQuota acts on namespaces. In addition, if a resource quota is defined, the scheduler takes it into account during pod scheduling, ensuring that scheduling does not exceed the quota limits.
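A minimal sketch of a namespace-level quota (the namespace team-a and the limits are illustrative):

{
    "kind": "ResourceQuota",
    "apiVersion": "v1",
    "metadata": {
        "name": "team-a-quota",
        "namespace": "team-a"
    },
    "spec": {
        "hard": {
            "pods": "20",
            "replicationcontrollers": "10",
            "services": "10",
            "secrets": "20"
        }
    }
}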
A typical resource quota control flow is shown below: [Figure: the typical resource quota control flow]

Namespace Controller

A user can create a new namespace through the API Server, and it is saved in etcd; the Namespace Controller periodically reads namespace information through the API Server. If a namespace has been marked by the API for graceful deletion (a deletion period is set, i.e., its deletionTimestamp attribute is set), the namespace's state is set to "Terminating" and saved in etcd. Meanwhile, the Namespace Controller deletes the resource objects under the namespace, such as its ServiceAccount, RC, Pod, Secret, PersistentVolume, LimitRange, ResourceQuota, and Event objects.
While a namespace's state is "Terminating", the NamespaceLifecycle plug-in of the admission controller prevents new resources from being created in that namespace. After the Namespace Controller has deleted all the resource objects in the namespace, it performs a finalize operation on the namespace, removing the entries in the namespace's spec.finalizers field.
If the Namespace Controller observes that a namespace has a deletion period set (i.e., deletionTimestamp is set) and its spec.finalizers field is empty, it deletes the namespace resource through the API Server.

Kubernetes Security Control

The ServiceAccount Controller and the Token Controller are two security-related controllers. The ServiceAccount Controller is created when the Controller Manager starts. It listens for service account deletion events and for namespace creation and modification events. If a namespace contains no default service account, the ServiceAccount Controller creates a default ServiceAccount in that namespace.
When "--admission_control=ServiceAccount" is added to the API Server's launch parameters, the API Server creates its own key and certificate at startup (/var/run/kubernetes/apiserver.crt and apiserver.key); then, when starting ./kube-controller-manager, add the parameter --service_account_private_key_file=/var/run/kubernetes/apiserver.key. With the Kubernetes master started this way, you will find that a secret is automatically created for every service account at its creation.
If the Controller Manager is started with the service-account-private-key-file parameter, and the file that parameter points to contains a PEM-encoded private key for the RSA algorithm, the Controller Manager creates a Token Controller object.

Token Controller

The Token Controller object listens for service account creation, modification, and deletion events and handles each kind differently. If the event is a service account creation or modification, it reads the service account's information; if the service account has no service account secret (i.e., a secret used to access the API Server), it creates a JWT token for the service account with the private key mentioned above, puts the token together with the root CA (if a root CA was specified in the startup parameters) into a new secret, places the new secret into the service account, and updates the service account's content in etcd. If the event is a service account deletion, it deletes the secrets associated with that service account.
The Token Controller also listens for secret creation, modification, and deletion events and handles each kind differently. If the event is a secret creation or modification, it reads the service account named in the secret's annotation and, if necessary, creates a token for the secret. If the event is a secret deletion, it removes the reference between the secret and its associated service account.

Service Controller & Endpoint Controller

A Kubernetes service is an abstraction that defines a collection of pods together with a policy for accessing them; it is sometimes called a microservice.
A service in Kubernetes is a resource object, and like all other resource objects, a new instance can be created through the API Server's POST interface. In the example code below, a service named "MyService" is created. It contains a label selector that selects all pods carrying the label "app=MyApp" as the service's pod set. Traffic arriving at the service's port 80 is forwarded to port 9376 of the pods in this set, and Kubernetes assigns the service a cluster IP (i.e., a virtual IP).

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "MyService"
    },
    "spec": {
        "selector": {
            "app": "MyApp"
        },
        "ports": [
            {
                "protocol": "TCP",
                "port": 80,
                "targetPort": 9376
            }
        ]
    }
}
IV. Functional Features

Service cluster access flow (service discovery)

A process called kube-proxy runs on every node in a Kubernetes cluster. It watches the master for the addition and removal of "service" and "endpoint" objects, as shown in step 1.
For each service, kube-proxy opens a (randomly chosen) port on the local host. Any connection made to that port is proxied to an appropriate backend pod. Which backend pod is chosen is decided by kube-proxy based on a round-robin algorithm and on the service's session affinity (sessionAffinity), as shown in step 2.
Finally, as shown in step 3, kube-proxy installs rules in the local iptables that redirect the captured traffic to the random port mentioned above; from that port, kube-proxy forwards the traffic to the corresponding backend pod.
After a service is created, the service endpoint model creates the list of the backend pods' IPs and ports (held in an Endpoints object), and kube-proxy picks service backends from this endpoint list. Nodes within the cluster can then reach the backend pods of a service through the service's virtual IP and port.
By default, Kubernetes assigns the service a cluster IP (virtual IP), but in some cases you may want to choose the cluster IP yourself. To assign a specific cluster IP to a service, simply set the desired IP address in the service's spec.clusterIP field when defining the service.
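As a sketch, pinning the cluster IP only requires one extra field in the spec (the address shown is made up and must lie inside the cluster's service IP range):

"spec": {
    "clusterIP": "10.254.0.100",
    "selector": {"app": "MyApp"},
    "ports": [{"protocol": "TCP", "port": 80, "targetPort": 9376}]
}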

Scheduler

Within the Kubernetes system, the scheduler acts as the connecting link: "upstream", it receives the pods awaiting placement and arranges a "home" for each of them, the target node; "downstream", once this placement work is done, the kubelet service process on the target node takes over and is responsible for the remainder of the pod's life cycle.
Specifically, the scheduler's role is to bind each pod awaiting scheduling (a new pod created through the API, or a pod created by the Controller Manager to maintain a replica count) to a suitable node in the cluster according to a specific scheduling algorithm and scheduling policy, and to write the binding information into etcd. The whole scheduling process involves three objects: the list of pods awaiting scheduling, the list of available nodes, and the scheduling algorithm and policies. Simply put, the scheduling algorithm selects the most suitable node from the node list for each pod in the to-be-scheduled list.
The kubelet on the target node then hears of the pod binding event generated by the scheduler through the API Server, fetches the corresponding pod, downloads the image, and starts the container.

Scheduler (scheduling policies)

The default scheduling process currently provided by the scheduler consists of two steps:
(1) Pre-selection: traverse all candidate nodes and filter out those that satisfy the requirements. For this, Kubernetes provides various built-in pre-selection policies (xxx Predicates) for users to choose from.
(2) Preference: on the basis of the first step, use preference policies (xxx Priorities) to compute a score for each candidate node; the node with the highest score wins.
The scheduler's scheduling flow is implemented through "algorithm providers" (AlgorithmProvider) loaded as plug-ins. An AlgorithmProvider is simply a structure that bundles a set of pre-selection policies with a set of preference policies. The function for registering an AlgorithmProvider is:
func RegisterAlgorithmProvider(name string, predicateKeys, priorityKeys util.StringSet)
It takes three parameters: name, the algorithm name; predicateKeys, the set of pre-selection policies the algorithm uses; and priorityKeys, the set of preference policies the algorithm uses.
The scheduler offers 7 available pre-selection policies. To make the initial cut and enter the next phase, each node must pass the 5 default pre-selection policies: PodFitsPorts, PodFitsResources, NoDiskConflict, PodSelectorMatches, and PodFitsHost.
Each node that passes pre-selection is scored by the preference policies, and in the end the node with the highest score is chosen as the result of the preference step (and of the scheduling algorithm as a whole). LeastRequestedPriority selects the node with the lowest resource consumption:
(1) Compute the total CPU consumption, totalMilliCPU, of the pods running on the candidate node plus the pod being scheduled.
(2) Compute the total memory consumption, totalMemory, of the pods running on the candidate node plus the pod being scheduled.
(3) Compute each node's score, roughly according to the rule:
score = int(((nodeCpuCapacity - totalMilliCPU) * 10 / nodeCpuCapacity + (nodeMemoryCapacity - totalMemory) * 10 / nodeMemoryCapacity) / 2)
CalculateNodeLabelPriority: scores nodes according to the CheckNodeLabelPresence policy.
BalancedResourceAllocation: selects the node whose resource usage is most balanced:
(1) Compute the total CPU consumption, totalMilliCPU, of the pods running on the candidate node plus the pod being scheduled.
(2) Compute the total memory consumption, totalMemory, of the pods running on the candidate node plus the pod being scheduled.
(3) Compute each node's score, roughly according to the rule:
score = int(10 - math.Abs(totalMilliCPU/nodeCpuCapacity - totalMemory/nodeMemoryCapacity) * 10)
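As a worked example under these two formulas (capacities and consumption made up): on a node with 2000 milliCPU and 4 GiB of memory, where the running pods plus the candidate pod would consume 1000 milliCPU and 1 GiB, LeastRequestedPriority yields int((1000 * 10 / 2000 + 3 * 10 / 4) / 2) = int(6.25) = 6, while BalancedResourceAllocation yields int(10 - |0.5 - 0.25| * 10) = int(7.5) = 7; a node using its CPU and memory more evenly (say 0.5 and 0.5) would score a full 10 under the second policy.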

Node Management

Node management covers node registration, status reporting, pod management, container health checks, resource monitoring, and more.

Node Registration

In a Kubernetes cluster, a kubelet service process is started on each node. This process handles the tasks the master sends down to its node and manages the pods and the containers inside them. Each kubelet process registers the node's own information on the API Server, periodically reports node resource usage to the master, and monitors container and node resources through cAdvisor.
A node decides whether to register itself with the API Server through the kubelet startup parameter --register-node. If this parameter is true, the kubelet will try to register itself with the API Server. When self-registering, the kubelet is started with the following additional parameters:
--api-servers: tells the kubelet the location of the API Server;
--kubeconfig: tells the kubelet where to find the certificates used to access the API Server;
--cloud-provider: tells the kubelet how to read metadata about itself from a cloud service provider (IaaS).

Status Reporting

The kubelet registers the node with the API Server at startup and then sends node status updates to the API Server periodically; on receipt, the API Server writes them into etcd. The interval at which the kubelet reports node status is set through the kubelet startup parameter --node-status-update-frequency and defaults to 10 seconds.

Pod Management

The kubelet obtains the list of pods to run on its own node in the following ways:
(1) File: the files under the configuration directory specified by the kubelet startup parameter --config. The interval at which this directory is re-checked is set by --file-check-frequency and defaults to 20 seconds.
(2) HTTP endpoint (URL): set through the --manifest-url parameter. The interval at which the HTTP endpoint is re-checked is set by --http-check-frequency and defaults to 20 seconds.
(3) API Server: the kubelet watches the etcd directory through the API Server and synchronizes the pod list from it.
All pods created other than through the API Server are called static pods. The kubelet reports the state of each static pod to the API Server, and the API Server creates a mirror pod matched to the static pod; the state of the mirror pod truly reflects the state of the static pod. When a static pod is deleted, its corresponding mirror pod is deleted as well. The kubelet uses watch+list through the API Server client to watch the /registry/node/<current node name> and /registry/pods directories and synchronizes the results into its local cache.
Because the kubelet watches etcd, every operation on a pod is noticed by the kubelet. If it finds a new pod bound to its node, it creates the pod as required by the pod manifest. If it finds that a local pod has been modified, it makes the corresponding changes, for example deleting a container in the pod by removing that container through the Docker client. If it finds that a pod of its node has been deleted, it deletes the pod and removes the pod's containers through the Docker client.
When the kubelet reads the information it has watched, if it is a task for creating or modifying a pod, it does the following:
(1) Create a data directory for the pod.
(2) Read the pod manifest from the API Server.
(3) Mount the external volumes for the pod.
(4) Download the secrets the pod uses.
(5) Check the pods already running on the node; if the pod has no containers, or its pause container is not started, first stop all the container processes in the pod. If the pod has containers that need to be removed, remove those containers.
(6) Create a container for each pod from the "kubernetes/pause" image; the pause container takes over the network of all the other containers in the pod.
(7) For each container in the pod, do the following:
compute a hash value for the container, then use the container's name to query Docker for the hash of the corresponding container. If the container is found but the hash values differ, stop the container process in Docker and stop the pause container process associated with it;
if the container has been aborted and no restart policy (RestartPolicy) is specified for it, do nothing;
otherwise, call the Docker client to download the container image and run the container.

Container Health Check

A pod checks the health of its containers with two classes of probes. The first is the LivenessProbe, used to determine whether a container is healthy; it tells the kubelet when a container is in an unhealthy state. If the LivenessProbe detects an unhealthy container, the kubelet deletes the container and acts according to the container's restart policy. If a container does not define a LivenessProbe, the kubelet assumes the probe always returns "Success". The second is the ReadinessProbe, used to determine whether a container has finished starting and is ready to receive requests. If the ReadinessProbe fails, the pod's status is modified, and the Endpoint Controller removes from the service's endpoints the entry containing the IP address of the pod the container belongs to.
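A brief sketch of both probes on a single container (the paths, port, and timings are illustrative):

"containers": [
    {
        "name": "web",
        "image": "nginx",
        "livenessProbe": {
            "httpGet": {"path": "/healthz", "port": 80},
            "initialDelaySeconds": 15,
            "timeoutSeconds": 1
        },
        "readinessProbe": {
            "httpGet": {"path": "/ready", "port": 80}
        }
    }
]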

