Pods in Kubernetes








Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A pod represents a process running on a cluster.



Pods are like pea pods: they consist of one or more containers (such as Docker containers) that share storage, a network, and a specification for how to run the containers. The containers in a pod are always scheduled together and share a common operating environment. You can think of a single pod as a "logical host" running a standalone application: one or more tightly coupled application containers that, before containers existed, would have run together on the same physical or virtual machine.
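For reference, a minimal single-container pod can be declared as below; this is only a sketch, and the pod name, labels, and image are placeholders chosen for illustration rather than anything from this article.

```yaml
# Minimal sketch of a single-container pod (names and image are hypothetical).
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx          # hypothetical pod name
  labels:
    app: my-nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.25     # any application image would do
    ports:
    - containerPort: 80
```

Creating this manifest with kubectl apply -f schedules the pod onto a node, where it runs until its process exits or the pod is deleted.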







Although Kubernetes supports multiple container runtimes, Docker is still the most common runtime environment, so we can use Docker terminology and concepts to describe pods.




The shared context of a pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation, the same things that isolate a Docker container. Within a pod's context, individual containers may have further sub-isolation applied.





The containers in a pod share an IP address and port space and can find each other via localhost. They can also communicate with each other using standard inter-process communication, such as System V semaphores or POSIX shared memory. Containers in different pods have distinct IP addresses and cannot communicate directly via IPC.





The containers in a pod also have access to shared volumes, which are defined as part of the pod and can be mounted into each application container's filesystem.





Like individual application containers, pods are considered relatively ephemeral entities. During its life cycle, a pod is created, assigned a unique ID (UID), and scheduled to a node, where it remains in the desired state until it is terminated (according to its restart policy) or deleted. If a node dies, the pods scheduled to that node are marked for deletion after a timeout period. A given pod (as defined by its UID) is never "rescheduled" to a new node; instead, it is replaced by an identical pod, which may even have the same name if desired, but which gets a new UID (see Replication Controller for details).



How pods manage multiple containers


A pod can run multiple cooperating processes (each as a container) that work together as a unit. The containers in the same pod are automatically co-located on the same node, share resources, a network environment, and dependencies, and are always scheduled together.





Note: running multiple containers in one pod is a relatively advanced usage. Consider this pattern only when your containers need to work closely together. For example, you might have a container that runs as a web server for files in a shared volume, plus a separate "sidecar" container that fetches those files from a remote source and keeps them updated (see the sketch below).
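As a sketch of that pattern under assumed names, the pod below runs a web server plus a sidecar that periodically refreshes content in a shared emptyDir volume; the images, paths, and the sidecar command are illustrative only.

```yaml
# Sketch of a two-container pod sharing an emptyDir volume.
# Image names, mount paths, and the sidecar command are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-content
    emptyDir: {}                           # scratch volume shared by both containers
  containers:
  - name: web
    image: nginx:1.25
    volumeMounts:
    - name: shared-content
      mountPath: /usr/share/nginx/html     # web server serves the shared files
  - name: content-sidecar
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 60; done"]
    volumeMounts:
    - name: shared-content
      mountPath: /data                     # sidecar writes into the same volume
```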

















Two kinds of resources can be shared in pods



Network: each pod is assigned a unique IP address. All containers in the pod share the network space, including the IP address and ports, and can communicate with each other via localhost. When containers in a pod communicate with the outside world, they must coordinate how they use shared network resources (such as ports, for example via a host port mapping).
Storage: a pod can specify a set of shared volumes. All containers in the pod have access to the shared volumes, and volumes can also be used to persist data so that files survive a container restart.
Using Pods


I usually divide pods into two categories:



Autonomous pods: a pod does not heal itself. When a pod is created (whether directly by you or by a controller), it is scheduled onto a cluster node by Kubernetes and remains on that node until its process terminates, the pod is deleted, the pod is evicted for lack of resources, or the node fails. If the node a pod runs on fails, or if the scheduling operation itself fails, the pod is deleted; likewise, a pod does not survive an eviction caused by a lack of resources or by node maintenance.
Controller-managed pods: Kubernetes uses a higher-level abstraction called a controller to manage pod instances. A controller can create and manage multiple pods, providing replica management, rolling upgrades, and cluster-level self-healing. For example, if a node fails, the controller automatically schedules replacement pods onto other healthy nodes. Although you can use pods directly, in Kubernetes pods are usually managed through a controller, as sketched below.
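As an illustration of a controller-managed pod, the Deployment below keeps three replicas of one pod template running and replaces pods that are lost; the name, labels, and image are placeholders, not values from this article.

```yaml
# Sketch of a Deployment that manages pod replicas (names and image are hypothetical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-deployment
spec:
  replicas: 3                  # the controller keeps three pod replicas running
  selector:
    matchLabels:
      app: my-nginx
  template:                    # pod template used to create the managed pods
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```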








As the composition of a pod shows, each pod contains a special Pause container, also called the "root container". The image for the Pause container is part of the Kubernetes platform; in addition to the Pause container, each pod contains one or more closely related user business containers.



Termination of Pod


Because pods represent processes running on cluster nodes, it is important to allow them to terminate gracefully when they are no longer needed (rather than being brutally killed with a KILL signal). Users should be able to issue a delete request and know when the pod will actually be terminated and deleted. When a user requests deletion of a pod, the system records a grace period before the pod may be forcibly killed, and a TERM signal is sent to the main process of each container. Once the grace period expires, a KILL signal is sent to those processes and the pod is removed from the API server. If the kubelet or the container manager restarts while waiting for processes to terminate, the termination is retried with the full grace period after the restart.





An example flow is as follows:



1. The user sends a command to delete the pod, with a default grace period of 30 seconds;
2. The pod in the API server is updated with the time beyond which the pod is considered "dead", together with the grace period;
3. The pod shows up as "Terminating" when listed from the client command line;
4. (In parallel with step 3) When the kubelet sees that a pod has been marked "Terminating", it begins the pod shutdown process: (a) if a preStop hook is defined for the pod, it is invoked before the pod is stopped; if the preStop hook is still running after the grace period expires, step 2 is invoked again with a small (2 second) extended grace period; (b) the TERM signal is sent to the main process in each of the pod's containers;
5. (In parallel with step 3) The pod is removed from the endpoints list of its services and is no longer considered part of the set of running pods for replication controllers. Pods that shut down slowly can no longer serve new traffic, because load balancers (such as the service proxy) remove them from their rotations;
6. When the grace period expires, any processes still running in the pod are killed with SIGKILL;
7. The kubelet finishes deleting the pod on the API server by setting the grace period to 0 (immediate deletion). The pod disappears from the API and is no longer visible from the client.


The deletion grace period defaults to 30 seconds. The kubectl delete command supports the --grace-period=<seconds> option, which lets users set their own grace period. Setting it to 0 forces deletion of the pod. With kubectl version >= 1.5, you must use both --force and --grace-period=0 to force-delete a pod.
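To make this concrete, here is a hedged sketch of how a pod spec can declare its own grace period and a preStop hook; the 60-second value, the image, and the hook command are arbitrary examples rather than anything prescribed by this article.

```yaml
# Sketch: customizing graceful termination for a pod (values are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo
spec:
  terminationGracePeriodSeconds: 60       # overrides the 30-second default
  containers:
  - name: app
    image: nginx:1.25
    lifecycle:
      preStop:
        exec:
          # runs before the TERM signal is sent to the main process
          command: ["sh", "-c", "nginx -s quit; sleep 5"]
```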



Pause container


When first working with Kubernetes, you soon notice that the cluster needs to pull a gcr.io/google_containers/pause-amd64:3.0 image, and that whenever a pod is started, a pause container is launched alongside it.





But what does this pause container do, how does it work, and why does it accompany the start of every pod? These questions had been on my mind for a while, and now is a good opportunity to learn about them.



```
189fbd12e903   rancher/rancher-agent:v2.0.6   "run.sh --share-r..."   10 days ago   Exited (0) 10 days ago   share-mnt
[[email protected]-master ~]# docker ps -a | grep pause-amd64
f30cc4df0eff   rancher/pause-amd64:3.1        "/pause"                4 days ago    Up 4 days                k8s_POD_confserver-bdf79c8cb-xxf82_confserver_f92c3ecc-a11b-11e8-a1c4-005056936694_0
af651c01f1e4   rancher/pause-amd64:3.1        "/pause"                5 days ago    Up 5 days                k8s_POD_jenkins-5cf89c84f6-h4hs6_jenkins_954201e0-a057-11e8-a1c4-005056936694_0
7ab1920551ca   rancher/pause-amd64:3.1        "/pause"                10 days ago   Up 10 days               k8s_POD_nfs-provisioner-2cjpp_nfs-provisioner_a24395b7-9c42-11e8-a1c4-005056936694_0
4f89f1c2e83b   rancher/pause-amd64:3.1        "/pause"                10 days ago   Up 10 days               k8s_POD_cattle-node-agent-s8s75_cattle-system_a2443a27-9c42-11e8-a1c4-005056936694_0
74ff9a7eb776   rancher/pause-amd64:3.1        "/pause"                10 days ago   Up 10 days               k8s_POD_nginx-ingress-controller-gl7k6_ingress-nginx_a239056d-9c42-11e8-a1c4-005056936694_0
76a3177f05ec   rancher/pause-amd64:3.1        "/pause"                10 days ago   Up 10 days               k8s_POD_calico-node-7vzlj_kube-system_a2391472-9c42-11e8-a1c4-005056936694_0
```


The Pause container in Kubernetes provides the following features for each business container:



Serving as the basis for sharing Linux namespaces within the pod;
With PID namespace sharing enabled, acting as PID 1 (the init process) for the pod and reaping zombie processes (see the sketch below).
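The namespace sharing anchored by the pause container becomes visible when process-namespace sharing is turned on. The sketch below is an assumption-laden illustration (pod name, images, and the debug command are placeholders): with shareProcessNamespace enabled, all containers in the pod share one PID namespace in which the pause container runs as PID 1.

```yaml
# Sketch: enabling PID namespace sharing so the pause container acts as PID 1.
# Names, images, and commands are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true    # all containers share one PID namespace
  containers:
  - name: app
    image: nginx:1.25
  - name: debug
    image: busybox:1.36
    command: ["sleep", "3600"]   # running `ps` here also shows the app's processes
```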
Pod life cycle

Pod phase


A pod's status information is stored in a PodStatus object, which has a phase field.





The pod's phase is a simple, high-level summary of where the pod is in its life cycle. It is not intended to be a comprehensive rollup of container or pod state, nor a comprehensive state machine.





The number and meanings of pod phase values are tightly specified. Beyond the values documented here, nothing should be assumed about a pod having any other phase value.





The following are the possible values of phase:



Pending: The pod has been accepted by the Kubernetes system, but one or more of its container images have not yet been created. This includes the time spent waiting to be scheduled as well as the time spent downloading images over the network, which can take a while.
Running: The pod has been bound to a node and all of its containers have been created. At least one container is still running, or is in the process of starting or restarting.
Succeeded: All containers in the pod have terminated successfully and will not be restarted.
Failed: All containers in the pod have terminated, and at least one container terminated in failure; that is, the container either exited with a non-zero status or was terminated by the system.
Unknown: The state of the pod could not be obtained for some reason, typically due to an error communicating with the host of the pod.


The pod life cycle diagram (not reproduced here) shows how a pod's state changes over time.









Pod conditions


A pod has a PodStatus, which contains an array of PodConditions that the pod has or has not passed. Each element of the PodCondition array has six possible fields (an example status block is sketched after the list):



The lastProbeTime field provides a timestamp for when the pod condition was last probed.

The lastTransitionTime field provides a timestamp for when the pod last transitioned from one status to another.
The message field is a human-readable message with details about the transition.
The reason field is a unique, one-word, CamelCase reason for the condition's last transition.
The status field is a string, with possible values "True", "False", and "Unknown".
The type field is a string with the following possible values:
PodScheduled: the pod has been scheduled to a node;
Ready: the pod is able to serve requests and should be added to the load-balancing pools of all matching services;
Initialized: all init containers have started successfully;
Unschedulable: the scheduler cannot schedule the pod right now, for example due to a lack of resources or other constraints;
ContainersReady: all containers in the pod are ready.
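As noted above, an excerpt of a pod's conditions as they appear in its status (for example via kubectl get pod <name> -o yaml) might look roughly like the following; the timestamps and ordering are invented purely for illustration.

```yaml
# Illustrative excerpt of a PodStatus conditions array (all values are made up).
status:
  conditions:
  - type: PodScheduled
    status: "True"
    lastProbeTime: null
    lastTransitionTime: "2018-08-16T09:59:58Z"
  - type: Initialized
    status: "True"
    lastProbeTime: null
    lastTransitionTime: "2018-08-16T10:00:00Z"
  - type: ContainersReady
    status: "True"
    lastProbeTime: null
    lastTransitionTime: "2018-08-16T10:00:05Z"
  - type: Ready
    status: "True"
    lastProbeTime: null
    lastTransitionTime: "2018-08-16T10:00:05Z"
```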
Container probe


A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet calls a handler implemented by the container. There are three types of handlers (sketched after the list):



ExecAction: executes a specified command inside the container. The diagnostic is considered successful if the command exits with status code 0.
TCPSocketAction: performs a TCP check against the container's IP address on a specified port. The diagnostic is considered successful if the port is open.
HTTPGetAction: performs an HTTP GET request against the container's IP address on a specified port and path. The diagnostic is considered successful if the response has a status code greater than or equal to 200 and less than 400.
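The following sketch shows what each handler type looks like in a pod spec; the images, ports, paths, and commands are placeholders chosen only to illustrate the syntax.

```yaml
# Sketch: the three probe handler types in one pod (all values are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: probe-handlers-demo
spec:
  containers:
  - name: exec-demo
    image: busybox:1.36
    command: ["sh", "-c", "touch /tmp/healthy && sleep 3600"]
    livenessProbe:
      exec:                          # ExecAction: success if the command exits 0
        command: ["cat", "/tmp/healthy"]
  - name: web-demo
    image: nginx:1.25
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:                     # TCPSocketAction: success if the port accepts a connection
        port: 80
    livenessProbe:
      httpGet:                       # HTTPGetAction: success on a 200-399 response
        path: /
        port: 80
```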


Each probe has one of three results:



Success: the container passed the diagnostic.
Failure: the container failed the diagnostic.
Unknown: the diagnostic itself failed, so no action should be taken.


The kubelet can optionally perform and react to two kinds of probes on running containers (a combined example is sketched after the list):



livenessProbe: indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is then subject to its restart policy. If the container does not provide a liveness probe, the default state is Success.
readinessProbe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoints controller removes the pod's IP address from the endpoints of all services that match the pod. The default readiness state before the initial delay is Failure. If the container does not provide a readiness probe, the default state is Success.
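Putting the two probe kinds together, a sketch like the one below (the image, endpoint paths, and timing values are assumptions) restarts the container when the liveness probe fails and withholds traffic until the readiness probe succeeds.

```yaml
# Sketch: liveness vs. readiness probes on one container (values are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: probed-web
spec:
  restartPolicy: Always           # liveness failures lead to a container restart
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                   # hypothetical health endpoint
        port: 80
      initialDelaySeconds: 10     # give the process time to start
      periodSeconds: 10
      failureThreshold: 3         # kill the container after 3 consecutive failures
    readinessProbe:
      httpGet:
        path: /                   # could be a separate, readiness-specific endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5            # the pod receives traffic only while this succeeds
```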
When should I use a liveness or readiness probe?


If the process in your container is able to crash on its own whenever it encounters a problem or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically take the correct action according to the pod's restartPolicy.





If you want the container to be killed and restarted when a probe fails, specify a liveness probe and set restartPolicy to Always or OnFailure.





If you only want to start sending traffic to a pod once a probe succeeds, specify a readiness probe. In this case, the readiness probe may be the same as the liveness probe, but the presence of a readiness probe in the spec means that the pod starts without receiving any traffic and only begins receiving traffic once the probe succeeds.





If your container needs to load large data sets or configuration files, or run migrations, during startup, specify a readiness probe.





If you want a container to be able to take itself down for maintenance, you can specify a readiness probe that checks a readiness-specific endpoint different from the one used by the liveness probe.





Note that if you only want to drain requests when the pod is being deleted, you do not necessarily need a readiness probe; when a pod is deleted, it automatically puts itself into an unready state regardless of whether a readiness probe exists, and it remains unready while waiting for the containers in the pod to stop.










