1. The Pod definition file: contents and annotations in YAML format
Before diving into Pods, we'll start by understanding the overall structure and field annotations of a Pod YAML file, as follows:
# Full contents of a YAML-formatted Pod definition file:
apiVersion: v1        # required; API version, e.g. v1
kind: Pod             # required; resource type, Pod
metadata:             # required; metadata
  name: string        # required; Pod name
  namespace: string   # required; namespace the Pod belongs to
  labels:             # custom label list
  - name: string
  annotations:        # custom annotation list
  - name: string
spec:                 # required; detailed definition of the containers in the Pod
  containers:         # required; list of containers in the Pod
  - name: string      # required; container name
    image: string     # required; container image name
    imagePullPolicy: [Always | Never | IfNotPresent]
                      # image pull policy: Always = always pull the image;
                      # IfNotPresent = prefer the local image, pull only if absent;
                      # Never = only ever use the local image
    command: [string] # container startup command list; if omitted, the image's default command is used
    args: [string]    # startup command argument list
    workingDir: string  # container working directory
    volumeMounts:     # storage volumes mounted inside the container
    - name: string    # name of a shared volume defined in the Pod's volumes[] section
      mountPath: string # absolute mount path inside the container; should be shorter than 512 characters
      readOnly: boolean # whether the volume is read-only
    ports:            # list of ports to expose
    - name: string    # port name
      containerPort: int  # port the container listens on
      hostPort: int   # port the container's host listens on; defaults to the same as containerPort
      protocol: string    # port protocol, TCP or UDP; default TCP
    env:              # environment variables to set before the container runs
    - name: string    # environment variable name
      value: string   # environment variable value
    resources:        # resource limits and requests
      limits:         # resource limit settings
        cpu: string   # CPU limit, in cores; maps to the docker run --cpu-shares parameter
        memory: string  # memory limit, in MiB/GiB; maps to the docker run --memory parameter
      requests:       # resource request settings
        cpu: string   # CPU request; the initial amount available when the container starts
        memory: string  # memory request; the initial amount available when the container starts
    livenessProbe:    # health check for each container in the Pod; when the probe fails
                      # repeatedly the container is restarted automatically. Check methods
                      # are exec, httpGet and tcpSocket; set only one per container
      exec:           # exec-style check
        command: [string]   # command or script to execute
      httpGet:        # httpGet-style check; requires path and port
        path: string
        port: number
        host: string
        scheme: string
        httpHeaders:
        - name: string
          value: string
      tcpSocket:      # tcpSocket-style check
        port: number
      initialDelaySeconds: 0  # delay before the first probe after the container starts, in seconds
      timeoutSeconds: 0       # timeout waiting for a probe response, in seconds; default 1
      periodSeconds: 0        # interval between periodic probes, in seconds; default 10
      successThreshold: 0
      failureThreshold: 0
    securityContext:
      privileged: false
  restartPolicy: [Always | Never | OnFailure]
                      # Pod restart policy: Always = the kubelet restarts the Pod however
                      # it terminates; OnFailure = restart only on a non-zero exit code;
                      # Never = never restart the Pod
  nodeSelector: object  # schedule the Pod onto nodes carrying these labels, in key: value format
  imagePullSecrets:     # secrets used when pulling the image, in name: secretKey format
  - name: string
  hostNetwork: false    # whether to use host networking; default false, true means use the host's network
  volumes:              # list of shared storage volumes defined on this Pod
  - name: string        # shared volume name (many volume types exist)
    emptyDir: {}        # emptyDir volume: a temporary directory sharing the Pod's lifetime; value is empty
    hostPath:           # hostPath volume: mounts a directory from the host where the Pod runs
      path: string      # directory on the Pod's host to mount
    secret:             # secret volume: mounts a Secret object defined in the cluster into the container
      secretName: string
      items:
      - key: string
        path: string
    configMap:          # configMap volume: mounts a predefined ConfigMap object into the container
      name: string
      items:
      - key: string
        path: string
2. Pod basic usage
With Docker we can use the docker run command to create and start a container. In the Kubernetes system, however, the requirement for long-running containers is that the main program must keep running in the foreground. If the start command of the Docker image we build is a backgrounded program, for example a Linux script such as:

nohup ./startup.sh &

then after the kubelet creates the Pod containing this container and runs the command, the command returns immediately and the Pod is considered finished. A new Pod is then created according to the replicas count defined in the RC, and once created, the new Pod does the same thing again, trapping the system in an endless loop. That is why Kubernetes requires us to build Docker images whose start command runs in the foreground. For applications that cannot be modified to run in the foreground, the open-source tool Supervisor can assist by providing the foreground-process behavior.

A Pod can be composed of one or more containers. For example, if the two containers of an application, a frontend and Redis, are tightly coupled, they should be combined into a single overall service and therefore packaged as one Pod. The configuration file frontend-localredis-pod.yaml is as follows:
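To make the foreground requirement concrete, here is a minimal Dockerfile sketch; the base image and `startup.sh` script name are illustrative, not taken from the article:

```dockerfile
# Wrong: backgrounding the main process means PID 1 exits immediately,
# so the kubelet considers the container finished and restarts it in a loop:
#   CMD nohup ./startup.sh &

# Right: run the main program in the foreground as PID 1.
FROM ubuntu:16.04
COPY startup.sh /startup.sh
RUN chmod +x /startup.sh
CMD ["/startup.sh"]   # startup.sh must run (or exec) its server in the foreground
```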
apiVersion: v1
kind: Pod
metadata:
  name: redis-php
  labels:
    name: redis-php
spec:
  containers:
  - name: frontend
    image: kubeguide/guestbook-php-frontend:localredis
    ports:
    - containerPort: 80
  - name: redis-php
    image: kubeguide/redis-master
    ports:
    - containerPort: 6379
The multiple containers belonging to one Pod can communicate with each other simply via localhost, and the set of containers is bound to a single environment. After creating the Pod with kubectl create, the Pod information can be viewed with:
#kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
redis-php   2/2     Running   0          10m
The READY column shows 2/2, indicating that both containers in the Pod are running successfully.
Viewing the details of the Pod shows the definition and creation events of both containers.
#kubectl describe redis-php
the server doesn't have a resource type "redis-php"
#kubectl describe pod redis-php
Name:         redis-php
Namespace:    default
Node:         kubernetes-minion/10.0.0.23
Start Time:   Wed, 12 Apr 2017 09:14:58 +0800
Labels:       name=redis-php
Status:       Running
IP:           10.1.24.2
Controllers:  <none>
Containers:
  nginx:
    Container ID:  docker://d05b743c200dff7cf3b60b7373a45666be2ebb48b7b8b31ce0ece9be4546ce77
    Image:         nginx
    Image ID:      docker-pullable://docker.io/nginx@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582
    Port:          80/TCP
    State:         Running
      Started:     Wed, 12 Apr 2017 09:19:31 +0800
3. Static Pods
Static Pods are managed by the kubelet and exist only on specific nodes. They cannot be managed through the API Server, cannot be associated with a ReplicationController, Deployment or DaemonSet, and the kubelet cannot perform health checks on them. Static Pods are always created by the kubelet and always run on the node where that kubelet resides.

There are two ways to create a static Pod: via a configuration file, or via HTTP.

1) Configuration file. First set the kubelet startup parameter --config, specifying the directory the kubelet should monitor; the kubelet periodically scans this directory and creates Pods from the .yaml or .json files found there. For example, configure the startup parameter --config=/etc/kubelet.d/ and restart the kubelet service, then place Pod definition files in /etc/kubelet.d/. Afterwards, docker ps on the host, or kubectl on the Kubernetes Master, will show the specified container in the list. Because a static Pod cannot be managed through the API Server, when the Master node attempts to delete it, the Pod merely goes into the Pending state and is not actually deleted:
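For example, a minimal static Pod definition dropped into the watched directory (the file name and image are illustrative, assuming the --config=/etc/kubelet.d/ setting described above):

```yaml
# /etc/kubelet.d/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    name: static-web
spec:
  containers:
  - name: static-web
    image: nginx
    ports:
    - containerPort: 80
```

The kubelet appends its node name to the Pod name, which is why the Pod appears on the Master as static-web-node1.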
#kubectl delete pod static-web-node1
pod "static-web-node1" deleted
#kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
static-web-node1   0/1     Pending   0          1s
Deleting the Pod can only be performed on the node where it runs, by removing its definition .yaml file from the /etc/kubelet.d/ directory:
#rm -f /etc/kubelet.d/static-web.yaml
#docker ps
4. Sharing volumes between Pod containers
Volume types include: emptyDir, hostPath, gcePersistentDisk, awsElasticBlockStore, gitRepo, secret, nfs, iscsi, glusterfs, persistentVolumeClaim, rbd, flexVolume, cinder, cephfs, flocker, downwardAPI, fc, azureFile, configMap, vsphereVolume and so on. Multiple volumes can be defined, and each volume's name must be unique. Multiple containers in the same Pod can share Pod-level storage volumes. A volume can be defined as any of these types; each container mounts it independently, at whatever directory it requires inside the container. For example:
Suppose the Pod contains two containers, Tomcat and BusyBox, and a volume named app-logs is set at the Pod level: Tomcat writes log files into it and BusyBox reads them. The configuration file is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
  labels:
    name: volume-pod
spec:
  containers:
  - name: tomcat
    image: tomcat
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: app-logs
      mountPath: /usr/local/tomcat/logs
  - name: busybox
    image: busybox
    command: ["sh", "-c", "tail -f /logs/catalina*.log"]
    volumeMounts:        # mount the shared volume so busybox can read the logs
    - name: app-logs
      mountPath: /logs
  volumes:
  - name: app-logs
    emptyDir: {}
The BusyBox container's output can be viewed with kubectl logs:
#kubectl logs volume-pod -c busybox
The log files generated by the Tomcat container can also be viewed from inside the container:
#kubectl exec -ti volume-pod -c tomcat -- ls /usr/local/tomcat/logs
5. Pod configuration management ....

6. Pod lifecycle and restart policy
A Pod passes through a variety of states over its lifecycle, and being familiar with them helps in understanding how to set the Pod's scheduling and restart policies. The Pod status can be Pending, Running, Succeeded, Failed or Unknown.

The Pod restart policy (restartPolicy) applies to all containers within the Pod, and restarts are decided and performed only by the kubelet on the node where the Pod resides. When a container exits abnormally or a health check fails, the kubelet acts according to the restartPolicy setting. The restart policy can be Always, OnFailure or Never; the default is Always. The kubelet's interval between restarts of a failed container grows as the sync-frequency multiplied by successive powers of two (1x, 2x, 4x, 8x and so on), capped at a maximum delay of 5 minutes, and is reset 10 minutes after a successful restart.

The restart policy is closely tied to the controlling mechanism. The controllers that can currently manage a Pod include the ReplicationController, Job and DaemonSet, plus direct management by the kubelet (static Pods). Each controller's requirements for the Pod restart policy are as follows:
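The doubling back-off described above can be sketched numerically; this is an illustrative model of the behavior as described, not the kubelet's actual source, and it assumes a sync-frequency of 10 seconds:

```python
# Illustrative model of the kubelet restart back-off described above:
# the delay doubles with each consecutive failure, capped at 5 minutes.
# (Assumption: base delay = sync-frequency = 10 seconds.)

MAX_DELAY = 300  # seconds (5 minutes)

def restart_delay(consecutive_failures, base=10):
    """Delay in seconds before the next restart attempt."""
    delay = base * (2 ** consecutive_failures)
    return min(delay, MAX_DELAY)

delays = [restart_delay(n) for n in range(6)]
print(delays)  # → [10, 20, 40, 80, 160, 300]
```

After 10 minutes of successful running, the failure count (and hence the delay) is reset.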
RC and DaemonSet: must be set to Always, to keep the containers running continuously.
Job: OnFailure or Never, to ensure containers are not restarted after they complete.
kubelet (static Pods): restarts the Pod whenever it fails, regardless of the restartPolicy value, and performs no health checks on the Pod.
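As an illustration of a per-Pod restart policy, here is a sketch of a Pod whose container should be retried only on failure; the Pod name and command are illustrative, not from the article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-task            # illustrative name
spec:
  restartPolicy: OnFailure    # restart only on a non-zero exit code
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "echo done"]   # illustrative one-shot command
```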
7. Pod health checks
Pod health can be checked with two types of probes: livenessProbe and readinessProbe.
livenessProbe: determines whether the container is alive (Running state). If the livenessProbe detects that the container is unhealthy, the kubelet kills the container and acts according to the container's restart policy.
readinessProbe: determines whether the container has finished starting (Ready state) and can accept requests. If the readinessProbe fails, the Pod's status is modified and the Endpoint controller removes the Endpoint of the Pod containing the container from the Service's Endpoint list.
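The article only gives livenessProbe examples, so here is a hedged sketch of a readinessProbe for comparison; the Pod name, path and port are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-readiness    # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:           # Pod is removed from Service endpoints while this fails
      httpGet:
        path: /               # illustrative path
        port: 80
      initialDelaySeconds: 5
      timeoutSeconds: 1
```

Unlike a failing livenessProbe, a failing readinessProbe does not restart the container; it only withholds traffic.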
The kubelet periodically executes the livenessProbe to diagnose the container's health. The livenessProbe can be implemented in three ways.
(1) ExecAction: executes a command inside the container; if the command's exit code is 0, the container is healthy. Example:
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
  labels:
    name: liveness
spec:
  containers:
  - name: tomcat
    image: grc.io/google_containers/tomcat
    args:
    - /bin/sh
    - -c
    - echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/health
      initialDelaySeconds: 15
      timeoutSeconds: 1
(2) TCPSocketAction: performs a TCP check against the container's IP address and port; if a TCP connection can be established, the container is healthy. Example:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 1
(3) HTTPGetAction: invokes an HTTP GET against the container's IP address, port and path; if the response status code is greater than or equal to 200 and less than 400, the container is considered healthy. Example:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:
      httpGet:
        path: /_status/healthz
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 1
For every probe type, two parameters need to be set, initialDelaySeconds and timeoutSeconds, with the following meanings:
initialDelaySeconds: the delay before the first health check after the container starts, in seconds.
timeoutSeconds: the timeout, in seconds, that a health check waits for a response after sending its request. When a timeout occurs, the container is considered unable to serve requests and will be restarted.
8. Pod scheduling in depth
In the Kubernetes system, a Pod is in most scenarios only the carrier for containers; the scheduling and automatic management of Pods are usually accomplished through objects such as RC, Deployment, DaemonSet and Job.

8.1 RC and Deployment: automatic scheduling
One of an RC's main functions is to automatically deploy multiple replicas of a containerized application and continuously monitor the replica count, always maintaining the user-specified number of replicas within the cluster. For scheduling, besides letting the system's built-in scheduling algorithm pick a suitable node, you can also use nodeSelector or nodeAffinity in the Pod definition to specify conditions a node must satisfy.

1) nodeSelector: directed scheduling
The Kubernetes Master's Scheduler service (the kube-scheduler process) is responsible for Pod scheduling. Through a series of complex algorithms it computes the optimal target node for each Pod, so we usually cannot know in advance which node a Pod will end up on. When we actually need to schedule a Pod onto nodes we specify, this can be achieved by matching a node's labels against the Pod's nodeSelector attribute.

(1) First, label the target node with the kubectl label command:

kubectl label nodes <node-name> <label-key>=<label-value>

Example:
#kubectl label nodes k8s-node-1 zone=north
(2) Then add a nodeSelector setting to the Pod definition. Example:
apiVersion: v1
kind: ReplicationController    # the definition uses replicas/selector/template, so it is an RC
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  replicas: 1
  selector:
    name: redis-master
  template:
    metadata:
      labels:
        name: redis-master
    spec:
      containers:
      - name: redis-master
        image: kubeguide/redis-master
        ports:
        - containerPort: 6379
      nodeSelector:
        zone: north
Run kubectl create -f to create the Pod; the Scheduler will then place it on a node carrying the zone=north label. If more than one node has that label, the scheduling algorithm picks one of them. Note that if no node in the cluster carries the label, the Pod cannot be scheduled at all.

2) nodeAffinity: affinity scheduling
nodeAffinity is the new-generation scheduling mechanism intended to eventually replace nodeSelector. Where nodeSelector matches a node's labels exactly, nodeAffinity adds operators such as In, NotIn, Exists, DoesNotExist, Gt and Lt for selecting nodes, making scheduling considerably more flexible.

8.2 DaemonSet: scheduling for specific scenarios
A DaemonSet manages Pods of which exactly one replica instance runs on each node in the cluster.
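A sketch of the nodeAffinity operators mentioned above, using the spec.affinity field of the current API (the label key and value are illustrative; early Kubernetes versions expressed affinity via annotations instead):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-affinity     # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In      # also NotIn, Exists, DoesNotExist, Gt, Lt
            values:
            - north
  containers:
  - name: nginx
    image: nginx
```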
This usage is suitable for applications with the following requirements:
Run a storage daemon, such as GlusterFS or Ceph, on each node.
Run a log-collection program, such as Fluentd or Logstash, on each node.
Run a monitoring program on each node that collects that node's performance data.
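A hedged sketch of a DaemonSet for the log-collection case; the name and image are illustrative, and the apiVersion reflects the era this article covers (newer clusters use apps/v1, which also requires a spec.selector):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-logging       # illustrative name
spec:
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd # illustrative image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log      # read the host's logs from each node
```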
A DaemonSet's Pod scheduling is similar to an RC's: besides the built-in algorithm scheduling onto each node, you can use nodeSelector or nodeAffinity in the Pod definition to restrict scheduling to the range of nodes that satisfy the given conditions.

8.3 Batch processing scheduling

9. Scaling Pods up and down
In real production environments we often encounter scenarios where a service needs to be scaled up, or where the number of service instances must be reduced because resource demands have shrunk. The scale mechanism provided by Kubernetes handles both. Taking the redis-slave RC as an example, with an initial replica count of 2, the number of Pod replicas can be re-adjusted with the kubectl scale command:
#kubectl scale rc redis-slave --replicas=3
replicationcontroller "redis-slave" scaled
#kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
redis-slave-1sf23   1/1     Running   0          1h
redis-slave-54wfk   1/1     Running   0          1h
redis-slave-3da5y   1/1     Running   0          1h
Besides manual scaling via the kubectl scale command, newer versions add a Horizontal Pod Autoscaler (HPA) controller, which scales Pods up and down automatically based on CPU utilization. The controller polls at an interval defined by the kube-controller-manager startup parameter --horizontal-pod-autoscaler-sync-period on the Master (default 30 seconds), periodically measuring the CPU utilization of the target Pods and adjusting the replica count of the ReplicationController or Deployment to match the user-defined average Pod CPU utilization. Pod CPU usage data comes from the Heapster component, so Heapster must be installed beforehand.

To be continued ....
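For example, an HPA can be attached to the redis-slave RC described above; the target values here are illustrative, and autoscaling/v1 is the API group of the era this article covers:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: redis-slave
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: redis-slave
  minReplicas: 1
  maxReplicas: 10                      # illustrative bounds
  targetCPUUtilizationPercentage: 50   # illustrative target
```

Equivalently, the imperative form: kubectl autoscale rc redis-slave --min=1 --max=10 --cpu-percent=50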