Kubernetes Pods in Detail


1. Pod definition file

apiVersion: v1
kind: Pod
metadata:
  name: string
  namespace: string
  labels:
  - name: string
  annotations:
  - name: string
spec:
  containers:
  - name: string
    image: string
    imagePullPolicy: [Always | Never | IfNotPresent]
    command: [string]
    args: [string]
    workingDir: string
    volumeMounts:
    - name: string
      mountPath: string
      readOnly: boolean
    ports:
    - name: string
      containerPort: int
      hostPort: int
      protocol: string
    env:
    - name: string
      value: string
    resources:
      limits:
        cpu: string
        memory: string
      requests:
        cpu: string
        memory: string
    livenessProbe:
      exec:
        command: [string]
      httpGet:
        path: string
        port: int
        host: string
        scheme: string
        httpHeaders:
        - name: string
          value: string
      tcpSocket:
        port: int
      initialDelaySeconds: number
      timeoutSeconds: number
      periodSeconds: number
      successThreshold: 0
      failureThreshold: 0
    securityContext:
      privileged: false
  restartPolicy: [Always | Never | OnFailure]
  nodeSelector: object
  imagePullSecrets:
  - name: string
  hostNetwork: false
  volumes:
  - name: string
    emptyDir: {}
    hostPath:
      path: string
    secret:
      secretName: string
      items:
      - key: string
        path: string
    configMap:
      name: string
      items:
      - key: string
        path: string

2. Basic usage of pod
2.1. Notes
A pod is in fact a collection of containers, and Kubernetes places requirements on how a container runs:

The container's main program must run in the foreground, not in the background. Applications can be adapted to run in the foreground: a Go program can run its binary directly; a Java program can run its main class; a Tomcat program can use a startup script. Alternatively, use a process management tool such as Supervisor: supervisord runs in the foreground and manages the application in the background (see the Supervisord documentation).

When multiple applications are tightly coupled, they can be placed together in one pod, and the containers in the same pod can reach one another through localhost (a pod can be thought of as a virtual machine with a shared network and shared storage volumes).

2.2. Pod-related commands
Operation | Command | Description
Create | kubectl create -f frontend-localredis-pod.yaml |
Query pod running status | kubectl get pods --namespace=<NAMESPACE> |
Query pod details | kubectl describe pod <POD_NAME> --namespace=<NAMESPACE> | Frequently used to troubleshoot problems and to view events
Delete | kubectl delete pod <POD_NAME>; kubectl delete pod --all |
Update | kubectl replace -f pod.yaml |
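The frontend-localredis-pod.yaml referenced above is not reproduced in this text; a minimal sketch of what such a two-container pod might look like (the image names and ports are illustrative assumptions, not from the original):

apiVersion: v1
kind: Pod
metadata:
  name: frontend-localredis-pod
  labels:
    name: frontend-localredis-pod
spec:
  containers:
  - name: frontend                  # hypothetical PHP frontend container
    image: kubeguide/guestbook-php-frontend
    ports:
    - containerPort: 80
  - name: localredis                # co-located redis, reachable from the frontend via localhost
    image: kubeguide/redis-master
    ports:
    - containerPort: 6379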
3. Static pod

Static pods are managed by the kubelet and exist only on specific nodes. They cannot be managed through the API server, cannot be associated with a ReplicationController, Deployment, or DaemonSet, and the kubelet cannot perform health checks on them.




A static pod is always created by the kubelet and always runs on the node where that kubelet resides.




How to create a static pod:
3.1. By configuration file





You need to set the kubelet startup parameter --config to specify the directory the kubelet should monitor; the kubelet periodically scans this directory and creates pods from the .yaml or .json files it finds there. A static pod cannot be deleted through the API server (attempting to delete it leaves it in Pending state); to delete one, remove its .yaml or .json file from the monitored directory.





For example:





The configuration directory is /etc/kubelet.d/, the startup parameter is --config=/etc/kubelet.d/, and static-web.yaml is placed in that directory.



apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    name: static-web
spec:
  containers:
  - name: static-web
    image: nginx
    ports:
    - name: web
      containerPort: 80

4. Pod containers sharing a volume


Multiple containers in the same pod can share a pod-level storage volume. A volume can be defined as any of several types; the containers mount it separately, each mapping the pod's volume onto whatever directory it needs inside the container.





For example: the pod-level volume app-logs is used by a Tomcat container to write log files and by a BusyBox container to read them.

pod-volumes-applogs.yaml



apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
spec:
  containers:
  - name: tomcat
    image: tomcat
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: app-logs
      mountPath: /usr/local/tomcat/logs
  - name: busybox
    image: busybox
    command: ["sh", "-c", "tail -f /logs/catalina*.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  volumes:
  - name: app-logs
    emptyDir: {}



To view the logs:

kubectl logs <pod_name> -c <container_name>
kubectl exec -it <pod_name> -c <container_name> -- tail /usr/local/tomcat/logs/catalina.xx.log

5. Configuration management of pods





Kubernetes v1.2 provides a unified cluster configuration management solution: ConfigMap.
5.1. ConfigMap: configuration management for container applications





Usage scenarios: injected as environment variables inside a container; used to set startup parameters for the container's start command (via environment variables); mounted as a file or directory inside the container through a volume.





A ConfigMap is stored in the Kubernetes system as one or more key:value pairs; a value can be the value of a variable (for example, apploglevel=info) or the contents of a complete configuration file (for example, the full contents of server.xml).
5.2. Creating a ConfigMap
5.2.1. From a YAML file





cm-appvars.yaml



apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-appvars
data:
  apploglevel: info
  appdatadir: /var/data



Common commands





kubectl create -f cm-appvars.yaml
kubectl get configmap
kubectl describe configmap cm-appvars
kubectl get configmap cm-appvars -o yaml

5.2.2. From the kubectl command line





You can create a ConfigMap with kubectl create configmap, using the --from-file or --from-literal parameters to specify its content; multiple parameters can be given on a single command line.





1) Create from a file with the --from-file parameter. You may specify the key name yourself, and you can create a ConfigMap containing multiple keys in a single command:





kubectl create configmap NAME --from-file=[key=]source --from-file=[key=]source





2) Create from a directory with the --from-file parameter: each file name in the directory becomes a key, and each file's contents become the corresponding value:





kubectl create configmap NAME --from-file=config-files-dir





3) Create from literal text with the --from-literal parameter, which turns the given key=value pairs directly into ConfigMap content:





kubectl create configmap NAME --from-literal=key1=value1 --from-literal=key2=value2
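For instance, assuming a local server.xml file and two literal variables (the ConfigMap names here are illustrative):

kubectl create configmap cm-server-config --from-file=server.xml
kubectl create configmap cm-appenv --from-literal=loglevel=info --from-literal=appdatadir=/var/data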





Container applications can use a ConfigMap in two ways: obtain its content through environment variables, or mount its content as a file or directory inside the container through a volume (a sketch of the volume approach follows the example below).
5.2.3. Via environment variables





The ConfigMap's YAML file: cm-appvars.yaml



apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-appvars
data:
  apploglevel: info
  appdatadir: /var/data



The pod's YAML file: cm-test-pod.yaml



apiVersion: v1
kind: Pod
metadata:
  name: cm-test-pod
spec:
  containers:
  - name: cm-test
    image: busybox
    command: ["/bin/sh", "-c", "env | grep APP"]
    env:
    - name: APPLOGLEVEL
      valueFrom:
        configMapKeyRef:
          name: cm-appvars
          key: apploglevel
    - name: APPDATADIR
      valueFrom:
        configMapKeyRef:
          name: cm-appvars
          key: appdatadir



Commands to create and verify it:

kubectl create -f cm-test-pod.yaml
kubectl get pods --show-all
kubectl logs cm-test-pod
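The volume-mount approach is not shown in the original example; here is a minimal sketch that mounts the cm-appvars ConfigMap above as files (the pod name and mount path are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: cm-test-volume-pod
spec:
  containers:
  - name: cm-test-volume
    image: busybox
    command: ["/bin/sh", "-c", "ls /configfiles"]   # each key appears as a file in the mount directory
    volumeMounts:
    - name: appvars
      mountPath: /configfiles
  volumes:
  - name: appvars
    configMap:
      name: cm-appvars          # the ConfigMap defined earlier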
5.3. Restrictions on the use of ConfigMap

A ConfigMap must be created before the pod that references it. A ConfigMap can be defined as belonging to a namespace; only pods in the same namespace can reference it. The kubelet only supports ConfigMaps for pods managed through the API server, so a static pod cannot reference one. When a pod mounts a ConfigMap, the container can only mount it as a directory, not as a single file.

6. Pod life cycle
6.1. The state of the pod



Status | Description
Pending | The API server has created the pod, but the images of one or more of its containers have not yet been created (this includes the image download process)
Running | All containers in the pod have been created, and at least one container is running, starting, or restarting
Succeeded | All containers in the pod have exited successfully and will not be restarted
Failed | All containers in the pod have exited, and at least one container exited with a failure
Unknown | The pod's state cannot be obtained for some reason, for example a network problem
6.2. Pod restart policy
Policy | Description
Always | The kubelet automatically restarts the container whenever it terminates
OnFailure | The kubelet automatically restarts the container when it terminates with a non-zero exit code
Never | The kubelet never restarts the container, regardless of its state
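restartPolicy sits at the pod level, beside the containers list; a minimal sketch (the pod name and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: restart-demo
spec:
  restartPolicy: OnFailure          # one of Always | OnFailure | Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 1"] # non-zero exit code, so the kubelet restarts it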


Notes:





The controllers that can manage pods are ReplicationController, Job, DaemonSet, and the kubelet (for static pods). RC and DaemonSet: restartPolicy must be Always, to guarantee the container keeps running. Job: OnFailure or Never, to guarantee the container is not restarted after it finishes. kubelet: restarts the pod whenever it fails, regardless of the restartPolicy setting, and performs no health checks on it.
6.3. Common state transition scenarios



Containers in pod | Current state | Event | Result (Always) | Result (OnFailure) | Result (Never)
One container | Running | The container exits successfully | Running | Succeeded | Succeeded
One container | Running | The container exits with a failure | Running | Running | Failed
Two containers | Running | One container exits with a failure | Running | Running | Running
Two containers | Running | A container is killed by the OOM killer | Running | Running | Failed
7. Pod Health Check


The health of a pod is checked with two kinds of probes: LivenessProbe and ReadinessProbe.





A LivenessProbe determines whether a container is alive (in Running state). If the probe detects that the container is unhealthy, the kubelet kills the container and handles it according to the container's restart policy. If the container does not define a LivenessProbe, the kubelet treats the probe as always returning Success.





A ReadinessProbe determines whether the container has finished starting (is Ready) and can accept requests. If the probe fails, the pod's status is modified, and the Endpoint Controller removes the Endpoint of the pod containing the container from the Service's Endpoints.





The kubelet executes LivenessProbes periodically to determine the container's health.





LivenessProbe parameters: initialDelaySeconds: how long to wait, in seconds, after the container starts before the first health check. timeoutSeconds: how long, in seconds, the health-check request waits for a response; on timeout the kubelet considers the container unhealthy and restarts it.





A LivenessProbe can be implemented in three ways:

1) ExecAction: executes a command inside the container; the container is considered healthy if the command exits with status 0.



apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: gcr.io/google_containers/busybox
    args:
    - /bin/sh
    - -c
    - echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/health
      initialDelaySeconds: 15
      timeoutSeconds: 1



2) TCPSocketAction: performs a TCP check against the container's IP address and a port; the container is considered healthy if a TCP connection can be established.



apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1



3) HTTPGetAction: issues an HTTP GET request to the container's IP address, port, and path; the container is considered healthy if the response status code is at least 200 and below 400.



apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /_status/healthz
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1

8. Pod scheduling


In a Kubernetes cluster, the pod (container) is the carrier of the application; scheduling and self-healing of pods are generally accomplished through objects such as RC, Deployment, DaemonSet, and Job.
8.1. RC, Deployment: automatic scheduling





The function of an RC is to keep the specified number of pods running in the cluster at all times.





The main scheduling strategies are: the system's built-in scheduling algorithm (picks the optimal node), nodeSelector (directed scheduling), and nodeAffinity (affinity scheduling).
8.1.1. nodeSelector (directed scheduling)





The Kubernetes kube-scheduler is responsible for pod scheduling; internally, a series of algorithms computes the best target node. If you need to schedule a pod onto a specific node, you can do so by matching a node's label against the pod's nodeSelector property.





1. Label the node: kubectl label nodes {node-name} {label-key}={label-value}





2. Set the pod's selector:
nodeSelector:
  {label-key}: {label-value}
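As a concrete illustration (the node name and zone label are assumptions for the example):

kubectl label nodes k8s-node-1 zone=north

apiVersion: v1
kind: Pod
metadata:
  name: redis-master
spec:
  containers:
  - name: redis-master
    image: kubeguide/redis-master
    ports:
    - containerPort: 6379
  nodeSelector:
    zone: north          # only nodes labeled zone=north are eligible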





If the same label is given to multiple nodes, the scheduler selects an available node from that set according to its scheduling algorithm.





If no node has a label matching the pod's nodeSelector, the pod cannot be scheduled.





Usage scenarios for node labels:





Giving different labels to different classes of nodes in the cluster controls the range of nodes an application can run on, for example role=frontend, role=backend, role=database.
8.1.2. nodeAffinity (affinity scheduling)





nodeAffinity is the node affinity scheduling strategy. Where nodeSelector requires an exact match, nodeAffinity matches a range of conditions, selecting nodes with operators such as In, NotIn, Exists, DoesNotExist, Gt (greater than), and Lt (less than), which makes scheduling more flexible.

requiredDuringSchedulingRequiredDuringExecution: similar to nodeSelector, but when a node stops satisfying the condition, the system removes pods previously scheduled onto it.
requiredDuringSchedulingIgnoredDuringExecution: like the previous one, except that when a node stops satisfying the condition, the system does not necessarily remove pods previously scheduled onto it.
preferredDuringSchedulingIgnoredDuringExecution: among the nodes that satisfy the scheduling condition, specifies which should be preferred; when a node stops satisfying the condition, the system does not necessarily remove pods previously scheduled onto it.
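As an illustration, a sketch using the affinity fields as they later stabilized in the pod spec (the 1.2-era examples used an alpha, annotation-based syntax instead; the label key and values here are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone               # node label to match
            operator: In            # zone must be one of the listed values
            values:
            - north
            - east
  containers:
  - name: nginx
    image: nginx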





If both nodeSelector and nodeAffinity are set, both must be satisfied before the pod can be scheduled.
8.1.3. DaemonSet: scheduling for specific scenarios





DaemonSet is a resource object introduced in Kubernetes 1.2 that manages exactly one replica of a pod on every node in the cluster.

This usage fits scenarios such as: running a GlusterFS or Ceph storage daemon on every node; running a log-collection program, such as Fluentd or Logstash, on every node; running a monitoring program that collects the node's performance data, such as the Prometheus node exporter, collectd, the New Relic agent, or Ganglia gmond. A sketch of the log-collection case follows.
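A minimal sketch for the log-collection scenario (the image name and mount path are illustrative; extensions/v1beta1 was the DaemonSet API group in the 1.2 era):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-cloud-logging
spec:
  template:
    metadata:
      labels:
        app: fluentd-cloud-logging
    spec:
      containers:
      - name: fluentd
        image: fluentd
        volumeMounts:
        - name: varlog
          mountPath: /var/log       # read the node's log directory
      volumes:
      - name: varlog
        hostPath:
          path: /var/log            # one collector pod per node sees its own node's logs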





A DaemonSet's pod scheduling strategy is similar to an RC's: besides the system's built-in algorithm that schedules onto each node, nodeSelector or nodeAffinity can restrict scheduling to the nodes that satisfy given conditions.
8.1.4. Job: batch scheduling





Kubernetes supports batch-type applications starting with version 1.2: you can define and start a batch task with the Job resource object. A batch task typically starts multiple processes in parallel (or serially) to work through a set of work items, and the whole task ends once they have all been processed.
8.1.4.1. Three patterns of batch processing

Depending on how the task is implemented, batch processing falls into the following patterns:





Job Template Expansion mode:
One Job object corresponds to one work item, so N work items produce N separate Jobs. This fits the scenario where each work item carries a large amount of data and the number of work items is small, for example 10 files (work items) of 100 GB each.





Queue with Pod per Work Item mode:
A task queue stores the work items, and one Job object acts as their consumer; the Job starts N pods, one pod per work item.





Queue with Variable Pod Count mode:
A task queue stores the work items, and one Job object acts as their consumer; the Job starts N pods, but the number of pods is variable.
8.1.4.2. Three types of Job





1) non-parallel Jobs





Usually such a Job starts only one pod; unless the pod fails and is restarted, the Job ends when the pod ends normally.





2) Parallel Jobs with a fixed completion count





A parallel Job starts multiple pods. The Job's .spec.completions parameter must be set to a positive number, and the Job ends when the number of successfully completed pods reaches that value.
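A minimal sketch of such a Job (the name and command are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: process-items
spec:
  completions: 5                    # the Job ends after 5 pods finish successfully
  parallelism: 2                    # at most 2 pods run at the same time
  template:
    metadata:
      name: process-items
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item; sleep 5"]
      restartPolicy: OnFailure      # a Job's pods must use OnFailure or Never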





3) Parallel Jobs with a work queue





A work-queue parallel Job requires a separate queue in which the work items are stored, and the Job's .spec.completions parameter must not be set.





Characteristics of such a Job: each pod can independently determine whether there are still work items to process; if a pod ends normally, the Job does not start a new pod; once any pod has ended successfully, no other pod should still be working on the queue, and all of them should be completing or exiting; if all the pods have ended and at least one of them ended successfully, the whole Job is successful.

9. Pod Scaling





In Kubernetes, an RC keeps the specified number of pod instances running in the cluster, and the RC's scale mechanism is used to expand or shrink (scale) the pods.
9.1. Manual scaling (scale)



kubectl scale rc redis-slave --replicas=3

9.2. Automatic scaling (HPA)


The Horizontal Pod Autoscaler (HPA) controller automatically scales pods based on CPU usage. At an interval defined by the kube-controller-manager startup parameter --horizontal-pod-autoscaler-sync-period (default 30 seconds), the HPA controller monitors the CPU usage of the target pods and, when the conditions are met, adjusts the number of pod replicas in the ReplicationController or Deployment to match the user-defined average pod CPU utilization. Pod CPU utilization comes from the Heapster component, so Heapster must be installed.





You can create an HPA either quickly with the kubectl autoscale command or through a YAML configuration file. The target RC or Deployment object must already exist, and its pods must define a resources.requests.cpu value so that Heapster can collect the pods' CPU usage.
9.2.1. Created with kubectl autoscale





For example:





php-apache-rc.yaml



apiVersion: v1
kind: ReplicationController
metadata:
  name: php-apache
spec:
  replicas: 1
  template:
    metadata:
      name: php-apache
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: gcr.io/google_containers/hpa-example
        resources:
          requests:
            cpu: 200m
        ports:
        - containerPort: 80



Create the php-apache RC:

kubectl create -f php-apache-rc.yaml



php-apache-svc.yaml



apiVersion: v1
kind: Service
metadata:
  name: php-apache
spec:
  ports:
  - port: 80
  selector:
    app: php-apache



Create the php-apache Service:

kubectl create -f php-apache-svc.yaml



Create the HPA controller:

kubectl autoscale rc php-apache --min=1 --max=10 --cpu-percent=50

9.2.2. Created through a YAML configuration file

hpa-php-apache.yaml



apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50



Create the HPA:

kubectl create -f hpa-php-apache.yaml

View the HPA:

kubectl get hpa

10. Pod rolling upgrades


A rolling upgrade in Kubernetes is performed with the kubectl rolling-update command, which creates a new RC (in the same namespace as the old RC) and then gradually reduces the old RC's pod replica count to 0 while raising the new RC's pod replica count from 0 to the target value; throughout the upgrade, the total number of pod replicas (old plus new) stays at the originally desired value.
10.1. Through a configuration file





redis-master-controller-v2.yaml



apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master-v2
  labels:
    name: redis-master
    version: v2
spec:
  replicas: 1
  selector:
    name: redis-master
    version: v2
  template:
    metadata:
      labels:
        name: redis-master
        version: v2
    spec:
      containers:
      - name: master
        image: kubeguide/redis-master:2.0
        ports:
        - containerPort: 6379



Note: the new RC's name cannot be the same as the old RC's, and the selector must include at least one label whose value differs from the old RC's labels, so that the new RC can be identified; in this example, the version label is added.





Run kubectl rolling-update:

kubectl rolling-update redis-master -f redis-master-controller-v2.yaml

10.2. Through the kubectl rolling-update command
kubectl rolling-update redis-master --image=redis-master:2.0



Unlike the configuration-file approach, the result is that the old RC is deleted and the new RC keeps the old RC's name.
10.3. Upgrade rollback





Adding the --rollback parameter to kubectl rolling-update performs a rollback:

kubectl rolling-update redis-master --image=kubeguide/redis-master:2.0 --rollback



Reference: "The Kubernetes Authority Guide".
