Analysis of Pod State in Kubernetes
A pod passes through several distinct phases from its creation to its eventual completion. In the Kubernetes source code these phases are represented by PodPhase:
PodPending PodPhase = "Pending"
PodRunning PodPhase = "Running"
PodSucceeded PodPhase = "Succeeded"
PodFailed PodPhase = "Failed"
PodUnknown PodPhase = "Unknown"
The creation of a pod is usually accompanied by a series of watch events, and Kubernetes defines only four event types in total:
Added EventType = "ADDED"
Modified EventType = "MODIFIED"
Deleted EventType = "DELETED"
Error EventType = "ERROR"
Pending: the request to create the pod has been accepted by Kubernetes, but its containers have not started successfully yet. The pod may be in any of four stages: writing data to etcd, scheduling, pulling images, or starting containers. Pending is usually accompanied by ADDED and MODIFIED events.
Running: the pod has been bound to a node and all of its containers have been started; at least one container is running or is being restarted.
Succeeded: all containers in the pod have exited normally, and Kubernetes will never restart them automatically; this is typical for pods created by a Job.
Failed: all containers in the pod have terminated, and at least one terminated in failure (exited with a non-zero exit code or was stopped by the system).
Unknown: the pod's state cannot be obtained for some reason, usually due to a communication error with the pod's host.
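These phases and event types can be observed directly with client-go. Below is a minimal sketch, assuming a recent client-go release (older releases omit the context argument from Watch) and a standard kubeconfig; it prints the event type and pod phase for every change in the default namespace:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a standard kubeconfig; in-cluster config works the same way.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Watch pods in the default namespace; the result channel delivers the
	// ADDED/MODIFIED/DELETED/ERROR events described above.
	w, err := clientset.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for event := range w.ResultChan() {
		pod, ok := event.Object.(*corev1.Pod)
		if !ok {
			continue // e.g. a *metav1.Status object on ERROR events
		}
		fmt.Printf("event=%s pod=%s phase=%s\n", event.Type, pod.Name, pod.Status.Phase)
	}
}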
The pod phases above are only a coarse-grained state; finer-grained condition information is also tracked. In the source code it looks like this:
PodScheduled PodConditionType = "PodScheduled"
PodReady PodConditionType = "Ready"
PodInitialized PodConditionType = "Initialized"
PodReasonUnschedulable = "Unschedulable"
type PodCondition struct {
	Type               PodConditionType `json:"type" protobuf:"bytes,1,opt,name=type,casttype=PodConditionType"`
	Status             ConditionStatus  `json:"status" protobuf:"bytes,2,opt,name=status,casttype=ConditionStatus"`
	LastProbeTime      metav1.Time      `json:"lastProbeTime,omitempty" protobuf:"bytes,3,opt,name=lastProbeTime"`
	LastTransitionTime metav1.Time      `json:"lastTransitionTime,omitempty" protobuf:"bytes,4,opt,name=lastTransitionTime"`
	Reason             string           `json:"reason,omitempty" protobuf:"bytes,5,opt,name=reason"`
	Message            string           `json:"message,omitempty" protobuf:"bytes,6,opt,name=message"`
}
PodScheduled: the pod is in scheduling. When scheduling has just started, HostIP is not yet bound; once scheduling completes, the pod is bound to a suitable node, HostIP is set, and the data in etcd is updated.
Initialized: all init containers in the pod have started.
Ready: the containers in the pod can serve requests.
Unschedulable: the pod cannot be scheduled because there is no suitable node.
The ConditionStatus field in PodCondition indicates whether the pod is currently in a given stage (PodScheduled, Ready, Initialized, Unschedulable): "True" means it is, "False" means it is not.
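Acting on these conditions in code usually means scanning pod.Status.Conditions. Here is a small sketch; isPodConditionTrue is an illustrative helper name (not a k8s API function), and corev1 stands for k8s.io/api/core/v1:

// Report whether the given condition type (PodScheduled, Ready, Initialized, ...)
// is currently "True" for a pod. Illustrative helper, not part of the k8s API.
func isPodConditionTrue(pod *corev1.Pod, condType corev1.PodConditionType) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == condType {
			return cond.Status == corev1.ConditionTrue
		}
	}
	// A missing condition is treated as "not in that stage".
	return false
}

For example, isPodConditionTrue(pod, corev1.PodReady) answers whether the pod can currently serve requests, which is exactly the Ready condition described above.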
The following walks through the events triggered, and the changes in pod state, from a pod's creation until it is running, by creating a concrete pod:
kubectl apply -f busybox.yaml
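The busybox.yaml itself is not reproduced here; any manifest with a single busybox container, no init containers, and no resource requests (hence the "BestEffort" qosClass in the events below) yields the same sequence. A hypothetical client-go equivalent of this apply, with an illustrative command to keep the container running (imports as in the watch example above):

// Hypothetical pod spec matching this walkthrough: one busybox container,
// no init containers, no resource requests (so the QoS class is BestEffort).
pod := &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "busybox"},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:    "busybox",
			Image:   "busybox",
			Command: []string{"sleep", "3600"}, // illustrative; keeps the container alive
		}},
	},
}
_, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})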
Step one: the pod's data is written to etcd.
Event type: ADDED
Event object:
{
    "phase": "Pending",
    "qosClass": "BestEffort"
}
Step two: scheduling has started, but the pod has not yet been assigned to a specific node. Note that PodScheduled now has status "True":
Event type: MODIFIED
{
    "phase": "Pending",
    "conditions": [
        {
            "type": "PodScheduled",
            "status": "True",
            "lastProbeTime": null,
            "lastTransitionTime": "2017-06-06T07:57:06Z"
        }
    ],
    "qosClass": "BestEffort"
}
Step three: the pod has been assigned to a specific node and HostIP is bound, and all init containers have started (note: the pod in my busybox.yaml specifies no init containers, so Initialized flips to "True" almost immediately). The assigned node sees the pod via its watch and starts creating the container (at this stage the image is being pulled). At this point Ready is still "False", and the container's state is Waiting with reason ContainerCreating:
Event Type:modified {"phase": "Pending", "conditions": [{"type": "Initialized", "status": "True",
"Lastprobetime": null, "Lasttransitiontime": "2017-06-06t07:57:06z"}, {"Type": "Ready", "Status": "False", "lastprobetime": null, "Lasttransitiontime": "2017-06-06t07:57:06z", "Reason": "Conta
Inersnotready ', ' message ': ' Containers with unready status: [BusyBox] '}, {' type ': ' podscheduled ', "Status": "True", "lastprobetime": null, "Lasttransitiontime": "2017-06-06t07:57:06z"}], "HostIP ":" 10.39.1.35 "," StartTime ":" 2017-06-06t07:57:06z "," containerstatuses ": [{" Name ":" BusyBox "," STA TE ": {" Waiting ": {" Reason ":" Containercreating "}}," LastState ": {}," ready ": false," Restartcount ": 0," image ":" BusyBox "," imageID ":" "}," Qosclass ":" BestEffort "}
Step four: the container is created successfully, Ready flips to "True", the container's state becomes Running, and accordingly the pod's PodPhase becomes Running as well:
Event Type:modified {"phase": "Running", "conditions": [{"type": "Initialized", "status": "True",
"Lastprobetime": null, "Lasttransitiontime": "2017-06-06t07:57:06z"}, {"Type": "Ready", ' Status ': ' True ', ' lastprobetime ': null, ' lasttransitiontime ': ' 2017-06-06t07:57:08z '}, {' Type
":" Podscheduled "," status ":" True "," lastprobetime ": null," Lasttransitiontime ":" 2017-06-06t07:57:06z " ], "HostIP": "10.39.1.35", "Podip": "192.168.107.204", "StartTime": "2017-06-06t07:57:06z", "Containersta Tuses ": [{" Name ":" BusyBox "," state ": {" Running ": {" Startedat ":" 2017-06-06t07:57:08 Z "}}," LastState ": {}," ready ": True," Restartcount ": 0," image ":" Busybox:latest " , "imageID": "docker-pullable://busybox@sha256:c79345819a6882c31b41bc771d9a94fc52872fa651b36771fbe0c8461d7ee558" , "Containerid": "DockEr://a6af9d58c7dabf55fdfe8d4222b2c16349e3b49b3d0eca4bc761fdb571f3cf44 "}]," Qosclass ":" BestEffort "}