The resource configuration manifest orchestrates containers in YAML form, following Kubernetes' RESTful interface style. The main resource objects include, for example:
Autonomous Pod resources (not managed by a controller)
Manifest format — first-level fields: apiVersion (group/version), kind, metadata (name, namespace, labels, annotations, ...), spec, status (read-only).
Pod resources: spec.containers <[]Object>
  - name <string>
  - image <string>
  - imagePullPolicy <string>: Always, Never, IfNotPresent
  - command, args: override the image's default application (see https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/)
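A minimal sketch of such a manifest, assuming a busybox-based Pod named pod-demo (the image and the sleep command are illustrative, not taken from the later examples), tying these first-level fields together:

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    # command/args override the image's default ENTRYPOINT/CMD
    command: ["/bin/sh", "-c"]
    args: ["sleep 3600"]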
Labels (an important feature)
key=value
Key: letters, numbers, '-', '.'
Value: may be empty; must begin and end with a letter or number; the middle may contain letters, numbers, '-', '.'
Example:
# kubectl get pods -l app          (app is the label key)
# kubectl get pods --show-labels
# kubectl label pods pod-demo release=canary --overwrite
# kubectl get pods -l release=canary
Label selectors:
  Equality-based: =, ==, !=
  Set-based: KEY in (VALUE1,VALUE2,...), KEY notin (VALUE1,VALUE2,...), KEY, !KEY
Many resources support embedded fields to define the label selector they use:
  matchLabels: key/value pairs given directly
  matchExpressions: selectors defined by expressions of the form {key: "KEY", operator: "OPERATOR", values: [VAL1,VAL2,...]}
    Operators: In, NotIn (the values field must be a non-empty list); Exists, DoesNotExist (the values field must be empty)
nodeSelector <map[string]string>: node label selector, which influences the scheduling decision.
nodeName <string>
annotations: unlike labels, they cannot be used to select resource objects; they only attach "metadata" to an object.
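To illustrate the selector syntax above, a hedged sketch of a controller selector plus a Pod spec fragment (the disktype=ssd node label and node1 are assumed examples):

selector:
  matchLabels:
    app: myapp
  matchExpressions:
  - {key: release, operator: In, values: [canary, beta]}
  - {key: environment, operator: Exists}

spec:                    # Pod spec fragment
  nodeSelector:
    disktype: ssd        # schedule only onto nodes carrying this label
  nodeName: node1        # or pin the Pod to a specific node by name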
Pod life cycle
Status: Pending, Running, Failed, Succeeded, Unknown.
The process of creating a Pod: request -> apiServer -> saved in etcd -> scheduler -> scheduling result saved in etcd -> the selected node runs the Pod (and reports its status back to the apiServer) -> status saved in etcd.
Important behaviors in the Pod lifecycle:
  1. Init containers
  2. Container probes: liveness and readiness (both should be configured in production)
  3. Three probe types: ExecAction, TCPSocketAction, HTTPGetAction
     # kubectl explain pod.spec.containers.livenessProbe
  4. restartPolicy: Always, OnFailure, Never. Defaults to Always.
  5. lifecycle hooks
     # kubectl explain pods.spec.containers.lifecycle.preStop
     # kubectl explain pods.spec.containers.lifecycle.postStart
Example 1: ExecAction liveness probe
# vim liveness-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["test", "-e", "/tmp/healthy"]
      initialDelaySeconds: 1
      periodSeconds: 3
# kubectl create -f liveness-pod.yaml
# kubectl describe pod liveness-exec-pod
    State:          Running
      Started:      Thu, 2018 01:39:11 -0400
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 2018 01:38:03 -0400
      Finished:     Thu, 2018 01:39:09 -0400
    Ready:          True
    Restart Count:  1
    Liveness:       exec [test -e /tmp/healthy] delay=1s timeout=1s period=3s #success=1 #failure=3
Example 2: HTTPGetAction liveness probe
# vim liveness-http.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
# kubectl create -f liveness-http.yaml
# kubectl exec -it liveness-httpget-pod -- /bin/sh
/ # rm /usr/share/nginx/html/index.html
# kubectl describe pod liveness-httpget-pod
    Restart Count:  1
    Liveness:       http-get http://:http/index.html delay=1s timeout=1s period=3s #success=1 #failure=3
Example 3: HTTPGetAction readiness probe
# vim readiness-http.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
# kubectl create -f readiness-http.yaml
# kubectl exec -it readiness-httpget-pod -- /bin/sh
/ # rm -f /usr/share/nginx/html/index.html
# kubectl get pods -w
readiness-httpget-pod   0/1   Running   0   1m
The READY count of this Pod is 0, so it no longer serves traffic.
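The examples above cover ExecAction and HTTPGetAction; the third probe type mentioned earlier, TCPSocketAction, is not demonstrated in these notes. A minimal sketch, assuming the same myapp image listening on port 80 and a hypothetical Pod name liveness-tcp-pod:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-pod
  namespace: default
spec:
  containers:
  - name: liveness-tcp-container
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      tcpSocket:
        port: http             # probe succeeds if a TCP connection can be opened to this port
      initialDelaySeconds: 1
      periodSeconds: 3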
Example 4: lifecycle postStart hook, executed immediately after the container starts
# vim pod-postStart.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poststart-pod
  namespace: default
spec:
  containers:
  - name: busybox-httpd
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command: ['/bin/sh', '-c', 'echo Home_Page >> /tmp/index.html']
    command: ['/bin/httpd']
    args: ['-f', '-h /tmp']
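The preStop hook mentioned earlier (kubectl explain pods.spec.containers.lifecycle.preStop) works the same way but runs just before the container is terminated. A minimal sketch of the container fragment, with an assumed cleanup command that is not part of the original example:

    lifecycle:
      preStop:
        exec:
          # assumed example: clean up before the container is killed
          command: ['/bin/sh', '-c', 'rm -f /tmp/index.html']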
Pod Controller
An intermediate layer that manages Pods and keeps them running in the state we expect.
ReplicaSet:
Supports automatic scale-up and scale-down and replaces the older ReplicationController; it is used to manage stateless containers, but using it directly is not recommended (Deployment is preferred). It consists of three parts: 1. the number of Pod replicas the user expects; 2. a label selector used to manage the Pod replicas; 3. the Pod template used when new Pods are created.
ReplicaSet example:
1. vim replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-container
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
2. kubectl create -f replicaset.yaml
3. kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-c6f58   1/1     Running   0          3s
myapp-lvjk2   1/1     Running   0          3s
4. kubectl delete pod myapp-c6f58
5. kubectl get pods          (a new Pod is generated)
NAME          READY   STATUS    RESTARTS   AGE
myapp-lvjk2   1/1     Running   0          2m
myapp-s9hgr   1/1     Running   0          10s
6. kubectl edit rs myapp     (change replicas: 2 to 5)
7. kubectl get pods          (the Pods automatically grow to 5)
NAME          READY   STATUS    RESTARTS   AGE
myapp-h2j68   1/1     Running   0          5s
myapp-lvjk2   1/1     Running   0          8m
myapp-nsv6z   1/1     Running   0          5s
myapp-s9hgr   1/1     Running   0          6m
myapp-wnf2b   1/1     Running   0          5s
# curl 10.244.2.17
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
8. kubectl edit rs myapp     (change ikubernetes/myapp:v1 to v2; the Pods already running are not upgraded)
# kubectl delete pod myapp-h2j68
The newly created Pod myapp-4qg8c runs the upgraded v2 image:
# curl 10.244.2.19
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Deleting only one online Pod, so that only its replacement runs the new version, is a canary release. Deleting the old Pods one after another until they have all been replaced by new-version Pods is a grayscale (rolling) release; watch the system load changes during this kind of release. There is also the blue-green release approach,
in which a whole batch is replaced at once in an identical environment: create a new RS2 and delete RS1, or run RS2 in parallel with RS1 and then switch the Service to point entirely at RS2.
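One way to realize that blue-green switch is to keep the version in a label and flip only the Service selector. A hedged sketch (the Service name myapp-svc and the version label are assumptions, not from the notes):

apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: default
spec:
  selector:
    app: myapp
    version: rs2       # flip this from rs1 to rs2 to send all traffic to the new ReplicaSet
  ports:
  - port: 80
    targetPort: 80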
Deployment
Controls Pods indirectly by controlling ReplicaSets and provides more powerful features than a ReplicaSet alone: it supports rolling updates and rollbacks, and lets you control the update pace and update logic.
# kubectl explain deploy
KIND: Deployment
VERSION: extensions/v1beta1        (the documentation lags behind the feature; the latest is apps/v1)
# kubectl explain deploy.spec.strategy
rollingUpdate                      (controls the update granularity)
# kubectl explain deploy.spec.strategy.rollingUpdate
maxSurge                           (at most how many Pods above the desired count)
maxUnavailable                     (at most how many Pods may be unavailable)
The two cannot both be zero at the same time, i.e. the update cannot be forbidden from both exceeding and dropping below the desired count.
# kubectl explain deploy.spec
revisionHistoryLimit               (how many revisions of history to keep)
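Instead of patching the strategy afterwards (as the example below does), these fields can also be written directly into the Deployment manifest. A sketch of the relevant spec fragment, with illustrative values:

spec:
  revisionHistoryLimit: 10         # keep 10 old ReplicaSets for rollback
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                  # at most 1 Pod above the desired replica count
      maxUnavailable: 0            # never drop below the desired replica count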
Deployment example
# vim deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
# kubectl apply -f deploy.yaml
# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
myapp-deploy-69b47bc96d   2         2         2       1m        (69b47bc96d is the hash of the Pod template)
# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-f4bp4   1/1     Running   0          3m
myapp-deploy-69b47bc96d-qllnm   1/1     Running   0          3m
# change replicas: 3
# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-f4bp4   1/1     Running   0          4m
myapp-deploy-69b47bc96d-qllnm   1/1     Running   0          4m
myapp-deploy-69b47bc96d-s6t42   1/1     Running   0          17s
# kubectl describe deploy myapp-deploy
RollingUpdateStrategy:  25% max unavailable, 25% max surge     (the default update policy)
# kubectl get pod -w -l app=myapp      (watch dynamically)
# change image: ikubernetes/myapp:v2
# kubectl apply -f deploy.yaml
# kubectl get pod -w -l app=myapp
NAME                            READY   STATUS              RESTARTS   AGE
myapp-deploy-69b47bc96d-f4bp4   1/1     Running             0          6m
myapp-deploy-69b47bc96d-qllnm   1/1     Running             0          6m
myapp-deploy-69b47bc96d-s6t42   1/1     Running             0          2m
myapp-deploy-67f6f6b4dc-tncmc   0/1     Pending             0          1s
myapp-deploy-67f6f6b4dc-tncmc   0/1     Pending             0          1s
myapp-deploy-67f6f6b4dc-tncmc   0/1     ContainerCreating   0          2s
myapp-deploy-67f6f6b4dc-tncmc   1/1     Running             0          4s
# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
myapp-deploy-67f6f6b4dc   3         3         3       54s
myapp-deploy-69b47bc96d   0         0         0       8m
# kubectl rollout history deployment myapp-deploy
# kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'      (use patch to change replicas to 5)
# kubectl get pod -w -l app=myapp
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-67f6f6b4dc-fc7kj   1/1     Running   0          18s
myapp-deploy-67f6f6b4dc-kssst   1/1     Running   0          5m
myapp-deploy-67f6f6b4dc-tncmc   1/1     Running   0          5m
myapp-deploy-67f6f6b4dc-xdzvc   1/1     Running   0          18s
myapp-deploy-67f6f6b4dc-zjn77   1/1     Running   0          5m
# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
# kubectl describe deployment myapp-deploy
RollingUpdateStrategy:  0 max unavailable, 1 max surge
# A new version can also be rolled out with set image:
# kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy
# kubectl rollout history deployment myapp-deploy
# kubectl rollout undo deployment myapp-deploy --to-revision=1      (roll back to the first revision)
DaemonSet
Ensures that exactly one replica runs on every node in the cluster (or on a subset of nodes, one and only one Pod replica each, e.g. to monitor nodes with SSD disks).
# kubectl explain ds
# vim filebeat.yaml       (the file first defines a redis Deployment that the DaemonSet ships logs to, followed by the DaemonSet itself)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
# kubectl apply -f filebeat.yaml
# kubectl get pods -l app=filebeat -o wide      (two Pods run because there are currently 2 nodes; by default they cannot run on the master, which carries a taint)
filebeat-ds-chxl6   1/1   Running   1   8m   10.244.2.37   node2
filebeat-ds-rmnxq   1/1   Running   0   8m   10.244.1.35   node1
# kubectl logs filebeat-ds-r47zj
# kubectl expose deployment redis --port=6379
# kubectl describe ds filebeat-ds
# Online rolling updates are supported:
# kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine
# kubectl explain pod.spec      hostNetwork      (a DaemonSet Pod can share the host's network namespace directly and serve the outside world without a Service)
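A sketch of that hostNetwork idea, assuming it is applied to the same filebeat DaemonSet template (this fragment is not part of the original manifest): setting pod.spec.hostNetwork to true makes each DaemonSet Pod use the node's own network namespace.

  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      hostNetwork: true      # share the node's network namespace instead of a Pod-private one
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine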
Job
Starts N Pod resources according to the number specified by the user; whether a Pod is recreated depends on whether its task has completed.
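A minimal Job sketch to make this concrete; the name job-demo, the busybox image, and the completions/parallelism values are illustrative assumptions:

apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo
  namespace: default
spec:
  completions: 5              # run the task to completion 5 times in total
  parallelism: 2              # at most 2 Pods at the same time
  template:
    spec:
      restartPolicy: Never    # a Job requires Never or OnFailure
      containers:
      - name: job-container
        image: busybox:latest
        command: ["/bin/sh", "-c", "echo job done; sleep 5"]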
CronJob
Periodic (scheduled) tasks.
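A minimal CronJob sketch, assuming a busybox task that runs every minute (the name, schedule, and command are illustrative):

apiVersion: batch/v1beta1      # batch/v1 in recent Kubernetes versions
kind: CronJob
metadata:
  name: cronjob-demo
  namespace: default
spec:
  schedule: "*/1 * * * *"      # standard cron format
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: cronjob-container
            image: busybox:latest
            command: ["/bin/sh", "-c", "date; echo periodic task"]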
StatefulSet
For stateful applications; each Pod replica is managed individually, and operational scripts usually have to be written and added to the template.
TPR: Third Party Resource (introduced in 1.2, deprecated in 1.7)
CRD: Custom Resource Definition (introduced in 1.8)
Operator: encapsulates operational knowledge (only a few applications, such as etcd and Prometheus, have Operators)
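A minimal StatefulSet sketch to show what it needs beyond a Deployment: a headless Service referenced by serviceName, and usually volumeClaimTemplates for per-Pod storage (omitted here). All names in this sketch are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
  namespace: default
spec:
  clusterIP: None            # headless Service, required by the StatefulSet
  selector:
    app: myapp-sts
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp-sts
  namespace: default
spec:
  serviceName: myapp-headless
  replicas: 2
  selector:
    matchLabels:
      app: myapp-sts
  template:
    metadata:
      labels:
        app: myapp-sts
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80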
Kubernetes, part 2: workload resource orchestration