# Kubernetes Service Rolling Update

[TOC]
## Introduction to Rolling Updates
When a service in a Kubernetes cluster needs to be upgraded, the traditional practice is to take the service offline, update its version and configuration while the business is stopped, and then restart it. For a large cluster this becomes a challenge: if everything is stopped first and upgraded gradually, the service stays unavailable for a long time. Kubernetes provides rolling updates (rolling-update) to solve this problem.
Simply put, a rolling update is a way to upgrade a multi-instance service without interruption: instead of updating all instances at the same time, each instance is updated separately. For a service deployed in a k8s cluster, a rolling update means updating only one pod at a time rather than shutting down all of the service's pods simultaneously, thereby avoiding business interruption.
## Relationships between Service, Deployment, RS, RC, and Pod
The applications we deploy are generally composed of multiple abstract services. In Kubernetes, a service uses a label selector to match a collection of pods; those pods are the endpoints of the service and the entities that actually host the business. The deployment, scheduling, and replica count of the pods in the cluster are managed through higher-level abstractions such as a Deployment or an RC. For example:
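The relationship can be inspected directly with kubectl (a sketch; the label app: nginx-demo and the service name nginx-demo-svc come from the example defined in the next section):

```bash
# Pods whose labels satisfy the service's selector are its endpoints
kubectl get pods -l app=nginx-demo

# The endpoints object lists exactly those matched pods
kubectl get endpoints nginx-demo-svc
```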
Newer versions of Kubernetes recommend replacing the ReplicationController with a Deployment; the ReplicaSet hidden behind the Deployment is what actually does the work of maintaining the number of pod replicas.
As a result, a rolling update of a service on Kubernetes is essentially a rolling update of the pod collection that the service matches, while what controls pod deployment, scheduling, and replica count is the Deployment or ReplicationController. The latter two are therefore the real entities that a Kubernetes service rolling update operates on.
## Update with kubectl rolling-update
The kubectl rolling-update command is mainly used for pods created with an RC.
Let's look at the following example. Create an RC definition, nginx-demo-v1-rc.yml, with 4 nginx replicas:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo-v1
spec:
  replicas: 4
  selector:
    app: nginx-demo
    ver: v1
  template:
    metadata:
      labels:
        app: nginx-demo
        ver: v1
    spec:
      containers:
      - name: nginx-demo
        image: nginx:1.10.1
        ports:
        - containerPort: 80
          protocol: TCP
        env:
        - name: NGX_DEMO_VER
          value: v1
```
Create a Service; the nginx-demo-svc.yml content is as follows:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-svc
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx-demo
```
Create the RC and the service:
```bash
kubectl create -f nginx-demo-v1-rc.yml
kubectl create -f nginx-demo-svc.yml
```
Once both are created, you can confirm that NGX_DEMO_VER is v1 by inspecting the environment variables of any of the pods.
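For example (a minimal sketch; pod names are generated, so look one up first and substitute it for the <pod-name> placeholder):

```bash
# Pick any pod belonging to the v1 RC
kubectl get pods -l app=nginx-demo

# Print its NGX_DEMO_VER environment variable
kubectl exec <pod-name> -- env | grep NGX_DEMO_VER
```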
Now we create an nginx-demo-v2-rc.yml file to upgrade the existing pods:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo-v2
spec:
  replicas: 4
  selector:
    app: nginx-demo
    ver: v2
  template:
    metadata:
      labels:
        app: nginx-demo
        ver: v2
    spec:
      containers:
      - name: nginx-demo
        image: nginx:1.11.9
        ports:
        - containerPort: 80
          protocol: TCP
        env:
        - name: NGX_DEMO_VER
          value: v2
```
Perform the update operation; note that rolling-update takes the name of the old RC, not the service:

```bash
kubectl rolling-update nginx-demo-v1 -f nginx-demo-v2-rc.yml
```
It is important to note that the new yml file must differ from the old one in two respects when performing a rolling upgrade:

- The RC name cannot be the same as the old RC's name.
- At least one label in the selector must differ from the old RC's labels, so that the new RC can be distinguished from the old one.
We can watch the complete update process by slowing it down with the --update-period option:

```bash
kubectl rolling-update nginx-demo-v1 --update-period=10s -f nginx-demo-v2-rc.yml
```
After all the old pods have been replaced by new pods, the update is complete.
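A quick way to verify the result (the ver label values come from the RC definitions above):

```bash
# Every pod should now carry ver=v2
kubectl get pods -L ver

# Only nginx-demo-v2 should remain, with 4 replicas
kubectl get rc
```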
Using kubectl rolling-update to implement rolling updates has several shortcomings:
- The rolling-update logic is carried out by kubectl issuing a series of commands to the apiserver, so a network problem on the client side may leave the update interrupted halfway.
- A new RC has to be created, and its name must differ from that of the RC being updated.
- A rollback also requires executing rolling-update, simply swapping the old version in for the new one.
- The rolling-updates a service undergoes are not recorded in the cluster, so the update history cannot be traced afterwards.
Nowadays, the RC approach has been superseded by Deployment.
## Deployment's rolling-update
Kubernetes's Deployment is a higher-level abstraction. A Deployment creates a ReplicaSet, which is used to guarantee the number of pod replicas in the Deployment. To perform a rolling update on the pods of a Deployment, you only need to modify the Deployment's own yml file and apply it. The modification creates a new ReplicaSet; the number of pods in the new RS is increased while the pods of the old RS are decreased, until the upgrade is complete. All of this happens on the server side and requires no participation from kubectl.
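One way to observe this ReplicaSet hand-off as it happens is to watch the RS list while a rollout is in progress (a sketch; -w streams changes):

```bash
# The new RS's pod counts rise while the old RS's drop to zero
kubectl get rs -w
```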
Create a Deployment yml file, nginx-demo-dm.yml:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx-demo
  minReadySeconds: 10
  template:
    metadata:
      labels:
        app: nginx-demo
        version: v1
    spec:
      containers:
      - name: deployment-demo
        image: nginx:1.10.1
        ports:
        - containerPort: 80
          protocol: TCP
```
Create the deployment:
```bash
kubectl create -f nginx-demo-dm.yml --record
```
Then we can modify the Deployment file directly, as follows:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx-demo
  minReadySeconds: 10
  template:
    metadata:
      labels:
        app: nginx-demo
        version: v2
    spec:
      containers:
      - name: deployment-demo
        image: nginx:1.11.9
        ports:
        - containerPort: 80
          protocol: TCP
```
Altogether two places changed: version was changed to v2, and the nginx image was changed from 1.10.1 to 1.11.9. Apply the changes as follows:
```bash
kubectl apply -f nginx-demo-dm.yml --record
```
At this time, we can check the change of ReplicaSets by executing kubectl get rs to confirm whether the upgrade is in progress. You can also view the detailed rolling-update process with kubectl describe deployment nginx-demo, or check the update's status with kubectl rollout status deployment/nginx-demo.
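In command form (names as in the Deployment above):

```bash
# Watch the new RS scale up and the old one scale down
kubectl get rs

# Event-by-event view of the rollout
kubectl describe deployment nginx-demo

# Blocks until the rollout finishes (or reports failure)
kubectl rollout status deployment/nginx-demo
```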
In addition to applying changes with apply, there is another way to upgrade directly: edit the live Deployment object with kubectl edit. After saving, there is no need to execute apply; the upgrade completes automatically.
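A sketch of that route (kubectl edit operates on the in-cluster resource rather than on the local yml file):

```bash
# Opens the live Deployment in $EDITOR; saving the buffer triggers the rollout
kubectl edit deployment nginx-demo
```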
Note that a --record parameter was used when performing the Deployment operations above; it tells the apiserver to record the update history. You can view that history with the following command:
```bash
kubectl rollout history deployment nginx-demo
```
To view the details of a specified revision:
```bash
kubectl rollout history deployment nginx-demo --revision=2
```
Note that the old RS is not deleted after the upgrade completes; its information is kept on the server side to facilitate rollback.
Rolling back the pods under a Deployment is fairly straightforward: executing rollout undo directly rolls the Deployment back to the previously recorded revision:
```bash
kubectl rollout undo deployment nginx-demo
```
Perform the following to roll back to a specified revision:
```bash
kubectl rollout undo deployment nginx-demo --to-revision=2
```
Reference: http://tonybai.com/2017/02/09/rolling-update-for-services-in-kubernetes-cluster/