Kubernetes Basic Concept Summary


1. Basic architecture

1.1 Master

The master node consists of four components: the API server (kube-apiserver), the scheduler (kube-scheduler), the controller manager (kube-controller-manager), and etcd.

    apiserver. The API server provides the RESTful Kubernetes API and is the single entry point for system management directives: any operation that adds, deletes, modifies, or queries a resource is handled by the API server and then persisted to etcd. As shown in the architecture diagram, kubectl (the command-line client provided by Kubernetes, which internally calls the Kubernetes API) interacts directly with the API server.

    scheduler. The scheduler's job is to dispatch pods to suitable nodes. Viewed as a black box, its input is a pod plus a list of candidate nodes, and its output is a binding of that pod to one node, i.e. the node the pod will be deployed on. Kubernetes currently provides a default scheduling algorithm but also exposes an interface, so users can define their own scheduling algorithm according to their needs.

    controller manager. If the API server does the "front office" work, the controller manager handles the "back office". Each resource type generally has a corresponding controller, and the controller manager is responsible for running and managing those controllers. For example, when we create a pod through the API server, the API server's task is done once the pod object exists; keeping the pod's actual state consistent with what we expect from then on is the controller manager's responsibility.

    etcd. etcd is a highly available key-value store that Kubernetes uses to persist the state of every resource, which the API server then exposes through its RESTful API.

1.2 Node

Each node runs three components: the kubelet, kube-proxy, and a container runtime.

    runtime. The runtime is the container execution environment; Kubernetes currently supports two container runtimes, Docker and rkt.

    kube-proxy. This component implements service discovery and reverse proxying in Kubernetes. On the reverse-proxy side, kube-proxy supports TCP and UDP connection forwarding and, by default, distributes client traffic across the set of backend pods of a service using a round-robin algorithm. On the service-discovery side, kube-proxy uses the etcd watch mechanism to monitor dynamic changes to Service and Endpoints objects in the cluster and maintains a service-to-endpoint mapping, so IP changes in the backend pods are invisible to clients. In addition, kube-proxy supports session affinity.

    kubelet. The kubelet is the master's agent on each node and the most important component on a node. It maintains and manages all containers on that node, although containers not created through Kubernetes are left unmanaged. In essence, it is responsible for keeping the running state of pods consistent with their desired state.

That is a brief introduction to the Kubernetes master and node. Next, let's look at the various resources/objects in Kubernetes.

2. Pod

The pod is the basic operating unit of Kubernetes and the carrier in which applications run. The entire Kubernetes system revolves around pods: how to deploy a pod, how to guarantee the number of pods, how to access pods, and so on. In addition, a pod is a collection of one or more related containers, which is arguably a major innovation: it provides a model for composing containers.

2.1 Basic Operations

Create

kubectl create -f xxx.yaml

Query

kubectl get pod yourpodname

kubectl describe pod yourpodname

Delete

kubectl delete pod yourpodname

Update

kubectl replace -f /path/to/yournewyaml.yaml

2.2 Pods and containers

In Docker, the container is the smallest unit of management: what you add and delete are containers. A container is a virtualization technology; containers are isolated from one another, and the isolation is implemented with Linux namespaces. In Kubernetes, a pod contains one or more related containers and can be thought of as an extension of the container: the pod as a whole is one isolation boundary, and the containers inside a pod share a set of namespaces (including PID, Network, IPC, and UTS). In addition, the containers in a pod can mount shared data volumes for file-system sharing.

2.3 Images

In Kubernetes, the image pull policy can be:

    Always: pull the latest image on every start

    Never: use only local images, never pull

    IfNotPresent: pull the image only when it is not available locally

After a pod is assigned to a node, the image is pulled according to the image pull policy; choose the policy based on the characteristics of your cluster. Whatever the policy, make sure the correct image is available on the node.
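As a minimal sketch, the policy is set per container via the imagePullPolicy field; the pod name and image below are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                    # hypothetical name
spec:
  containers:
  - name: demo
    image: myrepo/demo:v1           # hypothetical image
    imagePullPolicy: IfNotPresent   # Always | Never | IfNotPresent
```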

2.4 Other Settings

Through the YAML file, you can configure the following in a pod:

    start commands, e.g. spec-->containers-->command;

    environment variables, e.g. spec-->containers-->env-->name/value;

    port mapping, e.g. spec-->containers-->ports-->containerPort/protocol/hostIP/hostPort (be aware of port conflicts when using hostPort; Kubernetes checks for host-port collisions when scheduling pods, so if, for example, two pods both require host port 80, Kubernetes will schedule them onto different machines);

    host networking: in some special scenarios the container must use the host's network (for example, to receive a multicast stream from the physical machine's network). Pods support host networking via spec-->hostNetwork=true;

    data persistence, e.g. spec-->containers-->volumeMounts-->mountPath;

    restart policy: restart a container when a container in the pod terminates. A pod "restart" is in practice a rebuild of the container, and the data inside the container is lost; data that must survive has to be persisted through data volumes. Pods support three restart policies: Always (the default: restart the container whenever it terminates), OnFailure (restart only when the container terminates with an abnormal exit), and Never (never restart);
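The settings listed above can be combined into a single pod manifest. The sketch below uses hypothetical names, images, and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: settings-demo              # hypothetical name
spec:
  restartPolicy: OnFailure         # Always (default) | OnFailure | Never
  containers:
  - name: web
    image: myrepo/web:v1           # hypothetical image
    command: ["/bin/server", "--port=8080"]   # start command
    env:
    - name: RUN_ENV                # environment variable (hypothetical)
      value: "production"
    ports:
    - containerPort: 8080
      hostPort: 8080               # beware of host-port conflicts
      protocol: TCP
    volumeMounts:
    - name: data
      mountPath: /data             # data persistence
  volumes:
  - name: data
    emptyDir: {}
```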

2.5 Pod life cycle

Once a pod is assigned to a node, it stays on that node until it is deleted. When a pod fails, it is first cleaned up by Kubernetes, and then the replication controller rebuilds the pod on another machine (or on the same one); the pod's ID changes after the rebuild, so it is a new pod. Thus, "migrating" a pod in Kubernetes actually means rebuilding the pod on a new node. A life-cycle diagram of the pod is given below.

life-cycle callback functions: postStart (called after the container is created successfully) and preStop (called before the container is terminated). The example below defines a pod containing a Java web application container with the postStart and preStop callbacks configured. That is, after the container is created successfully, /sample.war is copied to the /app folder; before the container terminates, an HTTP request is sent to http://monitor.com:8080/waring, which sends a warning to the monitoring system. The example is as follows:

    ......
    containers:
    - image: sample:v2
      name: war
      lifecycle:
        postStart:
          exec:
            command:
            - "cp"
            - "/sample.war"
            - "/app"
        preStop:
          httpGet:
            host: monitor.com
            path: /waring
            port: 8080
            scheme: HTTP
3. Replication Controller

The Replication Controller (RC) is another core concept in Kubernetes. After an application is hosted in Kubernetes, Kubernetes needs to ensure the application keeps running; that is the RC's job. It guarantees that a specified number of pods is running in Kubernetes at any time. On top of this, the RC also provides some more advanced features, such as rolling upgrades and upgrade rollback.

3.1 RC and Pod Association: Labels

The RC's association with pods is achieved by means of labels. The label mechanism is an important design in Kubernetes: it makes it possible to classify and select objects through the loose coupling of labels. A pod needs labels set so it can be identified; a label is a set of key/value pairs, configured under pod-->metadata-->labels.

Label definitions are arbitrary, but a label must be able to identify the pod, for example by carrying the pod's application name and version number. In addition, a label is not unique; to identify a pod more precisely, you should set labels along multiple dimensions. For example:

"Release": "Stable", "release": "Canary"

"Environment": "Dev", "Environment": "QA", "Environment": "Production"

"Tier": "Frontend", "tier": "Backend", "tier": "Cache"

"Partition": "Customera", "Partition": "Customerb"

"Track": "Daily", "track": "Weekly"

For example, if the selector label defined in the RC's YAML file is app: my-web, then the RC will watch pods whose pod-->metadata-->labels contain app: my-web. Changing the label of such a pod takes it out of the RC's control. Likewise, while the RC is running normally, attempting to create additional pods with the same label will not grow the set: the RC considers the replica count to be correct, and any extra pods are deleted by the RC.
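The label association can be sketched in an RC manifest like the one below; the names and image are hypothetical, and the point is that the selector must match the labels in the pod template:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-web-rc                  # hypothetical name
spec:
  replicas: 2                      # keep 2 pods with this label running
  selector:
    app: my-web                    # the RC controls pods carrying this label
  template:
    metadata:
      labels:
        app: my-web                # must match the selector above
    spec:
      containers:
      - name: my-web
        image: myrepo/my-web:v1    # hypothetical image
```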

3.2 Elastic Scaling

Elastic scaling means providing resources elastically in response to load changes. In Kubernetes, this is reflected in dynamically adjusting the number of pod replicas according to load. The replica count is adjusted by modifying the replica count in the RC, as shown in the following example commands:

Scale the number of pod replicas up to 10:

$ kubectl scale replicationcontroller yourrcname --replicas=10

Shrink the number of pod replicas to 1:

$ kubectl scale replicationcontroller yourrcname --replicas=1

3.3 Rolling Upgrades

A rolling upgrade is a smooth, transitional upgrade method: by replacing pods step by step, it preserves the stability of the overall system, so problems can be detected and corrected early in the upgrade before their impact grows. The rolling-upgrade command in Kubernetes is as follows:

$ kubectl rolling-update my-rcname-v1 -f my-rcname-v2-rc.yaml --update-period=10s

After the upgrade begins, a v2 RC is created from the provided definition file; then, every 10s (--update-period=10s), the number of v2 pod replicas is increased and the number of v1 pod replicas is reduced step by step. After the upgrade completes, the v1 RC is deleted and the v2 RC is kept, implementing a rolling upgrade.

If an error occurs during the upgrade, you can choose to continue: Kubernetes can determine the state the interrupted upgrade had reached and resume from there. Of course, you can also roll back, with the following command:

$ kubectl rolling-update my-rcname-v1 -f my-rcname-v2-rc.yaml --update-period=10s --rollback

A rollback is simply the inverse of the upgrade: the number of v1 pod replicas is gradually increased while the number of v2 pod replicas is gradually reduced.

4. Job

From the running pattern of the program, pods can be divided into two categories: long-running services (JBoss, MySQL, etc.) and one-off tasks (data computation, testing, etc.). Pods created by an RC are long-running services; pods created by a Job run one-off tasks.

In a Job definition, restartPolicy can only be Never or OnFailure. For one-off tasks, the Job can control the number of successful pod completions (job-->spec-->completions) and the number of pods run in parallel (job-->spec-->parallelism); once pods have completed successfully the specified number of times, the Job is considered complete.
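A minimal Job sketch under these rules (the name and image are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-demo                 # hypothetical name
spec:
  completions: 5                   # pods must complete successfully 5 times
  parallelism: 2                   # at most 2 pods run concurrently
  template:
    spec:
      restartPolicy: OnFailure     # only Never or OnFailure is allowed in a Job
      containers:
      - name: worker
        image: myrepo/worker:v1    # hypothetical image
```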

5. Service

To keep up with fast-changing business requirements, the microservices architecture has become mainstream, and applications built as microservices need very good service-orchestration support. The Service, a core resource in Kubernetes, provides a simplified service proxy and discovery mechanism that fits the microservices architecture naturally.

5.1 Principle

In Kubernetes, the pod replicas under an RC's control change, and so do their IPs, for example when migration or scaling occurs. This is unacceptable for clients of those pods. The Kubernetes Service is an abstraction that defines a logical set of pods and a policy for accessing them; the association between a Service and its pods is again made through labels. The goal of the Service is to provide a bridge: it gives clients a fixed access address and redirects each access to the appropriate backend, so that non-Kubernetes-native applications can reach the backends easily without writing Kubernetes-specific code.

Like an RC, a Service is associated with pods through labels. If the selector in the Service's YAML file defines the label app: my-web, the Service uses the pods whose pod-->metadata-->labels contain app: my-web as the backends for distributing requests. When the pods change (are added, removed, rebuilt, and so on), the Service is updated promptly. The Service can therefore serve as the access entry point for pods, acting as a proxy server, and clients that access through the Service need not be aware of the pods directly.
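A minimal Service sketch of this label association (the name and port numbers are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-web                     # hypothetical name
spec:
  selector:
    app: my-web                    # requests go to pods carrying this label
  ports:
  - port: 80                       # the Service's (virtual-IP) port
    targetPort: 8080               # the backend pods' container port
    protocol: TCP
```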

Note that the fixed IP Kubernetes assigns to a Service is a virtual IP, not a real one, and it is not addressable from outside the cluster. In the actual implementation, Kubernetes routes and forwards traffic to the virtual IP through the kube-proxy component. That is why, in the earlier cluster deployment, we deployed the proxy component on every node, producing the Kubernetes-level virtual forwarding network.

5.2 Proxying External Services with a Service

A Service can proxy not only pods but also any other backend, such as MySQL or Oracle running outside Kubernetes. This is accomplished by defining a Service and an Endpoints object with the same name. Examples are as follows:

mysql-service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      ports:
      - port: 3306
        targetPort: 3306
        protocol: TCP

mysql-endpoints.yaml

    apiVersion: v1
    kind: Endpoints
    metadata:
      name: mysql
    subsets:
    - addresses:
      - ip: 192.168.31.22
      ports:
      - port: 3306
        protocol: TCP

After creating the Service and Endpoints from these files, you can see the custom endpoints when querying the service in Kubernetes.

5.3 Service Internal Load Balancing

When a Service's Endpoints contain multiple IPs, i.e. there are multiple backends, requests are load balanced. The default load-balancing strategy is round robin or random (decided by the kube-proxy mode). In addition, session affinity based on the client's source IP can be enabled by setting service-->spec-->sessionAffinity=ClientIP on the Service.
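As a fragment sketch, session affinity is enabled in the Service spec like this:

```yaml
spec:
  sessionAffinity: ClientIP   # pin requests from one client IP to the same backend pod
```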

5.4 Exposing Services

The Service's virtual IP belongs to an internal network virtualized by Kubernetes and cannot be addressed externally. Some services, however, need to be accessed from outside, such as the web frontend. This requires an extra layer of network forwarding, from the external network to the internal one. Kubernetes provides three ways to do this: NodePort, LoadBalancer, and Ingress.

    NodePort. The earlier guestbook example already used NodePort. The principle of NodePort is that Kubernetes exposes a port, the nodePort, on every node; the external network can then reach the backend Service through [nodeIP]:[nodePort] on any node.

    LoadBalancer. Building on NodePort, Kubernetes can ask the underlying cloud platform to create a load balancer that uses each node as a backend and distributes traffic to the Service. This mode requires support from the underlying cloud platform (such as GCE).

    Ingress is an HTTP-level routing and forwarding mechanism, composed of an Ingress controller and an HTTP proxy server. The Ingress controller watches the Kubernetes API and updates the HTTP proxy server's forwarding rules in real time. For the HTTP proxy server there are open-source options such as the GCE load balancer, HAProxy, and Nginx.
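Of the three, NodePort is the simplest to sketch; the name, labels, and port numbers below are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend               # hypothetical name
spec:
  type: NodePort
  selector:
    app: web                       # hypothetical label
  ports:
  - port: 80                       # the Service's virtual-IP port
    targetPort: 8080               # the backend pods' container port
    nodePort: 30080                # exposed on every node: reachable at [nodeIP]:30080
```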

6. Deployment

Kubernetes provides a simpler mechanism for updating RCs and pods, called a Deployment. You describe the cluster state you expect in the Deployment, and the Deployment controller incrementally moves the current cluster state toward the desired state at a controlled rate. A Deployment's main duty is still to guarantee the number and health of pods; 90% of its functionality is identical to the Replication Controller, so it can be seen as a new generation of the Replication Controller. However, it has features beyond the Replication Controller:

    full Replication Controller functionality: Deployment inherits all the Replication Controller features described above.

    event and status view: you can inspect a Deployment's detailed upgrade progress and status.

    rollback: if a problem is found when upgrading a pod image or related parameters, you can roll back to the previous stable version or to a specified version.

    release history: every operation on the Deployment can be recorded for possible later rollback.

    pause and resume: each upgrade can be paused and resumed at any time.

    multiple upgrade strategies: Recreate, which deletes all existing pods and recreates new ones; and RollingUpdate, a rolling upgrade with an incremental-replacement strategy that also supports additional parameters, such as the maximum number of unavailable pods and the minimum upgrade interval.
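The two strategies can be sketched in a Deployment spec as follows; the replica count and bounds are illustrative, and maxUnavailable/maxSurge limit how far the pod count may deviate from the desired count during a rolling upgrade:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate            # Recreate | RollingUpdate
    rollingUpdate:
      maxUnavailable: 1            # at most 1 pod below the desired count during the upgrade
      maxSurge: 1                  # at most 1 pod above the desired count
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
```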

6.1 Rolling Upgrades

Compared with an RC, a Deployment can be upgraded directly using kubectl edit deployment/deploymentname or the kubectl set method; the principle is that a change to the pod template, such as updating a label or updating the image version, triggers a rolling upgrade of the Deployment. As an operational example, we first define an nginx-deploy-v1.yaml file with a replica count of 3:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.7.9
            ports:
            - containerPort: 80

Create Deployment:

$ kubectl create -f nginx-deploy-v1.yaml --record
deployment "nginx-deployment" created
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         0         0            0           1s
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           18s

Once everything is running normally, upgrade the Nginx version from 1.7 to 1.9. The first method is to set the image directly:

$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9
deployment "nginx-deployment" image updated

The second method is to edit the Deployment directly:

$ kubectl edit deployment/nginx-deployment
deployment "nginx-deployment" edited

Finally, here are some basic Deployment commands:

$ kubectl describe deployments                        # query details, including upgrade progress
$ kubectl rollout pause deployment/nginx-deployment   # pause the upgrade
$ kubectl rollout resume deployment/nginx-deployment  # resume the upgrade
$ kubectl rollout undo deployment/nginx-deployment    # roll back the upgrade

Regarding overlapping upgrades: suppose you create a Deployment of nginx 1.7 with 5 replicas. The Deployment controller starts the five nginx-1.7 pods one by one; when three have started, you issue a command updating the Deployment's nginx to 1.9. The Deployment controller then immediately kills the three nginx-1.7 pods that have already started and gradually starts nginx-1.9 pods. It does not wait for all the 1.7 pods to start before killing them in turn and starting 1.9.

6.2 Upgrade Rollback

After an upgrade completes, you can roll back if the new version turns out to be unstable or does not meet business requirements. Suppose we find an error in the nginx upgrade we just performed, for example a mistyped image tag such as nginx:1.91; we can watch the rollout stall and then roll back with kubectl rollout undo:

$ kubectl set image deployment/nginx-deployment nginx=nginx:1.91
deployment "nginx-deployment" image updated
$ kubectl rollout status deployments nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
