Kubernetes Controllers, Part Three

Source: Internet
Author: User
Tags: dns names, fluentd, k8s

StatefulSets

A StatefulSet is the workload API object used to manage stateful applications.

Note: StatefulSets are stable (GA) in Kubernetes 1.9.

A StatefulSet manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of those Pods.

Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.

A StatefulSet operates under the same pattern as any other controller. You define your desired state in a StatefulSet object, and the StatefulSet controller makes any necessary updates to get there from the current state.

    • Using StatefulSets
    • Limitations
    • Components
    • Pod Selector
    • Pod Identity
      • Ordinal Index
      • Stable Network ID
      • Stable Storage
      • Pod Name Label
    • Deployment and Scaling Guarantees
      • Pod Management Policies
        • OrderedReady Pod Management
        • Parallel Pod Management
    • Update Strategies
      • On Delete
      • Rolling Updates
        • Partitions
Using StatefulSets

StatefulSets are valuable for applications that require one or more of the following:

    • Stable, unique network identifiers.
    • Stable, persistent storage.
    • Ordered, graceful deployment and scaling.
    • Ordered, graceful deletion and termination.
    • Ordered, automated rolling updates.

In the above, stable is synonymous with persistence across Pod (re)scheduling. If an application doesn't require any stable identifiers or ordered deployment, deletion, or scaling, you should deploy your application with a controller that provides a set of stateless replicas. Controllers such as Deployment or ReplicaSet may be better suited to your stateless needs.

Limitations
    • StatefulSet was a beta resource prior to 1.9, and is not available in any Kubernetes release prior to 1.5.
    • As with all alpha/beta resources, you can disable StatefulSet through the --runtime-config option passed to the apiserver.
    • The storage for a given Pod must either be provisioned by a PersistentVolume provisioner based on the requested storage class, or pre-provisioned by an admin.
    • Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.
    • StatefulSets currently require a headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.
Components

The example below demonstrates the components of a StatefulSet.

    • A headless Service, named nginx, is used to control the network domain.
    • The StatefulSet, named web, has a spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
    • The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume provisioner.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: my-storage-class
      resources:
        requests:
          storage: 1Gi
Pod Selector

You must set the .spec.selector field of a StatefulSet to match the labels of its .spec.template.metadata.labels. Prior to Kubernetes 1.8, the .spec.selector field was defaulted when omitted. In 1.8 and later versions, failing to specify a matching Pod selector results in a validation error during StatefulSet creation.
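Excerpted from the example manifest above, these are the two fields that must agree:

```yaml
spec:
  selector:
    matchLabels:
      app: nginx        # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: nginx
```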

Pod Identity

StatefulSet Pods have a unique identity that is comprised of an ordinal, a stable network identity, and stable storage. The identity sticks to the Pod, regardless of which node it is (re)scheduled on.

Ordinal Index

For a StatefulSet with N replicas, each Pod in the StatefulSet is assigned an integer ordinal, in the range [0, N-1], that is unique over the set.

Stable Network ID

Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the Pod. The pattern for the constructed hostname is $(statefulset name)-$(ordinal). The example above will create three Pods named web-0, web-1, web-2. A StatefulSet can use a headless Service to control the domain of its Pods. The domain managed by this Service takes the form $(service name).$(namespace).svc.cluster.local, where "cluster.local" is the cluster domain. As each Pod is created, it gets a matching DNS subdomain, taking the form $(podname).$(governing service domain), where the governing service is defined by the serviceName field on the StatefulSet.

The choices of cluster domain, Service name, and StatefulSet name together determine the DNS names of the StatefulSet's Pods. With the example above deployed in the default namespace, the Service domain is nginx.default.svc.cluster.local and the Pods are addressable as web-0.nginx.default.svc.cluster.local through web-2.nginx.default.svc.cluster.local.

Note that the cluster domain will be set to cluster.local unless otherwise configured.

Stable Storage

Kubernetes creates one PersistentVolume for each volumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume with a StorageClass of my-storage-class and 1 GiB of provisioned storage. If no StorageClass is specified, then the default StorageClass will be used. When a Pod is (re)scheduled onto a node, its volumeMounts mount the PersistentVolumes associated with its PersistentVolumeClaims. Note that the PersistentVolumes associated with the Pods' PersistentVolumeClaims are not deleted when the Pods, or the StatefulSet, are deleted. This must be done manually.

Pod Name Label

When the StatefulSet controller creates a Pod, it adds a label, statefulset.kubernetes.io/pod-name, that is set to the name of the Pod. This label allows you to attach a Service to a specific Pod in the StatefulSet.
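As a sketch, a Service that routes traffic only to web-0 from the example above could select on that label (the Service name here is illustrative, not part of the original example):

```yaml
# Hypothetical Service targeting only the Pod web-0 of the example
# StatefulSet, via the label the controller adds automatically.
apiVersion: v1
kind: Service
metadata:
  name: web-0-direct   # illustrative name
spec:
  ports:
  - port: 80
  selector:
    statefulset.kubernetes.io/pod-name: web-0
```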

Deployment and Scaling Guarantees
    • For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
    • When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
    • Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.
    • Before a Pod is terminated, all of its successors must be completely shut down.

The StatefulSet should not specify a pod.Spec.TerminationGracePeriodSeconds of 0. This practice is unsafe and strongly discouraged. For further explanation, please refer to Force Deleting StatefulSet Pods.

When the nginx example above is created, three Pods will be deployed in the order web-0, web-1, web-2. web-1 will not be deployed before web-0 is Running and Ready, and web-2 will not be deployed until web-1 is Running and Ready. If web-0 should fail after web-1 is Running and Ready, but before web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and becomes Running and Ready.

If a user were to scale the deployed example by patching the StatefulSet such that replicas=1, web-2 would be terminated first. web-1 would not be terminated until web-2 is fully shut down and deleted. If web-0 were to fail after web-2 has been terminated and is completely shut down, but prior to web-1's termination, web-1 would not be terminated until web-0 is Running and Ready.

Pod Management Policies

In Kubernetes 1.7 and later, StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees, via its .spec.podManagementPolicy field.

OrderedReady Pod Management

OrderedReady pod management is the default for StatefulSets. It implements the behavior described above.

Parallel Pod Management

Parallel pod management tells the StatefulSet controller to launch or terminate all Pods in parallel, and not to wait for Pods to become Running and Ready or completely terminated prior to launching or terminating another Pod.
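The policy is selected with a single field on the StatefulSet spec; an excerpt (the remaining fields are as in the example above):

```yaml
spec:
  podManagementPolicy: Parallel   # default is OrderedReady
```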

Update Strategies

In Kubernetes 1.7 and later, StatefulSet's .spec.updateStrategy field allows you to configure and disable automated rolling updates for containers, labels, resource requests/limits, and annotations for the Pods in a StatefulSet.

On Delete

The OnDelete update strategy implements the legacy (1.6 and prior) behavior. It is the default strategy when .spec.updateStrategy is left unspecified. When a StatefulSet's .spec.updateStrategy.type is set to OnDelete, the StatefulSet controller will not automatically update the Pods in the StatefulSet. Users must manually delete Pods to cause the controller to create new Pods that reflect modifications made to the StatefulSet's .spec.template.
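As a minimal excerpt, the strategy is chosen with one field on the StatefulSet spec:

```yaml
spec:
  updateStrategy:
    type: OnDelete   # controller never updates Pods automatically
```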

Rolling Updates

The RollingUpdate update strategy implements automated, rolling updates for the Pods in a StatefulSet. When a StatefulSet's .spec.updateStrategy.type is set to RollingUpdate, the StatefulSet controller will delete and recreate each Pod in the StatefulSet. It will proceed in the same order as Pod termination (from the largest ordinal to the smallest), updating each Pod one at a time. It will wait until an updated Pod is Running and Ready prior to updating its predecessor.

Partitions

The RollingUpdate update strategy can be partitioned, by specifying a .spec.updateStrategy.rollingUpdate.partition. If a partition is specified, all Pods with an ordinal that is greater than or equal to the partition will be updated when the StatefulSet's .spec.template is updated. All Pods with an ordinal that is less than the partition will not be updated, and, even if they are deleted, they will be recreated at the previous version. If a StatefulSet's .spec.updateStrategy.rollingUpdate.partition is greater than its .spec.replicas, updates to its .spec.template will not be propagated to its Pods. In most cases you will not need to use a partition, but partitions are useful if you want to stage an update, roll out a canary, or perform a phased roll out.
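A sketch of a canary-style partition for the three-replica example above; with this excerpt applied, a template change would update only web-2, leaving web-0 and web-1 at the previous version:

```yaml
spec:
  replicas: 3
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # only Pods with ordinal >= 2 are updated
```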

Daemon Sets
    • What is a DaemonSet?
    • Writing a DaemonSet Spec
      • Create a DaemonSet
      • Required Fields
      • Pod Template
      • Pod Selector
      • Running Pods on Only Some Nodes
    • How Daemon Pods Are Scheduled
    • Communicating with Daemon Pods
    • Updating a DaemonSet
    • Alternatives to DaemonSet
      • Init Scripts
      • Bare Pods
      • Static Pods
      • Deployments
What is a DaemonSet?

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.

Some typical uses of a DaemonSet are:

    • Running a cluster storage daemon, such as glusterd or ceph, on each node.
    • Running a logs collection daemon on every node, such as fluentd or logstash.
    • Running a node monitoring daemon on every node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond.

In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A more complex setup might use multiple DaemonSets for a single type of daemon, but with different flags and/or different memory and CPU requests for different hardware types.

Writing a DaemonSet Spec

Create a DaemonSet

You can describe a DaemonSet in a YAML file. For example, the daemonset.yaml file below describes a DaemonSet that runs the fluentd-elasticsearch Docker image:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
