Managing Kubernetes workloads with Rancher 2.0


Rancher 2.0 is an open-source, enterprise-grade Kubernetes platform that is now available in beta. Its simple, intuitive interface and user experience tackle a long-standing problem in the industry: the poor usability and steep learning curve of the native Kubernetes UI. Rancher 2.0's multi-cluster management is also a practical answer to the heterogeneous-infrastructure dilemmas that enterprise users may face in production. Together with the monitoring, logging, CI/CD, and other extended functionality it brings, Rancher 2.0 offers enterprises a far more convenient way to adopt Kubernetes in production.


Rancher 2.0 was designed with a number of factors in mind. You can provision and manage Kubernetes clusters, deploy user services on top of them, and easily control access through authentication and RBAC. One of Rancher 2.0's greatest strengths is its intuitive user interface; we hope it lifts the veil of mystery from Kubernetes and flattens its notoriously steep learning curve. In this article, Rancher Labs chief software engineer Alena walks you through the new Rancher 2.0 user interface and explains how to deploy a simple Nginx service in Rancher 2.0.


Design your workloads


Before you deploy workloads for your application, it is worth answering a few questions first:


· Is the application stateful or stateless?

· How many instances of the application need to run?

· What are the placement rules? Does the application need to run on specific hosts?

· Should the application be published as a service on the private network so that other applications can communicate with it?

· Does the application need a public access endpoint?


There are of course more questions to answer; these are just some of the most basic ones, but they are a good starting point. The Rancher UI exposes much more detail about workload configuration, which you can tune and upgrade later.


Deploying your first workload with Rancher 2.0


Let's start with something fun: deploying a very simple workload and using Rancher to connect it to the outside world. Let's assume you have already installed Rancher (installation is extremely simple and can be done with one click) and configured at least one Kubernetes cluster (not quite as simple as one click, but still very fast). Now all you have to do is switch to the Project view and click "Deploy" on the Workloads page:



Except for the image and the port mapping (we'll cover more details later in this article), all options are left at their defaults. I want my service published on a random port on every host in the cluster; when that port is hit, the traffic is redirected to Nginx's internal port 80. Once the workload has been deployed, the public port is shown in the UI for easy access.



By clicking the 31217 public port link, you can jump straight to your service:



As you can see, it takes only a single step to deploy a workload and publish it externally, much as in Rancher 1.6. If you are a Kubernetes user, you know that this actually requires several Kubernetes objects behind the scenes: a deployment and a service. The deployment is responsible for launching the application containers; it also monitors container health and restarts crashed containers according to the restart policy. But to publish the application externally, Kubernetes needs an explicitly created service object. Rancher captures the workload declaration through a user-friendly interaction and creates all of the required Kubernetes constructs in the background, which greatly simplifies the end user's work. Those constructs are described in the next section.
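For readers curious what those two objects look like, here is a minimal sketch of the Deployment and NodePort Service that a setup like this boils down to. The names, labels, and image are illustrative rather than the exact ones Rancher generates, and the manifests use today's apps/v1 API rather than the version Rancher 2.0 originally shipped against:

```yaml
# Illustrative Deployment: launches the Nginx container, monitors its health,
# and restarts crashed containers according to the restart policy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80       # the internal port traffic is redirected to
---
# Illustrative Service: publishes the workload on a random port (in the
# 30000-32767 range) on every node, since no explicit nodePort is given.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx                    # must match the Deployment's pod labels
  ports:
  - port: 80
    targetPort: 80                # nodePort omitted: Kubernetes picks a random one
```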


More workload options


By default, the Rancher UI presents users with the most basic options for deploying a workload. You can change any of these options yourself, starting, for example, with the workload type:



Depending on the type selected, the corresponding Kubernetes resource is created.


· Scalable deployment of (n) pods: Kubernetes Deployment

· Run one pod on each node: Kubernetes DaemonSet

· Stateful set: Kubernetes StatefulSet

· Run on a cron schedule: Kubernetes CronJob (sketched below)
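As an illustration of this mapping, here is a minimal, hypothetical sketch of what the cron-schedule type corresponds to on the Kubernetes side (the name, schedule, and command are made up; older clusters use the batch/v1beta1 API group instead of batch/v1):

```yaml
# Hypothetical CronJob: runs a one-off container every day at 02:00.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-task
spec:
  schedule: "0 2 * * *"            # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: busybox
            command: ["sh", "-c", "date"]
          restartPolicy: OnFailure # re-run the pod if the task fails
```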


Depending on the type, you can also set options such as the image, environment variables, and labels, which together define your application's deployment specification. You can now expose the application externally through the Port Mapping section:



With this port declaration, once the workload is deployed it is exposed on the same random port on every node in the cluster. If you need a specific value rather than a randomly generated one, set it under Source Port. There are several options under Publish On:



Depending on what you select, Rancher creates the corresponding service object on the Kubernetes side:

· Every node: Kubernetes NodePort service

· Internal cluster IP: Kubernetes ClusterIP service. In this case your workload is reachable only over the cluster's private network

· Layer-4 load balancer: Kubernetes Load Balancer service. Select this option only if your Kubernetes cluster is deployed in a public cloud (such as AWS) and has external load balancer support (such as AWS ELB)

· The node running a pod: no service is created; the HostPort option is set in the deployment specification (see the sketch below)
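For example, the last option translates, roughly, into a HostPort entry on the container in the deployment spec, with no Service object involved. The names and port numbers below are hypothetical:

```yaml
# Hypothetical Deployment fragment: hostPort exposes container port 80 on
# port 8080 of whichever node the pod is scheduled to; no Service is created.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hostport
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-hostport
  template:
    metadata:
      labels:
        app: nginx-hostport
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 8080           # reachable only via the node running this pod
```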

We've highlighted the implementation details here, but you don't actually have to deal with them: the Rancher UI/API provides all the information needed to reach your workload with a single click on the workload's link.
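Still, for the curious, the internal-only option above maps to a plain ClusterIP service, reachable solely from inside the cluster network. A minimal sketch with hypothetical names:

```yaml
# Hypothetical ClusterIP Service: gets a virtual IP on the cluster's private
# network; there is no node port and no external exposure.
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal
spec:
  type: ClusterIP                  # the default type, shown here for clarity
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```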


Distributing traffic between workloads with Ingress


There is one more way to publish a workload: through Ingress. It not only publishes applications on the standard HTTP ports 80/443, but also provides L7 routing capabilities along with SSL termination. This is useful if you are deploying a web application and want to route traffic to different endpoints based on host/path routing rules:
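As a sketch of what such host/path rules look like as a Kubernetes object, here is a hypothetical Ingress (the hostnames, service names, and TLS secret are made up, and clusters of the Rancher 2.0 era used the older extensions/v1beta1 Ingress API rather than networking.k8s.io/v1):

```yaml
# Hypothetical Ingress: TLS termination plus L7 routing by host and path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls        # certificate used for SSL termination
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-frontend     # default route
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: web-api          # path-based route to a second workload
            port:
              number: 8080
```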



Unlike Rancher 1.6, the load balancer is not tied to a specific LB provider such as HAProxy. The implementation differs with the cluster type: for Google Container Engine (GKE) clusters the load balancer is GLBC, for Amazon EKS it is AWS ELB/ALB, and for Digital Ocean/Amazon EC2 the Nginx load balancer is used. We will introduce more load balancers in the future based on our users' needs.


More powerful service discovery


If you are building an application composed of multiple workloads, you will most likely use DNS-resolvable service names to connect them. You could use a container's IP address directly, but containers die and IP addresses change, so DNS is the best approach. For Kubernetes clusters created with Rancher, Kubernetes service discovery is built in: every workload created from the Rancher UI can be resolved by name within its namespace. Although discovering a workload technically requires an explicitly created Kubernetes service (of ClusterIP type), Rancher takes this burden off the user and automatically creates such a service for each workload. In addition, Rancher enhances service discovery by letting users create the following:

· An alias for another DNS value (sketched below)

· A custom record that points to one or more existing workloads
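Under the hood, a DNS alias of this kind plausibly maps to a Kubernetes ExternalName service. A minimal sketch with hypothetical names:

```yaml
# Hypothetical ExternalName Service: resolving "db" inside the namespace
# returns a CNAME to the external DNS name, with no proxying involved.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  type: ExternalName
  externalName: mydb.example.com
```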

All of the above can be found on the Workloads Service Discovery page in the UI:



As you can see, configuring a workload in Rancher 2.0 is as simple as it was in 1.6. Although the Rancher 2.0 backend now implements everything through Kubernetes, the Rancher UI simplifies workload creation just as it did before. Through the Rancher interface you can expose your workloads to the outside world, place them behind a load balancer, and configure internal service discovery, all in an intuitive and simple way.

This article has covered the basics of workload management. In the future we will bring more articles about other Rancher 2.0 features, such as volumes and the application catalog. The Rancher 2.0 UI and backend are also changing constantly; by the time you read this, an even cooler feature may have shipped, so stay tuned!

