Kubernetes in Practice | No Holding Back: How Companies Abroad Are Migrating to Kubernetes

Source: Internet
Author: User
Tags: Elasticsearch, VMware Fusion, Docker Swarm, Amazon ECS, Kubernetes, deployment

Guide:

Kubernetes is taking off. So when should an enterprise start migrating to it, and under what circumstances does adoption really make sense? The first-hand experience of companies at the technology frontier is probably the most persuasive and valuable reference, and this article collects some of it.

1

Kubernetes is all the rage now, part of a massive cloud-native movement. All major cloud providers use it as a solution for deploying cloud-native applications. Just a few weeks ago, at re:Invent, AWS introduced EKS (Amazon Elastic Container Service for Kubernetes), a fully managed Kubernetes service.

This is a huge improvement, because AWS is the largest public cloud provider and most Kubernetes deployments run on AWS. The official kops tool is ready to deploy and manage Kubernetes clusters there. As Kubernetes becomes more and more popular, businesses are trying to adopt it in the hope of solving many common problems.

So, should companies really start migrating to Kubernetes, and under what circumstances should they adopt it? This article attempts to answer that question and lays out some challenges of, and suggestions for, migrating to Kubernetes and cloud native.

The Problem

Earlier this year, we started containerizing the Sematext Cloud deployment, which seemed very simple: package all the applications, create some Kubernetes configurations for their resources (deployments, services, and so on), and go. However, it was not that easy.

The main problem was that everything used to manage the release process was unsuited to the cloud-native model. The applications were not cloud-native: they could not use CI/CD deployment pipelines or health checks, and had no monitoring tooling for logs, metrics, and alerts. Enterprise applications can be very static and complex to deploy. That complexity does not disappear when you migrate to Kubernetes; it only makes things worse. Cloud-native means decoupling the operating system from the application, and that is exactly what containers do.

In the evolution of the software industry, first came waterfall, then agile, and now DevOps. This is an important point, and there are no fixed rules: each team uses whatever works best for it. If you want to stick with your existing routine, no problem, carry on as usual. Just make sure that routine works in the cloud, which usually means some changes. Switching an entire team to embrace DevOps principles is not easy and takes time to prepare.

The Solution

This is not a recipe to be followed blindly. It sketches an approach and explains the processes and problems you may encounter along the way.

The first step is to clean up all unused components. After a few years of development, software becomes very complex, because there is always something more important to do: new features, bug fixes, and so on.

Second, adding Kubernetes readiness and liveness probes for health checks is not difficult. Managing configuration is the hardest part. My advice is to use ConfigMaps and Secrets and stay away from environment variables. You can use environment variables for a subset of settings, but do not rely on them alone for the entire configuration. Applications should exploit the full potential of Kubernetes.
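As a rough sketch of what that can look like (the app name, image, ports, and probe paths here are all hypothetical), a Deployment can wire in probes and ConfigMap-based configuration roughly like this:

```yaml
# Sketch (hypothetical names): probes plus config mounted from a ConfigMap
# instead of environment variables
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |
    log.level=INFO
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # hypothetical image
        readinessProbe:          # pod receives traffic only while this passes
          httpGet: { path: /health/ready, port: 8080 }
        livenessProbe:           # pod is restarted when this keeps failing
          httpGet: { path: /health/live, port: 8080 }
        volumeMounts:
        - name: config
          mountPath: /etc/app
      volumes:
      - name: config
        configMap:
          name: app-config
```

Mounting the ConfigMap as a volume means configuration changes do not require rebuilding the image, only rolling the pods.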

Kubernetes applications communicate with each other through Services. There are different types of Services in Kubernetes, and you can think of them as load balancers: the Service name you define becomes your endpoint, http://service_name:port.

If you run a stateful application, you will want to use a headless Service to reach specific pods. Kubernetes Services also solve service discovery to some extent. If you already use a service-discovery tool like Consul, congratulations: you can stick with it and deploy it on Kubernetes.
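A minimal sketch of the two Service flavors (all names are hypothetical): a regular Service that load-balances across pods, and a headless Service (clusterIP: None) whose DNS name resolves to the individual pod addresses:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # reachable as http://web:80 inside the cluster
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: es-nodes       # headless: DNS returns the individual pod IPs
spec:
  clusterIP: None
  selector:
    app: elasticsearch
  ports:
  - port: 9300
```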

From a Kubernetes point of view, stateless applications are easy to scale. Using a Deployment resource, this kind of service also gets simple upgrades for free via rolling updates. Of course, your application must handle scaling without problems, which may require some code changes.
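For illustration, the rolling-update behavior is controlled by a few fields in the Deployment spec (the numbers below are only examples):

```yaml
# Fragment of a Deployment spec: a rolling update replaces pods gradually,
# keeping the service available throughout
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod above the replica count
```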

The main problem is stateful applications, such as databases. Kubernetes provides the StatefulSet resource for them, but it does not know how a particular application should react when a new node is added or fails; that is what operations people typically handle by hand. However, writing a Kubernetes operator is not very difficult.

In short, an operator is built around a Kubernetes Custom Resource Definition (CRD) plus a controller that acts on it; you can write your own or use an existing one. We use the Elasticsearch Operator and are happy to contribute to that project; I have made a few pull requests.
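As a sketch of the CRD half of an operator (the group and kind names are invented for illustration), a definition might look like:

```yaml
# Sketch: the custom resource definition an operator would watch
# (group/kind names are hypothetical)
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: elasticsearchclusters.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    kind: ElasticsearchCluster
    plural: elasticsearchclusters
    singular: elasticsearchcluster
```

The controller then watches ElasticsearchCluster objects and performs the node-join and failure-handling steps an operator would otherwise do manually.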

You can probably see the pieces coming together now. There are two kinds of compute resource settings: requests, which specify the minimum amount of free resources a node must have for the Kubernetes scheduler to place a particular pod, and limits, the maximum amount of compute resources the pod may use. This is important, especially for Java applications, where you need to align the limits with the heap memory requirements. Given what we know about Docker CPU and memory limits, my advice is to use Java 8u131 or later, which can be made aware of cgroup limits.
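A hedged example of such a container fragment (the image name and all numbers are illustrative): requests guide scheduling, limits cap usage, and for Java the heap should stay well below the memory limit:

```yaml
# Container fragment: requests guide scheduling, limits cap usage
containers:
- name: app
  image: example/app:1.0          # hypothetical image
  resources:
    requests:
      cpu: 500m                   # scheduler needs this much free on a node
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi                 # exceeding this gets the container OOM-killed
  env:
  - name: JAVA_OPTS               # keep -Xmx well under the 2Gi limit
    value: "-Xmx1g"
```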

Labeling your applications is really useful when you need to monitor containers and applications, and label metadata is also how you connect different resources through selectors, for example a Deployment to its Service.

When you start writing Kubernetes configuration files, things feel fine and maintenance does not look like a big problem. But when you try to implement a deployment pipeline, you discover that a pile of raw configuration files is a very bad idea. This is where Helm comes to the rescue: Helm is a tool for packaging Kubernetes applications, and I highly recommend using it in deployment pipelines. Support for multiple environments, dependencies, versioning, rollbacks, and different hooks (think DB migrations) covers most of Helm's benefits. The bad news is that you need to learn yet another tool, but it is easy to pick up and worth your time.
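A sketch of what a Helm-driven flow can look like, using Helm 2 syntax (current when this was written) and a hypothetical chart called myapp:

```shell
# Hypothetical chart name "myapp"; the flow of a Helm-based pipeline
helm create myapp                          # scaffold a chart
helm install --name myapp-staging \
     --namespace staging ./myapp           # deploy to a staging namespace
helm upgrade myapp-staging ./myapp \
     --set image.tag=abc1234               # roll out a new image tag
helm rollback myapp-staging 1              # revert to a previous revision
```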

Enterprises do not need multiple clusters for different environments; separate Kubernetes namespaces are enough. For example, to create a new environment you simply create a separate namespace, then delete it after testing, saving a lot of time and money. But do not forget about security. kubectl is like the root user of a cluster, and giving everyone access to it may not be a good idea. Sometimes creating a whole new cluster is better than managing RBAC, which can get very complex. Still, I strongly recommend using RBAC.
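For example, a throwaway test environment as a namespace might be handled like this (the namespace and file names are hypothetical):

```shell
# Sketch: one cluster, per-environment namespaces
kubectl create namespace feature-x-test     # spin up a test environment
kubectl apply -f app.yaml -n feature-x-test # deploy the app into it
kubectl delete namespace feature-x-test     # tear everything down afterwards
```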

So, how should an enterprise version container images, Helm packages, and so on? It is up to you. The most common approach is to use the commit ID as the image tag, plus tags for releases. You then need to store the Docker images somewhere, in a private or public registry. My advice is Docker Hub, which is perhaps the most cost-effective. With all that resolved, you can create a deployment pipeline. Companies can use Jenkins to deploy everything and make it one of their primary DevOps tools.
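One possible tagging flow along those lines (the image name is hypothetical):

```shell
# Sketch: tag images with the commit ID, plus a tag for releases
docker build -t myorg/myapp:$(git rev-parse --short HEAD) .
docker push myorg/myapp:$(git rev-parse --short HEAD)
# on release, add a human-readable version tag:
docker tag myorg/myapp:$(git rev-parse --short HEAD) myorg/myapp:1.2.0
docker push myorg/myapp:1.2.0
```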

The Conclusion

More recently, the focus has been on cloud native, not just Kubernetes itself. Kubernetes is just a tool, and our migration of Sematext Cloud to it is still in progress. Migration needs to be seen as a process: it is not a one-time job, and it is never completely finished.

The process of migrating to cloud native and Kubernetes is both difficult and time-consuming, but it is very good for company culture and software. Scaling is no longer a problem, and infrastructure becomes just code and software. Embrace Kubernetes, but be prepared for the challenges.

2
Kubernetes Can Be Deployed in Any Environment

As you begin cloud-native development and deployment, take a look at the role Kubernetes plays and how to get more out of orchestration.

Containers make it possible to detach an application and its dependencies from the operating system. By forgoing the full operating system that virtual machine images carry, containers save a lot of compute, memory, and disk space. Containers are also faster to download, update, deploy, and iterate on. So, in technology terms, containers have caused a small revolution and are being used by companies like Google, Microsoft, and Amazon.

This small revolution has also brought fierce competition to meet the need for orchestration and management. Kubernetes, Google's open-source container orchestration tool, has become the leading solution (alternatives include Amazon ECS and Docker Swarm), thanks to three main factors:

Cloud-Native Design: Supports the deployment and running of next-generation applications.
Open source features: Fast innovation to avoid vendor lock-in.
Portability: Deploy anywhere, whether in the cloud, on-premises, in a virtual machine, and so on.

The figure below shows the role Kubernetes plays in cloud-native deployments:

                                                                  Kubernetes container orchestration

Kubernetes can deploy and manage containerized applications, including Nginx, MySQL, Apache, and many others. It provides orchestration, scaling, replication, monitoring, and other capabilities for containers.

Once a container orchestration platform is chosen, the next step is to deploy Kubernetes. As mentioned earlier, Kubernetes is a portable solution: because it uses the same images and configuration everywhere, it works exactly the same way on a laptop, in the cloud, or on-premises.

Kubernetes-as-a-Service

These solutions can deploy Kubernetes on a variety of infrastructures: public cloud or on-premises. The advantages of choosing this approach for Kubernetes clusters include:

1. Upgrades, monitoring, and support through the KaaS provider.
2. Easy expansion into hybrid-cloud or multi-cloud environments.
3. A single-pane-of-glass view of multiple clusters.
4. Highly available, multi-master Kubernetes clusters that scale automatically with the workload.
5. Common enterprise integrations, such as SSO and isolated namespaces, and the ability to deploy applications through Helm charts.
6. Cluster federation, providing a truly seamless hybrid environment across multiple clouds or data centers.

                              Kubernetes-as-a-Service

Examples of Kubernetes-as-a-Service offerings include Platform9 and StackPoint.io.

Managed Infrastructure

Google Cloud Platform and Microsoft Azure provide Kubernetes through Google Container Engine (GKE) and Azure Container Service (ACS), respectively. Putting containers in the public cloud gets you started quickly, but the enterprise's data then resides outside the corporate firewall.

Google's GKE leads the other public cloud providers. Through a cluster manager called Borg, Google has used containers extensively for internal projects and has more than ten years of experience doing so (source: The Next Platform). By contrast, Microsoft's ACS is a younger product; its Kubernetes support was introduced only in February 2017.

However, ACS offers flexibility: users can choose their container orchestration platform (Kubernetes, Docker Swarm, DC/OS) and can deploy containerized applications on Windows in addition to Linux. As shown below, GKE and ACS are entirely public-cloud based, with the Kubernetes services and infrastructure deployed and managed by the hosting provider.

                                    Hosted Infrastructure for Kubernetes

Local deployment

Minikube is the most popular way to deploy Kubernetes locally. It supports a variety of hypervisors, including VirtualBox, VMware Fusion, KVM, and xhyve, and operating systems including OS X, Windows, and Linux. The following illustration further describes a Minikube deployment:

                                                                      Deploying with Minikube

As shown above, the user interacts with the laptop deployment through two CLIs: minikube and Kubernetes's native kubectl. The minikube CLI can start, stop, delete, report the status of, and otherwise operate on the virtual machine. Once the Minikube virtual machine is running, the kubectl CLI performs operations on the Kubernetes cluster. The following commands start an existing Minikube virtual machine and create an nginx Kubernetes deployment:

    # minikube start
    # cat > example.yaml <<EOF
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
    EOF
    # kubectl create -f example.yaml


Original links:
1. Embracing Kubernetes Successfully
https://sematext.com/blog/embracing-kubernetes-successfully/
2. Deploy Kubernetes Anywhere
https://dzone.com/articles/deploy-kubernetes-anywhere


