The trajectory of modern cloud computing, seen through containers and Kubernetes


[Editor's note] This article is taken from the Google Cloud Platform Blog and is the first in a series of posts introducing container technology. It gives a brief introduction to containers and Kubernetes, describes the advantages of containers, and explains how Google thinks about them. Container virtualization on a single server makes testing and deployment easier, but in production users usually have to manage the resources of an entire cluster. Google's Kubernetes provides a solution for deploying and managing containers across a cluster: it abstracts resources at a higher level and treats each service as a whole. The author argues that container technology is only the beginning of an evolution in computing models, and that Google will play an important role in this new technological revolution.

Over the next few weeks we will publish a new series of blog posts. In it, we want to explain how Google thinks about container technology and share some of the lessons we have learned from running services in containers over the past decade. We are a team of Google product managers, engineers, and architects, and our goal is to help readers understand how the "container revolution" can help them build and run services more effectively. This time we have invited Miles Ward, from the Google Cloud Platform global solutions team, to open the series.

Hello, everyone! Welcome to our new series of posts. In this series we will look at one of the most exciting areas of innovation in computing today: containerization.

You may have a lot of questions. What is a container, and how does it actually work? What do Docker and Kubernetes actually do? What are Google Container Engine and Managed VMs for, and how do they relate to each other? How can containers be used to build a powerful service that runs on large clusters in production? How do users extract business value from this technology? Let's stop teasing and get straight to the point: we will first take a close look at container technology, and then describe how it helps us work better.

As computing models have evolved, we have been through several shifts in how we provision them. Looking back over the past ten years, the clearest lens for this change is virtualization. Thanks to virtualization we have greatly improved overall resource utilization while reducing the time and repetitive work needed to deliver a service, and the arrival of multi-tenancy, API-based management, and public cloud computing has only reinforced that trend. The key breakthrough was the change in resource granularity: with virtualization we can provision a small, independent virtual CPU on demand within minutes, and it behaves as if it were running directly on the physical machine. Which raises the question: if you only need a small amount of resources, is it really necessary to virtualize an entire machine?

Google ran into this problem long ago: we need to release software faster and more cheaply, and the computing resources needed to keep our services running never stop growing. How do we solve that? To meet this requirement, we had to abstract existing resources at a higher level, so that services could control resources at a finer granularity. To do so we added a new mechanism to the Linux kernel, the now well-known control groups (cgroups), and used it to isolate the runtime environment of each service. We call such an isolated runtime environment a container. This is a new kind of virtualization that slims down the underlying OS environment that every Google service needs in order to run. Over the following years container technology kept evolving, and its impact widened further with the arrival of Docker, which used the technology to create an interoperable packaging format for container-based applications.
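
To make this concrete, here is a minimal sketch of the kind of isolation cgroups provide, using the cgroup v2 filesystem interface on a modern Linux kernel. The group name "demo" and the specific limits are invented for illustration; they are not from the original article.

    # Enable the cpu and memory controllers for child groups (they may already be on).
    echo "+cpu +memory" > /sys/fs/cgroup/cgroup.subtree_control

    # Create a new control group (requires root and a cgroup v2 hierarchy
    # mounted at /sys/fs/cgroup, the default on recent distributions).
    mkdir /sys/fs/cgroup/demo

    # Cap the group at half a CPU (50,000 out of every 100,000 microseconds)
    # and at 256 MiB of memory.
    echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max
    echo $((256 * 1024 * 1024)) > /sys/fs/cgroup/demo/memory.max

    # Move the current shell into the group; every process it starts from now
    # on inherits these limits, with no virtual machine in between.
    echo $$ > /sys/fs/cgroup/demo/cgroup.procs

Container runtimes such as Docker automate exactly this kind of bookkeeping (together with namespaces that isolate the filesystem, network, and process table), so in practice you never touch these files directly.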

Why use containers?

What do containers give you that virtual machines do not?
Simple deployment: containers let you package your application as a single image stored in a registry, which you can then deploy with a single command (see the sketch after this list). No matter where you want to deploy a service, containers fundamentally simplify the work.

Rapid availability: containers virtualize at the level of the operating system rather than virtualizing an entire physical machine, so a packaged service can start in about a twentieth of a second. A virtual machine, by contrast, may take a minute to boot.

Microservices: containers let developers and system administrators divide compute resources into much smaller units. If even a small virtual machine provides far more resources than your service needs, or if scaling your system one virtual machine at a time is too much work, containers can improve the situation.
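
As a concrete illustration of the "one image, one command" workflow mentioned above, here is a hedged sketch using the Docker CLI. The image name myapp, the tag 1.0, and the registry address registry.example.com are placeholders, not values from the article.

    # Package the application and its dependencies into an image.
    docker build -t registry.example.com/myapp:1.0 .

    # Publish the image to a registry so any host can pull it.
    docker push registry.example.com/myapp:1.0

    # Deploy it anywhere with a single command; the container starts in a
    # fraction of a second instead of the minute or so a VM needs to boot.
    docker run -d -p 8080:80 registry.example.com/myapp:1.0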

How do these advantages help you in your day-to-day work?
The most obvious way is that developers can run many containers at once on a laptop and deploy services quickly and conveniently. It is certainly possible to run several virtual machines on a laptop too, but containers are clearly faster, simpler, and more lightweight.

Containers also make it easier to manage release versions of a service: publishing a new container version takes just one command. Testing becomes easier as well. On public cloud platforms, virtual machines are typically billed in increments of at least ten minutes (or even a whole hour), so a single test run costs little, but running thousands of test jobs every day can make the resource costs soar. With containers, those same thousands of tests can be completed for roughly the resource cost of a single virtual machine, which can cut your operating costs dramatically.
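
For example, a throwaway test container can be started and discarded in one line, so repeated test runs cost no more infrastructure than the machine they share. This is a sketch that assumes the placeholder image from above contains the project's test suite; the test script name is illustrative.

    # Run the test suite in a disposable container; --rm removes the container
    # as soon as the tests finish, so repeated runs leave nothing behind.
    docker run --rm registry.example.com/myapp:1.0 ./run-tests.sh

    # Releasing a new version is likewise one command per step:
    docker build -t registry.example.com/myapp:1.1 .
    docker push registry.example.com/myapp:1.1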

Another important advantage is composition: with containers, features can be combined and deployed together, and the whole system becomes easier to assemble, especially when it relies on open source software. A system administrator might balk at having to install and configure MySQL, Memcached, MongoDB, Hadoop, GlusterFS, RabbitMQ, Node.js, Nginx, and so on, and then wrap them all into a runtime platform for a service. With containers, however, this complex work reduces to starting a handful of containers: package each of these services into its own container, then use a little scripting to make the containers cooperate as required. That not only simplifies deployment but also reduces operational risk.
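
Here is a hedged sketch of what "a few containers plus a little scripting" can look like, using the stock Docker CLI and official mysql, memcached, and nginx images; the network name, password, and ports are invented for illustration.

    # A private network so the containers can find each other by name.
    docker network create app-net

    # Off-the-shelf components, each in its own container.
    docker run -d --name db    --network app-net \
        -e MYSQL_ROOT_PASSWORD=example mysql:8.0
    docker run -d --name cache --network app-net memcached:1.6
    docker run -d --name web   --network app-net -p 8080:80 nginx:1.25

    # The web tier can now reach the others at the hostnames "db" and "cache",
    # and the whole stack tears down just as easily:
    #   docker rm -f web cache db && docker network rm app-net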

Building such a platform from scratch in the way just described leaves plenty of room for mistakes; it requires specialized knowledge and involves a great deal of repetitive work. A better approach is to implement the core container components once, in a standard way, and publish them to a public registry service. Other users can then pull the containers they need from the registry at any time, and this is how an ecosystem of high-quality container components gets built.

For a long time, the most important value of containers has been providing a lightweight, consistent format for running services on different hosts. If you want to build a service today, you might start from bare-metal servers, from pre-provisioned virtualized infrastructure, or from a public or private cloud platform, and there are plenty of PaaS providers to choose from as well. To make your service run on all of these platforms, however, you may have to package it in several different ways. If the container format is adopted as the standard, providers of all these different computing models can offer a uniform delivery experience, which lets you move workloads easily: you can deploy a job on whichever platform is cheapest and fastest, without being locked in to a single provider.

Docker

There is already plenty of detailed material online about container technology and how Docker implements it. Those documents make one thing clear: Docker is an excellent solution, and at the moment there is arguably no comparable alternative.

Containers give you much finer control over resources, which is genuinely valuable, but for services that need thousands of servers working together, containers alone do not substantially improve how any workload actually runs. Docker today is designed to operate on a single machine, which raises a series of questions: how should the containers running across a cluster, and the workloads inside them, be allocated and coordinated? How should they be managed according to their resource consumption? How do they run in a multi-tenant network environment? How is their security guaranteed?

Perhaps, from a system-design point of view, there is a more fundamental question: is this the right abstraction? Most of the developers and stakeholders I talk to are not interested in a particular container on a particular machine. What they really want is for their service to start, run, generate value, and be easy to monitor and maintain. They do not want to know (or at least do not want to have to know) every trivial detail, such as what a particular container on a particular machine is doing.

Kubernetes

Google solved this problem through successive generations of products: we built a management system that handles clusters, networking, and naming. The first version of this system was called Borg, and its successor is called Omega. With this system we can use containers across Google's huge fleet: we start roughly 7,000 containers every second, more than two billion containers every week. We used Google's hands-on experience and technical groundwork in containers to build Kubernetes (often abbreviated to K8s in the community).

Kubernetes abstracts resources at yet another level. It lets developers and operators focus on how their services behave and perform, rather than on any individual component or underlying resource.

So what does Kubernetes provide that a single container cannot? Kubernetes focuses on control at the level of a service rather than at the level of a container. It provides a form of "smart" management that treats the service as a whole: in a Kubernetes deployment a service can scale itself, diagnose itself, and be upgraded easily. At Google, for example, we use machine learning to keep each running service in its most efficient state.
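
To make "treating the service as a whole" concrete, here is a minimal sketch using today's kubectl command line (the original article predates some of this syntax). The deployment name, image, and replica counts are placeholders.

    # Describe the desired service: three replicas of one container image.
    kubectl create deployment myapp --image=registry.example.com/myapp:1.0 --replicas=3

    # Expose the replicas behind a single stable endpoint.
    kubectl expose deployment myapp --port=80 --target-port=8080

    # Operate on the service as a unit: scale it, roll out a new version,
    # and let Kubernetes replace any replica that fails along the way.
    kubectl scale deployment myapp --replicas=10
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.1
    kubectl rollout status deployment/myapp

The point is that every command names the service, not a container on a machine; Kubernetes decides where the individual containers actually run.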

If individual containers help developers reduce the complexity of deployment, Kubernetes minimizes the complexity of coordinating work across a team. Kubernetes lets a team combine services in the form of containers and deploy those containers according to specified rules, so that the services run correctly. In the traditional model, a lack of isolation means services tend to interfere with one another; with Kubernetes those conflicts are avoided at the system level. At Google, this stronger form of collaboration improves developer productivity and service availability, and it makes deployment on large clusters more agile.

That said, the technology is still in its early days. Kubernetes has already been adopted by many well-known customers and companies, including Red Hat, VMware, CoreOS, and Mesosphere, who are eager to use large-scale Kubernetes deployments to help their own customers extract the commercial value of container technology.

Container Engine

Google Container Engine brings "containers as a service" to Google Cloud Platform. Built on Kubernetes, Container Engine gives developers a way to get containers up and running quickly; it also deploys and manages containers and scales them within the limits you set. We will have more to say about Container Engine in later posts.
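
As a hedged illustration (not from the original article), creating a managed cluster with today's gcloud CLI takes only a couple of commands; the cluster name, zone, and node count are placeholders and assume an already configured Google Cloud project.

    # Create a managed Kubernetes cluster (GKE, formerly Container Engine).
    gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3

    # Fetch credentials so kubectl talks to the new cluster, then deploy as usual.
    gcloud container clusters get-credentials demo-cluster --zone us-central1-a
    kubectl get nodes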

Deployment options

As we have seen, containerization is the beginning of an evolution in computing models, and Google is playing a major role in this technological revolution. As you get started with containers and learn more about how they can be deployed, consider the following options for your own services and pick the one that fits best:

If you want a managed cluster, or to launch dozens of containers, try Google Container Engine. If you want to build your own cluster, on shared infrastructure or on your own machines, work with Kubernetes directly. And to run containers on fully managed infrastructure, try Google App Engine or Managed VMs.

Finally, we are very interested in your experience with containers and in your needs and ideas (even every issue you file on GitHub). Don't hesitate to get in touch; we will do our best to attend as many conferences and meetups as we can. We would love to talk with you about how container technology can change the way you work, and we look forward to hearing from you!




Original article: An introduction to containers, Kubernetes, and the trajectory of modern cloud computing
