How far is it from containerized technology to modern cloud computing?


Over the next few weeks, we will be releasing a new series of blog posts in which we explain some of Google's views on container technology and share some of Google's experience running services in containers over the last ten years. We are a team of Google product managers, front-line engineers, and architects, and our common goal is to help readers understand how the container revolution can help build and run services more effectively. To open the series, we invited Miles Ward, an expert from the Google Cloud Platform global solutions team.

Hello, everyone! Welcome to our new blog series, in which we introduce one of the most exciting areas of innovation in computing models today: container technology (containerization).

You may have a lot of questions: what is a container, and how does it work? What do Docker and Kubernetes mean, and what are Google Container Engine and Consolidator VMs? How are they all related? How do we build a robust service out of containers and run it across a large production cluster? What business value can users get from this technology? Well, no more suspense; let's get straight to the subject. We'll start with a concrete introduction to container technology, then talk about how it can make our work better.

As computing models (computing models) have developed, we have gone through several shifts in how computing is delivered. Looking back over the past 10 years, this change is clearest from the perspective of virtualization. Thanks to virtualization, we have greatly improved overall resource efficiency, while the repetitive work required to deliver a service has decreased accordingly. This trend has been reinforced by the advent of multi-tenancy, API-based management, and public cloud computing. One key breakthrough is the change in how resources are used: with virtualization, we can provision a small, independent, on-demand slice of compute within minutes, and that virtual machine feels as if it were running directly on a physical machine. So the question becomes: when you only need a small amount of resources, is it really necessary to virtualize an entire machine?

Google encountered this problem early on: we needed to release software faster and more cheaply, at a scale of computing resources never seen before. To meet this demand, we needed a higher level of abstraction over existing resources, so that services could control resources at a finer granularity. To this end, we added new technology to the Linux kernel, known as cgroups, which we use to isolate a service's runtime environment; the isolated environment is what we call a container. This is a new kind of virtualization that simplifies the underlying OS environment required for all of Google's services to run. Container technology has kept evolving in the years since, and with the advent of Docker its impact has expanded further: Docker uses this technology to create an interoperable format (interoperable format) for container-based applications.


Why use containers?
What does container technology provide that virtual machines do not?
Simplified deployment: container technology packages your application into a single addressable, registry-stored (registry-stored) component that can be deployed with a single command. No matter where you want to deploy the service, containers fundamentally simplify your deployment.

Rapid availability: container technology abstracts operating-system resources rather than virtualizing an entire physical machine, so a packaged service can start in about 1/20 of a second, compared with the minute or so it may take to boot a virtual machine.

Leverage microservices: containers allow developers and system administrators to subdivide computing resources further. If even a small virtual machine provides far more resources than your service needs, or if scaling your system means growing it one whole virtual machine at a time, containers may well improve the situation.
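The "single command" deployment described above can be sketched with Docker. This is a minimal, hypothetical example; the base image, app layout, and registry name are placeholders, not something from the original article:

```dockerfile
# Minimal image for a hypothetical Node.js service
FROM node:18-slim
WORKDIR /app
COPY package.json ./
RUN npm install --omit=dev
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Once built and pushed (e.g. `docker build -t registry.example.com/myapp:1.0 . && docker push registry.example.com/myapp:1.0`), any host can deploy the service with one line: `docker run -d -p 8080:8080 registry.example.com/myapp:1.0`.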

How can these advantages of container technology help you in your work?
The most obvious way is that developers can run multiple containers simultaneously on a laptop, making rapid service deployment easy to try out. Running multiple virtual machines on a single laptop is possible too, but containers are clearly faster, simpler, and lighter.

Furthermore, containers make service release management easier: releasing a new container version requires only a single command. Testing also becomes cheaper. On a public cloud platform, virtual machines may be billed in increments of at least ten minutes (or even a whole hour), so if you run only a single test program, the test itself may consume just a fraction of what you pay for. If you run thousands of tests every day, the cost of those resources climbs quickly. By running the same tests in containers, you can complete thousands of tests for roughly the resource consumption of a single virtual machine, which can greatly reduce your service costs.

Another important advantage is composability: with features deployed in containers, the entire system becomes easy to assemble, especially for systems that rely on open-source software. For a system administrator, the following work can be daunting: install and configure MySQL, Memcached, MongoDB, Hadoop, GlusterFS, RabbitMQ, Node.js, Nginx, and so on, then wrap all that software up into a running platform for the service. With containers, however, these complex tasks reduce to starting a few containers, one encapsulating each component, and gluing them together with some scripts so they cooperate as required. This not only simplifies deployment but also reduces operational risk.
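As a sketch of this kind of composition (the service names and image tags are illustrative, not from the original article), a Compose file can wire several of the components above together declaratively:

```yaml
# docker-compose.yml — illustrative composition of open-source components
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder secret, not for production
  cache:
    image: memcached:1.6
  queue:
    image: rabbitmq:3
  web:
    image: nginx:1.25
    ports:
      - "80:80"
    depends_on:
      - db
      - cache
      - queue
```

A single `docker compose up -d` then starts the whole stack, with each component isolated in its own container.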

If you build a service platform by hand as described above, there are many error-prone steps, the whole preparation process requires very specialized knowledge, and much of the work is duplicated. Instead, we can implement the core container components in a canonical way and publish them to a public registry service. Other users can then obtain the containers they need from the registry at any time, and a container ecosystem of high-quality components is gradually built up.

For a long time, the most important value of container technology has been providing a lightweight, consistent format for running services on different hosts. If you are building a service today, you might target bare-metal servers, a pre-defined virtualized infrastructure, a shared or private cloud platform, or one of the many PaaS providers you can choose from. But to make your service run on each of these platforms, you may need to package it in a different way for each one! If instead you standardize on the container format, providers of all these computing models can offer users a uniform delivery experience: users can migrate workloads easily and deploy on whichever platform is cheapest or fastest, avoiding lock-in to a single provider.

Docker
There are a number of detailed introductory documents on the web about how container technology and Docker work (the original post links to several of them). These documents make it clear that Docker is a "great solution"; at the moment there may be no other solution that matches it.

Container technology gives fine-grained control over resources, which is highly practical; but for services that need thousands of servers working together, container technology alone does not substantially improve the efficiency of any workload. Today's Docker is designed to operate on a single machine, which raises a series of questions: how should containers, and the workloads running in them, be allocated and coordinated across a cluster? How are they managed according to their resource consumption? How do they run in a multi-tenant network environment? How is their security guaranteed?

Perhaps, from a system-design point of view, we can ask a more essential question: are we even talking about the right resource abstraction? Most of the developers and corporate sponsors I have talked to are not interested in a specific container on a specific machine. They really want their service to start, produce value, and be easy to monitor and maintain; they do not want to know every trivial detail (at least, not most of the time), such as what a particular container on a particular machine is doing.

Kubernetes
Google solved this problem through constant product iteration: we built a management system for clusters, networks, and naming. The first version of this system was called Borg, and its successor is called Omega. With this management system, we can use container technology across Google's large-scale cluster resources; we currently start about 7,000 containers per second, more than two billion containers a week. Drawing on Google's experience and accumulated technology in containers, we built Kubernetes (sometimes abbreviated K8s on forums).

Kubernetes abstracts resources from another perspective: it lets developers and administrators focus on the behavior and performance of the service as a whole, rather than on a single component or an underlying resource.

So what does a Kubernetes cluster do that a single container cannot? Kubernetes focuses on service-level control rather than merely container-level control; it provides a "savvy" way to manage a service as a whole. In a Kubernetes solution, a service can even scale itself, diagnose itself, and upgrade easily. At Google, for example, we use machine learning techniques to keep each running service in its most efficient state.
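Service-level control of this kind is expressed declaratively in Kubernetes. A minimal sketch (the names and image below are placeholders, not from the article): you declare that three replicas of a service should exist, and the cluster keeps them running, restarting or rescheduling containers as needed.

```yaml
# deployment.yaml — a declarative, service-level specification
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-service
spec:
  replicas: 3            # Kubernetes keeps three copies running at all times
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: registry.example.com/hello:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

`kubectl apply -f deployment.yaml` declares the desired state; the cluster then converges on that state and maintains it, which is exactly the shift from container-level to service-level control described above.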

If a single container helps a developer reduce deployment complexity, Kubernetes minimizes the complexity of a team working together. Kubernetes lets teams compose services out of containers and have them deployed according to specified rules, ensuring the service runs correctly. In the traditional approach, a lack of isolation means that services, or the parts of a service, easily interfere with one another; with Kubernetes, these conflicts are avoided at the system level. At Google, this enhanced collaboration has improved developer productivity and further raised service availability, making deployment on large clusters more agile.

However, this technology is still at an early stage of development. Kubernetes has already been adopted by many customers and well-known companies, including Red Hat, VMware, CoreOS, Mesosphere, and others. These companies are eager to help their customers extract the commercial value of container technology through Kubernetes-scale deployment.

Container Engine
Google Container Engine brings "containers as a service" to Google Cloud Platform. Based on Kubernetes, Container Engine provides developers with a way to quickly build and run containers, and it can also deploy, manage, and scale containers within the limits you set. We will introduce Container Engine in more detail in upcoming posts.

Deployment options
We can see that container technology marks the beginning of an evolution in computing models, and Google is playing a major role in this revolution. As you start working with containers and learn more about how they are deployed, consider the following options for actual service deployment and choose whichever works best:

  • If you plan to run a managed cluster or start dozens of containers, try Google Container Engine.

  • If you want to build your own cluster on shared infrastructure or on your own systems, use Kubernetes.

  • To run containers on fully managed infrastructure, try Google App Engine or Consolidator VMs.
