Containerization
You may have plenty of questions: what exactly is a container, and how does it work? What are Docker and Kubernetes, and what are Google Container Engine and Managed VMs for? How do these pieces relate, and how can we build a powerful service out of containers and run it on a large production cluster? What business value can this technology deliver? Well, no more suspense; let's get straight to the subject. We'll start with a concrete introduction to container technology and then explain how it is enabling us to work better.
As computing models continue to evolve, we have lived through several shifts in how they are realized. Looking back over the past ten years, we can see this change clearly through the lens of virtualization. Thanks to advances in virtualization, we have greatly improved overall resource utilization, while the time spent on repetitive work needed to deliver a service has dropped accordingly. This trend has been reinforced by multi-tenancy, API-based management, and public cloud computing. The most critical breakthrough is the change in how resources are consumed: with virtualization, we can provision a small, standalone, on-demand virtual CPU in a matter of minutes, and that virtual CPU behaves as if it were running directly on the physical machine. Which raises the question: when you only need a small amount of resources, is it really necessary to virtualize an entire machine?
Google ran into this problem very early: we needed to release software faster and more cheaply, and the scale of computing resources required to keep our services running was unprecedented. How should that be solved? To meet this demand, we needed a higher-level abstraction over existing resources, allowing services to control resources at a finer granularity. To do this, we added a new mechanism to the Linux kernel, known as cgroups (control groups), which we use to isolate a service's runtime environment; an environment isolated this way is called a container. This is a new kind of virtualization technology that simplifies the underlying OS environment that all Google services need in order to run. In the years since, container-related technologies have kept evolving, and the impact of the technology expanded further with the arrival of Docker, which used it to create an interoperable format for container-based applications.
Why use containers?
What do containers offer that virtual machines do not?
Simple deployment: container technology packages your application into a single, addressable, registry-stored component that can be deployed with a single command. No matter where you want to deploy your service, containers can radically simplify the deployment work.
Rapid availability: container technology abstracts resources at the operating-system level rather than virtualizing an entire physical machine. As a result, a packaged service can start in about a twentieth of a second; by contrast, a virtual machine may take a minute to boot.
Leverage microservices: containers allow developers and system administrators to subdivide compute resources further. If even a small virtual machine provides far more resources than your service needs, or if scaling your system one virtual machine at a time is too much work, containers can markedly improve the situation.
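The "single command" deployment described above can be sketched with a minimal, hypothetical Dockerfile; the base image, file names, and port here are illustrative assumptions, not something from a real project:

```dockerfile
# Hypothetical example: package a small Python web service into a container image.
FROM python:3.12-slim            # base image providing the OS layer and runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8080                      # port the service listens on (assumed)
CMD ["python", "server.py"]      # single entry point for the service
```

Once built and pushed (`docker build -t registry.example.com/myapp . && docker push registry.example.com/myapp`, with a placeholder registry name), deploying on any container-capable host really is one line: `docker run -d -p 8080:8080 registry.example.com/myapp`.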
How can these benefits of container technology help you in your day-to-day work?
The most obvious benefit is that developers can run multiple containers simultaneously on a laptop and deploy services quickly and easily. While it is possible to run several virtual machines on a single laptop, containers are clearly faster, simpler, and more lightweight.
Furthermore, containers make service releases easier to manage: publishing a new container version takes only a single command. Testing becomes cheaper too. On public cloud platforms, virtual machines are often billed in increments of at least ten minutes (or even a whole hour), so a single short test run consumes far fewer resources than you pay for. If you run thousands of tests every day, the cost climbs steeply. With containers, those same thousands of tests can be run for roughly the resource cost of a single virtual machine, greatly reducing the cost of running your services.
Another important advantage is composability: with services deployed as containers, systems become easy to assemble, especially systems built from open source software. For a system administrator, the following can be daunting: installing and configuring MySQL, memcached, MongoDB, Hadoop, GlusterFS, RabbitMQ, Node.js, Nginx, and so on, and then wrapping it all into a running platform for the service. With containers, these complex tasks reduce to starting a few containers: encapsulate each service in its own container, then use a few scripts to make them work together as required. This not only simplifies deployment but also reduces operational risk.
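One way to express that composition today is a Compose-style file. This is a minimal sketch under assumed names: the application image, environment variable names, and passwords are placeholders, and the stack (Nginx in front of an app backed by memcached and MySQL) is just one plausible combination of the software listed above:

```yaml
# Hypothetical docker-compose.yml: wire an app, its cache, and its database together.
services:
  web:
    image: nginx:alpine                  # reverse proxy in front of the app
    ports: ["80:80"]
    depends_on: [app]
  app:
    image: registry.example.com/myapp    # placeholder application image
    environment:
      DB_HOST: db                        # service names double as hostnames
      CACHE_HOST: cache
  cache:
    image: memcached:alpine
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example       # placeholder; use a secret in practice
    volumes:
      - db-data:/var/lib/mysql           # keep data across container restarts
volumes:
  db-data:
```

A single `docker compose up -d` then starts the whole assembly, which is exactly the "start a few containers plus some glue" workflow described above.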
If you were to build such a service platform by hand following the process just described, there would be many error-prone steps, the whole procedure would demand very specialized knowledge, and much of the work would be repetitive. Instead, the core components can each be implemented once, in a canonical form, and published to a public registry service. Other users can then pull the containers they need from the registry at any time, and an ecosystem of high-quality container components grows up around it.
For a long time to come, the most important value of container technology will be that it provides a lightweight, consistent format for running services on different hosts. If you are building a service today, you might target bare-metal servers, a pre-provisioned virtualized infrastructure, a shared or private cloud platform, or one of the many PaaS providers available. But to run your service on each of these platforms, you might have to package it in several different ways, which makes little sense. By standardizing on the container format, the providers of these different computing models can offer users a uniform delivery experience: workloads migrate easily, and users can deploy wherever is cheapest and fastest, avoiding lock-in to a single platform provider.
Docker
There is a great deal of detailed documentation on the web about how container technology and Docker are implemented. That documentation is enough to show that Docker is an excellent solution; for now, there may not be any other solution that can match it.
Container technology gives us finer-grained control over resources, which has real practical value. But for services that need thousands of servers running together, containers by themselves do not make any workload run more efficiently. Docker today is designed to operate on a single machine, which raises a series of questions: how should containers, and the workloads running inside them, be allocated and coordinated across a cluster? How should they be managed according to their resource consumption? How do they run in a multi-tenant network environment? How is their security guaranteed?
Perhaps, from a system-design perspective, we can ask a more fundamental question: is this even the right resource abstraction to be discussing? Most of the developers and company decision-makers I have talked to are not interested in a specific container on a specific machine. They want their services to be up, generating value, and easy to monitor and maintain, and they do not want to be bogged down in details (at least, not most of the time), such as what a particular container on a particular machine is doing.
Kubernetes
Google has solved this problem through continuous product iteration: we built a management system that handles clusters, networking, and naming. The first version of this system was known as Borg, and its successor is called Omega. With this system, we can use container technology across Google's large-scale cluster resources. We now launch more than two billion containers every week, several thousand per second. We have drawn on Google's hands-on experience and technical expertise with containers to build Kubernetes (often abbreviated k8s).
Kubernetes abstracts resources from another angle: it lets developers and administrators focus on the behavior and performance of a service as a whole, rather than on individual components or the underlying resources.
So what does a Kubernetes cluster offer that a single container does not? It focuses on control at the service level rather than the container level: Kubernetes provides an intelligent way to manage services as a whole. In the Kubernetes model, a service can scale itself, diagnose itself, and be upgraded easily. At Google, for example, we use machine learning techniques to ensure that each running service stays in its most efficient state.
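The service-level control described here is expressed declaratively: you describe the desired state of the service, and the cluster keeps reality matching it. A minimal sketch, using the current Kubernetes API and placeholder names and images:

```yaml
# Hypothetical Kubernetes manifest: declare a service's desired state and let
# the cluster maintain it (restarting failed replicas, scaling, rolling upgrades).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                      # Kubernetes keeps three copies running at all times
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:v2   # changing this tag triggers a rolling upgrade
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector: {app: myapp}           # routes traffic to whichever replicas are healthy
  ports:
  - port: 80
    targetPort: 8080
```

If a replica crashes, the cluster replaces it; `kubectl scale deployment myapp --replicas=10` grows the service without touching any individual container, which is precisely the shift from container-level to service-level thinking.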
If a single container helps developers reduce the complexity of deployment, Kubernetes minimizes the complexity of teams working together during development. Kubernetes lets teams compose services out of containers and have them deployed according to specified rules to ensure the services run correctly. In the traditional approach, a lack of isolation means that services, or the parts within a service, easily interfere with one another; Kubernetes avoids these conflicts at the system level. At Google, this enhanced collaboration has raised developer productivity and further improved service availability, and it makes deployment on large clusters more agile.
However, the technology is still at an early stage of development. Kubernetes has already been adopted by many customers and well-known companies and teams, including Red Hat, VMware, CoreOS, and Mesosphere. These companies are eager to help their customers extract the commercial value of container technology through large-scale deployments of Kubernetes.
Container Engine
Google Container Engine brings "containers as a service" to Google's cloud platform. Built on Kubernetes, Container Engine gives developers a way to quickly build and run containers, and it can also deploy, manage, and scale containers within the boundaries you set. We will say more about Container Engine in a later article.
Deployment Options
As we have seen, containerization has become the starting point of the next evolution in computing models, and Google is playing a major role in this technological shift. As you begin working with containers and learning how they are deployed, consider the following approaches for your own services and choose the one that fits best:
If you plan to run a managed cluster or launch dozens of containers, try Google Container Engine. If you want to build your own cluster, whether on shared infrastructure or on your own systems, use Kubernetes. And if you want to run containers on already-managed infrastructure, try Google App Engine or Managed VMs.