Containers, Docker, and Kubernetes: Containerizing the Infrastructure (Part 1)

Editor's note: This article records the author's understanding of containers, gained while putting the "immutable infrastructure" concept into practice at Collective Idea. It is the first article in a series, and focuses on where traditional operations tools such as Chef fall short, and how Docker addresses those shortcomings.

As a long-time operations engineer, I care deeply about the simplicity and reproducibility of server maintenance work, and one of my most important working principles is "never operate a server manually". Every server must be provisioned and configured by tools, and tools must monitor and maintain the server's state. My choice is Chef, though of course there are many alternatives, such as Ansible, Salt, and Puppet.

At Collective Idea, Chef has served us very well, managing many servers both within the company and for external customers. Gradually, though, I became aware of one of Chef's flaws: it does not handle change well. Production applications and infrastructure are complex creatures composed of many moving parts, with a large number of explicit or implicit dependencies between them, and any of those parts can change at any time for unpredictable reasons. Some changes are easy to handle, such as editing a configuration file or fine-tuning the system, but others are far more complex, such as upgrading the Ruby runtime an application depends on without downtime. Some adjustments even require manual action on the server, such as rebooting after upgrading the operating system kernel.

In short, after many years of using Chef, I can say it works very well and is exactly what I need to build a new environment and configure a new system, but it becomes cumbersome and error-prone when dealing with upgrades and changes. Is there a tool that can relieve the pain of upgrading and changing servers?

Immutable infrastructure

One answer is the "immutable infrastructure" deployment model. In contrast to upgrading each machine in place (as described above), it discards the old infrastructure outright and moves the application wholesale onto new infrastructure that has already been upgraded. In essence, an "immutable infrastructure" deployment still requires a Chef-like tool to initialize, configure, and start the infrastructure; the difference is that once a piece of infrastructure is running, its state and configuration are never allowed to change. If a configuration change is needed, new infrastructure is started to replace the old. Of course, this style of deployment brings its own complexity: it must be possible to remove or take offline any old piece of infrastructure at any moment and start a new one in its place. So the questions become: how do you upgrade a database? How does an upgraded web server re-register with the old load balancer? How do you bring up a new load balancer without taking its many web servers offline?

In practice there are many technical obstacles to fully implementing immutable infrastructure on servers, which makes it difficult to achieve in real-world environments. Tools such as Packer ease the difficulty of creating virtual machine images to some extent, but you still face building an entire environment from scratch, which usually takes a long time, because you generally need to download several gigabytes of files to build a complete standalone system.

Is there a way to get the benefits of immutable infrastructure while keeping the image files small enough? Is there a tool that lets us strip the OS and lower-level layers out of the packaged image, preserving only the application's own dependencies? Then we could deploy just a small image, migrate only the changed data, and save a great deal of deployment time. This is the change that container technology brings us, through tools such as Docker and rkt (pronounced "rocket"). In a container-based infrastructure, the underlying servers and virtual machines are abstracted away into resources such as CPU and memory.

Because Docker is what I use day to day, I will use it to illustrate my points; other container tools, such as rkt, achieve the same effect.

Docker

In a nutshell, Docker packages an application's executable files and commands into what is called an image, and that image is then deployed onto a host or virtual machine. A running image is called a "container"; conversely, a container is an image at runtime. A container runs in a closed, isolated environment in which it believes itself to be the only program on the system, which means a host can run many containers simultaneously without any of them being aware of the others.
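As a minimal sketch, an image is typically described by a Dockerfile. The application here is hypothetical (a small Ruby app, since Ruby is the runtime mentioned earlier); only the Dockerfile instructions themselves are standard Docker:

```dockerfile
# Start from an official Ruby base image
FROM ruby:3.2-slim

# All subsequent paths are relative to /app inside the image
WORKDIR /app

# Install gem dependencies first so this layer is cached
# until the Gemfile changes
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy the application source into the image
COPY . .

# The command a container runs when started from this image
CMD ["bundle", "exec", "ruby", "app.rb"]
```

Building this file with `docker build` produces the image; running the image produces a container.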

A Docker image is written once (write-once, like a CD-ROM), which is what makes a Docker-based infrastructure an immutable one. Images and containers are never updated in place; generating a new image goes hand in hand with shutting down the old container and starting a new one, which takes very little time compared with shutting down and restarting a server or virtual machine.
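The resulting workflow looks roughly like the following command sequence (the image and container names are hypothetical, and a running Docker daemon is assumed): build a new image, discard the old container, and start a fresh one, rather than ever modifying a running container.

```shell
# Build a new immutable image from the updated source tree
docker build -t myapp:v2 .

# Stop and remove the old container instead of changing it in place
docker stop myapp && docker rm myapp

# Start a fresh container from the new image
docker run -d --name myapp -p 8080:8080 myapp:v2
```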

So, you may ask, have we at Collective Idea migrated all of our services to Docker? My answer is no, not yet. Using containers means following the single-responsibility principle: run multiple containers, with each container doing exactly one job. Put another way, containerizing your services amounts to applying SOA to your infrastructure, and that process raises plenty of problems and difficulties of its own:
    • How do containers discover and communicate with each other?
    • How do you decide where each container runs, and how many to run?
    • How do you collect a container's logs and runtime state?
    • How do you deploy a new image?
    • What happens when a container crashes?
    • How do you expose only a specific part of a container to the public Internet or an intranet?


To date, our answers to these questions, and our ability to deploy Docker applications at scale at Collective Idea, are less mature than the Chef tooling currently in service. However, a number of tools have emerged to solve these problems and to provide containerized deployment for production environments, such as Docker's own Docker Compose; we chose Google's solution instead: Kubernetes. It distills Google's experience of running billions of containers in its enormous data centers over more than ten years.
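As a point of comparison, here is a minimal Docker Compose sketch of how such a tool answers the discovery and exposure questions above; the service and image names are hypothetical:

```yaml
# A hypothetical two-service application. Compose gives every service a
# DNS name on a shared network (service discovery) and exposes only the
# ports you explicitly list.
version: "3.8"
services:
  web:
    image: myapp:latest        # hypothetical application image
    ports:
      - "80:8080"              # only the web port is exposed to the host
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
    # No "ports" entry: the database is reachable only as the hostname
    # "db" inside the Compose network, not from outside.
```

Kubernetes answers the same questions, plus scheduling, health checking, and crash recovery, which is the subject of the next article.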

In my next article I will describe in detail how we use Kubernetes and why it is an excellent solution for deploying Docker containers.

Original link: CONTAINERS, DOCKER, and KUBERNETES Part 1 (translator: Sho Jing)