The virtual machine is dead, the container is the future?
I used to be a fan of container technology, Docker in particular, but after a year of mulling it over, things no longer look so rosy. Quite a few engineers, and some companies, still regard containers as a silver bullet and virtual machines as yesterday's relic to be knocked down, so everyone rushes to containerize everything. Here I want to rant about that view. Personal opinions only.
First, let's be clear: in software development and operations, maintainability, correctness, and performance take priority in that order, and please don't argue with me over a few extreme cases. On the relative ordering of maintainability and correctness, the famous essay "Worse is Better" gives an excellent, if resigned, explanation; if you hesitate between those two, that's forgivable, since the struggle between elegant-and-correct and rough-and-fast never ends. But if your first reaction is that performance matters most, don't bother reading on; wash up and go to sleep, since whatever suits you is best.
So for virtual machines vs. containers, we naturally need to weigh these same three aspects.
Round One: The Battle for Maintainability
Virtual machines: maintainability
Start with hypervisors: Xen/KVM/vSphere/Hyper-V are mature and battle-tested, the BSD camp is happily building bhyve (FreeBSD) and VMM (OpenBSD), and lately unikernels are trying to run on hypervisors too. The investments that cloud giants like AWS/GCE/Azure and chip vendors like Intel/AMD have poured into CPU, disk, and network I/O virtualization are clearly not going to be overturned any time soon, and the open-source tooling for managing virtual machines on Linux is mature: libvirt, OpenStack. Nobody has so much spare time that they would start a "new open source project" to replace them, even though I dislike OpenStack's mess and complexity. VM live migration is likewise mature technology, around for many years, and the principle is simple: the entire OS's memory is shipped over in one piece, so there is no fear of leaving behind the memory of some overlooked dependent process. Want a different kernel version? Want custom kernel modules? Want to tune kernel parameters? Looking for stronger isolation? Expect an experience almost identical to a physical machine? VM stands for virtual machine, and these are precisely its masterpieces.
Containers: maintainability
Linux containers evolved in the usual Linux style: slowly and without careful up-front design. First came cgroups, then the PID/UTS/IPC/net/UID namespaces were implemented one by one, and together they were cobbled into "container technology"; the UID namespace, as it happens, only landed fairly recently. User space is even more crowded: LXC, Docker, rkt, and LXD each have their strong points, and it's genuinely hard to name a winner. And before that contest was even settled, Mesos, Swarm, Kubernetes, Nomad, and a pile of other spoilers jumped in. Right now the most eye-catching is Kubernetes, which has the feel of OpenStack's successor, but it is still very green, and few dare to run it at large scale in production.
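To ground the "cobbled together from namespaces" point: a bare-bones container is just an ordinary process launched with a few clone flags. Here is a minimal Go sketch, not any real runtime's code; it must run as root on Linux, and it deliberately omits cgroups, mount isolation, and everything else a real runtime adds:

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch a shell in fresh UTS and PID namespaces. Inside it, `echo $$`
	// prints 1: the shell believes it is the init process of its own world.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```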
Cross-machine live migration of the processes in a Linux container is something I have never heard of. And don't tell me every service has a cluster and HA; plenty of users run a service on a single machine, and even with a hot or cold standby, what sits in that machine's memory can be valuable and cannot simply be thrown away. Linux containers can't pick their kernel, can't load kernel modules, can't mount filesystems, can't tune kernel parameters, can't change the network configuration, and so on. Don't tell me you can: did you turn on docker run --privileged? Did you skip dropping capabilities? Did you skip remapping UIDs? (The containers at one big company actually do run with --privileged.) And Linux container isolation is not complete, which I'm afraid most people don't realize: /sys, /dev, /selinux, and some key files under /proc such as /proc/kcore are not isolated.
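You can see one of these restrictions first-hand. In a default (non --privileged) Docker container, /proc/sys is mounted read-only, so tuning a kernel parameter fails. A minimal Go sketch of the probe, using vm.swappiness purely as an example knob:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Try to change a kernel parameter. On the host (as root) this works;
	// in a default container it fails with a read-only filesystem error.
	err := os.WriteFile("/proc/sys/vm/swappiness", []byte("10"), 0644)
	if err != nil {
		fmt.Println("cannot tune kernel parameter:", err)
		return
	}
	fmt.Println("kernel parameter changed")
}
```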
Red Hat's Project Atomic is aware of these issues and is actively pairing Docker with SELinux and writing dedicated SELinux policy, but Docker upstream doesn't much care, and is a high-end technology like SELinux really for mortal players? The outcome will probably still be "FAQ 1: turn off SELinux". Linux containers were never restricted to running only a few processes, but Docker, to reinforce the brainwashing effect of the word "lightweight", pushed an incomparably brain-dead single-process dogma, parroted by countless people. Fortunately some have slowly recognized the problem: Yelp's dumb-init wipes up half the mess, and countless Docker images bundle runit, supervisor, and the like as /sbin/init replacements. But then you must customize startup scripts and bolt ssh/cron/syslog/logrotate and other odds and ends back on; these problems were solved countless times before and now must be solved yet again. Doesn't that feel like wasted effort? Does anyone believe they handle service startup scripts better than the authors or packagers of those packages do? The direction systemd points in is the right way: deliberately take the container environment into account and skip some steps. It just doesn't seem to be done well yet; you still have to delete some .service files by hand.
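For reference, the core of what a PID-1 helper such as dumb-init provides is small: forward signals to the real workload and reap the orphaned zombies that get re-parented to PID 1. A minimal Go sketch of the idea (the actual dumb-init is written in C, and error handling here is trimmed):

```go
package main

import (
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	// Start the real workload, e.g. `./init-sketch myserver --port 8080`.
	cmd := exec.Command(os.Args[1], os.Args[2:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		os.Exit(1)
	}

	// Forward termination signals to the child instead of dying ourselves.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		for s := range sigs {
			cmd.Process.Signal(s)
		}
	}()

	// As PID 1 we inherit every orphan, so keep reaping until our own
	// direct child is the one that exited, then propagate its status.
	for {
		var ws syscall.WaitStatus
		pid, err := syscall.Wait4(-1, &ws, 0, nil)
		if err != nil {
			os.Exit(1)
		}
		if pid == cmd.Process.Pid {
			os.Exit(ws.ExitStatus())
		}
	}
}
```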
Virtual machines vs. containers
Maybe someone will say docker pull/push is more convenient, or docker build is more convenient. Don't forget that VM image storage was solved in OpenStack long ago and handling images is no big deal, and for VM image building there is HashiCorp's Packer, also not a problem. The official Docker registry that Docker is so proud of is, in practice, where everyone fetches base OS images; anything app-level gets built in-house out of trust and customization concerns. And the layered storage Docker is so proud of has drawn countless tears of blood: how many people have fallen into the AUFS and OverlayFS pits? The container community has also lately taken to worshipping immutable deployment, making the container root filesystem read-only, never mind how emergency security updates or hotfixes are supposed to be handled. What, you say docker rm && docker run another batch and you're done? If only it were that simple.
The Linux kernel and git embody serious UNIX design: layered stacking, mechanism provided at the bottom, policy chosen at the top, pick what you need. Unfortunately, people are easily brainwashed: while swallowing all kinds of lofty policies, they completely forget that the mechanism was in their own hands all along.
Round Two: The Battle for Correctness
Strong isolation, a full OS experience, and mechanism kept in your own hands: that is the right path. Containers hide a pit here too: /proc/cpuinfo and the output of free reflect the host OS, which trips up the myriad programs that probe system resources to automatically size their default thread pools and memory pools; Java programs are the most common victims.
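To make the trap concrete, here is a minimal Go sketch. Run it inside a container started with resource limits (say --cpus=2 --memory=1g, values chosen just for illustration) and it still reports the host's totals, because these /proc files are not namespaced:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Count "processor" entries in /proc/cpuinfo: this is the HOST's CPU
	// count, unaffected by any --cpus quota on the container.
	cpus := 0
	f, err := os.Open("/proc/cpuinfo")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "processor") {
			cpus++
		}
	}
	fmt.Println("CPUs seen:", cpus)

	// MemTotal in /proc/meminfo is likewise the HOST's memory, not the
	// container's --memory limit; `free` reads this same file.
	m, err := os.Open("/proc/meminfo")
	if err != nil {
		panic(err)
	}
	defer m.Close()
	sc = bufio.NewScanner(m)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "MemTotal") {
			fmt.Println(sc.Text())
		}
	}
}
```

A program that sizes a worker pool or cache from numbers like these will happily oversubscribe its cgroup limits.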
Round Three: The Battle for Performance
Container fans relish two points: containers start fast, and containers have less overhead. Both are true, but are the benefits really that big? Who is creating virtual machines in a tight loop? Whose VM life cycle averages out at the minute level? Whose "useful boot time" averages in seconds? As for VMs wasting too many resources, that's largely a phantom problem: in theory, average server utilization shouldn't exceed 80%, and in practice most companies run below 50%, with plenty of CPU, memory, and local disk wasted year-round, so the extra cost of a VM is merely waste carved out of resources that were being wasted anyway. As for the peak I/O capability of a single machine, VMs indeed can't beat containers, but machines don't usually run at peak, and work that one VM could do with multiple processes now has to be spread across multiple containers; how do you account for that container overhead, and the human cost?
There is also a fantasy about containers: run them directly on physical machines, with negligible overhead, convenient management, and strong multi-tenant isolation provided by a dedicated-physical-machine model. The first two points I've already addressed (there are even people using OpenStack to manage Docker containers), so let me speak to the third, one of the most overlooked problems of running containers directly on physical machines: the physical machines used to provide cloud services generally have beefy hardware, and running hundreds of containers is no problem, but a user likely needs only a few containers, so either they share the physical machine with strangers or they waste resources and money. Even if a user does need hundreds of containers, disaster tolerance forbids deploying them all on one physical machine, so again: share the machine with strangers, or waste resources.
Proposal
The above is my view. I am not a "container hater" but a "pragmatist". AWS, Azure, and GCE all primarily push running containers on top of virtual machines, billing for the VMs, which solves the problem very sensibly: the old pure-VM infrastructure stays untouched and billing continues as usual; a single physical machine can be safely shared by multiple tenants, with resource isolation guaranteed (at least better than a shared kernel); and container management software such as Kubernetes is handed to the users, satisfying their container needs without the provider worrying about container multi-tenancy.
So my position is: make VMs the base and containers the supplement. Buy VMs and manage your own containers; don't buy containers served directly by a CaaS, and don't fuss over whether the bottom layer is a physical or a virtual machine. Whether you use VMs or containers, staying clear-headed about your own application is what counts. Finally, a thought: couldn't open-source VM management software manage to be something simpler than OpenStack?
Original from: http://blog.163.com/guaiguai_family/blog/static/2007841 ...