Docker is very hot right now, but container technology is not a cure-all, and treating it as one is a misunderstanding. Don't let the hype blind you: this article sets the speculation aside and, from a Java programmer's point of view, lists five current misconceptions about Docker to help you better understand both its advantages and its problems.
Setting aside the hype from the media and the vendors, how can we use Docker more rationally?
The reason Docker has attracted so much attention recently is obvious: delivering code reliably has been a headache for everyone, and traditional container setups are a mess of requirements and templates. Docker, by contrast, lets you create containers simply and repeatably, so you can deliver code faster and more naturally than with other container technologies. And so Docker caught fire. But along with the fire come misconceptions. Don't simply believe others when they say Docker is or is not easy to use; thinking about it rationally and comprehensively will help you understand whether you really need it.
This article enumerates five major Docker misconceptions from a Java perspective. But first, some background. To understand Docker better, we consulted Avishai Ish-Shalom of Fewbytes, who has extensive Docker experience and is also an organizer of DevOpsDays conferences. We compiled these misconceptions together with him.
The main misconceptions
1. Docker is a lightweight virtual machine
This is the most common misconception among Docker beginners, and it is understandable: Docker does look a bit like a virtual machine, and there are even comparisons between Docker and virtual machines on the Docker website. However, Docker is not a lightweight virtual machine; it is an improved take on Linux containers (LXC). Docker and virtual machines are fundamentally different, and if you treat a Docker container as a lightweight virtual machine, you will run into many problems.
Before using Docker, you must understand the essential differences between Docker containers and virtual machines.
Resource isolation: Docker does not reach the level of resource isolation that a virtual machine provides. A virtual machine's resources are strictly isolated, whereas Docker was designed from the start to share resources, and some of those resources, such as the page cache and the kernel entropy pool, Docker cannot isolate or protect. (Note: the kernel entropy pool is interesting; it collects and stores random bits generated by system activity, and the machine draws on this pool whenever it needs randomness, for example for cryptography.) If one Docker container exhausts these shared resources, other processes have to wait until they are replenished; the short Java sketch after this list shows what that can look like.
Overhead: Most people know that a virtual machine delivers CPU and RAM performance close to that of a physical machine, but adds significant extra I/O overhead. Because a Docker image does not have to package a guest OS, it is smaller and needs less storage overhead than a virtual machine. That does not mean Docker is overhead-free: Docker containers still incur I/O overhead, just less severely than virtual machines do.
Kernel usage: Docker containers and virtual machines use the kernel in completely different ways. Each virtual machine runs its own kernel, while all Docker containers on a host share one kernel. The shared kernel brings efficiency, but at the cost of availability and redundancy: if a virtual machine's kernel crashes, only that virtual machine is affected; if the kernel shared by Docker containers crashes, every container is affected.
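To make the shared entropy pool concrete, here is a minimal Java sketch of our own (not from the original article) that requests blocking, kernel-backed randomness. If a neighboring container has drained the host's entropy pool, the call can stall even though nothing inside this container is misbehaving. The "NativePRNGBlocking" algorithm name is specific to the SUN provider on Linux (Java 8+).

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class EntropyPoolDemo {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        // "NativePRNGBlocking" reads from /dev/random, i.e. directly from the
        // kernel entropy pool that every container on the host shares.
        SecureRandom random = SecureRandom.getInstance("NativePRNGBlocking");
        byte[] bytes = new byte[32];

        long start = System.nanoTime();
        random.nextBytes(bytes); // may block until the shared pool refills
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Got 32 random bytes in " + elapsedMs + " ms");
    }
}
```

Running the same class on a quiet host and on a host whose entropy is being consumed by other containers is a simple way to observe the shared-resource effect described above.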
2. Docker makes applications scalable
Because Docker can deploy code to many servers in a very short time, it is natural to feel that Docker makes the application itself scalable. Unfortunately, that is wrong. The code is the foundation of the application, and Docker does not rewrite your code; scalability still depends on the programmer. Using Docker does not automatically make your code scalable, it just makes it easier to deploy across servers.
3. Docker can be widely used in production
Because Docker is gaining momentum, many people assume it can already be used at scale in production. In fact, this is wrong. Keep in mind that Docker is still a young technology, immature and evolving, which means there are plenty of annoying bugs and unfinished features. Being excited about new technology is fine, but it pays to figure out where the technology fits and what to watch out for. Docker is easy to apply in development environments today: it lets you build many different environments cheaply (or at least gives the feeling that it does), which is genuinely useful for development.
In production, Docker's gaps and imperfections limit the scenarios where it can be used. For example, Docker has no built-in support for multi-host networking or resource monitoring, which makes it very hard to rely on in production. There is, of course, a lot of potential, such as the ability to take the same image from the development environment straight into production, and some of Docker's runtime features are also useful in production. But overall, in production the drawbacks currently outweigh the advantages. That is not to say Docker cannot be applied successfully in production, only that you should not expect it to be mature and polished yet.
4. Docker is OS-independent
Another misconception is that Docker can run on any operating system or environment. This probably comes from the shipping-container analogy, but the relationship between software and the operating system is not as simple as a container's position on a ship.
In fact, Docker is a Linux-only technology. It relies on specific kernel features and needs a recent kernel version to work. Because of differences between operating systems, anything that assumes uniform low-level behavior across hosts can run into problems. Those problems may occur only 1% of the time, but when you deploy across many servers, 1% is still fatal.
Although Docker itself runs only on Linux, you can still use it on OS X or Windows: Boot2Docker runs a small Linux virtual machine on the OS X or Windows machine, and Docker runs inside that virtual machine.
5. Docker makes applications more secure
It is also a misconception that Docker improves the security of your code or of the delivery process. This is another place where the analogy with physical shipping containers breaks down. Docker is container technology plus an orchestration layer, but Linux containers have security weaknesses that can be exploited, and Docker does not add any security layer or patch on top of them. It is not an iron vest that protects your application.
From a Java perspective
Some Java developers have already started using Docker. Some of its features make it easier to build scalable environments. Unlike an uber-jar, Docker can package all of your dependencies, including the JVM, into a ready-to-run image. For developers, that is Docker's most appealing feature. But it also brings some hidden costs. In general, programmers need to interact with their code in many ways: monitoring it, debugging it, connecting to it, tuning it... with Docker, each of these requires extra work.
For example, suppose we want to use JConsole, which relies on JMX, and JMX needs the network via RMI. With Docker this is not straightforward: it takes some tricks to open the required ports. We first discovered the problem when we wanted to build the Takipi Docker application and also had to run a daemon outside the JVM, inside the container. The detailed solution is on GitHub.
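As an illustration of the extra work involved, the sketch below connects over JMX/RMI from outside the container, the same path JConsole uses. It assumes the containerized JVM was started with the standard com.sun.management.jmxremote system properties and that the port is published by the container; the port 9010 and the host name docker-host are placeholders, not values from the original article or the Takipi solution.

```java
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxThroughDocker {
    public static void main(String[] args) throws Exception {
        // The JVM inside the container must be started with flags along these lines
        // (standard JMX remote properties), and the container must publish the port,
        // e.g. `docker run -p 9010:9010 ...`:
        //   -Dcom.sun.management.jmxremote
        //   -Dcom.sun.management.jmxremote.port=9010
        //   -Dcom.sun.management.jmxremote.rmi.port=9010    (pin RMI to a known port)
        //   -Dcom.sun.management.jmxremote.authenticate=false
        //   -Dcom.sun.management.jmxremote.ssl=false
        //   -Djava.rmi.server.hostname=<address clients will use>
        // "docker-host" and 9010 below are placeholder values.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://docker-host:9010/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            System.out.println("Registered MBeans: " + connection.getMBeanCount());
        } finally {
            connector.close();
        }
    }
}
```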
Another serious problem is that performance tuning a Docker container is quite difficult. With containers, you do not know exactly how much memory each container will get: if you have 20 containers, memory is divided among them in ways that are hard to predict. Setting the heap size with -Xmx is hard too, because how the JVM behaves inside the container depends on how much memory the container ends up with. If you do not know how much memory you have been allocated, performance tuning is almost impossible.
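As a sanity check on what the JVM actually thinks it has, a small sketch using only the standard Runtime API (our illustration, not part of the original article) can be run inside each container; the numbers it prints often differ from what you expected the container to be given, especially on older JVMs that size the default heap from the host's RAM rather than the container's limit.

```java
public class ContainerMemoryCheck {
    public static void main(String[] args) {
        Runtime runtime = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        // maxMemory() is the heap ceiling the JVM settled on (or whatever -Xmx forced).
        // Older JVMs compute the default from the host's total RAM, not from the
        // container's memory limit, which is exactly the tuning problem described above.
        System.out.println("Available processors: " + runtime.availableProcessors());
        System.out.println("Max heap (MB):        " + runtime.maxMemory() / mb);
        System.out.println("Total heap now (MB):  " + runtime.totalMemory() / mb);
        System.out.println("Free heap now (MB):   " + runtime.freeMemory() / mb);
    }
}
```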
Conclusion
Docker is a very interesting technology with some real, effective use cases. As a young technology, it will also take time to fill in missing features and fix known bugs. There is a lot of hype around it right now, but remember: hype is not the same as success.