Reprinted from: http://cloud.51cto.com/art/201410/453718.htm
Abstract: Docker's container technology is red-hot because it lets far more applications run on the same old servers and makes it very easy to package and ship programs. This article is designed to help you understand Docker better.
If you are part of the data center or cloud computing IT community, you have been hearing about containers in general, and Docker in particular, non-stop for more than a year. Since the release of Docker 1.0 this June, the buzz has reached an unprecedented level.
All the noise is happening because companies are adopting Docker at a remarkable rate. At the open source convention OSCon this July, I met countless businesses that had already moved their server applications from virtual machines (VMs) into containers. Indeed, James Turnbull, Docker's vice president of services and support, told me at the conference that three of the largest banks had been using the Docker beta and are now running Docker in production. For any early-stage technology, that is a huge vote of confidence; it is almost unheard of in the security-first financial world.
At the same time, Docker's open source technology is not just the darling of Linux giants such as Red Hat and Canonical. Proprietary software companies such as Microsoft are also embracing Docker.
So why are we all chasing containers and Docker? James Bottomley, chief technology officer for server virtualization at Parallels and a well-known Linux kernel developer, explained it to me: hypervisors such as Hyper-V, KVM, and Xen are all "based on virtualized hardware emulation," which means they are demanding in terms of system requirements.
Containers, however, use a shared operating system. That means they are much more efficient with system resources than hypervisors are. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This in turn means you can "leave behind the useless 99.9% of VM junk, leaving you with a small, neat capsule containing your application," says Bottomley.
As a result, according to Bottomley, with a perfectly tuned container system you can run as many as four to six times more server application instances on the same hardware than you can with Xen or KVM virtual machines.
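To get a feel for why the density numbers are so high, here is a small sketch using the Python Docker SDK (the `docker` package on PyPI; the Alpine image tag and the specific calls are illustrative assumptions on my part, not something from the article). Starting a container is closer to starting a process than to booting a machine, because no virtual hardware has to be emulated and no guest operating system has to boot.

```python
# Rough illustration (not a benchmark): launching a container is roughly as
# cheap as launching a process, since it reuses the host's running kernel.
# Assumes a local Docker daemon and "pip install docker"; the alpine:3.19 tag
# is an illustrative choice.
import time

import docker

client = docker.from_env()                  # connect to the local daemon
client.images.pull("alpine", tag="3.19")    # pull once so timing excludes the download

start = time.time()
for _ in range(5):
    # Each run creates, starts, and removes a full (if tiny) container.
    client.containers.run("alpine:3.19", "true", remove=True)
print(f"5 containers created, run, and removed in {time.time() - start:.2f}s")
```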
Sound great? After all, you get to run a lot more applications on the same server. So why hasn't anyone done this before? Well, actually, someone has. Containers are an old concept.
Containers date back to at least the year 2000 and FreeBSD jails. Oracle Solaris has a similar concept called Zones, and companies such as Parallels, Google, and Docker have been working on open source projects such as OpenVZ and LXC (Linux Containers) that are designed to make containers run smoothly and securely.
Indeed, few people realize it, but most of us have been using containers for years. Google has its own open source container technology, lmctfy (Let Me Contain That For You). Any time you use a Google feature, whether Search, Gmail, Google Docs, or anything else, a new container is spun up for you.
Docker, however, is built on top of LXC. As with any container technology, as far as the program is concerned it has its own file system, storage, CPU, and memory. The key difference between containers and virtual machines is that while a hypervisor abstracts an entire device, a container abstracts only the operating system kernel.
This, in turn, means that the one thing hypervisors can do that containers cannot is use different operating systems or kernels. So, for example, you can use Microsoft Azure to run instances of both Windows Server 2012 and SUSE Linux Enterprise Server at the same time. With Docker, all containers must use the same operating system and kernel.
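A sketch of that constraint, under the same assumptions as above (Python Docker SDK, illustrative image tags): containers built from different Linux distributions ship different userlands, yet each one reports the same kernel release, because they all borrow the host's single kernel rather than bringing their own.

```python
# Sketch: different distribution images, different userlands, one shared kernel.
# Assumes a local Docker daemon and the Python Docker SDK; the image tags are
# illustrative choices, not prescribed by the article.
import docker

client = docker.from_env()

for image in ("alpine:3.19", "ubuntu:22.04"):
    # Ask the container which distribution its filesystem comes from...
    distro = client.containers.run(
        image, ["sh", "-c", ". /etc/os-release && echo $PRETTY_NAME"], remove=True
    ).decode().strip()
    # ...and which kernel it is actually running on.
    kernel = client.containers.run(image, "uname -r", remove=True).decode().strip()
    print(f"{image}: userland = {distro!r}, kernel = {kernel}")
# The kernel release is identical for both images; only the userland differs.
# A hypervisor, by contrast, could boot a completely different guest OS here.
```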
On the other hand, if what you want is to run as many server application instances as possible on the least amount of hardware, you may care very little about running multiple operating systems in virtual machines. And if multiple copies of the same application are exactly what you need, you will love containers.
Switching to Docker is expected to save data centers and cloud providers tens of millions of dollars a year in power and hardware costs. No wonder they are rushing to adopt Docker as fast as they possibly can.
Docker brings several new things to the table that earlier technologies did not. The first is that it makes containers easier and safer to deploy and use than previous approaches. In addition, Docker has partnered with the other giants in the container space, including Canonical, Google, Red Hat, and Parallels, on its key open source component libcontainer, which brings much-needed standardization to containers.
At the same time, most developers can use Docker to package, ship, and run any application as a lightweight, portable, self-sufficient LXC container that can run virtually anywhere. As Bottomley told me, "Containers give you instant application portability."
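As a concrete, if simplified, illustration of "package, ship, and run" (again using the Python Docker SDK; the Dockerfile contents, the hello-app:0.1 tag, and the echoed message are hypothetical): an application and its environment are baked into an image once, and that image can then be run on any host with a Docker daemon.

```python
# Minimal packaging sketch: build an image from an in-memory Dockerfile, then
# run it. Assumes a local Docker daemon and the Python Docker SDK; the base
# image, tag, and command are illustrative.
import io

import docker

DOCKERFILE = b"""
FROM alpine:3.19
CMD ["echo", "hello from inside a container"]
"""

client = docker.from_env()

# Build the image; passing a file object means no Dockerfile needs to exist on disk.
image, _build_logs = client.images.build(
    fileobj=io.BytesIO(DOCKERFILE), tag="hello-app:0.1", rm=True
)

# The same image could now be pushed to a registry and run, unchanged, anywhere.
print(client.containers.run("hello-app:0.1", remove=True).decode().strip())
```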
"Enterprise organizations strive to make applications and workloads easier to migrate and distribute in an efficient, standardized, repeatable way, and sometimes it's difficult to do that," said Jay Lyman, a senior analyst at 451 Research, a market study firm. Just as GitHub facilitates collaboration and innovation by sharing source code, Docker Hub, official repos, and business support are helping many organizations address this challenge by improving the way they encapsulate, deploy, and manage applications. ”
Last but not least, Docker containers are easy to deploy in the cloud. As Ben Lloyd Pearson wrote on opensource.com: "Docker has been designed so that it can be incorporated into most DevOps tools, including Puppet, Chef, Vagrant, and Ansible, or it can be used on its own to manage development environments. Its main selling point is that it simplifies many of the tasks typically handled by those other tools. Specifically, Docker lets you build a local development environment that is identical to a live server, run multiple development environments from the same host, each with its own unique software, operating system, and configuration, test projects on new or different servers, and let anyone work on the same project with exactly the same settings, regardless of the local host environment."
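The "identical to a live server" point is easy to picture with a short sketch (same assumptions as above: Python Docker SDK, and an image tag, mount path, and command that are purely illustrative): pin one interpreter image and mount the project into it, and every developer, as well as the CI server, runs the code under exactly the same userland, no matter what is installed locally.

```python
# Sketch of a reproducible development environment: the pinned image defines
# the toolchain, and the project directory is mounted into the container.
# Assumes a local Docker daemon and the Python Docker SDK; the image tag,
# mount point, and command are illustrative choices.
import os

import docker

client = docker.from_env()
project_dir = os.path.abspath(".")  # the project to develop or test

output = client.containers.run(
    "python:3.11-slim",                                      # the shared, pinned environment
    ["python", "--version"],                                 # swap in your test or build command
    volumes={project_dir: {"bind": "/app", "mode": "rw"}},   # project appears at /app
    working_dir="/app",
    remove=True,
)
print(output.decode().strip())
```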
In short, here is what Docker can do for you: it lets you get more applications running on the same hardware than other technologies do; it makes it easy for developers to quickly build containerized applications that are ready to run anywhere; and it greatly simplifies managing and deploying applications. Put it all together and I can see why Docker has suddenly caught fire as an enterprise technology. I just hope it lives up to that promise, or there will be some very worried CEOs and CIOs out there.
Original English article: http://www.zdnet.com/what-is-docker-and-why-is-it-so-darn-popular-7000032269/
What exactly is Docker? And why is it so hot?