Docker has been popular for quite a while. I spent some time recently studying and playing with it, found that it is valuable for testing, and felt it was worth studying further.
Here are some good learning resources to use as a reference:
InfoQ has a series of articles on Docker:
InfoQ's Docker topic page: http://www.infoq.com/cn/dockerdeep/
Docker in Depth (1): A Preview of Docker's Core Technology: http://www.infoq.com/cn/dockerdeep/
Docker in Depth (2): Exploring the Docker Command Line: http://www.infoq.com/cn/articles/docker-command-line-quest
Docker in Depth (3): Docker's Open Source Road: http://www.infoq.com/cn/articles/docker-open-source-road
Docker in Depth (4): Docker's Integrated Testing and Deployment: http://www.infoq.com/cn/articles/docker-integrated-test-and-deployment
Docker in Depth (5): Building a Development Environment Based on Fig: http://www.infoq.com/cn/articles/docker-build-development-environment-based-on-fig
Docker in Depth (6): Deploy Your Application Like Google: http://www.infoq.com/cn/articles/deploy-your-application-like-google
Docker Source Code Analysis (1): Docker Architecture: http://www.infoq.com/cn/articles/docker-source-code-analysis-part1
Docker Source Code Analysis (2): Creation of the Docker Client and Command Execution: http://www.infoq.com/cn/articles/docker-source-code-analysis-part2
Docker Source Code Analysis (4): The NewDaemon Implementation of the Docker Daemon: http://www.infoq.com/cn/articles/docker-source-code-analysis-part4
Docker Source Code Analysis (5): Creation of the Docker Server: http://www.infoq.com/cn/articles/docker-source-code-analysis-part5
Docker Source Code Analysis (6): The Docker Daemon Network: http://www.infoq.com/cn/articles/docker-source-code-analysis-part6
Reading the Docker chapter of 2014: Talent, Courage, and Luck: http://www.infoq.com/cn/articles/2014-review-docker
-----------------------------------------------------------------------------------------------------------------
Simply put, what is Docker?
The English word "docker" originally means a dock worker, i.e. a porter, and what this porter carries are containers. The container holds not goods but any type of app. Docker packages the app (called the payload) inside a Container using Linux container technology, turning it into a standardized, portable, self-managed component that can be developed, debugged, and run on your laptop, and then run easily and consistently on servers in all kinds of cloud data centers and production environments.
Docker's core underlying technology is LXC (Linux Containers); Docker adds a thin layer on top of it and provides a lot of useful functionality:
Docker provides a portable, standardized configuration mechanism, so the same container runs consistently on different machines, whereas an LXC container may not port easily because machines are configured differently;
Docker is application-centric and heavily optimized for application deployment, whereas LXC's helper scripts focus on making machines boot faster and use less memory;
Docker provides an automated build mechanism for apps (the Dockerfile), covering packaging and the management and installation of infrastructure dependencies;
Docker provides Git-like versioning for containers: you can version-manage the containers you create, download containers created by others, and even merge them as you would with Git (see the sketch after this list);
Docker containers are reusable: thanks to the versioning mechanism, you can easily take someone else's container (called an image) as a base version and extend it;
Docker containers are shareable: a bit like GitHub, Docker has its own index where you can create a Docker account and upload and download Docker images;
Docker comes with a tool chain that forms an ecosystem aimed at automation, customization, and integration, including support for PaaS platforms.
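As a rough illustration of the Git-like image workflow mentioned above, here is a minimal sketch (the image name custom/webapp is hypothetical):
sudo docker images                                   # list the images stored locally
sudo docker history custom/webapp                    # show the layers ("commits") that make up an image
sudo docker tag custom/webapp custom/webapp:v1.0     # give an existing image an additional version tag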
What's the use of Docker?
From an operations perspective, an application usually requires a specific version of the operating system, application server, JDK, and database server, may need configuration files adjusted, and has other dependencies; it may also need to bind to a specific port and a certain amount of memory. The components and configuration required to run the application make up its operating environment. You could of course write a setup script that downloads and installs these components. Docker simplifies this process: you create an image that contains both the application and its infrastructure and manage it as a single component. From these images you create Docker containers, which run on the container virtualization platform that Docker provides.
Docker components
Docker has two main components:
Docker: Open source container virtualization platform
Docker Hub: a SaaS platform for sharing and managing Docker images
Docker uses Linux containers to provide isolation, sandboxing, replication, resource limits, snapshots, and other benefits. An image is the "build component" of Docker: a read-only template of an application's operating environment. A container is created from an image and is its running state, the "run component" of Docker; containers can be run, started, stopped, moved, and deleted. The repository where images are stored is the "distribution component" of Docker.
Docker images and containers
Docker itself consists of two components:
Server: runs on a host and is responsible for building, running, and distributing Docker containers, among other tasks
Client: the docker binary, which receives user commands and communicates with the server
The client can run on the same host as the server or on a different host. The server pulls images from a repository with the pull command; it can download images from the Docker Hub or from other configured registries, and a server host can download and install multiple images. The client then starts containers with the run command. The client communicates with the server through a socket or the REST API.
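For example, the client can talk to a daemon on another machine via the -H option; a minimal sketch, assuming the remote daemon has been configured to listen on TCP port 2375 and that remote-host is a placeholder name:
sudo docker -H tcp://remote-host:2375 ps                     # list containers managed by the remote daemon
sudo docker -H tcp://remote-host:2375 pull centos:centos6    # ask the remote daemon to pull an image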
Docker installation
Installing Docker in CentOS:
sudo yum -y install docker-io      # install Docker
sudo service docker start          # start the Docker service
sudo chkconfig docker on           # optional: start the Docker service at boot
To install Docker in Ubuntu/Debian:
sudo apt-get update
sudo apt-get install docker.io
sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker
sudo sed -i '$acomplete -F _docker docker' /etc/bash_completion.d/docker.io   # enable command auto-completion
For other operating systems, see the official installation documentation.
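Whichever distribution you installed on, you can quickly verify the installation; a minimal sketch with no assumptions beyond a working install:
sudo docker version   # show the client and server versions
sudo docker info      # show basic information about the Docker daemon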
Running and exiting Docker containers
Once you understand the concepts of image and container, you can download an image. One of Docker's benefits is a GitHub-like image registry, so you can easily pull down an image someone else has built and run it. For example, we can download a CentOS image:
sudo docker pull centos:centos6
Here centos6 is a tag, similar to a Git tag, which identifies the version of CentOS being downloaded. Once the download is complete, run the docker images command to list the images you have downloaded.
After downloading, we can run a container from the command line. The command is simple; for example, to get a shell:
sudo docker run -i -t centos:centos6 /bin/bash
By default a Docker container provides neither an interactive shell nor standard input. The -i option provides interactive standard input, and the -t option allocates a pseudo-terminal.
Inside the shell you can do whatever you want: install software, write programs, run commands, and so on. When you want to save the results, use the docker commit command to commit the container as an image. If you are still in the interactive shell, remember to exit first with Ctrl+D or the exit command.
First run the ps command to find the container ID:
sudo docker ps -a
Then use the commit command to save the container as an image:
sudo docker commit 851d custom/centos-aliyun
After the commit, run sudo docker images to see the image you just created.
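To start a container from the image you just committed, a minimal sketch reusing the hypothetical name custom/centos-aliyun from above:
sudo docker run -i -t custom/centos-aliyun /bin/bash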
Docker Port Mapping
You often need to expose network services running inside Docker, which means connecting a port inside the Docker container to a port on the host. For example, to map port 8080 inside the container to port 80 on the host:
sudo docker run -p 80:8080 custom/tomcat
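You can check which host address and port a container port is bound to with the docker port command; a minimal sketch, where 851d stands in for the container ID shown by docker ps:
sudo docker ps               # find the container ID or name
sudo docker port 851d 8080   # print the host address and port mapped to container port 8080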
Mounting a host directory
This is another commonly used feature, especially when a service needs to write logs, save files, and so on.
sudo docker run -i -t -v /host/dir:/container/path ubuntu /bin/bash
The command above mounts the host's /host/dir directory at /container/path inside the container.
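If the container only needs to read the data, you can append :ro to make the mount read-only; a minimal sketch:
sudo docker run -i -t -v /host/dir:/container/path:ro ubuntu /bin/bash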
Shared storage between containers
This is done mainly with the --volumes-from parameter:
COUCH1=$(sudo docker run -d -v /var/lib/couchdb shykes/couchdb:2013-05-03)
COUCH2=$(sudo docker run -d --volumes-from $COUCH1 shykes/couchdb:2013-05-03)
This feature opens up a lot of possibilities, for example one container instance handling web storage and two additional instances serving web requests, giving read/write separation.
Importing and exporting images
Method 1: use the save/load commands to export and import an image
sudo docker save IMAGENAME | bzip2 -9 -c > img.tar.bz2   # or tar.gz if you prefer
sudo docker save IMAGENAME > imageName.tar.gz
To import the image, use the load command:
sudo docker load < imageName.tar.gz   # if you compressed it with bzip2, decompress first, e.g. bzip2 -d -c img.tar.bz2 | sudo docker load
Method 2: use push/pull to push the image to the Docker Hub. This works much like Git: you can create your own public or private repositories on the Docker Hub for remote sharing. The drawback is that image files can be very large, so network bandwidth has to be taken into account.
Method 3: set up your own registry and push images to it, suitable for an enterprise intranet.
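For example, using the open-source registry image; a minimal sketch, where the registry address localhost:5000 and the image name custom/centos-aliyun are assumptions:
sudo docker run -d -p 5000:5000 registry                                    # run a private registry on port 5000
sudo docker tag custom/centos-aliyun localhost:5000/custom/centos-aliyun    # re-tag the image with the registry address
sudo docker push localhost:5000/custom/centos-aliyun                        # push the image to the private registry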
Dockerfile
Installing everything step by step in an interactive shell is inefficient and tiring. Docker can automate image creation through a custom Dockerfile script, which makes images easy to share, modify, and template.
# VERSION 1.0.0
# default base is CentOS; can be changed to any image you need
FROM centos
# signature
MAINTAINER Wupher "[email protected]"
RUN echo "we are running some # of cool things"
RUN yum update -y
RUN yum install -y openssh-server
RUN mkdir -p /var/run/sshd
# set the root password for SSH remote login
RUN echo "root:123456" | chpasswd
RUN yum install -y mysql-server
RUN yum install -y java-1.7.0-openjdk
# install tomcat etc...
# mount volumes for saving logs
VOLUME ["/var/log/", "/var/volume2"]
# open port 22 in the container
EXPOSE 22
# open port 8080 in the container
EXPOSE 8080
# start tomcat and run sshd in the foreground, so these services start automatically when the image is run
ENTRYPOINT service tomcat start && /usr/sbin/sshd -D
For the complete Dockerfile syntax, consult the official documentation.
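To build an image from this Dockerfile and run it, a minimal sketch, assuming the Dockerfile sits in the current directory and using the hypothetical image name custom/webapp:
sudo docker build -t custom/webapp .            # build the image from the Dockerfile in the current directory
sudo docker run -d -p 80:8080 custom/webapp     # run it detached, mapping container port 8080 to host port 80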
Application Scenarios
Docker currently has the following application scenarios:
Testing: Docker is a great fit for test releases. A Docker image can be handed directly to testers, who no longer need to coordinate with operations and development to set up and deploy a test environment.
Test data separation: during testing, changing test scenarios often means modifying dependent database data or clearing changed memcached/Redis cache data. Docker is lighter and more convenient than a traditional virtual machine, so it is easy to separate the data into different images and switch between them as needed.
Development: developers share the same Docker image, while the modified source code is mounted from the local disk. No more headaches over programs behaving differently because environments differ, and newcomers can quickly set up a development and build environment.
PaaS cloud services: Docker can be wrapped on the command line and driven programmatically; with automatic loading and service self-discovery, a service packaged in a Docker image can easily be scaled out into a cloud service. For example, a document conversion and preview service can be packaged into an image, and the number of running containers increased or decreased as business requests grow or shrink.
Cluster simulation: Docker can be used to simulate a distributed cluster on a single machine.
Existing limitations (as of November 2014)
The hosts file cannot be modified, so a container has no domain-name resolution of its own; a common workaround is to install dnsmasq.
The container's system time is UTC and apparently cannot be changed directly. Workarounds do exist: the most common is to map the host's /etc/localtime into the container as /etc/localtime:ro (see the sketch after this list). However, this only makes the container's time zone match the host's; if different containers need different time zones, have a CMD or ENTRYPOINT command adjust the time zone automatically at startup.
There is no way to give a container a static IP; the IP may change when the container is restarted.
Because Docker is built on LXC, the host must be Linux with a kernel version greater than 2.6.27.
Memory dumps and export of running state are not supported at this time.
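A minimal sketch of the time-zone workaround mentioned above, sharing the host's /etc/localtime read-only with a container:
sudo docker run -i -t -v /etc/localtime:/etc/localtime:ro centos:centos6 /bin/bash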
1. What are Docker's application scenarios?
For enterprise users: a thing may be good, but if I have no use for it, why should I care? In other words, something has to be useful to you before you will adopt it.
On that note: Docker was released in March 2013 by a company called dotCloud, a PaaS provider. On its blog, Docker positions itself as "an open platform for distributing applications," and its website explicitly lists typical application scenarios:
- Automating the packaging and deployment of applications
- Creation of lightweight, private PaaS environments
- Automated testing and continuous integration/deployment
- Deploying and scaling web apps, databases and backend services
Thus, the purpose of Docker is to let users quickly deploy large numbers of standardized application environments in the form of simple "containers"; wherever that kind of need exists, Docker is a good fit.
2. Can Docker replace virtual machines?
Some radical commentators claim Docker will be the terminator of existing virtual machine technology, which is somewhat exaggerated. Docker is application-oriented, and its ultimate goal is to build PaaS platforms; existing virtual machines mainly provide a flexible pool of computing resources, are architecture-oriented, and ultimately aim at building IaaS platforms or an SDDC.
So there is no direct conflict between the two; each has its own job. It is only because container technology was immature that virtual machines temporarily took over part of the application-oriented demand; as things develop, those use cases will gradually shift to the Docker camp.
The two also complement each other: the PaaS service of Docker's original home, dotCloud, is built on Amazon's AWS, so virtual machines are the soil in which Docker grows, while Docker presents the business to the user.
3. Can Docker meet enterprise operations requirements?
Enterprise operations requirements fall mainly into three areas: stability, manageability, and high availability/recoverability.
On stability: Docker released version 1.0 on June 10, calling it a "milestone" and claiming that "the 1.0 release indicates a level of quality, feature completeness, backward compatibility, and API stability to meet enterprise IT standards." Until then, dotCloud had warned users not to run Docker in production. In RHEL 7 the bundled Docker version is 0.11.1, the RC version prior to the 1.0 release; although Red Hat will backport later Docker updates and patch fixes to the 0.11 branch, enterprise customers still bear some stability risk when using such a young piece of software. Many enterprise customers' software version selection policies require "a stable version that has been released for more than half a year."
On manageability: enterprise IT operations staff need the software to offer good visual management capabilities along with viable monitoring.
The main centralized management tools for Docker at the moment include DockerUI, Dockland, Shipyard, and so on; of these, Shipyard has the best maturity and activity.
Docker's main role is application release and operation, yet Shipyard's application management still looks rough, and its overall management approach is not application-centric, which may cause enterprises some "trouble" when managing Docker centrally.
The main purpose of monitoring is to quickly understand the health of the system and its operation and to raise alarms on risks; Docker's offerings here are relatively scarce, so enterprises also need to build custom monitoring for their environments.
On high availability and recoverability: in an enterprise there are only core and non-core businesses, never "unimportant" ones, and every business needs high availability. The enterprise business platform therefore has to consider three things: local high availability, data backup, and remote disaster recovery.
With Docker, a different perspective may be needed. Docker scenarios advocate "stateless" applications: business data lives only in the data tier, and the business tier keeps no data. High availability of the business tier can then be achieved through rapid redeployment, while the data tier stays in traditional mode, or uses traditional means, for high availability and recoverability. But this scheme still needs exploration and validation, and its feasibility and reliability will take time to prove.
Docker still has a long way to go before large-scale enterprise deployment, but its convenience should not be underestimated; it will be a revolutionary change. Its "shortcomings" are, put another way, "opportunities," which calls for many partners to build solutions around Docker, and Docker's openness makes that effort much easier.
For enterprises, Docker is already very convenient for development and test teams, and the shortcomings discussed above matter little in development and test environments, so boldly promoting and using Docker in development and test teams will undoubtedly bring great benefits.
Of course, there is a real-world application case: "A better dev/test experience: Running Docker on AWS": http://www.csdn.net/article/2014-10-10/2822038
Final thoughts
Docker is attracting more and more attention; a few days ago there were even rumors that Microsoft was going to buy Docker. What actually happens remains to be seen.
Of course, whatever Docker's future holds, I believe that with the efforts of the open source community, and with the optimizations and improvements everyone makes while using it, Docker will become more and more popular in real production environments, integrate ever more closely with actual practice, solve more real problems, and deliver greater value!
On the application of Docker in testing