Reprint: http://www.csdn.net/article/2015-02-11/2823925
Abstract: When Docker was still unknown, CoreOS founder Alex foresaw the project's value and made it the first application-isolation solution supported by CoreOS. This article focuses on how to properly manage Docker containers in CoreOS in a concrete scenario.
Note: This article was first published on CSDN; please credit the source when reprinting.
"Editor 's note" in the previous articles of the "Walk Cloud: CoreOS Practice Guide" series, ThoughtWorks's software engineer Linfan mainly introduced CoreOS and its related components and uses. Speaking of CoreOS, I had to mention Docker. When Docker was still unknown, CoreOS founder Alex, with his keen intuition, foresaw the value of the project and made Docker the first application-isolation solution supported by the system. This article will focus on how to properly manage Docker containers in CoreOS under specific scenarios.
Author Profile:
Linfan is a Gen-Y IT engineer and a member of the CloudOps team at ThoughtWorks' Chengdu office. In his spare time he enjoys studying DevOps-related tools, and he is currently preparing for AWS certification and promoting Docker-related technologies.
This time the spotlight finally turns to the big whale, Docker. Many people probably discovered CoreOS because of Docker; its popularity in the community is, in fact, higher than that of the CoreOS project itself. This article does not explain Docker in depth; it concentrates on the basics of getting started with Docker and on recommended practices for running Docker-hosted services in CoreOS.
Origins
Xiaomi's Lei Jun famously said, "If you stand in the right spot when the wind blows, even a pig can fly." It was the wind blowing through cloud computing that let Docker soar. The rise of Docker and application containers pulled along the development of a batch of PaaS products, and CoreOS borrowed that momentum to earn plenty of attention. At the same time, CoreOS's growing maturity feeds back into the Docker community, contributing fresh vitality such as etcd and Deis (a private PaaS cloud platform, currently built on CoreOS).
Speaking of the origins of CoreOS and Docker, there is indeed some history. The story begins in February 2013, when the American company dotCloud released a new Linux container tool called Docker and set up a website to publish its first demo version (see Docker's first official blog post). Almost at the same time, in March 2013 in California, the young Alex Polvi was working in his garage on his second venture. His first startup, Cloudkick, had been sold to the cloud-computing giant Rackspace, one of the main forces behind OpenStack.
With that first bucket of gold, Alex planned something big: a Linux distribution that would subvert traditional server systems. To deliver stable, seamless upgrades from any operating-system version to the latest one, Alex urgently needed to address the coupling between applications and the operating system. The then-obscure Docker container caught his eye; with keen intuition he foresaw the project's value and made Docker the first application-isolation scheme supported by the system. Soon afterwards, they founded an organization named after their distribution: CoreOS. The decision to adopt Docker turned out to be a cornerstone of the CoreOS ecosystem.
Now CoreOS is no longer the only operating system preinstalled with Docker, but it was the first and remains the most successful. Red Hat and Canonical (Ubuntu's parent company) followed up with their own Docker-preinstalled distributions, Project Atomic and Ubuntu Core "Snappy", though neither has gained comparable traction yet (the project start dates come from a ThoughtWorks Chengdu technology-radar sharing session). Their targets are likewise server clusters and containerized deployment.
Application container
The "Application Container" is no stranger to many people now. But it's not that popular on the server system at least compared to the smartphone system you have on hand. So far the popular installation software on the server system is still compiled source code, manual installation package or a variety of package management tools, although the emergence of package management tools to solve the application software installation, uninstallation and self-reliance and many other issues, but can not be a good solution to the conflict between software dependencies. Before the advent of Docker, the concept of "sandbox" has been widely used in mainstream mobile phone systems such as Android and iOS. By separating the sandbox, the application software packs all of its dependencies with the application itself and accesses the operating system in a controlled manner provided by the SDK API, which greatly reduces the coupling between the software and the system. The direct benefit is that the dependency conflict between software is well resolved, removing an application typically takes only a few seconds and is completely non-marking, and the security of the software access system is more manageable.
In fact, Android restricts and isolates resource usage with the same Linux kernel cgroup and namespace mechanisms that Docker uses. These features, added to the kernel since the 2.6.x series, have been exercised for a long time and proven feasible and reliable.
When CoreOS meets Docker
This article does not cover Docker usage comprehensively; it focuses on how to properly manage Docker containers in CoreOS in a concrete scenario. Having seen Docker's role in the CoreOS ecosystem, we will use an example of running Node.js and MongoDB in two containers to show how services are managed through systemd in CoreOS, and quickly walk through some basic Docker commands along the way.
- Building Docker images for the services
Some service images are available ready-made, such as the MongoDB image; others must be customized. There are generally two ways to produce a Docker image: from a Dockerfile, or from an existing container instance. The former is the recommended practice, but learning to write Dockerfiles is beyond the scope of this series. The latter is simpler, though less maintainable later on; it is used here for demonstration purposes only.
I. Pull the base images
Each container actually runs in its own virtual, isolated space, and by design it can only access files that exist within that space. So for an application to find its basic runtime dependencies, the necessary Linux commands and configuration files must be packaged into that space as well; this packaged collection of dependency files is an image.
Operating Docker is similar to systemctl and etcdctl: a subcommand follows the main command to form a complete command. The `docker pull` command pulls an image at a specified network address down to the local machine (if a plain name is given instead of an address, it is looked up in Docker's official image registry), as in the following two examples.
$ docker pull node:latest
...
Status: Downloaded newer image for node:latest
$ docker pull mongo:latest
...
Status: Downloaded newer image for mongo:latest
Images are named in the format address/name:tag. The name is required; if the address part is empty, the official registry address is assumed. If the tag part is empty, newer Docker versions (roughly 1.3.x onward) download only the version tagged latest, while earlier versions download every version of the specified image, often accidentally fetching many unwanted image versions.
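That naming rule can be sketched with plain shell string operations. The reference below is just an illustrative value, and the snippet assumes both the registry part and the tag are present:

```shell
# Split an image reference of the form [registry/]name[:tag].
ref="docker.cn/docker/node:latest"

tag="${ref##*:}"        # text after the last ':'   -> "latest"
repo="${ref%:*}"        # everything before it      -> "docker.cn/docker/node"
name="${repo##*/}"      # last path component       -> "node"
registry="${repo%/*}"   # leading part              -> "docker.cn/docker"

echo "registry=${registry} name=${name} tag=${tag}"
# -> registry=docker.cn/docker name=node tag=latest
```

When the registry part is omitted (e.g. `node:latest`), Docker falls back to the official registry, and when the tag is omitted it assumes `latest`.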
After a long stream of output, if all goes well (in mainland China it may well not), the local Docker host has both the Node.js and MongoDB images ready to use. This can be verified with the docker images command.
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
node                latest              61afc26cd88e        3 days ago          696.2 MB
mongo               latest              59b3d123f9b8        6 days ago          392.4 MB
...
In some regions, pulling from the official image registry may fail (the famous firewall probably deserves the credit). In that case, domestic third-party open-source registries can be used, such as the image files provided by DockerPool or docker.cn. The former requires configuring a local SSL certificate, or you will hit an "Error: invalid registry endpoint" error, which is slightly troublesome. The latter can be used directly:
$ docker pull docker.cn/docker/node:latest
$ docker pull docker.cn/docker/mongo:latest
II. Make the custom image
MongoDB can use the official Docker image directly. The Node.js container needs a little customization: the application must be deployed into the container, and a new image generated from it. Again, the best way to build an image is to write a Dockerfile, which makes the build reproducible; the approach below, modifying an existing container, is for illustration only.
Next we will start MongoDB and Node.js container instances separately and expose MongoDB's port to the Node.js container.
First, start a MongoDB container instance named mongo-ins. The command to start a container is `docker run`. Besides configuration parameters such as --name and -p, its last two arguments are the image the instance uses and the command the instance should run. Some images already define a default program, in which case the command argument can be omitted, as below:

$ docker run --name mongo-ins -d mongo

The -d parameter means the container goes straight to the background after starting; the string echoed on the screen is the ID of the newly started container instance.
Next, launch a Node.js container instance, using the official node image as its base, and establish a "link" to the mongo-ins instance. This container instance is named node-app.
$ docker run --name node-app -p 3000:3000 --link mongo-ins:mongo -it node /bin/bash
root@22de21d77174:/#   <- now inside the container's bash
-it is actually shorthand for -i -t, which enables interactive mode and allocates a display terminal so we can enter the container and do some manual work. The --link parameter associates two containers; for details, refer to Docker's documentation on links. In a nutshell, the argument mongo-ins:mongo means the container mongo-ins is made visible inside the container being created under the alias mongo. As a result, in the new node-app container instance two global environment variables become available: $MONGO_PORT_27017_TCP_ADDR and $MONGO_PORT_27017_TCP_PORT, holding the IP address and port for reaching MongoDB.
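For illustration, this is how an application (or a shell inside the container) would combine the two variables. The IP value below is a stand-in, since the real one is assigned by Docker at run time:

```shell
# Stand-in values: inside the real node-app container these two
# variables are injected automatically by `--link mongo-ins:mongo`.
MONGO_PORT_27017_TCP_ADDR="172.17.0.2"
MONGO_PORT_27017_TCP_PORT="27017"

# An application can assemble its database endpoint from them,
# instead of hard-coding localhost:27017.
MONGO_ENDPOINT="${MONGO_PORT_27017_TCP_ADDR}:${MONGO_PORT_27017_TCP_PORT}"
echo "MongoDB reachable at ${MONGO_ENDPOINT}"
# -> MongoDB reachable at 172.17.0.2:27017
```

This is exactly the substitution the sed commands below perform on the sample application's app.js.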
As a demonstration, we'll deploy a simple example from GitHub in the container.
$ git clone https://github.com/ijason/NodeJS-Sample-App.git
$ cd /NodeJS-Sample-App/EmployeeDB
$ sed -i -e "s/27017/process.env.MONGO_PORT_27017_TCP_PORT/" -e "s/'localhost'/process.env.MONGO_PORT_27017_TCP_ADDR/" app.js
$ exit
The third command above changes the MongoDB location hard-coded in the application to the IP address and port exposed by the linked container. At this point the node-app container has the sample application, Employees, deployed; next we generate an image from it and distribute it to each node of the cluster.
III. Generate and submit the image
In order to scale containerized services out across the cluster, the custom image needs to be shared with all of the cluster's nodes.
First you need somewhere to store the shared image. In an enterprise environment a private image registry can be used, but for simplicity we use Docker's public repository directly. Register a user on Docker Hub, then log in to the registry with the docker login command.
$ docker login
Username: linfan
Password:
Email: ******
Login Succeeded
Then use the docker commit command to generate a local image from the locally modified container. Note that since the image will be pushed to Docker Hub, its name must be prefixed with your Docker Hub username, otherwise the later push will fail with a 403 "Access Denied: not allowed to create Repo at given location" error. Here the name is linfan/employees.
$ docker commit node-app linfan/employees
a4281aa8baf9aee1173509b30b26b17fd1bb2de62d4d90fa31b86779dd15109b
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
linfan/employees    latest              a4281aa8baf9        seconds ago         696.2 MB
Finally, use the docker push command to upload the prepared image to the Docker Hub repository.
$ docker push linfan/employees
The push refers to a repository [linfan/employees] (len: 1)
Sending image list
...
Pushing tag for rev [5577d6743652] on {https://cdn-registry-1.docker.io/v1/repositories/linfan/employees/tags/latest}
Once the push is complete, the image can be fetched on other nodes with the docker pull command.
Note: Strictly speaking, exposing the database container to the application container through a Docker link does not conform to the twelve-factor guidelines for distributed applications, because two containers connected by a link must run on the same physical host; the data and application tiers cannot be deployed or scaled horizontally across the cluster independently.
- Starting the service containers with fleet
I. Write the Unit files
With the service containers ready, the right way to start services in CoreOS is to have fleet manage them. Using the Unit's X-Fleet section properly also solves the problem of containers that depend directly on one another.
Enter a CoreOS shell with vagrant ssh and create the following two service unit files.
First, mongo.service:
[Unit]
Description=General MongoDB service
After=docker.service

[Service]
TimeoutStartSec=0
ExecStart=/opt/bin/docker-run.sh --name mongo-ins -d mongo
ExecStop=/usr/bin/docker stop mongo-ins
Then employees.service; note its Unit and X-Fleet sections. The Unit section specifies that mongo.service must be started before this service starts, while the X-Fleet section specifies that it must run on the same node as mongo.service.
[Unit]
Description=Employee Information Management service
After=docker.service
After=mongo.service

[Service]
TimeoutStartSec=0
ExecStart=/opt/bin/docker-run.sh -p 3000:3000 --link mongo-ins:mongo -d --name node-app linfan/employees node /NodeJS-Sample-App/EmployeeDB/app.js
ExecStop=/usr/bin/docker stop node-app

[X-Fleet]
X-ConditionMachineOf=mongo.service
Both unit files above use an /opt/bin/docker-run.sh script in place of the docker run command. This script, which must be created and placed under the /opt/bin directory, checks whether a container with the given name already exists: if not, it executes docker run directly; otherwise it starts the existing container with docker start. Its contents are as follows:
#!/bin/bash
para="${*}"
name=$(echo "${para}" | grep '\-\-name' | sed 's/.*--name \([^ ]*\).*/\1/g')
if [ "${name}" == "" ]; then
    echo "[ERROR] Must specify a name for the container!"
    exit -1
fi
exist=$(sudo docker ps -a | grep "${name}[ ]*$")
if [ "${exist}" == "" ]; then
    sudo docker run ${para}
else
    sudo docker start ${name}
fi
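The least obvious part of the script is the sed expression that pulls the --name value out of the argument string. It can be exercised on its own, here with a sample argument list taken from this article:

```shell
# Extract the value of --name from a docker-run argument string,
# mirroring the grep/sed pipeline used in /opt/bin/docker-run.sh.
para="-p 3000:3000 --link mongo-ins:mongo -d --name node-app node"
name=$(echo "${para}" | grep '\-\-name' | sed 's/.*--name \([^ ]*\).*/\1/g')
echo "${name}"
# -> node-app
```

If --name is absent, grep produces no output and name stays empty, which is exactly the error case the script guards against.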
II. Start the services
Start the services with the fleetctl command; its specific usage was introduced in an earlier part of this series.
$ fleetctl start ./mongo.service
$ fleetctl start ./employees.service
For simplicity, fleetctl start is used directly here; for the more recommended way of launching services, please refer to the fleet article earlier in this series.
At this point, the service deployed in the containers is ready to use. Opening port 3000 of the server from outside shows the page below, and employee records added there are written to the database in the MongoDB service.
- Managing container runtime state
Finally, a look at some Docker commands for checking a container's runtime status and for day-to-day administration.
I. View the running logs
After a container has been started in the background with the -d parameter, the log output of its service can be viewed with the docker logs command.
$ docker logs mongo-ins
MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=d9bba1bfc8be
...
II. List container instances
The docker ps command lists basic information about all currently running containers.
$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
d9bba1bfc8be        mongo:2             "/entrypoint.sh"    4 minutes ago       Up 4 minutes        27017/tcp           mongo-ins
22de21d77174        node:0              "/bin/bash"         5 minutes ago       Up 3 minutes                            node-app
...
III. Inspect container details
Use the Docker inspect command to view detailed run information for a specified container.
$ docker inspect mongo-ins
{
...
}
IV. Back up and restore containers
Simply put, the commands for packaging up and restoring an existing local image are docker save and docker load. Container instances can also be packaged directly, with docker export and docker import; note that import restores the backup as a new local image, not as a container instance.
The usage of these commands can be found in the documentation. One additional question: since both restore the contents of a backup, why are there two restore commands? The reason is that save and export produce different archives. Simply put, a backup generated by export discards all of the image's layer hierarchy, while one generated by save preserves it. The layer hierarchy helps similar images share local storage space; consult the documentation for details.
These commands are just the tip of the iceberg of what Docker can do, and the many excellent Docker tutorials available on the web are a great way to learn more about Docker and application containers. The Docker article series translated by DockerOne is recommended here.
[CoreOS Reprint] CoreOS Practice Guide (vii): Docker Container Management Service