Dockerfile Best Practices
Why
Docker is an excellent lightweight PaaS solution that is now supported by the mainstream cloud platforms, with the Docker Registry Hub providing high-quality Docker images and Fig providing container management, attracting developers worldwide to migrate their services into Docker containers to improve development and deployment efficiency. Because Docker is a new and rapidly evolving technology, most of the high-quality articles and hands-on experience are in English. While learning Docker myself, I have read many of them across various sites and personal blogs, so I decided to translate the best ones onto this blog: on one hand, the process of translating deepens my own understanding; on the other, it gathers good articles scattered across the web into one place, in Chinese, where they are easy to share and revisit.
What
This section is translated from Dockerfile best practices; the copyright of the English original belongs to its author. Please cite the source when reprinting.
Main Text
The Dockerfile provides a simple syntax for building Docker images, and this article documents a few techniques and lessons that help us write genuinely good Dockerfiles.
Keep common instructions at the top of the Dockerfile to take advantage of the cache
Each instruction in a Dockerfile commits its result as a new image, which then serves as the base for the next instruction. If an image already exists with the same parent image and the same instruction (ADD being the exception), Docker will reuse that existing image rather than executing the instruction again to create a new one. This is the caching mechanism.
To use the cache effectively, we need to keep our Dockerfiles consistent and put the instructions that differ between projects toward the end of the Dockerfile. All of my own Dockerfiles start with the same five lines:
FROM ubuntu
MAINTAINER Michael Crosby <michael@crosbymichael.com>
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y
Note that changing the MAINTAINER instruction forces Docker to discard the cache from that point on and re-execute the subsequent RUN instructions.
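To make this concrete, here is a minimal sketch of the ordering (the nginx install and the /srv/app path are hypothetical additions of mine, not from the original article): the shared five-line prefix stays cached across builds, while the project-specific, frequently changing instructions come last.

FROM ubuntu
MAINTAINER Michael Crosby <michael@crosbymichael.com>
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y

# Project-specific instructions below; only these layers are rebuilt,
# because the common prefix above hits the cache.
RUN apt-get install -y nginx

# ADD invalidates the cache whenever the source files change,
# so it belongs at the very end.
ADD . /srv/app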
Use the -t flag to tag the image when building a Docker image
Unless we are merely experimenting with Docker, we should always tag the image with the -t flag when building it, because a meaningful tag name makes it clear what the image was built for.
docker build -t="crosbymichael/sentry" .
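As a quick sanity check, the tag then shows up when listing local images (the exact columns vary with the Docker version, and the values below are placeholders):

docker images
# REPOSITORY             TAG      IMAGE ID     CREATED         VIRTUAL SIZE
# crosbymichael/sentry   latest   <image id>   2 minutes ago   ...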
Do not map public ports in the Dockerfile
Docker's two most central features are repeatability and portability: an image should be able to create as many containers as needed on any host. With that in mind, although the Dockerfile syntax lets us map a container port to a fixed host port, we should never do so in a Dockerfile; otherwise the image it builds can only ever run one container per host, since the host port is hard-wired.
# private and public mapping
EXPOSE 80:8080

# private only
EXPOSE 80
If the user of the image cares about which host port the container's port is mapped to, they will set it explicitly with the -p flag when creating the container; otherwise Docker automatically assigns the container a port on the host.
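For example (crosbymichael/webapp is a hypothetical image name, used only for illustration):

# Explicitly map host port 8080 to the container's port 80
docker run -p 8080:80 crosbymichael/webapp

# Or let Docker pick free host ports for all EXPOSEd ports
docker run -P crosbymichael/webapp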
Use arrays to pass parameters in CMD and ENTRYPOINT instructions
The CMD and ENTRYPOINT instructions are easy to understand, but they hide a pitfall that can cause errors if overlooked. Both instructions support two syntaxes:
CMD /bin/echo
# or
CMD ["/bin/echo"]
They look equivalent, but the difference hidden in the details can really trip you up. When we pass the parameters as an array, the result is exactly what we expect; with the first syntax, however, Docker silently prepends /bin/sh -c to the command being executed, which can lead to unexpected errors and behavior that is hard to understand. So it is best to always use the array form to pass parameters, which guarantees the command is executed exactly the way we intend.
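A small sketch of my own (not from the original article) showing the difference with ENTRYPOINT, where it bites hardest:

# Shell form: Docker actually runs `/bin/sh -c '/bin/echo'`, so any
# arguments given on the `docker run` command line never reach /bin/echo.
ENTRYPOINT /bin/echo

# Exec form: runs /bin/echo directly, so `docker run <image> hello`
# prints "hello" as expected.
ENTRYPOINT ["/bin/echo"]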
CMD and ENTRYPOINT are best used together
ENTRYPOINT makes our container behave like an executable file: we can pass arguments to the ENTRYPOINT command when creating a container with docker run, without worrying about the command itself being overwritten (as happens when using CMD alone). ENTRYPOINT gives an even better experience when combined with CMD. Let's look at how to use them together through a rethinkdb example:
# Dockerfile for RethinkDB
# http://www.rethinkdb.com/

FROM ubuntu
MAINTAINER Michael Crosby <michael@crosbymichael.com>

RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y

RUN apt-get install -y python-software-properties
RUN add-apt-repository ppa:rethinkdb/ppa
RUN apt-get update
RUN apt-get install -y rethinkdb

# rethinkdb process
EXPOSE 28015
# rethinkdb admin console
EXPOSE 8080

# Create the /rethinkdb_data dir structure
RUN /usr/bin/rethinkdb create

ENTRYPOINT ["/usr/bin/rethinkdb"]
CMD ["--help"]
That's all we need to do to move RethinkDB into Docker. The first five lines are my standard Dockerfile opening, followed by the installation steps and the necessary EXPOSE instructions, and finally the ENTRYPOINT. We know that when we create and run a container from the image built by this Dockerfile, any arguments passed on the docker run command line are forwarded to the ENTRYPOINT (/usr/bin/rethinkdb). Notice, though, that below the ENTRYPOINT there is also a CMD instruction carrying --help: as a result, when we create the container with docker run and pass no arguments, the RethinkDB help text is displayed, like this:
docker run crosbymichael/rethinkdb
The output is:
Running 'rethinkdb' will create a new data directory or use an existing one,
and serve as a RethinkDB cluster node.

File path options:
  -d [ --directory ] path       specify directory to store data and metadata

IO options:
  --io-threads n                how many simultaneous I/O operations can happen
                                at the same time

Machine name options:
  -n [ --machine-name ] arg     the name for this machine (as will appear in
                                the metadata). If not specified, it will be
                                randomly chosen from a short list of names.

Network options:
  --bind {all | addr}           add the address of a local interface to listen
                                on when accepting connections; loopback
                                addresses are enabled by default
  --cluster-port port           port for receiving connections from other nodes
  --driver-port port            port for rethinkdb protocol client drivers
  -o [ --port-offset ] offset   all ports used locally will have this value added
  -j [ --join ] host:port       host and port of a rethinkdb node to connect to
  ...
Now let's see what happens if we pass the parameter "--bind all":
docker run crosbymichael/rethinkdb --bind all
The output is:
info: Running rethinkdb 1.7.1-0ubuntu1~precise (GCC 4.6.3)...
info: Running on Linux 3.2.0-45-virtual x86_64
info: Loading data from directory /rethinkdb_data
warn: Could not turn off filesystem caching for database file: "/rethinkdb_data/metadata"
      (Is the file located on a filesystem that doesn't support direct I/O
      (e.g. some encrypted or journaled file systems)?)
      This can cause performance problems.
warn: Could not turn off filesystem caching for database file: "/rethinkdb_data/auth_metadata"
      (Is the file located on a filesystem that doesn't support direct I/O
      (e.g. some encrypted or journaled file systems)?)
      This can cause performance problems.
info: Listening for intracluster connections on port 29015
info: Listening for client driver connections on port 28015
info: Listening for administrative HTTP connections on port 8080
info: Listening on addresses: 127.0.0.1, 172.16.42.13
info: Server ready
info: Someone asked for the nonwhitelisted file /js/handlebars.runtime-1.0.0.beta.6.js, if this should be accessible add it to the whitelist.
With that, we can interact with a complete RethinkDB instance just as if we were running the rethinkdb executable directly on the host. It's that simple.
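A closing aside of my own (not from the original article): if you ever need to bypass the ENTRYPOINT, say to poke around inside the image, docker run's --entrypoint flag overrides it:

# Override the ENTRYPOINT to get a shell inside the image
# instead of launching rethinkdb
docker run -i -t --entrypoint /bin/bash crosbymichael/rethinkdb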
I hope this article helps you make the best use of the Dockerfile to create and share your own images. I believe the Dockerfile is an important part of why Docker is so easy to use.
Dockerfile Best Practices continued
--EOF--
Disclaimer: This article is licensed under CC BY-NC-SA. When reprinting, please cite the source: Dockerfile Best Practices.