8 Patterns for Developing Docker Containers


Editor's note: Vidar Hokstad has extensive experience with Docker, particularly with keeping state in volumes so that no data is lost, and with using Docker for repeatable builds. In this blog post he sums up eight patterns for developing with Docker containers.

The following is the translation:

Docker has become one of my favorite tools, and in this article I'll outline some patterns that recur in my use of it. I don't expect any of them to surprise you, but I hope some prove useful, and I'd like to hear about the patterns you have encountered in your own use of Docker.

The basis of all my Docker experiments is to keep persistent state in volumes, so that the Docker containers themselves can be rebuilt at any time without any loss of data.

Accordingly, all of the following Dockerfile examples focus on creating containers that can be replaced at any time without any other considerations.

1. The Shared Base Container(s)

Docker encourages "inheritance", and that is natural: it is a basic part of using Docker efficiently, not least because Docker caches the intermediate steps of a build and so reduces the time needed to build new containers. But it is also easy to miss out on opportunities to share.

Obviously, when migrating my various containers to Docker, the first thing I faced was that many of the setup steps repeated across containers.

For most projects that you expect to be able to deploy anywhere, there are quite a few containers to create, especially if the project requires long-running processes or specific packages, so the number of containers I want to run keeps growing.

More importantly, I am trying to run "everything" in Docker (including the few desktop apps I rely on), to make my base environment completely disposable.

So I soon began extracting my basic settings into a base container. This is my current "devbase" Dockerfile:

FROM debian:wheezy
RUN apt-get update
RUN apt-get -y install ruby ruby-dev build-essential git
RUN apt-get install -y libopenssl-ruby libxslt-dev libxml2-dev

# For debugging
RUN apt-get install -y gdb strace

# Set up my user
RUN useradd vidarh -u 1000 -s /bin/bash --no-create-home

RUN gem install -n /usr/bin bundler
RUN gem install -n /usr/bin rake

WORKDIR /home/vidarh/
ENV HOME /home/vidarh

VOLUME ["/home"]
USER vidarh
EXPOSE 8080

There is nothing very specific here: it installs certain tools I want readily available. These would probably differ for most people. It is worth noting that if/when you rebuild the container, you should pin the base image to a specific tag to avoid surprises.

It defaults to exposing port 8080, because that is the port where I publish web apps, which is what I use these containers for.

It adds a user for me and deliberately does not create a /home directory. Instead I bind-mount a shared /home folder from the host, which leads to the next pattern.
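For illustration, a container built from this base might be launched like the sketch below. This is not a command from the original post; the image name follows the Dockerfile above and the host path is an assumption:

```shell
# Run an interactive shell in the devbase container, bind-mounting the
# host's /home/vidarh over the container's /home volume so all state
# lives on the host and survives container replacement.
docker run -t -i \
  -v /home/vidarh:/home/vidarh \
  vidarh/devbase \
  /bin/bash
```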

2. The Shared-Volume Dev Container

All my dev containers share at least one volume with the host: /home, for easy development. For many apps, running in development mode with a filesystem-change-based code reloader means the container encapsulates the OS/distro-level dependencies and helps verify that the app-as-bundled works in a pristine environment, without having to restart/rebuild the VM on every code change.

As for everything else, I just need to restart (not rebuild) the container to pick up code changes.

For test/staging and production containers, in most cases I do not share code via volumes; instead I use ADD to bake the code into the Docker container itself.

This is, for example, the Dockerfile of my "homepage" dev container, which contains my personal wiki. The code lives under /home in the "devbase" container, and it shows how to use the shared base container and the /home volume:

FROM vidarh/devbase
WORKDIR /home/vidarh/src/repos/homepage
ENTRYPOINT bin/homepage web
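A hypothetical way to run such a dev container, combining the shared /home volume with the exposed app port (the image name and paths here are assumptions for illustration):

```shell
# Run the "homepage" dev container in the background, sharing /home from
# the host (where the code lives) and publishing the app port 8080.
docker run -d \
  -v /home/vidarh:/home/vidarh \
  -p 8080:8080 \
  vidarh/homepage
```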

And here is the dev version of my blog:

FROM vidarh/devbase

WORKDIR /
USER root

# For Graphviz integration
RUN apt-get update
RUN apt-get -y install graphviz imagemagick

USER vidarh
WORKDIR /home/vidarh/src/repos/hokstad-com
ENTRYPOINT bundle exec rackup -p 8080

Because they pull the code from a shared volume and build on a shared base container, these containers are typically rebuilt very quickly when I add/modify/remove dependencies.

Even so, there are things I would like to improve: although the base above is lightweight, much of it still goes unused in these containers. Because Docker uses copy-on-write layers, this does not cause huge overhead, but it does mean I am not achieving minimal resource consumption, nor minimizing the surface for attacks or errors.

3. The Dev Tools Container

This one will appeal to those of us who like relying on SSH and a terminal to write code, and rather less to the IDE crowd. For me, one of the bigger benefits of the setup above is that it lets me separate editing and test-execution of code from actually running the application under development.

In the past, dev systems were a chore for me, because dev dependencies, production dependencies, and development-tool dependencies get mixed together and easily produce illegitimate dependencies.

Although there are many ways to work around this, such as regular test deployments, I prefer the following solution because it prevents the problem from arising in the first place:

I have a separate container that contains my Emacs installation and all the other tools I like, which I still try to keep sparse. The key is that my screen session runs in this container; combined with "autossh" on my laptop, the connection is almost always maintained, and inside it I can edit code that is shared in real time with my other dev containers. It looks as follows:

FROM vidarh/devbase

RUN apt-get update
RUN apt-get -y install openssh-server emacs23-nox htop screen

# For debugging
RUN apt-get -y install sudo wget curl telnet tcpdump

# For 32-bit experiments
RUN apt-get -y install gcc-multilib

# Man pages and "most" viewer:
RUN apt-get install -y man most

RUN mkdir /var/run/sshd

ENTRYPOINT /usr/sbin/sshd -D

VOLUME ["/home"]
EXPOSE 22
EXPOSE 8080

Sharing "/home" is enough to let me SSH in, and it has proven to meet my needs.
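A connection from the laptop side might look like the following sketch. The host name "devhost", the published SSH port 2222, and the screen session name are assumptions, not details from the original post:

```shell
# Keep a persistent connection to the dev tools container with autossh,
# then attach to (or create) the screen session running inside it.
autossh -M 0 -p 2222 vidarh@devhost -t 'screen -dR work'
```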

4. The Test-In-A-Different-Environment Containers

One of the reasons I like Docker is that it lets me test my code in different environments. For example, while upgrading to Ruby 1.9, I can create a Dockerfile that derives a 1.8 environment:

FROM vidarh/devbase
RUN apt-get update
RUN apt-get -y install git ruby1.8

Of course you can use rbenv to achieve similar results. But I have always found such tools annoying, because I like to deploy with distro packages as much as possible, not least because, if things work well, that makes it easier for others to use my code.

With a Docker container for it, when I need a different environment I just "docker run", and in a few minutes the problem is solved.

Of course, I can use a virtual machine to achieve my goal, but using Docker is more time-saving.
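Testing against the alternate environment might then look like the sketch below. The image name "vidarh/ruby1.8", the repository path "myapp", and the rake target are assumptions chosen to match the patterns above:

```shell
# Run the test suite inside the Ruby 1.8 container, with the code
# shared from the host via the usual /home volume, then throw the
# container away (--rm) when the tests finish.
docker run --rm \
  -v /home/vidarh:/home/vidarh \
  vidarh/ruby1.8 \
  sh -c 'cd /home/vidarh/src/repos/myapp && rake test'
```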

5. The Build Container

Most of the code I write these days is in interpreted languages, but even so there are some expensive "build" steps that I don't want to execute every time.

One example is running "bundler" for Ruby applications. Bundler updates the cached dependencies for RubyGems, and it takes a while to run for a bigger app.

It also often requires dependencies that are unnecessary while the application is running. For example, installing gems with native extensions often requires many packages (frequently undocumented), and the easy way out is to add all of build-essential and its dependencies. Meanwhile, bundler can do all this work ahead of time, but I really do not want to run it in the host environment, because what it produces there may be incompatible with the container I am deploying to.

One solution is to create a build container. If the build dependencies differ, you can create a separate Dockerfile; otherwise you can reuse the main app Dockerfile and simply override the command to run the build commands you need. The Dockerfile looks like this:

FROM myapp
RUN apt-get update
RUN apt-get install -y build-essential [assorted dev packages for libraries]
VOLUME ["/build"]
WORKDIR /build
CMD ["bundler", "install", "--path", "vendor", "--standalone"]

Then, whenever dependencies are updated, I run the above while mounting the build/source directory from the host at the container's "/build" path.
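A hypothetical run of such a build container (the source path and the image name "myapp-build" are assumptions for illustration):

```shell
# Mount the app's source tree at /build so bundler writes its output
# (e.g. vendor/) back onto the host, ready to be ADDed into the
# deployment image. The container is discarded afterwards (--rm).
docker run --rm \
  -v /home/vidarh/src/repos/myapp:/build \
  myapp-build
```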

6. The Installation Container

This is not my forte, but it is really worth mentioning. The excellent nsenter and docker-enter tools come with an installation option that is a great step up from the now-popular "curl | bash" pattern: they provide a Docker container that implements the "build container" pattern and performs the installation from inside it.

This is the last part of the Dockerfile that downloads and builds an appropriate version of nsenter:

ADD installer /installer
CMD /installer

The "installer" script is as follows:

#!/bin/sh
if mountpoint -q /target; then
  echo "Installing nsenter to /target"
  cp /nsenter /target
  echo "Installing docker-enter to /target"
  cp /docker-enter /target
else
  echo "/target is not a mountpoint."
  echo "You can either:"
  echo "- re-run this container with -v /usr/local/bin:/target"
  echo "- extract the nsenter binary (located at /nsenter)"
fi

Although a malicious attacker could still attempt to exploit the container's potential for privilege escalation, the attack surface is at least significantly smaller.

This pattern should appeal to most people, because it avoids the very dangerous mistakes that installation scripts occasionally make.
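Putting the pieces together, an invocation of such an installer container might look like this (the image name "nsenter-installer" is an assumption; the -v mapping matches the script above):

```shell
# Mount the host's /usr/local/bin as /target so the installer script
# copies the nsenter and docker-enter binaries onto the host.
docker run --rm \
  -v /usr/local/bin:/target \
  nsenter-installer
```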

7. The Default-Service-In-A-Box Containers

When I take an app seriously and want a suitable container for, say, its database reasonably quickly, I find it invaluable to have a series of "basic" infrastructure containers ready that can be adapted to my needs.

Of course you can get the main pieces running with "docker run", and there are many alternatives in the Docker index, but I like to vet them first, figure out how they handle their data, and then add an adapted version to my own "library".

For example, beanstalkd:

FROM debian:wheezy
ENV DEBIAN_FRONTEND noninteractive

RUN apt-get -q update
RUN apt-get -y install build-essential

ADD http://github.com/kr/beanstalkd/archive/v1.9.tar.gz /tmp/
RUN cd /tmp && tar zxvf v1.9.tar.gz
RUN cd /tmp/beanstalkd-1.9/ && make
RUN cp /tmp/beanstalkd-1.9/beanstalkd /usr/local/bin/

EXPOSE 11300
CMD ["/usr/local/bin/beanstalkd", "-n"]
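Running a service-in-a-box container like this one is then a one-liner (the image name "vidarh/beanstalkd" is an assumption for illustration):

```shell
# Start beanstalkd in the background, publishing its default
# work-queue port 11300 on the host.
docker run -d -p 11300:11300 vidarh/beanstalkd
```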

8. The Infrastructure / Glue Containers

Many of these patterns focus on the development environment (which implies there is a lot left to say about the production environment), but one large category is still missing:

Containers whose purpose is to glue your environment together into a whole. This is an area I still need to study further, but I will mention one particular example:

To easily access my containers, I have a small haproxy container. A wildcard DNS entry points at my main server, and an iptables entry opens access to the haproxy container. The Dockerfile is nothing special:

FROM debian:wheezy
ADD wheezy-backports.list /etc/apt/sources.list.d/
RUN apt-get update
RUN apt-get -y install haproxy
ADD haproxy.cfg /etc/haproxy/haproxy.cfg
CMD ["haproxy", "-db", "-f", "/etc/haproxy/haproxy.cfg"]
EXPOSE 80
EXPOSE 443

What's interesting here is haproxy.cfg.

backend test
    acl authok http_auth(adminusers)
    http-request auth realm Hokstad if !authok
    server s1 192.168.0.44:8084
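A glue container like this would be started by publishing the host's HTTP/HTTPS ports so that the wildcard DNS entry effectively resolves to it (the image name "vidarh/haproxy" is an assumption for illustration):

```shell
# Run the haproxy glue container on the host's standard web ports.
docker run -d -p 80:80 -p 443:443 vidarh/haproxy
```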

If I wanted to be fancy, I would deploy something like Airbnb's Synapse, but that is beyond my needs.

At work, I am expanding our use of containers with the goal of making application deployment trivially simple, as we transition to a completely Docker-oriented private cloud system.

Original link: Eight Docker Development Patterns (translated by Wei, revised by Zhou Xiaolu)
