I have been using Docker for more than a year now, on both local Linux systems and cloud platforms. Along the way I have learned a lot about managing images, about the flexibility of building images for almost any application, and I have even picked up lessons that are not specific to Docker. I have summarized my experience in the following five points, as a reference for those who are just starting out with Docker.
Be specific when building images
I try not to run my applications as the root user. Most Linux distributions make this easy: when you install a service, the operating system usually creates a corresponding system user for it. For example, when Apache is installed, almost every distribution creates an httpd, apache, or www-data user.
I build an ejabberd image from source, and the XMPP system user is created in the build script. Like most people, I used Docker's automated build service and set my image to be rebuilt automatically whenever its base Ubuntu image was updated.
But I made a mistake in my own image: the base image was set to ubuntu instead of ubuntu:12.04. One day the base image was automatically updated to the latest Ubuntu 14.04, which adds a new default system user, and my application's UID was bumped by one. I pulled the latest build of the ejabberd image, but it failed at startup: because I use a volume to store ejabberd's files, the ejabberd user no longer had read access to them.
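In Dockerfile terms, the fix is a one-line change (a minimal sketch of the idea):

```dockerfile
# Before: unpinned, so "ubuntu" silently followed latest from 12.04 to 14.04.
# FROM ubuntu

# After: pinned, so the base image only changes when I change it.
FROM ubuntu:12.04
```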
Now I do two things when I create an image:
1. Pin every base image to a specific version number;
2. Write a startup script for every application.
These startup scripts usually run as root and do the following: first and foremost, they verify that the required configuration files exist, since files living on a volume may be missing; then they set the owner of the configuration and data files to the user the application runs as.
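As a rough sketch of such a script (my illustration, not the author's actual entrypoint; the paths, user name, and the use of gosu to drop privileges are all assumptions):

```sh
#!/bin/sh
# Hypothetical entrypoint sketch; paths, user name, and gosu are assumptions.
set -e

CONF=/etc/ejabberd/ejabberd.cfg
DATA=/var/lib/ejabberd

# 1. Verify the required config exists -- a freshly mounted volume may be empty.
if [ ! -f "$CONF" ]; then
    echo "missing $CONF, copying default config" >&2
    cp /opt/defaults/ejabberd.cfg "$CONF"
fi

# 2. Fix ownership so the unprivileged user can read its files,
#    regardless of which UID wrote them on the host.
chown -R ejabberd:ejabberd "$CONF" "$DATA"

# Drop root and hand off to the real process (gosu assumed installed).
exec gosu ejabberd "$@"
```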
This has saved me a lot of time: I know exactly what my image is based on, and instead of using my main program directly as the image's entry point, I write a script that makes sure the environment is sane first.
You never know what capabilities someone else's system will have
Until recently, I had only ever run the latest version of Docker, and only on Ubuntu. Once my images were running on various cloud platforms, I found the following problems:
Some users are not running the latest Docker; some have not enabled all of Docker's features; and some do not have root privileges on the system running Docker.
This has changed my habits when building Docker images: I no longer write instructions like "run this container with --volumes-from" or "this requires a link to a container named db". Since I don't know how other users will run my images, I try to make them as flexible as possible. For example, if an image needs a MySQL database, it can use a linked container, or fall back to an environment variable for the address, and so on.
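A minimal sketch of that fallback logic in a startup script (the variable and application names are my assumptions; DB_PORT_3306_TCP_ADDR is the variable Docker injects for a container linked with the alias db):

```sh
# Prefer an explicit MYSQL_HOST, fall back to the variable that
# "--link mysql:db" injects, then to a default.
MYSQL_HOST=${MYSQL_HOST:-${DB_PORT_3306_TCP_ADDR:-localhost}}
MYSQL_PORT=${MYSQL_PORT:-${DB_PORT_3306_TCP_PORT:-3306}}
exec myapp --db-host "$MYSQL_HOST" --db-port "$MYSQL_PORT"
```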
While this adds to the workload, I think it's worth it. Now I can do all sorts of "weird" things, like running a proxy MySQL container that connects to an actual MySQL database and reaching it through a specific hostname. It's very neat.
Dockerfiles may be painful to start with, but you'll love them in the long run
There are two ways to create an image. One is simple: start a container, install the required software, and once the container runs as expected, commit it as an image.
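That workflow looks roughly like this (a sketch; the container and image names are illustrative):

```sh
# Start a container and install/configure things interactively inside it.
docker run -it --name build-box ubuntu:12.04 /bin/bash
# After exiting, snapshot the container's filesystem as a new image.
docker commit build-box my-app:manual
```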
The other way is a Dockerfile, which can be a bit painful to work with. What do I do when an installer prompts me for input? How do I edit files interactively? This is the tricky part: there is no way to make any step of a Dockerfile build interactive, so everything must be fully automated.
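On Debian/Ubuntu base images, for instance, the usual workaround is to suppress or pre-answer the prompts (a generic sketch, not from the original article; the package and debconf question are illustrative):

```dockerfile
FROM ubuntu:12.04
# Suppress interactive debconf prompts during package installation.
ENV DEBIAN_FRONTEND noninteractive
# Pre-seed the answer a package would otherwise ask for interactively.
RUN echo "mysql-server mysql-server/root_password password secret" | debconf-set-selections
RUN apt-get update && apt-get install -y mysql-server
```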
Nevertheless, I still feel the advantages of Dockerfiles outweigh the drawbacks. With the first approach, I was often unsure whether the program was running the way I expected, and I could never remember exactly what I had done to the image. With a Dockerfile, every step is recorded and, even better, can be managed with version control software. It takes more work to build an image with a Dockerfile, but it's the only way I build images now.
Be cautious when spawning child processes (with or without Docker)
It is common for an application to create child processes, and I have always written code this way: on most systems I can spawn a child, read its output, check its exit code, and so on, and the init process reaps any orphans once a program finishes. I wrote programs like this for years and never gave it a second thought.
But inside a Docker container it is not that simple, because there is usually no full init system running as PID 1. In many cases, orphaned child processes become zombies that consume system resources. I know how to properly supervise and reap child processes in my own programs, so packaging them into an image caused no problems; but packaging a program that does not clean up its children properly can cause a lot of trouble.
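One common remedy (my addition, not something the original article prescribes) is to run a minimal init as PID 1 so orphaned children get reaped; tini is one such tool, and the release version and application name below are assumptions:

```dockerfile
# Sketch: use tini (https://github.com/krallin/tini) as PID 1.
ADD https://github.com/krallin/tini/releases/download/v0.19.0/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
CMD ["myapp"]
```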
One task per container does not mean one process per container
There is some debate on this point. Most people agree that each container should perform only one task, but opinions differ on whether that means a container should run only one process.
Whatever task I decide to run, I tend to run it under a supervising process (such as supervisord, runit, or s6), which makes a lot of sense for web applications.
For example, if I have a web application that needs PHP-FPM, Nginx, cron, and MySQL, I run PHP-FPM + Nginx + cron in one container and MySQL in another. I also make sure that when the main process exits or crashes, the supervising process exits too, so that Docker's normal exit behavior is preserved.
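A minimal supervisord configuration for that container might look like this (a sketch; the binary paths and flags are assumptions that vary by distribution):

```ini
[supervisord]
; Stay in the foreground so Docker can track the process.
nodaemon=true

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autorestart=true

[program:php-fpm]
; -F keeps php-fpm in the foreground.
command=/usr/sbin/php5-fpm -F
autorestart=true

[program:cron]
; -f keeps cron in the foreground.
command=/usr/sbin/cron -f
autorestart=true
```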
This article was translated from the Tutum blog. Thanks to Tutum software engineer Feng Honglin for revising it!
Original link: The 5 Most Important Things I've Learned From Using Docker (translated by Wang, revised by Feng Honglin)