Summary of ways to deploy Python apps using Docker

Source: Internet
Author: User
Tags: docker, run
This article distills the experience our team has accumulated over a long period of developing and deploying Python web applications, where supervisor, Gunicorn, and Nginx are among the most commonly used pieces of software. For readers who intend to use Docker to deploy Python applications, these best practices should be a valuable reference. I also hope you will share the pitfalls and lessons from your own day-to-day practice, so we can all make progress together!

Docker lets us deploy Python applications simply and efficiently, and a few best practices can help the deployment go smoothly. They are not the only way to deploy, of course; they are simply the approach our team has found highly reliable and easy to maintain. Note that most of this article reflects my own position; there are many ways to use Docker, and you are free to choose your own. I will not say much about volumes here, as they deserve a topic of their own: we usually use a volume to mount the source code into the container rather than rebuilding the image every time we run it.


DEBIAN_FRONTEND

Docker users should be familiar with this environment variable, which tells the operating system where to get user input. When set to "noninteractive", commands run without asking the user for input (note: all actions become non-interactive). This is especially useful when running apt-get commands, which otherwise keep prompting the user for confirmation before moving to the next step. Non-interactive mode selects the default options and completes the build as quickly as possible.

Make sure you set this variable only in the RUN commands invoked in the Dockerfile, rather than globally with the ENV instruction, because ENV takes effect for the lifetime of the container: if you make it a global setting, you will run into problems when you interact with the container through bash. Examples:

# Correct practice: set the variable for this command only
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y python3

# Wrong practice: sets the variable for every subsequent command, including in the running container
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get install -y python3


Dependencies

Application dependencies change rarely compared to the codebase itself, so we can install the project dependencies early in the Dockerfile, which also speeds up subsequent builds (later builds only need to rebuild the changed code). Docker's layered builds can cache the dependency-installation step, so builds become very fast: there is no need to re-download and rebuild the dependencies every time.
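As a minimal sketch of this caching pattern (file names such as requirements.txt and the /app path are illustrative assumptions, not from the article), copy the dependency list and install it before copying the rest of the code:

```dockerfile
# Copy only the dependency list first; this layer stays cached
# until requirements.txt itself changes
COPY requirements.txt /app/requirements.txt
RUN pip3 install -r /app/requirements.txt

# Copy the frequently-changing source code afterwards; edits here
# do not invalidate the cached dependency layer above
COPY . /app
```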

File order

The order in which files are added to the container is critical, as the caching idea above implies. We should place the files that change most often toward the bottom of the Dockerfile, in order to make full use of the cache and speed up the Docker build process. For example, application configuration, system configuration, and dependencies rarely change, so we can put them near the top of the Dockerfile. Source files such as routing files, views, and database code change often, so we put them near the bottom; note that the Docker configuration instructions (EXPOSE, ENV, etc.) go at the bottom as well.

Also, don't copy files into the image indiscriminately; it will not speed up your build, because most of those files, such as application source files, are not used during the build at all.

App Key

At first we did not know how to pass application secrets to the application safely; later we found that we could use the --env-file parameter of the docker run command. We put all the secrets and configuration in an app_config.list file, and then pass them to the application through this file. Specifically:

docker run -d -t --env-file app_config.list <image:tag>

This approach allows us to simply change the application settings and keys without rebuilding a container.

Note: make sure that app_config.list is listed in your .gitignore file, otherwise it will be checked into source control.
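As an illustration (the keys below are made-up examples, not from the article), an env-file is simply one KEY=value pair per line, with no quoting and no export keyword:

```shell
# Create a hypothetical app_config.list; each line is KEY=value
cat > app_config.list <<'EOF'
SECRET_KEY=change-me
DATABASE_URL=postgres://db:5432/app
EOF

# The file would then be passed to the container like so:
# docker run -d -t --env-file app_config.list <image:tag>
```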


Gunicorn

We use Gunicorn as the application server inside the container. Gunicorn is very stable and performs very well, and it has many configuration options, such as specifying the number and type of workers (green threads, gevent, etc.), so you can tune the application to its load for optimal performance.

Starting the Gunicorn is simple:

# Install
pip3 install gunicorn

# Run the server
gunicorn api:app -w 4 -b 127.0.0.1:5000

Finally, run your application server behind Nginx so that you can do load balancing.
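A minimal sketch of such a reverse proxy (the port matches the Gunicorn bind address above; the article does not give its actual Nginx configuration, so the rest is an assumption):

```nginx
# Forward incoming HTTP traffic to the Gunicorn server on 127.0.0.1:5000
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:5000;
        # Preserve the original host and client address for the app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```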


Supervisord

Have you ever wanted to run multiple processes in one container? Supervisord is definitely your best helper here. Suppose we want to deploy a container that contains an Nginx reverse proxy and a Gunicorn application. You could probably do it with a bash script, but let's try to be more concise.

Supervisor is "a process control system that enables users to monitor and control a number of processes on UNIX-like operating systems". It sounds perfect! First, you need to install supervisor in your Docker container.

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y supervisor

In order for supervisor to know what to run and how to manage the process, we need to write a configuration file for it next.

[supervisord]
nodaemon = true              ; run supervisord in the foreground

[program:nginx]              ; the first program we want to run
command = /usr/sbin/nginx    ; path to the nginx executable
startsecs = 5                ; if nginx stays up for 5s, it is considered started

[program:app-gunicorn]
command = gunicorn api:app -w 4 -b 127.0.0.1:5000
startsecs = 5

This is a very basic configuration; supervisor has many more options, such as log control, stdout/stderr redirection, restart policies, and so on. It's a great tool.

Once you have completed the configuration, make sure that Docker copies it to the container.

ADD supervisord.conf /etc/supervisord.conf

Make supervisord the container's startup command.

CMD supervisord -c /etc/supervisord.conf

When the container starts, it will run Gunicorn and Nginx, and, if they are configured for it, restart them on demand.
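Tying the steps above together, a hypothetical end-to-end Dockerfile might look roughly like this (the base image, package names, and paths are illustrative assumptions, not taken from the article):

```dockerfile
FROM ubuntu:22.04

# System packages: nginx, python, and supervisor (rarely change, near the top)
RUN DEBIAN_FRONTEND=noninteractive apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
        nginx python3 python3-pip supervisor

# Application dependencies, including gunicorn, cached as their own layer
COPY requirements.txt /app/requirements.txt
RUN pip3 install -r /app/requirements.txt gunicorn

# Frequently-changing source code and the supervisor configuration
COPY . /app
ADD supervisord.conf /etc/supervisord.conf

EXPOSE 80
CMD supervisord -c /etc/supervisord.conf
```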

Lessons learned and future goals

We've spent a lot of time deploying code with Docker, and we're going to devote even more time to it. The most important lesson we have learned while using Docker is to think minimally. It's tempting to run your entire system in one container, but it's easier to maintain each application process in the application's own container. In general we run Nginx and the web server together in one container: in some scenarios, running Nginx in a separate container brings no advantage and may only add complexity. We have found that in most cases, the cost of keeping it in the same container is acceptable.

I hope this information is valuable to you! When our team learns more best practices, I will update this article.
