Summary of using Docker to deploy Python applications

Source: Internet
Author: User
Tags: nginx, reverse proxy
This article distills the experience the author's team accumulated over a long period of development. Supervisor, Gunicorn, and Nginx are among the most commonly used pieces of software for building web applications in Python, so these best practices should be of real value to readers who intend to deploy Python applications with Docker. I also hope you will share the pitfalls and lessons from your own day-to-day practice so that we can all make progress together.

We can use Docker to deploy Python applications simply and efficiently, and there are some best practices that help the deployment go smoothly. They are by no means the only way to do it, but our team has found them reliable and easy to maintain. Note that most of this article represents only my own position; there are many ways to build on Docker and you can choose among them. I will not say much about Volumes here, because they deserve a topic of their own; we usually use a Volume to get the source code into the container instead of rebuilding the image on every run.

DEBIAN_FRONTEND

Docker users should be familiar with this environment variable: it tells the operating system where to obtain user input. When it is set to "noninteractive", commands run without prompting the user (note: all operations become non-interactive). This is especially useful with apt-get, which otherwise keeps prompting for input and asking for confirmation at every step. In non-interactive mode the default option is chosen and the build finishes as quickly as possible.

Make sure you set this variable only in the RUN instructions invoked in the Dockerfile, not globally with the ENV instruction, because ENV stays in effect for the whole life of the container; if you set it globally, you will run into problems later when you interact with the container through bash. Example:

# Correct practice: set the variable only for this command
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y python3

# Incorrect practice: set the variable for every later command, including the running container
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get install -y python3
Requirements.txt

Compared with the rest of the codebase, application dependencies change rarely, so we can install the project dependencies in the Dockerfile. This also speeds up subsequent builds (a subsequent build only has to rebuild the code that changed). Docker's layered builds cache the dependency-installation step, so later builds are very fast because the dependencies do not have to be downloaded or built again.
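As a minimal sketch of that idea (the /app path and base image are assumptions for illustration, not from the original setup), the requirements file gets its own cached layer before the rest of the code is copied in:

FROM python:3
# Dependencies first: this layer is reused as long as requirements.txt does not change
COPY requirements.txt /app/requirements.txt
RUN pip3 install -r /app/requirements.txt
# Source last: editing code only invalidates the layers from here on
COPY . /app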

File order

Following the same idea (using the cache), the order in which files are added to the container matters a lot. Files that change frequently should go near the bottom of the Dockerfile so the build can make full use of the cache. For example, application configuration, system configuration, and dependencies rarely change, so they can sit near the top of the Dockerfile; source files such as routes, views, and database code change often, so they belong near the bottom. Note that the Docker configuration instructions (EXPOSE, ENV, and so on) also go toward the bottom.

Also, think about how your files get copied into the image. Copying everything in does nothing to speed up your build, because most of those files are never used from the image at all, for example the application source files (which, as noted above, we bring in through a Volume).
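One common way to keep such files out of the build context is a .dockerignore file; the article does not mention it, so treat this as an illustrative sketch only:

# .dockerignore (illustrative)
.git
__pycache__/
*.pyc
docs/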

Application key

For a long time we did not know how to pass application secrets to the application securely. Then we discovered the env-file parameter of docker run. We put all of the secrets and configuration values in an app_config.list file and hand them to the application through that file. The details are as follows:

docker run -d -t --env-file app_config.list

This approach lets us change application settings and secrets without recreating the container.

Note: make sure app_config.list is listed in your .gitignore file; otherwise it will end up in source control.
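For illustration, the file is simply one VARIABLE=value pair per line (the variable names below are made up, not taken from the article):

# app_config.list (illustrative)
SECRET_KEY=change-me
DATABASE_URL=postgres://user:password@db:5432/appdb

Inside the container these entries show up as ordinary environment variables, so a Python application can read them with, for example, os.environ["SECRET_KEY"].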

Gunicorn

We use Gunicorn as the application server inside the container. Gunicorn is very stable and performs well, and it has many configuration options, such as the number and type of workers (for example green threads or gevent), so you can tune the application to your load and get the best performance.

It is easy to start Gunicorn:

# Install
pip3 install gunicorn

# Run the server
gunicorn api:app -w 4 -b 127.0.0.1:5000
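The article never shows the application module itself; as an assumption, api:app could point at a file named api.py containing something as small as this Flask app (Flask is an assumption here, any WSGI application works):

# api.py (illustrative)
from flask import Flask

app = Flask(__name__)  # the "app" object that "gunicorn api:app" loads

@app.route("/")
def index():
    return "Hello from Gunicorn"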

Finally, run your application server behind Nginx so that you can perform load balancing.
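The Nginx configuration is not included in the article; a minimal reverse-proxy server block, assuming Gunicorn listens on 127.0.0.1:5000 as above, could look roughly like this:

# /etc/nginx/conf.d/app.conf (illustrative)
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:5000;                              # hand requests to Gunicorn
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}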

Supervisord

Have you ever wanted to run multiple processes inside one container? Supervisord is definitely the best helper tool for that. Suppose we want to deploy a container that contains both an Nginx reverse proxy and a Gunicorn application. You could do it with a bash script, but let's try to be a little more concise.

Supervisor is "a process control system that allows you to monitor and control processes on UNIX-like operating systems." Sounds perfect! First you need to install Supervisor inside your Docker container.

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y supervisor

To let Supervisor know what to run and how to manage the processes, we need to write a configuration file for it.

[supervisord]
nodaemon=true                                     ; run supervisord in the foreground

[program:nginx]                                   ; name of the first program you want to run
command=/usr/sbin/nginx                           ; path to the nginx executable
startsecs=5                                       ; nginx counts as started if it stays up for 5 s

[program:app-gunicorn]
command=gunicorn api:app -w 4 -b 127.0.0.1:5000
startsecs=5

This is a very basic configuration; Supervisor has many more options, such as log control, stdout/stderr redirection, and restart policies. It is a really nice tool.

Once you complete the configuration, make sure Docker copies it to the container.

ADD supervisord.conf /etc/supervisord.conf

Finally, set Supervisor as the container's start command.

CMD supervisord -c /etc/supervisord.conf

When the container starts, it will run Gunicorn and Nginx and, if configured to do so, restart them as needed.
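Putting the pieces of this section together, a Dockerfile along these lines would work; the base image, package names, and paths are assumptions rather than the author's actual file:

# Illustrative Dockerfile combining the steps above
FROM ubuntu:22.04
RUN DEBIAN_FRONTEND=noninteractive apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y nginx supervisor python3-pip
RUN pip3 install gunicorn

# Dependencies and configuration rarely change, so they go first
COPY requirements.txt /app/requirements.txt
RUN pip3 install -r /app/requirements.txt
ADD supervisord.conf /etc/supervisord.conf

# Source code changes often, so it goes last
COPY . /app
WORKDIR /app

EXPOSE 80
CMD supervisord -c /etc/supervisord.conf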

What we learned and what we will do in the future

We have spent a long time deploying code with Docker, and we will spend even more. The most important lesson we have learned while using Docker is to think minimally. Running your entire system inside one container is tempting, but running each application process in its own container is much easier to maintain. In our case we run Nginx and the web server together in one container: in some scenarios running Nginx in a separate container offers no advantage and only adds complexity, and we found that in most cases the overhead of keeping it in the same container is acceptable.

I hope this information is of value to you! I will update this article as our team learns more best practices.

The above is a summary of how to deploy Python applications using Docker.
