Practical Guide to Docker: Containerizing Python Web Applications

Foreword
A compromised web application can give an attacker a foothold on the entire host, which is a common and frightening scenario. For better security, different applications need to be isolated from one another (especially when they belong to different users). Achieving this isolation, however, has always been a challenge: the many existing approaches are either too expensive (in time and resources) or too complex (for developers or administrators).

This article will discuss how to make "containerized" Python web applications run in a secure sandbox and stick strictly to their respective environments (unless you specify that they "connect" to other applications). I will introduce step by step how to create a Docker container, how to use this container to run our Python web application, and how to use a Dockerfile to describe the entire build process for complete automation.

Contents

1. Docker overview
2. Install Docker on Ubuntu
3. Basic Docker commands
   - Docker daemon and command line
   - Docker commands
4. Create a Docker container as a sandbox for Python WSGI applications
   - Create a base Docker container in Ubuntu
   - Preparations before installation
   - Install Python tools
   - Install the web application and its dependencies
   - Configure our Python WSGI application
5. Create Dockerfile to automate image creation
   - Dockerfile overview
   - List of Dockerfile commands
   - Create Dockerfile
   - Defining the fundamentals
   - Update the default app repository
   - Install basic tools
   - Install basic Python tools
   - Deploy the application
   - Bootstrapping everything
   - The final Dockerfile
   - Automated container creation using Dockerfile
Docker overview
The Docker project provides some high-level tools that can be used together. These tools are based on some functions of the Linux kernel. The goal of the entire project is to help developers and system administrators painlessly migrate applications (and all the dependencies they involve), so that applications can run happily on various systems and machines.

The key to achieving this goal is an operating environment called a docker container. This environment is actually a LXC (Linux Containers) with security attributes. The container is created using a Docker image. Docker images can be created manually by typing commands, or they can be created automatically through Dockerfiles.

Note: For the basics of Docker (daemon, CLI, image, etc.), you can refer to the first article in this series, Docker Explained: Getting Started.

Install Docker on Ubuntu
The latest version of Docker (translator's note: this article was written on December 17, 2013, when the latest Docker release was 0.7.1) can be deployed on several Linux distributions, such as Ubuntu/Debian and CentOS/RHEL (you can also use a ready-made Docker image on DigitalOcean, which is based on Ubuntu 13.04).

Let's quickly introduce the installation process on Ubuntu.

Update the system:

sudo aptitude update
sudo aptitude -y upgrade
Check if your system supports aufs:

sudo aptitude install linux-image-extra-`uname -r`
Add the Docker repository key to apt-key (for package verification):

sudo sh -c "wget -qO- https://get.docker.io/gpg | apt-key add -"
Add the Docker repository to the aptitude software source:

sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
Update the system again after adding:

sudo aptitude update
Finally, download and install Docker:

sudo aptitude install lxc-docker
The default setting of Ubuntu's default firewall (UFW) is to refuse all forwarding, but Docker needs to be forwarded, so you also need to set up UFW.

Open the UFW configuration file with the nano editor:

sudo nano /etc/default/ufw
Find the line that sets DEFAULT_FORWARD_POLICY and change

DEFAULT_FORWARD_POLICY="DROP"

to:

DEFAULT_FORWARD_POLICY="ACCEPT"
Press CTRL + X and then Y to save and exit.

Finally, restart UFW:

sudo ufw reload
Basic Docker commands
Before we start, let's review some basic commands introduced in the earlier basics article.

Docker daemon and command line
Generally, the Docker daemon runs in the background after you have completed the installation, waiting to receive instructions from the Docker command line. But sometimes we also need to manually start the Docker daemon:

sudo docker -d &
The basic syntax of the Docker command line is as follows:

sudo docker [option] [command] [arguments]
Note: Docker requires sudo permissions to run.

Docker commands
The following is a list of available Docker commands (translator's note: see also the InfoQ article Docker Explained (part two): Exploring the Docker Command Line):

attach      Attach to a running container
build       Build an image from a Dockerfile
commit      Create a new image from a container's changes
cp          Copy files/directories between a container and the local filesystem
create      Create a new container
diff        Inspect changes to a container's filesystem
events      Get real-time events from the server
exec        Run a command in a running container
export      Export a container's filesystem as a tar archive
history     Show the history of an image
images      List images
import      Import the contents of a tarball to create a filesystem image
info        Display system-wide information
inspect     Return low-level information on a container or image
kill        Kill a running container
load        Load an image from a tar archive or STDIN
login       Log in to a Docker registry
logout      Log out from a Docker registry
logs        Fetch the logs of a container
network     Manage Docker networks
pause       Pause all processes within a container
port        List port mappings, or a specific mapping, for the container
ps          List containers
pull        Pull an image or a repository from a registry
push        Push an image or a repository to a registry
rename      Rename a container
restart     Restart a container
rm          Remove one or more containers
rmi         Remove one or more images
run         Run a command in a new container
save        Save an image to a tar archive
search      Search Docker Hub for images
start       Start one or more containers
stats       Display a live stream of container resource usage statistics
stop        Stop a running container
tag         Tag an image into a repository
top         Display the running processes of a container
unpause     Unpause all processes within a container
update      Update resources of a container
version     Show the Docker version information
volume      Manage Docker volumes
wait        Block until a container stops, then print its exit code
Create a Docker container as a sandbox for Python WSGI applications
We have completed the installation of Docker and are familiar with the basic commands. Now we can create Docker containers for our Python WSGI application.

Note: The methods described in this chapter are mainly for practice and are not suitable for production environments. The automated processes applicable in a production environment are described in subsequent chapters.

Create a base Docker container in Ubuntu
Docker's run command creates a new container based on the Ubuntu image. Next, we need to attach a terminal to this container with the -t flag and run a bash process.

We will expose port 80 of this container for external access. In more complex environments in the future, you may need to load balance multiple instances, "connect" different containers, and use a reverse proxy container to access them.

sudo docker run -i -t -p 80:80 ubuntu /bin/bash
Note: When running this command, Docker may need to download an Ubuntu image before creating a new container after downloading.

Note: Your terminal will "attach" to the newly created container. To detach from the container and return to the previous terminal access point, press CTRL + P followed by CTRL + Q to perform the detach operation. Being "attached" to a Docker container is basically equivalent to accessing another VPS from within one VPS.

To return from the detached state to the attached state, you need to perform the following steps:

List all running containers with sudo docker ps
Find the ID of the container created earlier
Execute sudo docker attach [id] to complete the current terminal attachment to the container
Note: Everything we do inside the container will be limited to execution inside the container, and it has no effect on the host.

Preparations before installation
To deploy a Python WSGI application (and the tools we need) in a container, we first need the corresponding software repositories. However, Docker's default Ubuntu image ships without them (Docker's designers kept the image minimal on purpose), so we need to add Ubuntu's repositories to this base image:

echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list
Update the software list:

apt-get update
Then install some necessary tools to our container:

apt-get install -y tar git curl nano wget dialog net-tools build-essential
Install Python tools
This article will use a simple Flask application as an example. It doesn't matter if you use other frameworks, the installation and deployment methods are the same.

Once again, all the following commands are executed inside the container and will not affect the host. You can imagine yourself operating on a brand new VPS.

Install Python and pip:

# Install pip dependency: setuptools
apt-get install -y python python-dev python-distribute python-pip
Install the web application and its dependencies
Before installing our application, let's make sure all dependencies are in place. The first is our framework, Flask.

Because we have already installed pip, we can directly install Flask with pip:

pip install flask
With Flask installed, create a "my_application" folder:

mkdir my_application

cd my_application
Note: If you want to deploy your own application directly (instead of the demo application here), you can refer to the "Tips" section below.

Our demo application is a single-page "Hello World" Flask application. Let's use nano to create app.py:

nano app.py
Copy the following into the newly created file:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
Press CTRL + X and then Y to save and exit.

Alternatively, you can use "requirements.txt" to define application dependencies (such as Flask). We still use nano to create the file:

nano requirements.txt
Enter all your dependencies in the file (only two are listed below, if you need others please add them yourself):

flask
cherrypy
Press CTRL + X and then Y to save and exit.

Note: you can use pip to generate such a dependency list (pip freeze). For details, see Common Python Tools: Using virtualenv, Installing with Pip, and Managing Packages.

Finally, the file organization structure of our application is this:

/my_application
    |
    |- requirements.txt  # describes the application's dependencies
    |- /app              # application module (your application's code goes here)
    |- app.py            # WSGI file, containing the "app" instance (callable)
    |- server.py         # optional: runs the application server (CherryPy)
Note: For "server.py", please refer to the section "Configuring our Python WSGI application" below.

Note: The files and directories of the above applications are created inside the container. If you want to automatically build an image on the host machine (this process will be described in the following chapter on Dockerfile), then the directory where you put the Dockerfile on your host machine also needs the same file structure.
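The file tree above keeps referring to app.py exposing an "app" instance that is callable. Flask's app object implements the WSGI interface, and the sketch below shows, using only Python's standard library, what that interface actually looks like. This is an illustrative aside, not part of the demo application; the names simple_app and captured are ours, and the request is simulated rather than served:

```python
from wsgiref.util import setup_testing_defaults


def simple_app(environ, start_response):
    """A bare WSGI application: receives the request environ and a
    start_response callback, and returns an iterable of body bytes."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello World!"]


# Simulate one request without starting a real server.
environ = {}
setup_testing_defaults(environ)  # fills in a plausible fake request
captured = {}


def start_response(status, headers):
    # A WSGI server calls the app with a callback like this one.
    captured["status"] = status


body = b"".join(simple_app(environ, start_response))
print(captured["status"], body.decode())  # 200 OK Hello World!
```

Any server that speaks WSGI (CherryPy, Gunicorn, uWSGI) can host any callable with this shape, which is why the same server.py works unchanged for other frameworks.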

Tip: deploying your own application

The steps above created the application directory inside the container by hand. In real scenarios, however, you will usually pull the application's source code (and its dependencies) into the container from a software repository.

There are several ways to copy your software repository into the container. Here are two of them:

# method 1
# Download source code with git
# Usage: git clone [URL where the source code is located]
# Demonstration (the flaskr example app lives in examples/flaskr of this repository):
git clone https://github.com/mitsuhiko/flask.git

# Method 2
# Download the compressed source code file
# Usage: wget [URL where the source code compressed file is located]
# Demonstration: (replace the fake one below with the real URL)
wget http://www.github.com/example_usr/application/tarball/v.v.x

# Extract the files
# Usage: tar vxzf [file name .tar (.gz)]
# Demonstration: (replace the fake one below with the real file name)
tar vxzf application.tar.gz

# Download and install application dependencies with pip
# requirements.txt (which can be generated with pip freeze) is piped into pip:
# Usage: curl [URL of the requirements.txt file] | pip install -r -
# Demonstration: (replace the placeholder URL below with a real one)
curl http://www.github.com/example_usr/application/requirements.txt | pip install -r -
Configure our Python WSGI application
To run this application, we need a web server. The web server that runs this WSGI application must be installed in the same container as the application code, and it will run as the container's main process.

Note: in this demonstration we will use the HTTP server that ships with CherryPy. It is simple and can be used in production. You can also use Gunicorn or even uWSGI (running them behind Nginx); we have covered that setup in other tutorials.

Download and install CherryPy with pip:

pip install cherrypy
Create "server.py" to serve the web application in "app.py":

nano server.py
Copy and paste the following into server.py:

# Import the application
# Usage: from app import [name of the application callable]
# Demonstration:
from app import app

# Import CherryPy
import cherrypy

if __name__ == '__main__':

    # Mount the application
    cherrypy.tree.graft(app, "/")

    # Unsubscribe the default server
    cherrypy.server.unsubscribe()

    # Instantiate a new server object
    server = cherrypy._cpserver.Server()

    # Configure the server object
    server.socket_host = "0.0.0.0"
    server.socket_port = 80
    server.thread_pool = 30

    # SSL configuration (optional)
    # server.ssl_module = 'pyopenssl'
    # server.ssl_certificate = 'ssl/certificate.crt'
    # server.ssl_private_key = 'ssl/private.key'
    # server.ssl_certificate_chain = 'ssl/bundle.crt'

    # Subscribe this server
    server.subscribe()

    # Start the server engine
    cherrypy.engine.start()
    cherrypy.engine.block()
Done! We now have a "Dockerized" Python web application running securely in its own sandbox. With just one command, it can serve thousands of client requests:

python server.py
This runs the server in the foreground. Press CTRL + C to stop it. To run the server in the background instead, enter:

python server.py &
Applications running in the background need to be terminated (kill or stop) using a process manager (such as htop).

Note: For the configuration of CherryPy running Python applications, please refer to this tutorial: How to deploy Python WSGI apps Using CherryPy Web Server.
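If you want to sanity-check the WSGI serving pattern before installing CherryPy, Python's standard library ships a minimal WSGI server, wsgiref.simple_server. The sketch below is a development-only stand-in (wsgiref's server is single-threaded, not production-grade), and demo_app is an illustrative callable standing in for the Flask app object from app.py:

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server


def demo_app(environ, start_response):
    # Stand-in for the Flask "app" callable from app.py
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello World!"]


# Port 0 asks the OS for any free port (the article's CherryPy server uses 80).
server = make_server("127.0.0.1", 0, demo_app)
port = server.server_port

# Serve exactly one request in a background thread, then fetch it.
t = threading.Thread(target=server.handle_request)
t.start()
with urllib.request.urlopen("http://127.0.0.1:%d/" % port) as resp:
    body = resp.read()
t.join()
server.server_close()
print(body.decode())  # Hello World!
```

The CherryPy server in server.py plays exactly this role, but with a thread pool and a production-quality HTTP implementation.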

To quickly test the application (and the port mapping), visit http://[IP address of the VPS where the container is located] in your browser; you should see "Hello World!"

Create Dockerfile to automate image creation
As briefly mentioned above, manually creating containers this way is not suitable for deployment in a production environment. In production, a Dockerfile should be used to automate the build process.

We already know how to download and install external resources inside the container, so the Dockerfile is actually the same principle. A Dockerfile defines how Docker generates an image, which can be used directly to run our Python application.

First understand the basic functions of Dockerfile.

Dockerfile overview
Dockerfile is a script file that contains a series of commands that are executed sequentially. By executing these commands, Docker can create a new Docker image. This greatly facilitates deployment.

Dockerfile generally uses the FROM command to define a base image, and then executes a series of actions. After all the actions are performed, a final image is formed and the completed image is submitted to the host.

Usage:

# Build an image from the Dockerfile at the current location
# and tag the resulting image as [name] (e.g. nginx)
# Usage: sudo docker build -t [name] .
# Demonstration:
sudo docker build -t nginx_img .
Note: We also have an article dedicated to dockerfiles available for review: Docker Explained: Using Dockerfiles to Automate Building of Images

List of Dockerfile commands
### ADD
Copy a file from the host into the container

### CMD
Set the default command to be executed, or passed to the ENTRYPOINT

### ENTRYPOINT
Set the default application started when the container launches

### ENV
Set an environment variable (key=value)

### EXPOSE
Expose a port to the outside

### FROM
Set the base image

### MAINTAINER
Set the author/maintainer of the Dockerfile

### RUN
Execute a command and commit the resulting (container) image

### USER
Set the user that runs containers created from the image

### VOLUME
Enable access from the container to a directory on the host

### WORKDIR
Set the directory in which CMD will execute
Create Dockerfile
Create a Dockerfile with the nano editor in the current path:

sudo nano Dockerfile
Note: The following content needs to be added to the Dockerfile in order.

Defining the fundamentals

The basic items of a Dockerfile are the FROM base image (such as Ubuntu) and the MAINTAINER name:

############################################################
# Dockerfile to build Python WSGI application containers
# Based on Ubuntu
############################################################

# Set Ubuntu as the base image
FROM ubuntu

# File author / maintainer
MAINTAINER Maintainer Name
Update the default app repository
# Add the application repository URL to the sources list
RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list

# Update the sources list
RUN apt-get update
Install basic tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
Note: you may not end up using some of the tools above, but we install them just in case.

Install basic Python tools

Some Python tools (such as pip) are best installed first, since both your web application framework and your application server will need them.

RUN apt-get install -y python python-dev python-distribute python-pip
Deploy the application

To deploy the application, you can use Docker's ADD command to copy the source code into the image directly, or clone it from a repository with a RUN command.

Note: if you plan to copy the code in with ADD, the source directory on your host should follow the structure below.

    Example file structure

    /my_application
    |
    |- requirements.txt  # describes the application's dependencies
    |- /app              # application module (your application's code goes here)
    |- app.py            # WSGI file, containing the "app" instance (callable)
    |- server.py         # optional: runs the application server (CherryPy)
The process of creating this file structure was covered in earlier chapters, so we won't repeat it here. In short, taking the file structure above as an example, add the following to the Dockerfile to copy the source code into the container:

ADD /my_application /my_application
If the source code is a public git repository, you can use the following:

RUN git clone [your source repository URL]
Bootstrapping everything

Next, install all dependencies from requirements.txt, expose the port, and set the default command:

# Use pip to download and install the contents of requirements.txt
RUN pip install -r /my_application/requirements.txt

# Expose port 80
EXPOSE 80

# Set the default directory in which CMD runs
WORKDIR /my_application

# Default command to run
# This command is executed when a new container is created,
# e.g. starting CherryPy to serve the application
CMD python server.py
The final Dockerfile
The entire Dockerfile should now look like this:

############################################################
# Dockerfile to build Python WSGI application containers
# Based on Ubuntu
############################################################

# Set the base image to Ubuntu
FROM ubuntu

# File author / maintainer
MAINTAINER Maintainer Name

# Add the application repository URL to the sources list
RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list

# Update the sources list
RUN apt-get update

# Install basic tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential

# Install Python and basic Python tools
RUN apt-get install -y python python-dev python-distribute python-pip

# Copy the application folder into the container
ADD /my_application /my_application

# Use pip to download and install requirements
RUN pip install -r /my_application/requirements.txt

# Expose port 80
EXPOSE 80

# Set the default directory in which CMD executes
WORKDIR /my_application

# Set the default command to execute
# when a new container is created,
# i.e. using CherryPy to serve the application
CMD python server.py
Press CTRL + X and then Y to save and exit.

Automated container creation using Dockerfile
As mentioned in the earlier basics tutorial, a Dockerfile is put to work with the docker build command.

Through the Dockerfile we instruct Docker to copy the source code from a path on the host into the container, so make sure the Dockerfile sits in the correct location relative to the code before building.

Such a Docker image can quickly create a container that can run our Python application. All we need to do is enter this line of instructions:

sudo docker build -t my_application_img .
We named this image my_application_img. To start a new container from this image, just enter the following command:

sudo docker run -name my_application_instance -p 80:80 -i -t my_application_img
Then you can enter your VPS IP address in the browser to access the application.

For more tutorials on Docker installation (including installing Docker on other distributions), check out our docker installation documentation on docker.io.

This article is from the DigitalOcean Community. English original: Docker Explained: How To Containerize Python Web Applications, by O.S. Tezer.

Translation: lazyca

