# Deploying apps in a production environment using Docker

> Guide: Docker is becoming more and more popular, but actually deploying Docker in a production environment is still relatively new, and there is no standard process yet. The author, a Ruby on Rails programmer, combines his day-to-day deployment experience with Docker and shares his practice of using Docker to deploy applications in production.
Docker is a great choice for developing applications today. For a development team, deploying an application no longer means the tedious work of editing and tuning configuration files, because Docker "encapsulates" the application's runtime environment: the same container runs the same way whether you are on a Mac, Linux, or Windows.
However, when you use Docker to deploy an application to production, it still feels somewhat "weak", at least from a Ruby on Rails (RoR) perspective. After researching and testing many different Docker-based deployment methods, I found that there really is no definitive, standard deployment scheme. In this article I will share a best practice for deploying RoR applications in a production environment.
## Criteria
Before we begin, let's list the criteria for deploying an application to production:
- Easy to use: the deployment itself should be simple, otherwise shipping each new release becomes "scary".
- Zero downtime: let's face it, zero-downtime deployment of RoR applications has become the norm today.
- Automated deployment: I'm used to pushing code to the repository and letting a tool like Codeship run the tests automatically, then deploy the code to the production server once they pass. I want Docker to fit into the same workflow.
## Operation

As I said before, I want the deployment process to be as simple as possible. If you've watched part 4 of the Docker tutorial videos, you may be familiar with the commands below: they launch a container called db (running a Postgres database), then launch a container called web and link it to db.
```
$ docker run -d --name db training/postgres
$ docker run -i -t --name web --link db:db -p 45000:80
```
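If you try these, you can confirm that both containers are up with the standard Docker CLI:

```
$ docker ps    # lists running containers with their names and port mappings
```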
Of course, if you deploy the application this way, after typing a long string of these commands and making sure you haven't missed any of them, you'll find it's a nightmare. This is why fig exists.
## Fig
If a Dockerfile defines how a single container is built, then fig defines how the whole set of containers runs together. Fig encapsulates operations such as adding volumes, linking containers, and mapping ports into a single YAML configuration file; the Codetv setup described above is simplified to the following form in fig:
```yaml
web:
  build: .
  ports:
    - "80:80"
  links:
    - db
db:
  image: postgres
  ports:
    - "5432"
  volumes:
    - /etc/postgresql
    - /var/log/postgresql
    - /var/lib/postgresql
```
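The `build: .` entry assumes a Dockerfile sits next to fig.yml. The article never shows the author's actual Dockerfile, so purely as an illustration, a minimal one for a Rails app serving on port 80 might look like this (the base image tag and commands are assumptions):

```dockerfile
# Hypothetical Dockerfile for the web service -- not the author's actual file.
FROM ruby:2.1

WORKDIR /app

# Install gems first so this layer is cached between code changes.
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy the application code and serve it on port 80, matching fig.yml.
COPY . .
EXPOSE 80
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0", "-p", "80"]
```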
I defined two containers in the YAML file: web and db. The web container is built from the Dockerfile in the current directory, exposes port 80, and links to the db container. The db container is created from the postgres image on Docker Hub and exposes port 5432. With this YAML configuration file, fig can build the images and then start the containers as the file intends, using the following commands:
```
$ fig build
$ fig up -d
```
Fig starts the linked db container first, so that the web container doesn't come up without a database to connect to. The -d flag runs the containers in detached mode, so they stay running after you log out. You can find more configuration options on fig's official website.
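Fig also mirrors a couple of Docker commands that are handy for checking on the containers it started; for example:

```
$ fig ps      # list the containers defined in fig.yml and their state
$ fig logs    # follow the aggregated logs of those containers
```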
## Deployment
Now we can easily start Docker containers, but how do we deploy them in a production environment? If fig and Docker are installed on the production machine, all we have to do is clone the repository and start the containers with the same fig commands as before. The real question, though, is how to update containers that are already running in production.
Unfortunately, while fig can start containers very gracefully, it is not good at updating and restarting them. You could pull the latest code from the repository and rerun the fig commands above, but between tearing down the old containers and starting the new ones, the application cannot serve requests. To handle this, we fall back to plain Docker commands and introduce nginx as a reverse proxy (i.e., a software load balancer) in front of the application.
First we change the port the web container listens on, because nginx needs to own port 80. We modify the fig file like this:
```yaml
web:
  build: .
  ports:
    - "8080:80"
  links:
    - db
  ...
```
With this change to the fig configuration file, our web container now listens on port 8080. nginx is then configured to balance the load between ports 8080 and 8081, so the nginx configuration looks like this:
```nginx
upstream docker {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

server {
    listen 80;

    location / {
        proxy_pass http://docker;
    }
}
```
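To apply this configuration, nginx can reload it in place without dropping existing connections (standard nginx CLI):

```
$ sudo nginx -s reload
```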
After restarting nginx, it reverse-proxies (load balances) between ports 8080 and 8081; when either backend goes down, nginx automatically forwards requests to the other until the failed one recovers. Now we can pull the update from Git and start the new container with the following command:
```
$ docker run -d --name web1 --link codetvjournal_db_1:db -p 8081:80 codetvjournal_web:latest
```
Once we've confirmed that the web1 container on port 8081 is up and serving requests, we can stop the service on port 8080 and start updating it. I recommend using plain Docker commands for this rather than fig, as that avoids disturbing the running db container (note: the author means the fig file shown earlier, which also defines, and would restart, the db container).
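A minimal sanity check for that switch-over, assuming the app answers HTTP on its root path, and assuming fig named the original container codetvjournal_web_1 following its project_service_number convention:

```
$ curl -I http://127.0.0.1:8081/        # does the new container answer on 8081?
$ docker stop codetvjournal_web_1       # then retire the old container
```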
We can create any number of web containers this way; as long as their host ports and container names don't collide, with nginx load balancing in front of them we get zero-downtime application upgrades.
## Automation
Now the question arises again: how do we automate the update process above? There are two approaches:
- Wrap the container update, start/stop, and switch-over operations in a single script that can be appended to a traditional deployment pipeline (note: pull new code, run automated tests, deploy automatically) and executed at the end; a sketch follows this list.
- Alternatively, use a discovery service such as Consul or etcd to manage container updates, starts, stops, and discovery; this approach is more sophisticated.
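As a sketch of the first approach only: the hypothetical script below automates the rotation described in the Deployment section. All names (web1, web2, codetvjournal_*) and the HTTP health check are assumptions carried over from this article's examples, not a definitive implementation.

```bash
#!/usr/bin/env bash
# deploy.sh -- hypothetical sketch automating the manual rotation above.
set -e

NEW_NAME=$1   # container to start, e.g. web2
NEW_PORT=$2   # host port nginx is not currently serving, e.g. 8080
OLD_NAME=$3   # container to retire, e.g. web1

# Build the latest image from the freshly pulled code.
docker build -t codetvjournal_web:latest .

# Start the new container on the free port, linked to the running db.
docker run -d --name "$NEW_NAME" \
  --link codetvjournal_db_1:db \
  -p "$NEW_PORT:80" \
  codetvjournal_web:latest

# Wait until the new container answers HTTP; nginx keeps routing traffic
# to the old port in the meantime.
for _ in $(seq 1 30); do
  curl -fs "http://127.0.0.1:$NEW_PORT/" > /dev/null && break
  sleep 1
done
# A production script should abort here if the health check never passed.

# Retire the old container; nginx fails over to the new port automatically.
docker stop "$OLD_NAME"
docker rm "$OLD_NAME"
```

It could be invoked at the end of the pipeline, e.g. `./deploy.sh web2 8080 web1`, alternating the names and ports on each release.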
As you can see, using Docker to deploy services in a production environment is not as easy as you might think. I recommend trying the methods described above and sharing your own hands-on experience, so we can all make better use of Docker. Docker is still a very young product, and an extremely popular one; it will certainly keep evolving.