Deploy applications using Docker in the production environment


Docker is becoming more and more popular, but deploying Docker in a production environment is still a relatively new practice with no standard process yet. The author is a Ruby on Rails (ROR) programmer; drawing on his day-to-day deployment experience, he shares one way of deploying applications with Docker in production.

Docker is now a good choice for developing applications, because for a development team, deploying an application no longer requires the tedious editing of configuration files it used to. Docker encapsulates the application's runtime environment, so you can run the application the same way whether you are on Mac, Linux, or Windows.

However, when you use Docker to deploy an application to a production environment, you will find that Docker is still somewhat "weak", at least from the perspective of Ruby on Rails (ROR). After finding and testing many different deployment methods and Docker images, I found that there really is no definite, standard deployment solution yet. In this article, I will share the best practices I have found for deploying ROR applications in a production environment.

Standards

Before getting hands-on, let's lay out the criteria for deploying applications in a production environment:

  1. Easy to use: application deployment should be very simple; otherwise, deploying new code becomes something to dread.
  2. Zero service interruption: let's face it, deploying ROR applications without any service interruption has become the standard today.
  3. Automated deployment: I prefer to push code to the repository, have a tool such as Codeship run the tests automatically, and, once the tests pass, have the code deployed to the production servers automatically. I want Docker to fit into the same workflow.

Operations

As I said before, I want the deployment process to be as simple as possible. If you have watched the Docker Part 4 video, you may be familiar with the commands below: they start a container called db (running a PostgreSQL database), then start a container called web, and finally link the web container to the db container.

$ docker run -d --name db training/postgres
$ docker run -i -t --name web --link db:db -p 45000:80

Of course, if you deploy this way, once you have typed many such commands without ever missing an option, you will realize it is a nightmare. This is why Fig exists.

Fig

If Dockerfiles define how to build your containers, Fig lets you define the runtime layout of the whole set of containers. Fig wraps the "add volumes", "link containers", and "map ports" operations into a single YAML description file; the operations from the CodeTV example above are reduced in Fig to the following:

web:
  build: .
  ports:
    - "80:80"
  links:
    - db
db:
  image: postgres
  ports:
    - "5432"
  volumes:
    - /etc/postgresql
    - /var/log/postgresql
    - /var/lib/postgresql

I have defined two containers in the YAML: web and db. The web container is built from the Dockerfile in the current folder, exposes port 80, and links to the db container. The db container is built from the postgres image on Docker Hub and exposes port 5432. With this YAML configuration file, Fig can build the containers with the following commands and then start them as the configuration describes.

$ fig build
$ fig up -d

Fig starts the linked db container first, so that the web container does not come up without a database to connect to. The -d flag starts the containers in detached (background) mode, which keeps them running after you log out of the system. You can visit the official Fig website for more configuration options.
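
As a quick sanity check (not spelled out in the original article, but standard Fig and Docker usage), you can list the running containers and follow their logs to confirm that both services came up:

$ fig ps            # shows the web and db containers and their state
$ docker ps         # the same containers, as plain Docker sees them
$ fig logs web      # follow the web container's output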

Deployment

Now we can start a Docker container easily, but how do we deploy one to production? If Fig and Docker are both installed on the production server, all we need to do is clone the project there and run the same fig commands to start the containers. The real question, though, is how to update containers that are already running and serving traffic.
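
For the initial bring-up on a production host, the steps described so far amount to something like this sketch (the repository URL and directory name are hypothetical placeholders, not from the original article):

$ git clone https://example.com/yourname/codetvjournal.git   # hypothetical repository URL
$ cd codetvjournal
$ fig build
$ fig up -d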

Unfortunately, while Fig is very good at starting containers, it is not good at updating and restarting a running service. Of course, you can pull the latest code from the repository and re-run the fig commands above, but while the containers are rebuilt and restarted, the application cannot serve requests. To handle this, we fall back to native Docker commands and put Nginx in front as a reverse proxy (that is, a software load balancer).

First, change the port the web container listens on, because Nginx now needs to own port 80:

web:
  build: .
  ports:
    - "8080:80"
  links:
    - db
...

With this change to the Fig configuration, the web container now listens on port 8080. Nginx then balances load between ports 8080 and 8081, so the Nginx configuration looks like this:

upstream docker {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

server {
    listen 80;
    location / {
        proxy_pass http://docker;
    }
}
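
To pick up this configuration, Nginx has to be reloaded; a typical way to do that (standard Nginx commands, not spelled out in the original article) is:

$ sudo nginx -t           # check the new configuration for syntax errors
$ sudo nginx -s reload    # reload workers without dropping connections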

Once Nginx has been restarted, it reverse-proxies (load balances) between ports 8080 and 8081. If either port stops responding, Nginx automatically sends requests to the other one, and puts the failed port back into rotation once it recovers. Now we can pull the update from Git and start the new container with the following command:

$ docker run -d --name web1 --link codetvjournal_db_1:db -p 8081:80 codetvjournal_web:latest

Once we have confirmed that the web1 container on port 8081 is up and serving correctly, we can stop the container on port 8080 and update it in the same way. I recommend using native docker commands rather than Fig for this step, because it avoids disturbing the running db container (note: the author is referring to the YAML above, which also defines how the db container is started).
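
Concretely, that swap might look like the following sketch. The old container name codetvjournal_db_1-style naming suggests the original web container would be codetvjournal_web_1, and web0 is a hypothetical name for the refreshed container on port 8080; the health check is also an assumption, not part of the original article.

$ curl -fs http://127.0.0.1:8081/ > /dev/null && echo "web1 looks healthy"
$ docker stop codetvjournal_web_1     # assumed name of the old Fig-started web container
$ docker rm codetvjournal_web_1
$ docker run -d --name web0 --link codetvjournal_db_1:db -p 8080:80 codetvjournal_web:latest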

Using this approach, we can run as many web containers as we like, as long as each one has its own port and container name, and let Nginx balance traffic across them in front. The result is deployments with no dropped requests.

Automation

Back to the point: how do we automate the update process above? There are two ways:

  1. Encapsulate the container update, start/stop, and switch-over operations into a single script that can be appended to the traditional deployment pipeline (note: pull new code, run the tests automatically, then deploy automatically) and executed as its final step; a minimal sketch of such a script follows this list.
  2. Alternatively, use a discovery service such as Consul or etcd to manage container updates, start/stop, and discovery. This is the more "advanced" route.
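
Here is a minimal sketch of what such a wrapper script might look like, assuming the container and image names used earlier in the article; the health-check URL and the web0/web1 container names are illustrative assumptions, not part of the original post.

#!/bin/bash
# deploy.sh - rough sketch of a zero-downtime container swap behind Nginx
set -e

IMAGE=codetvjournal_web:latest      # image name taken from the article's example
DB_CONTAINER=codetvjournal_db_1     # running db container, as in the article
OLD=web0                            # hypothetical name of the currently serving container
NEW=web1                            # hypothetical name for the replacement container
NEW_PORT=8081                       # port Nginx is already balancing to

# Rebuild the image from the freshly pulled code
docker build -t "$IMAGE" .

# Start the new container alongside the old one
docker run -d --name "$NEW" --link "$DB_CONTAINER":db -p "$NEW_PORT":80 "$IMAGE"

# Wait until the new container answers (assumed health-check URL)
until curl -fs "http://127.0.0.1:$NEW_PORT/" > /dev/null; do
    sleep 1
done

# Retire the old container; Nginx keeps routing to the healthy one meanwhile
docker stop "$OLD"
docker rm "$OLD"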

So deploying services to production with Docker is not as effortless as you might think. I encourage you to try the approach described above and share your own experience, so that we can all get better at using Docker. Docker is still a very young and very popular product, and it will certainly keep evolving and improving.

From: https://www.codeschool.com/blog/2015...

Address: http://www.linuxprobe.com/docker-production-env.html

