If you use a Dockerfile to define how your container image is built, Fig can help you define the runtime layout of the whole application. Fig encapsulates operations such as adding volumes, linking containers, and mapping ports into a YAML description file; the setup described for the CodeTVJournal example boils down to the following in Fig:
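The configuration file itself did not survive extraction; based on the description that follows (a web container built from the current directory and linked to a PostgreSQL db container), a minimal fig.yml would presumably look something like this (the `postgres` image name is an assumption):

```yaml
web:
  build: .        # build the web image from the Dockerfile in this directory
  ports:
    - "80:80"     # expose port 80 on the host
  links:
    - db          # link to the db container below
db:
  image: postgres # pull the PostgreSQL image from Docker Hub (assumed name)
  ports:
    - "5432"      # expose the default PostgreSQL port
```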
The YAML defines two containers: web and db. The web container is built from the Dockerfile in the current directory, exposes port 80, and is linked to the db container. The db container is created from the PostgreSQL image on Docker Hub and exposes port 5432. With this configuration file in place, Fig can create the containers with a single command and start them as the file describes.
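The command itself is missing from the extracted text; with Fig, starting everything defined in fig.yml is done as follows (the -d flag is discussed next):

```shell
$ fig up -d
```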
Fig starts the linked db container first, so the web container will not fail to reach the database. The -d flag starts the containers in detached mode, which keeps them running after the user logs out of the operating system. See the official Fig website for more configuration options.
Deployment

Now we can easily start Docker containers, but how do we deploy them in a production environment? If both Fig and Docker are installed there, all we need to do is clone the same code repository and run the same fig command to start the containers. The real question, however, is how to update containers that are already serving traffic.
Unfortunately, while Fig starts containers very elegantly, it is not good at updating and restarting a service. You could, of course, pull the latest code from the repository and re-run the fig command above, but while the container is rebuilding and restarting it cannot serve requests. To handle this, we fall back to native Docker commands and put Nginx in front as a reverse proxy (translator's note: a software load balancer).
First, change the port the web container listens on, since Nginx now needs to own port 80. Modify it as follows:
web:
  build: .
  ports:
    - "8080:80"
  links:
    - db
...
With this change to the Fig configuration file, the web container is now published on port 8080. Nginx should then load-balance between ports 8080 and 8081, so the Nginx configuration looks like this:
upstream docker {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

server {
    listen 80;

    location / {
        proxy_pass http://docker;
    }
}
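Assuming Nginx is already installed and running as a system service, the new configuration can be applied without dropping existing connections by reloading rather than restarting (exact invocation may vary by distribution):

```shell
# check the configuration for syntax errors, then reload it
$ sudo nginx -t
$ sudo nginx -s reload
```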
Once Nginx picks up this configuration, it reverse-proxies (load-balances) between ports 8080 and 8081. If either port goes down, Nginx automatically forwards requests to the other, and resumes sending traffic to the failed port once it recovers. This way, we can pull updates from Git and start the new container with the following command:
$ docker run -d --name web1 --link codetvjournal_db_1:db -p 8081:80 codetvjournal_web:latest
Once we confirm that the web1 container on port 8081 is up and serving correctly, we can stop the container on port 8080 and update it in turn. I recommend using native docker commands rather than Fig for this step, because it avoids disturbing the running db container (translator's note: the author is likely referring to the fig.yml written earlier, which also includes the db container's configuration).
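As a sketch of the update sequence described above, using native docker commands (the old container name `codetvjournal_web_1` follows Fig's default naming convention and is an assumption, as is `web2` for the replacement):

```shell
# rebuild the image from the updated code
$ docker build -t codetvjournal_web:latest .

# retire the old container that was serving port 8080
$ docker stop codetvjournal_web_1
$ docker rm codetvjournal_web_1

# start the updated container on port 8080; Nginx keeps serving via 8081 meanwhile
$ docker run -d --name web2 --link codetvjournal_db_1:db -p 8080:80 codetvjournal_web:latest
```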
In this way we can create as many web containers as we like, as long as their host ports and container names do not collide, and let Nginx balance load across them at the front, achieving updates without dropping requests.
Automation

So, how can we automate the update process above? There are two approaches:
- Wrap the container update, start/stop, and switchover operations into a single script that runs at the end of the traditional release process (translator's note: the deployment pipeline of pulling new code, automated testing, and automated deployment);
- Alternatively, use a service-discovery tool such as Consul or etcd to manage container updates, starts, stops, and discovery, which is a more advanced setup.
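The first option, wrapping the manual steps into a single script, might look something like the following sketch (all names, ports, and the fixed sleep are illustrative; a real script would need proper health checks and error handling):

```shell
#!/bin/sh
# rolling-update.sh -- illustrative sketch of the scripted approach
# usage: ./rolling-update.sh <old-container> <new-container> <host-port>
set -e

OLD=$1    # e.g. web1
NEW=$2    # e.g. web2
PORT=$3   # e.g. 8081

# 1. pull fresh code and rebuild the image
git pull
docker build -t codetvjournal_web:latest .

# 2. start the replacement container; Nginx keeps serving via the other port
docker run -d --name "$NEW" --link codetvjournal_db_1:db \
    -p "$PORT":80 codetvjournal_web:latest

# 3. once the new container is answering, retire the old one
sleep 5
docker stop "$OLD"
docker rm "$OLD"
```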
So, deploying services in production with Docker is not as easy as it might seem. I recommend trying the approach described above and sharing your practical experience, which will help everyone use Docker better. Docker is still a very young yet very popular product, and it will certainly keep evolving and improving.
From: https://www.codeschool.com/blog/2015...
Address: http://www.linuxprobe.com/docker-production-env.html