A Docker Novice's Road to a Test Environment

Source: Internet
Author: User
Tags: file copy, Java web, git clone, docker compose, docker run, docker swarm

This article is from the NetEase Cloud community.

By Leaves

My on-and-off journey of learning to build a test environment with Docker has lasted three months, and I want to record the process here: sum up the past, look to the future. The write-up is simple, and I hope any experts passing by will feel free to point out my mistakes.

Following international convention, let's start with some background:

The project team I am on keeps expanding, so quality assurance also needs to scale up with it. Supporting different kinds of QA work requires multiple test environments, but the team has only one, and building a test environment by hand the traditional way costs both time and effort. What could let us spin up environments quickly? Docker, of course, which has been hot in recent years. But I was a Docker novice: I had only skimmed a few introductory posts and typed along with the commands in the official tutorial, yet it all felt like book learning; when it came to real work, I still couldn't get started.

Wang Jianlin, once China's richest man, said: "Set a small target first." Besides Java web applications, our project also has plain Java applications. A Java web application is, bluntly, Tomcat, which I used to deploy by hand, so it did not look too hard. That is where I started: first use Docker to deploy one of the project's Tomcat applications. My Docker knowledge was zero, so my boss recommended a book, "The Docker Book."

The book is easy to follow and well suited to a novice like me; after a rough pass through the first four chapters, I felt ready to hit the road.

The application modules of our test environment are deployed on the NDP platform. Put simply, NDP deploys a web application like this: pull the code from git, compile and package it, find a cloud host with a JDK and Tomcat installed, drop the packaged code into Tomcat, set the port number, and start it up. How would you do that with Docker? After reading the Docker book I understood that a Docker container is essentially equivalent to a cloud host. Our cloud hosts run Linux, so pull a Linux-based image, start a container from it, install a JDK and Tomcat inside, and isn't that the same as our cloud host environment?

The idea was much clearer now. Step one: make the Docker container match the cloud host environment. Searching the web, I found there is a ready-made Tomcat image; Tomcat itself depends on the JDK and is based on a Linux environment, so step one was done immediately, remarkably fast. The next steps were to pull the code, compile and package it, then drop it in and start it. First problem: the code is access-controlled, not something you can fetch casually. Recall how we pulled code on the cloud host: generate an SSH key pair on that host, then upload the public key to GitLab, which grants permission to pull the code. Practice being the sole criterion of truth, I hurried to try it in the freshly started container, and sure enough it worked. But that is just one Docker container; if I spin up several more and have to generate and upload a key pair for each one, I'd be exhausted. What to do? As Marxist philosophy says, look through the phenomenon to the essence: how does Git actually identify you? By matching the private key against the public key. The public key already sits on the Git server, and I have the corresponding private key locally, so whenever I create a container I can simply copy in a private key whose public key is already on Git, and the container comes with Git access out of the box.
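A minimal sketch of these two steps, assuming the official `tomcat` image and an existing key pair at `~/.ssh/id_rsa` whose public key is already uploaded to GitLab (container name and tag are illustrative):

```
# start a container from the ready-made Tomcat image
docker run -d --name web-test -p 8080:8080 tomcat:8

# copy an existing private key into the container so git clone
# works without generating and uploading a new key pair
docker exec web-test mkdir -p /root/.ssh
docker cp ~/.ssh/id_rsa web-test:/root/.ssh/id_rsa
docker exec web-test chmod 600 /root/.ssh/id_rsa
```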

With the code pulled down, the next step is to compile and package, so let's look again at how the NDP platform builds the web app for our test environment. On the NDP platform I found a build.xml for this web application. Well, isn't that a script the Ant tool executes? Reading it carefully, it is indeed a compile-and-package script, and the key step runs `mvn clean install`. OK: install the Ant and Maven tools in the container just started, then execute the build.xml with the `ant` command.
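Assuming the Tomcat image is Debian-based (as the official images are), installing the build tools and running the script inside the container might look like this (repository URL and paths are illustrative):

```
# inside the container: install the build tools
apt-get update && apt-get install -y ant maven git

# pull the code and run the Ant build script
git clone git@gitlab.example.com:team/webapp.git
cd webapp
ant -f build.xml   # internally runs "mvn clean install"
```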

After the `ant` command finished, I found it had generated a compressed archive, which is mainly the output produced by compiling and packaging.

So where should this compiled and packaged output go? Looking once more at the directory hierarchy of a Tomcat application deployed by NDP, I reckoned that keeping the directory hierarchy in the Docker container the same as NDP's should cause no problems. After comparing, the content of the compressed archive was identical to the webroot of the Tomcat application in the test environment. Roar!

In addition, I listed the places where the rest of the directory hierarchy differed, roughly the following few files:

Reading through them roughly: `tomcat` is a script file used to start the Tomcat service, and it calls the `./default/tomcat`, `init-functions`, and `rotatelogs` files. So copy over the required files, and make sure every path the `tomcat` script calls is correct.

Additionally, you need to modify `server.xml` to point `docBase` at the absolute path of the unpacked package.
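A sketch of the relevant `server.xml` fragment; the webroot path here is illustrative:

```
<!-- inside the <Host> element of conf/server.xml -->
<!-- docBase points at the absolute path of the unpacked package -->
<Context path="" docBase="/home/webapp/webroot" reloadable="false"/>
```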

Once the content on both sides was at the same level, it was time to witness the miracle. Run the start command:

./tomcat start

Watching the log, the service came up with no errors or exceptions, a good omen. Then try the browser to see whether it can be accessed: the expected web page opened. The first web application module deployed in Docker was complete.

Given that the project has several different web modules and that they depend on the same underlying environment, the plan was to build a base image, put the shared configuration files into it, and then write a Dockerfile for each module's compile-and-package process. That way there is no need to sit in the container prayerfully typing build commands by hand. The Dockerfile pseudo-code is roughly:

FROM the Tomcat base image
git clone the code
COPY build.xml, config files, etc. to the specified paths in the container
RUN build.xml to generate the package file
COPY the package file to the destination directory
START run the module
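Fleshed out as a real Dockerfile, the pseudo-code might look like the following sketch; the base image name, repository URL, and every path are assumptions, not the project's actual values:

```
# hypothetical base image with JDK, Ant, Maven, and the git key baked in
FROM tomcat-base:latest

# pull the module's code
RUN git clone git@gitlab.example.com:team/webapp.git /build/webapp

COPY build.xml /build/webapp/build.xml
COPY server.xml /usr/local/tomcat/conf/server.xml

# compile and package, then move the artifact into the webroot
RUN cd /build/webapp && ant -f build.xml \
    && cp -r target/webroot /home/webapp/webroot

CMD ["/home/webapp/tomcat", "start"]
```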

With the web app done, the Java app modules were next. Looking at the environment dependencies, they only need a JDK installed, and NDP likewise has a build.xml build script for the Java app. After studying the directory structure NDP uses to deploy Java apps, copying what needed copying and modifying what needed modifying, following the same recipe, it too started successfully.

The next step was to refine the two base images, Tomcat and Java app, and then write each module's Dockerfile.

With its own Dockerfile, each module's image can be built quickly and its deployment started right away. At this point, deploying the project's application modules with Docker was complete.

Thanks to a graceful classmate, both beautiful and talented, who was a great help throughout the process above.

--------------------

Pits I stepped in along the way:

1. When executing the build.xml script, I hit an error similar to the following:

The cause is a required jar that cannot be found in the remote Maven repository. First check that, after installing Maven in the Docker container, you remembered to point settings.xml at the internal Maven mirror. If you still hit the error, the mirror may simply not have that jar, or a network or other problem may be preventing the download. Common jars can usually be downloaded successfully, but some jars that our own modules depend on cannot be downloaded at all. If you have them in a local Maven repository, copy them into the container's Maven repository; or, inside the container, pull the code of the depended-on module and run `mvn clean install` to install it into the container's Maven repository.
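For reference, a sketch of the `settings.xml` fragment that points Maven at an internal mirror; the mirror URL is illustrative:

```
<!-- ~/.m2/settings.xml -->
<settings>
  <mirrors>
    <mirror>
      <id>internal</id>
      <mirrorOf>*</mirrorOf>
      <!-- illustrative URL of the internal Maven mirror -->
      <url>http://maven.example.com/repository/public</url>
    </mirror>
  </mirrors>
</settings>
```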

2. After starting the application in the container, it could not connect to Redis:

The Redis address in the code is the cloud host's private IP. The container's host and the Redis host sit under different tenants, so the private IP is unreachable; they do, however, belong to the same machine room and can reach each other through the machine-room IP. After consulting with development, the IP addresses of the dependent services in the code were changed to the machine-room addresses.

3. After copying the GitLab private key into a Docker container, executing the `git clone` command produced the following:

The hint is clear enough: the permissions on the private key file `.ssh/id_rsa` are too open. Tighten them with `chmod 0600 id_rsa`, and after that `git clone` works.
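A minimal reproduction of the fix, using a placeholder file with SSH's default key name:

```shell
# create a placeholder private key file with overly open permissions
install -m 644 /dev/null id_rsa

# tighten permissions so the SSH client stops refusing the key
chmod 0600 id_rsa

# verify the mode: prints 600
stat -c '%a' id_rsa
```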

--------------------

However, we soon found problems with the one-Dockerfile-per-module approach. First, there are too many images to build: every module builds its own image, each image is fairly large, and they eat a lot of disk space. Second, because of the peculiarities of our project, some modules depend on other projects' jars at build time. To save trouble at the time, I had baked those dependency jars straight into the base image, so whenever a dependency jar changed, the base image had to be rebuilt, and with it the images of every other module. What a life. Clearly this was no long-term solution; another way was needed.

From the earlier exploration we knew the NDP deployment model was our beacon, having already diverted its deployment scripts. So how does the NDP platform solve the packaging dependencies between modules? A cursory study showed that NDP packages everything in one place and then distributes the packaged archives to other cloud hosts for deployment. Following suit, we could dedicate one container to unified compilation and packaging, and then distribute the artifacts to the other containers for deployment.

That way, we only need three base images: a compile-and-package image with just the JDK, Maven, Ant, and the other tools the builds depend on; a Tomcat image; and a Java environment image. Then write a deployment script, pseudo-code as follows:

get compressed file    # the module's packaged archive
get config file        # the module's deployment configuration file
start module           # start and run the module

How do we fetch the module's package file? The `wget` command can download files from a remote server, provided the network between the different containers is interoperable. We can manually create a Docker network and join the containers to it; with all the containers on the same network, there should be no connectivity problems. For the deployment configuration files we used `git pull`, keeping all modules' deployment profiles on git.
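The fetch steps above might be sketched as follows; the network, container, and file names are all illustrative, and the package is assumed to be served over HTTP from the build container:

```
# put all containers on one user-defined network
docker network create test-net
docker network connect test-net build-box   # the unified build container
docker network connect test-net web-box     # a deployment container

# inside a deployment container: fetch the artifact and the config
wget http://build-box:8000/webapp.tar.gz    # package served by the build container
git pull                                    # deployment profiles kept on git
```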

This way, deploying a web module means starting a container from the Tomcat image and running the script, and deploying a Java app module means starting a container from the Java image and running the script. There is no more writing a Dockerfile and building an image per module, which also saves a lot of image-building space.

But while image-building space is saved, the containers still need room to run. Our cloud hosts have a basic configuration of a 4-core CPU and 8 GB of memory, and the project has some 20 modules; putting all of them on one cloud host would be unbearable. We had to bring in multiple cloud hosts as a cluster and deploy the containers across it. That calls for a container orchestration tool, which takes care of things such as:

    • Choosing the machine best suited to run a container, e.g. the one with the most free resources

    • When a machine fails, automatically redeploying its containers onto other nodes

    • When a new machine joins the cluster, rebalancing the allocation of containers

    • If the container fails, restart it.

    • ...

Docker itself ships with built-in container orchestration, called Docker swarm mode. There is plenty of material about swarm mode online, so it is not explained here; the general process is:

    1. Create a network bridge: create a docker_gwbridge bridge on each cloud host. Note that the bridge must be created before the swarm cluster.

    2. Create the cluster: pick one cloud host as the manager and initialize the swarm.

    3. Join the cluster: run the `docker swarm join` command on the other cloud hosts to join them to the swarm.

    4. Create services: create a service with `docker service create`, which works much like `docker run`; the service can be created to run the deployment script directly and carry out the deployment.
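The four steps above might look like this in commands; the subnet, addresses, token, and image names are illustrative, and the exact bridge options depend on your environment:

```
# 1. on each host: create the gateway bridge first
docker network create --subnet 172.18.0.0/16 docker_gwbridge

# 2. on the manager host: initialize the swarm
docker swarm init --advertise-addr 10.0.0.1

# 3. on each worker host: join with the token printed by "swarm init"
docker swarm join --token <worker-token> 10.0.0.1:2377

# 4. create a service that runs the deployment script
docker service create --name webapp --replicas 1 tomcat-base /deploy.sh
```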

When creating a service with `docker service create` you have to specify a pile of parameters, a long command to type every time. Instead you can use a Docker Compose YAML template, similar to the following:

Version: "3.2" Services:compile:image:dockercloud/hello-world deploy:replicas:1 Placement:con Straints:-Node.hostname==docker-test Ports:-"8080:8080" networks:-Overlay_netnetworks:ove Rlay_net:driver:overlay

Reference article: "Docker Compose configuration file Details"

At this point, we have solved the interdependence problems between modules during compiling and packaging and implemented cluster deployment of the application modules through Docker swarm mode. The next goal is to split out the other basic services these modules depend on, such as the database, Redis, and ZooKeeper; with those, we'd have a complete, independent test environment.


This article is from the NetEase Cloud community and is published with the authorization of its author, Leaves.

