"Editor's note" Private builds are all that developers do not rely on for any external resources (databases, MQ, file servers, etc.) to complete the build, run, and test of the application in the local environment. This engineering practice provides great convenience for research and development and delivery, and it also requires a high level of application architecture, configuration management, and resource utilization effectiveness of the local environment. Our share will explain how to build a lightweight, private build environment based on Docker and integrate it into a continuous integration system.
Docker is mostly discussed as a change to how servers are managed and operated; examples of applying Docker in day-to-day development work are comparatively rare. Recently our team has been using Docker to improve the development team's private builds, with good efficiency gains. This also shows that the same tool can be played in different ways; I am recording the experience in the hope that it offers some inspiration to peers in the industry.
First, some project background. Our system is an accounting and equity calculation system for alternative investments. It is based on a client/server architecture; the primary development language is Java, and the front end uses the AWT/Swing framework. The application has more than 20 years of history, and the code base now contains more than 1.4 million lines of Java code and close to 1.2 million lines of SQL.
There are currently nearly 100 developers working across more than 3 time zones around the world, averaging about 400 code commits per day.
Although the system has a long history and the code base is very large, good coding standards, rigorous development process constraints, and a highly automated continuous integration system have kept the code well maintained and the technical debt very light. For more than 1.4 million lines of Java code, unit test coverage exceeds 60% and total technical debt stands at around 600+ days, which is a remarkable achievement for a hundred-person development team.
Another important safeguard of delivery quality is large-scale, continuous automated regression testing (acceptance tests, abbreviated AT). We use the Jemmy test framework; the test repository contains 8000+ test cases, and more than 10 PCs serve as Jenkins slaves. Organized by test suite, all 8000+ test cases run every night, so the development team receives a daily regression test report and corresponding code coverage report based on the latest code, which is a reliable help in detecting and resolving quality issues in time. The longest test suite takes more than 12 hours for a single run.
The environment setup on these PCs that run AT is identical to the setup on developers' machines; in other words, when there is a practical need, any developer's machine can serve as an independent, complete AT environment for running regression tests, or even full system integration tests. This way of setting up and deploying the application is called a private build. I tried to search for a definition of private build but found no description of a similar practice, so here is the definition I would attempt to give:
A private build means that a developer, without relying on external shared resources (databases, MQ, file servers, etc.), can complete all of the work of building, running, and testing a software application in the local environment, in a unified and simple way.
In projects involving large-scale team collaboration, a frequent complaint is that "the environment is never set up." Setting aside the cost of building a complete, working application from scratch, it is difficult to maintain a separate environment that smoothly integrates code changes and stays stable while the code base is actively evolving. When a project team needs an independent environment for integrating with others or for some special kind of testing (performance, security, etc.), they often find the cost of "updating" the environment daunting. If there is instead "a unified, simple way to complete all of the work of building, running, and testing the software application" (perhaps even with one click), this fear disappears without a trace.
Private builds effectively reduce the cost for new team members of learning the working environment, and reduce the impact on delivery quality caused by inconsistent environment settings within the team. A standardized private build setup means a separate test environment can be stood up quickly at very low cost; as a system evolves and accumulates more and more regression test cases over time, the importance of being able to scale out test environments rapidly is self-evident.
Of course, a private build plays its role best when incorporated into a complete continuous integration system. Our continuous integration concept map is as follows:
The part in the middle linking the "local development environment" and "regression test environment" boxes is our private build environment. A developer's local environment includes both the IDE and the ability to quickly launch a complete AT environment consisting of the server, the client, and an Oracle Express database.
Truly achieving a "one-click" build environment places high demands on application architecture, configuration and change management, and efficient use of local environment resources. That could be discussed as a completely independent topic, so it will not be expanded here. This system has run stably for many years and has contributed greatly to the project's development. In the last few years, however, we have faced some new challenges:
- To maintain continuous delivery capability (i.e., the health of the code), the AT run must finish before work starts the next day; this time window is about 4-8 hours.
- As the business grows, so does the size of the development team, and the number of code changes submitted each day keeps increasing. This also means more AT test cases and longer total run time. Ensuring these tests finish within the fixed time window requires more test machines. The old AT environments required dedicated PC desktops, so the resources available for running AT were limited, and we wanted to keep the number of PCs "dedicated" to running AT under control, because a dedicated environment means dedicated people to maintain it.
To resolve this contradiction, our first attempt was to package the server and database into an Ubuntu VirtualBox image (the AT run needs an AWT/Swing environment, which was easier to keep on Windows). The private build topology before the change was as follows:
After the change, it became this:
The new way of working unified the setup of the AT environment's main modules (server and database) through a VirtualBox virtual machine image, reducing the cost of building the AT environment. In actual use, however, one PC running the VirtualBox image plus the client-side AT pushed CPU and memory usage above 70% (16 GB memory, dual-core four-thread CPU). In essence it was still one machine running one AT environment, and the build of a subset of the modules (the client side) remained directly tied to the setup of the PC. Although standardized build commands and the introduction of Vagrant further simplified environment setup and AT startup, it still felt imperfect: the mode was particularly stretched when a developer needed to run an AT and their own IDE at the same time. That is when Docker came into our sights.
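From a developer's point of view, the Vagrant-based startup was already quite simple. Purely as an illustration (the box name and file location below are hypothetical, not our actual setup):

    # Import the prepared VirtualBox image once, then boot it on demand.
    vagrant box add at-env /shared/boxes/at-env.box
    vagrant up    # starts the VM containing the server and database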
Because of Docker's lightweight nature, the required processes can run with far fewer resources. If all the modules of the AT environment could run on the same PC through Docker, and one PC could support multiple simultaneous AT runs, then in theory every developer's PC could become an independent environment without compromising daily development work. To achieve this goal, we also faced some practical challenges:
- The company's internally certified Linux version was 3.0, so we re-created a VirtualBox image based on Linux 3.1.
- Due to the company's network security restrictions, Boot2Docker could not be installed on the PCs, so we had to use the heavier VirtualBox to host the Docker engine. The final topology became a composite one: Oracle Express, the server, and the client are packaged into three different Docker images launched on the same Docker engine.
- When all modules are booted through Docker, the client-side AT must be driven in a Linux environment. On the one hand, the client had to be recompiled against Linux-based AWT/Swing; on the other hand, TightVNC had to be installed in the client's Docker environment to support user interface rendering so that the AT could be driven (a sketch of this follows the list).
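To make the last point concrete, below is a minimal sketch of how a VNC-backed X display can be prepared inside a client container so that Swing UI tests can run without a physical desktop. The package name, display number, and geometry are assumptions for illustration, not our actual scripts:

    # Install a VNC server inside the client container (password setup omitted).
    apt-get update && apt-get install -y tightvncserver
    # Start it; this creates X display :1 for the Swing UI to render on.
    vncserver :1 -geometry 1280x1024 -depth 24
    # Point Java/AWT at that display before launching the client and the AT driver.
    export DISPLAY=:1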
After resolving these issues, the new topology is as follows:
Inside one VirtualBox image there are three Docker images: Oracle, server, and client. The different Docker images are combined, built, and started through Docker Compose. The client renders the Swing UI through TightVNC, making it possible to run AT tests in a Linux environment. For Docker Compose, refer to the Docker blog (http://blog.docker.com/2014/12 ... apps/) or https://docs.docker.com/compose/.
Simply put, Docker Compose is Docker's official container orchestration tool: with a simple YAML configuration, it can build and run multiple container services according to their specified dependencies. It provides the necessary directives such as links, ports, and volumes; we use only the most basic ones, to specify the boot order of the three images:
    oracle:
      image: $machine_name:$port/oracle
    server:
      image: $machine_name:$port/server
    client:
      image: $machine_name:$port/client
      links:
        - oracle
        - server
      env_file:
        - ./$app.env
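With a file like this, the whole AT environment is started and stopped using standard Compose commands, for example (a minimal usage sketch, not our exact scripts):

    docker-compose up -d    # starts oracle, then server, then client
    docker-compose logs     # inspect the output of the running containers
    docker-compose stop     # shut the AT environment down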
The startup order here is Oracle first, then the server, and finally the client. In practice, in the Docker-based private build environment, a PC running one AT environment sits below roughly 40% CPU and memory utilization, and running two AT environments at about 80%. A developer can both develop and run AT on a single machine without interference. More importantly, all AT modules are encapsulated in one virtual machine environment, forming a fully standardized "private build" environment; our expectations were fully met.
Finally, here is the process for setting up our Docker-based private build environment:
- Make three Docker images, corresponding to Oracle, client, and server respectively. The client image includes a TightVNC server to provide desktop capability in the Linux environment;
- Install the Docker images in a VirtualBox virtual machine;
- Distribute the VirtualBox image to the development team;
- Register the VirtualBox virtual machine as a Jenkins slave;
- Start two sets of AT environments in each virtual machine image (see the sketch after this list).
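As an illustration of the first, second, and last steps, the flow might look like the following shell sketch; the image names, file paths, and use of docker save/load are assumptions matching the general approach, not our actual scripts:

    # Build the three module images (one Dockerfile per module assumed).
    docker build -t $machine_name:$port/oracle oracle/
    docker build -t $machine_name:$port/server server/
    docker build -t $machine_name:$port/client client/

    # Export each image for installation into the VirtualBox VM,
    # then "docker load" them inside the VM (repeat for server and client).
    docker save $machine_name:$port/oracle | gzip > oracle.tar.gz

    # Inside the VM: start two independent AT environments by giving
    # each Compose run its own project name.
    docker-compose -p at1 up -d
    docker-compose -p at2 up -d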
The topology is as follows:
In this way, each developer's PC provides an independent test environment for the developer's own AT debugging and testing during working hours, and when idle it can run two sets of AT as a regression test environment. Moreover, every AT start begins from a very clean private build environment, avoiding the AT instability caused by environmental differences.
The next step is to build our own Docker Registry and to manage and scale the Docker processes through Docker Swarm. The target structure we have in mind looks like this:
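As a rough illustration of the registry half of that plan, a private registry can be stood up with Docker's official registry image; the port and tag below are common defaults, not our final setup:

    # Run a private Docker Registry on port 5000.
    docker run -d -p 5000:5000 --name registry registry:2
    # Tag and push a module image to it.
    docker tag $machine_name:$port/server localhost:5000/server
    docker push localhost:5000/server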
Looking back at the whole experiment, frankly, the technical challenges were not as big as one might think. What is harder to cross is the first step: breaking out of the comfort zone, shaking off the invisible shackles of working in a large company, and looking for new ways to solve problems. Echoing the beginning: this is a common engineering problem, and many people likely meet similar problems in their daily work. Our practice shows that, with a suitable angle of approach, Docker can be a very down-to-earth solution even in a scenario of this small scale.
Q&A
Q: Windows 2016 will support running Docker natively; once it does, can the VirtualBox/Ubuntu layer be skipped?
A: The reason we use VirtualBox/Ubuntu is not that Windows natively lacks support; in fact, Boot2Docker can be used on Win7. The main reason is that we do not have permission to install non-certified software on company machines, and the only locally certified virtual machine is VirtualBox...
Q: What is the difference between Oracle Express and Oracle Enterprise Edition? Can I use Oracle Express for the development and test environments and Oracle Enterprise Edition for production?
A: Oracle Express and Oracle Enterprise differ considerably. The private build environment uses Express to limit resource usage, but the integration and test environments use Enterprise.
Q: What operating system did you select to run Docker?
A: Ubuntu LTS.
Q: How do you handle Oracle database storage in Docker?
A: It is mounted through a volume; since it is a small test library, the amount of data is small.
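(For illustration only, such a mount might look like the line below; the host and container paths are assumptions.)

    # Mount a host directory as the Oracle data directory (paths hypothetical).
    docker run -d -v /home/dev/oradata:/u01/app/oracle/oradata $machine_name:$port/oracle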
Q: Have you always used VirtualBox, or have you tried deploying under VMware? Why did you choose the former? Thanks!
A: We have not used VMware, mainly because what the company certifies is VirtualBox, and a specific version of it at that. For our deployment there is no big difference between VMware and VirtualBox. Another consideration is the desire for a loosely coupled development environment (not strongly dependent on a particular product), and VirtualBox is open source, free, and sufficient.
Q: How do you coordinate among the Jenkins master, the Jenkins slaves, and the various Docker containers?
A: Jenkins has a Docker plugin that can handle the scheduling and coordination among the three. But we do not use it; because our practice is rather particular, we wrote our own shell scripts.
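(For illustration, the shell glue for such a Jenkins job might look roughly like this; the project naming and the AT driver script are hypothetical.)

    # Bring up a fresh AT environment per build, run the suite, tear down.
    docker-compose -p "at_$BUILD_NUMBER" up -d
    ./run_at_suite.sh    # hypothetical AT driver
    docker-compose -p "at_$BUILD_NUMBER" stop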
Q: Regarding Oracle's resource requirements, have you measured performance under Docker? Is there comparative data?
A: We did not test it deliberately, but after running for a while, Docker-based testing is faster than Windows-desktop-based testing. To quantify a little: with VirtualBox, one set of private environments running AT on a 16 GB, 2-core 4-thread development machine put memory and CPU occupancy near 80%; with Docker, running one private AT environment puts memory and CPU usage around 40%, basically saving half of the resources.
Q: With Vagrant used in the development environment and Docker in the production environment, which has the most pitfalls, and how did you solve them?
A: At present Docker is not used in our production environment, only to assist development and regression testing. The technology itself has few pitfalls, because the ways we use it are very direct; checking the Docker docs plus Google basically solves everything.
Q: With containers providing the AT environment, is the code mounted inside the container via a volume? How are the outputs of a run, such as test results and compiled binaries, delivered?
A: Yes, the code is mounted via a volume. Our outputs are a text file containing the test results, plus logs; the text file is automatically uploaded to a shared location, and the logs are collected via Logstash and sent to Elasticsearch. As for binaries, our AT output does not include binaries; we have separate build jobs that produce binaries, which are uploaded to the central shared location.
Q: Is the database in the development and test environments built fully or incrementally each time? How is the benchmark data managed and imported?
A: It is a full build every time. The benchmark data comes from the production environment; after cleaning out the sensitive data we get a lightweight test library, and applying the latest code to it produces our daily test library. This test library is exported from Oracle Enterprise Edition and imported into the private build environment with Oracle's export/import tools, ensuring that all of our test environments are consistent and clean. The benchmark data is not exported daily; this is done only once every week or two.
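(A minimal sketch of such a round trip using Oracle's classic export/import tools; the user names, connect strings, and dump file name are assumptions.)

    # On the Enterprise Edition side: export the cleaned baseline schema.
    exp testuser/password@entdb FILE=baseline.dmp OWNER=testuser
    # In the private build environment (Oracle Express): import it.
    imp testuser/password@xe FILE=baseline.dmp FROMUSER=testuser TOUSER=testuser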
========================================================
The above content is organized from the group sharing session on the evening of February 2, 2016. Speaker:
Zhuo, R&D manager and Vice President at Bank of America, PhD in Computer Engineering, with over 10 years of experience in IT system development and technical team management, and a strong curiosity about new things: continuous integration, DevOps, and any other tools and methods that help me and my team improve. DockOne organizes weekly technology shares; interested readers are welcome to add WeChat: Liyingjiesz to join the group, and you can leave us a message with topics you would like to hear.