1. Background
Agile development has been popular for a long time, and more and more enterprises are now practicing the people-centric, iterative, incremental development philosophy it advocates. In this context, the first goal of introducing Docker is to use the virtualization it provides to build a reusable development environment for the team: the environment can be shared with every member of the project as an image, which simplifies environment setup. However, even before Docker appeared, development-environment distribution technologies such as Vagrant already allowed developers to reproduce a comparable environment-configuration process. So in the development context alone, Docker's advantages do not fully come into play. In my view, Docker's real advantage is that it simplifies the construction of CI (continuous integration) and CD (continuous delivery) pipelines, allowing developers to devote more effort to development itself.
Each company has its own development technology stack, and we need to keep improving it in light of our actual situation in order to optimize our build process. Before taking the first step, we should first draw up a blueprint, so that the work that follows can proceed with confidence and be realized quickly.
This sequence diagram outlines all aspects of the current agile development process. Working from the blueprint framework the diagram provides, this article focuses on the practical experience of introducing Docker technology at each stage.
2. Creating a continuous release team
When a development team introduces Docker, the biggest problem is that there is no industry standard to follow. We often take "best practices" as a slogan and introduce a variety of toolchains, losing focus in our use of Docker: the team ends up spending a lot of time sizing Docker and learning tools rather than choosing the right tools to build a sustainable product-development team. In this situation, "ease of use" is a good criterion for selecting Docker-related tools. When introducing Docker, the first thing a development team needs to solve is getting team members to master the Docker command line as quickly as possible. Once the team is familiar with the command line, it needs to address several key choices:
1) Base image selection, e.g. phusion/baseimage
2) Configuration-management tools for building Docker images, e.g. Ansible, Chef, Puppet
3) Host operating system selection, e.g. CoreOS, Atomic, Ubuntu
A base image contains the minimal set of operating-system command-line tools and libraries; once adopted, every application image is built on top of it. As the official default, Ubuntu is the most readily available base image, but it has not been trimmed or optimized for containers, so consider a third-party alternative such as phusion/baseimage. If you choose a base image from the RHEL/CentOS branch, note that features such as the SELinux security framework and the Device Mapper block-level storage driver are not shared with the Ubuntu branch. Also note that different OS branches use completely different methods to trim the system down, so be cautious when choosing an operating system.
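As a minimal sketch of what building on such a third-party base image looks like (the `phusion/baseimage` name is real, but the tag, the `CMD`, and the commented application layer are illustrative assumptions), the snippet below writes a Dockerfile and checks its base-image line:

```shell
# Write a minimal Dockerfile that builds on phusion/baseimage.
# The tag and the commented application layer are illustrative only.
cat > Dockerfile <<'EOF'
# phusion/baseimage: a minimal Ubuntu image adjusted for container use
# (a correct init process, runit, syslog and cron).
FROM phusion/baseimage:latest

# Use baseimage's init system so orphaned processes are reaped correctly.
CMD ["/sbin/my_init"]

# Application layers would be added on top, for example:
# COPY app /opt/app
EOF

# Building the image would then be: docker build -t myteam/app-base .
grep '^FROM' Dockerfile
```

With the base image fixed, every application image in the team starts from the same line, which is what makes the development environment shareable.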
Configuration-management tools are used here to build images from a Dockerfile. We should consider the team's current situation and choose a tool the team is already familiar with as the common one. There are many options; Ansible, a rising star among them, is very simple and easy to use for configuration management and is recommended as a reference.
The host system is the operating environment for the Docker daemon. From a deployment point of view it is an ordinary single-purpose OS on which we run only the Docker daemon and cluster tools, so the lower the host system's overhead, the better. The recommendation here is CoreOS, currently the leanest host system available. Red Hat's open-source Atomic host, which offers branches based on Fedora, CentOS and RHEL, is another good candidate. A further option is to start from a minimally installed operating system and customize the host system yourself; if your team has that capability, it is worth considering.
3. Building a continuous integration system
When the development team pushes code to the Git repository, I am sure every developer hopes for a system that can deploy the application to the application server for them, saving unnecessary manual effort. However, complex application-deployment scenarios make this idea anything but easy to implement.
First, we need a build system that supports Docker; Jenkins is recommended here. Its main strengths are that it is an open-source project, easy to customize, and easy to use. Jenkins can easily install a wide variety of third-party plug-ins, which makes integrating third-party applications quick and simple.
Using the job-trigger mechanism of the Jenkins system, we can easily create all kinds of integration jobs. However, without a uniform standard for defining jobs, a project's jobs quickly become chaotic and hard to maintain, and the development team cannot reap the benefits of a well-integrated system, which is not the result we expect. Agile practice therefore offers a concept that supports continuous delivery: the deployment pipeline. With Docker technology, this approach becomes easy to understand and implement.
Jenkins's pipeline deployment visualizes the deployment process as a long pipeline with one node per stage, i.e. one job; only after a job completes does the pipeline enter the next stage. It takes the following form:
(Image source: Google image search)
As the figure shows, after introducing Docker every stage of the pipeline can be modularized with Docker: each task is packaged into a purpose-built image that runs the desired job. Each task image can be created in the developer's own environment; a similar scenario looks like this:
(Image source: Google image search)
So after adopting Docker, task modularity is defined naturally. Through the pipeline view you can see the execution time of each step, and developers can also define strict performance criteria for each task according to its needs; these then serve as a reference baseline for subsequent testing work.
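The stage-by-stage behaviour described above can be sketched as a small script in which each pipeline node runs one Docker-packaged task (the stage names and image names are hypothetical; `DRY_RUN` makes the script print the commands so the sketch runs without a Docker daemon):

```shell
#!/bin/sh
# Each pipeline stage runs one task image; a stage must succeed before
# the next one starts. DRY_RUN=1 (the default here) only prints commands.
DRY_RUN=${DRY_RUN:-1}

run_stage() {
  stage=$1
  image=$2
  cmd="docker run --rm $image"
  if [ "$DRY_RUN" = "1" ]; then
    echo "[stage: $stage] $cmd"
  else
    $cmd || { echo "[stage: $stage] failed"; exit 1; }
  fi
}

# Hypothetical stages of a deployment pipeline.
run_stage build   myteam/build-image
run_stage test    myteam/test-image
run_stage release myteam/release-image
```

Timing each `run_stage` call (for example with `time`) yields the per-step execution figures that the pipeline view displays.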
4. The best release environment
Once the application has passed testing, we need to publish it to the test environment and the production environment. How to use Docker sensibly in this phase is also a challenge: the development team needs to consider how to build a scalable release environment. This environment is in fact a private Docker-based cloud, and what we ultimately want is a PaaS-style cloud service that provides an API. To build such a PaaS, here are a few of the most popular tools you can use to put together an enterprise-private PaaS service.
1) Apache Mesos + Marathon
Apache Mesos is a resource-management and scheduling system for clusters; using it in production makes application clustering possible. It is an Apache open-source project originally launched by Twitter. In this cluster system, we can use ZooKeeper to start three Mesos master services; the three masters exchange information through ZooKeeper and elect a leader, after which requests sent to the other two masters are forwarded to the leader. When a Mesos slave starts, it reports its memory, storage and CPU resources to the Mesos master. Mesos frameworks were designed for job-style execution such as data analysis and do not run long-lived services such as an Nginx web server, so we need Marathon to support that requirement. Marathon provides its own REST API; we can create the following configuration file, Docker.json:
```json
{
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "libmesos/ubuntu"
        }
    },
    "id": "ubuntu",
    "instances": 1,
    "cpus": 0.5,
    "mem": 512,
    "uris": [],
    "cmd": "while sleep 10; do date -u +%T; done"
}
```
and then call:
```shell
curl -X POST -H "Content-Type: application/json" \
     http://<master>:8080/v2/apps -d @Docker.json
```
This creates a web service on the Mesos cluster. For concrete Marathon usage, refer to the official examples.
(Image source: Google image search)
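Later operations, such as scaling the app, go through the same Marathon REST API. A hedged sketch follows: the helper only composes the request, so it runs without a live master; the `<master>` placeholder from above becomes a `MASTER` variable, and the app id `ubuntu` follows the Docker.json example.

```shell
#!/bin/sh
# Compose (but do not send) the Marathon request that scales an app.
# Piping the printed command to sh would perform the actual call.
MASTER=${MASTER:-localhost}

scale_app() {
  app=$1
  count=$2
  echo "curl -X PUT -H 'Content-Type: application/json'" \
       "http://$MASTER:8080/v2/apps/$app" \
       "-d '{\"instances\": $count}'"
}

scale_app ubuntu 3
```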
2) Google Kubernetes
Kubernetes is Google's container-cluster management tool. It introduces two concepts:
- Pods: each pod is a collection of containers deployed on the same host, sharing an IP address and storage space, for example grouping Apache and Redis into one set of containers.
- Labels: key/value tags attached to pods that make it easy to locate pods and coordinate the calls between the services running in them.
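To make the two concepts concrete, here is a minimal pod manifest written in JSON (which kubectl accepts alongside YAML). This is a sketch in the current v1 API schema; the names, labels and images are illustrative assumptions mirroring the Apache-plus-Redis example above:

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "web",
    "labels": { "app": "web", "tier": "frontend" }
  },
  "spec": {
    "containers": [
      { "name": "apache", "image": "httpd" },
      { "name": "redis", "image": "redis" }
    ]
  }
}
```

A label selector such as `app=web` is then how services and controllers locate this pod and route calls to it.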
The official architecture-design documentation introduces the design ideas behind each component in more depth. This is the only open-source container solution launched on the basis of experience deploying containers in production, and it is foreseeable that it will become the industry reference standard for container-management systems.
(Image source: Google image search)
3) Panamax
Faced with this dazzling array of cluster-management tools, how to manage Docker containers on a single host is also a problem worth solving, because Docker's low memory overhead means it is not unusual to deploy hundreds or thousands of containers on a single server. Panamax provides a user-friendly web interface for installing software, making deployment easier, and offers a rich set of container templates that make it possible to create services online. For example, you can request a host from DigitalOcean, install Panamax, and start its background service; then, through the Panamax web interface, install Nginx, MySQL, Redis and other service images to quickly assemble an application scenario resembling a production environment. All operations are done in the web interface, and developers need only focus on development itself.
5. Conclusion
Docker's integrated deployment solution is a flexible and simple set of tools. It overcomes the complexity and difficulty of earlier cluster tools by deploying software applications under the unified concept of the Docker application container. By introducing Docker technology, a development team facing a complex production environment can, in light of its own actual situation, tailor a software release plan suited to its own infrastructure.
The Way of Docker (IV): Docker's integrated test and deployment