DockOne WeChat Share (82): CI&CD Practice Based on Docker Technology

"Editor's note" Docker technology is widely used to package software with its dependent environment and deliver it in a mirrored manner, allowing the software to run in a "standard environment". This feature can be applied to continuous integration, enabling the continuous delivery of native support container cloud platforms. This article outlines the basic workflow of CI&CD, the overall software framework and the implementation principles.

Background overview

Continuous integration is a software development practice in which team members integrate their work frequently; each integration is verified by an automated build, including compilation, release, and automated tests, so that integration errors are identified early. Continuous delivery means frequently delivering new versions of the software to a quality team or to users for review; once the review passes, the version is released to the production environment.

Docker is a container engine originally built on LXC. Since being open-sourced in 2013 it has been extremely popular in the open source community because of its ease of use and high portability. Docker packages software together with its dependencies, delivers it as an image, and lets the software run in a "standard environment", which fits cloud computing requirements very well. Major IT companies have followed suit, Docker container startups are springing up, and Docker has created a new container cloud industry.

Docker technology is widely applicable. For example, its isolation can provide development and testing with a lightweight, independent sandbox environment for integration testing; it can also accelerate local development and build workflows, making them more efficient and lightweight, while developers can build, run, and share containers, easily promote them to the test environment, and eventually into production. Caicloud Circle is a container-native continuous integration and continuous delivery SaaS product built on these Docker features.

CI&CD Basic Process

Circle provides a rich REST API for the web application to call;

When a VCS repository is associated with the Circle service through the API, a commit to the VCS triggers a Circle build;

The VCS provider component pulls the repository source code from the VCS;

Based on the CI configuration in the source code's configuration file, the required CI microservice containers are started for integration testing;

After the integration tests pass, the pre_build stage compiles the executables in the specified build-environment container;

The build stage copies the executables into the specified runtime-environment container and pushes the result to the registry as an image;

The post_build stage performs follow-up operations after publishing, such as pushing static resource files that the application depends on at runtime to a CDN;

After the image is published, it can be automatically deployed to Caicloud, Kubernetes, Mesos, Swarm and other PaaS platforms;

Logs of the build process can be pulled through the log server;

When the build finishes, the result can be emailed to the user.

Overall framework

The overall Circle framework is shown below. Circle runs in containers, pushes and pulls build-process logs through a Kafka/ZooKeeper container cluster, and stores its underlying data in a MongoDB container cluster. Inside Circle:

The API Swagger component provides an online help document describing the Circle API;

The API server component receives user HTTP requests, generates asynchronous pending events into the event queue of the Async Event Manager component, and responds to the user;

When the Async Event Manager component detects a newly created pending event, it calls the utility functions provided by the CI&CD component to assemble a work pipeline according to the event's operation type; when it detects that the event has completed, it updates the corresponding document record in the database;

A group of Docker daemons runs outside the Circle containers as a queue. Each work pipeline uses a Docker daemon exclusively, isolating users and events; the CI&CD component dispatches an idle Docker daemon from the queue to execute the pipeline task;

Process logs are pushed to the specified Kafka topic; the log server component provides users with a WebSocket server for fetching logs, pulling real-time logs from Kafka and pushing them to the user.
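
To make the log path concrete, the following is a minimal sketch (not Circle's actual code) of a log server that bridges Kafka and WebSocket clients; the broker address, the per-pipeline topic passed as a query parameter, and the use of the Shopify/sarama and gorilla/websocket libraries are all assumptions.

// logserver_sketch.go - illustrative only: stream a Kafka topic to WebSocket clients.
package main

import (
	"log"
	"net/http"

	"github.com/Shopify/sarama"
	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	// Allow any origin for the sketch; a real service would check this.
	CheckOrigin: func(r *http.Request) bool { return true },
}

func logHandler(w http.ResponseWriter, r *http.Request) {
	// Each build pipeline is assumed to write to its own topic, passed as ?topic=...
	topic := r.URL.Query().Get("topic")

	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return
	}
	defer conn.Close()

	consumer, err := sarama.NewConsumer([]string{"kafka:9092"}, nil) // broker address is an assumption
	if err != nil {
		log.Println("kafka:", err)
		return
	}
	defer consumer.Close()

	pc, err := consumer.ConsumePartition(topic, 0, sarama.OffsetOldest)
	if err != nil {
		log.Println("consume:", err)
		return
	}
	defer pc.Close()

	// Push every log line from Kafka to the WebSocket client as it arrives.
	for msg := range pc.Messages() {
		if err := conn.WriteMessage(websocket.TextMessage, msg.Value); err != nil {
			return // client went away
		}
	}
}

func main() {
	http.HandleFunc("/ws/logs", logHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

A client would connect to ws://host:8080/ws/logs?topic=<pipeline-topic> and receive log lines as they are produced.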

Circle can also be deployed across multiple nodes, as shown in the deployment diagram below. Each cube represents a node; HAProxy serves as a reverse proxy for load balancing and SSL encryption and distributes API requests to the Circle nodes.

Implementation principle

In the web page, the user authorizes Circle through OAuth to pull the list of the user's VCS repositories and creates a service associated with the chosen repository. If the VCS is Git-based, Circle can also call the VCS API to create a webhook, so that events such as commits, tags, and pull requests in the repository call the Circle API and trigger CI&CD.
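
To illustrate the trigger path, here is a minimal sketch of the receiving side of such a webhook; the /api/webhook route, the payload fields, and the triggerBuild helper are hypothetical stand-ins, since the article does not show the actual Circle API.

// webhook_sketch.go - illustrative only: receive a Git push event and trigger a build.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// pushEvent keeps only the fields the sketch needs; real VCS payloads carry much more.
type pushEvent struct {
	Ref        string `json:"ref"`   // e.g. refs/heads/master or refs/tags/v1.0
	After      string `json:"after"` // commit SHA
	Repository struct {
		CloneURL string `json:"clone_url"`
	} `json:"repository"`
}

// triggerBuild is a hypothetical stand-in for calling the Circle build API.
func triggerBuild(repo, ref, commit string) {
	log.Printf("trigger CI&CD: repo=%s ref=%s commit=%s", repo, ref, commit)
}

func webhookHandler(w http.ResponseWriter, r *http.Request) {
	var ev pushEvent
	if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	// Commits, tags and pull requests would each map to a build with different parameters.
	go triggerBuild(ev.Repository.CloneURL, ev.Ref, ev.After)
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/api/webhook", webhookHandler)
	log.Fatal(http.ListenAndServe(":7099", nil))
}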

The specific CI&CD steps of Circle are defined in a caicloud.yml file in the code repository, divided into five sections: integration, pre_build, build, post_build, and deploy. After Circle pulls the code from the repository, it parses the caicloud.yml file and performs the corresponding actions according to the configuration.

The integration section runs the integration tests. The YAML defines the name of the language image used to compile the executable, the environment variables, the start commands (which can run test scripts to exercise the application), and the configuration of the microservice containers that the integration tests depend on. Circle uses the Docker remote API against the allocated Docker daemon to start the dependent microservice containers, then launches the integration container to compile the code and execute the command lines; the container exits when the commands finish, and the exit code of the command line indicates the integration result. (A sketch of this Docker remote API interaction follows the configuration example below.)
# integration section
integration:
  image: golang:v1.5.3          # image name
  environment:                  # environment variables
    - key=value
  commands:                     # commands executed in order after the container starts
    - ls
    - pwd
  services:                     # dependent services
    postgres:                   # service name
      image: postgres:9.4.5     # service image
      environment:              # environment variables
        - key=value
      commands:                 # commands executed in order after the service container starts
        - cmd1
        - cmd2
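
As a rough illustration of the integration flow described above, the sketch below drives the classic Docker remote API directly over HTTP: create a one-shot container, start it, wait for it, and treat the exit code as the test result. The daemon address is a placeholder, the image and commands are taken from the example configuration, and starting the dependent service containers (omitted here) would use the same endpoints.

// integration_sketch.go - illustrative only: run a one-shot container via the Docker remote API
// and use its exit code as the integration result.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

const daemon = "http://127.0.0.1:2375" // assumed TCP address of the allocated Docker daemon

func post(path string, body interface{}, out interface{}) error {
	buf := &bytes.Buffer{}
	if body != nil {
		if err := json.NewEncoder(buf).Encode(body); err != nil {
			return err
		}
	}
	resp, err := http.Post(daemon+path, "application/json", buf)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if out != nil {
		return json.NewDecoder(resp.Body).Decode(out)
	}
	return nil
}

func main() {
	// 1. Create the integration container from the configured image and commands.
	var created struct{ Id string }
	err := post("/containers/create", map[string]interface{}{
		"Image": "golang:v1.5.3",                   // from the integration section
		"Env":   []string{"key=value"},             // environment variables
		"Cmd":   []string{"sh", "-c", "ls && pwd"}, // commands run in order
	}, &created)
	if err != nil {
		log.Fatal(err)
	}

	// 2. Start the container, then 3. wait for it to exit.
	if err := post("/containers/"+created.Id+"/start", nil, nil); err != nil {
		log.Fatal(err)
	}
	var waited struct{ StatusCode int }
	if err := post("/containers/"+created.Id+"/wait", nil, &waited); err != nil {
		log.Fatal(err)
	}

	// The exit code of the command line indicates the integration result.
	fmt.Println("integration passed:", waited.StatusCode == 0)
}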

The pre_build section performs the compilation work; you can use either a Dockerfile or the key-value configuration in the YAML. The YAML can define the path and file name of the Dockerfile used for the build, the base image that the pre_build container launches from, the environment variables, the start commands, and the executable directories or file names to output when compilation finishes. If both a Dockerfile and a container configuration are defined, the Dockerfile is preferred. CI&CD first parses the specified Dockerfile or the YAML content to obtain the container start configuration, starts the container accordingly, executes the command lines to compile the executables, and exits the container; if compilation succeeds, the specified output files are copied out with the Docker copy API. (A sketch of that copy call follows the configuration example below.)
# pre_build section
# If at least one of context_dir and dockerfile_name is configured, the image is built with the Dockerfile first
# If neither context_dir nor dockerfile_name is configured, the build uses the container configuration below
pre_build:
  context_dir: prebuild                 # Dockerfile path; files the build depends on cannot be outside this directory; defaults to the repo root
  dockerfile_name: dockerfile_prebuild  # Dockerfile file name; defaults to Dockerfile
  image: golang:v1.5.3                  # pre_build image name
  volumes:                              # mounted data volumes
    - .:/root                           # mount the repo files to the specified directory
  environment:                          # environment variables
    - key=value
  commands:                             # commands executed in order after the container starts
    - ls
    - pwd
  outputs:                              # output files; can be omitted if the generated output files are under the working path
    - file1
    - dir2
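
To show what "copy out with the Docker copy API" can look like, here is a minimal sketch that fetches an output file from a container through the archive endpoint of the Docker remote API, which returns the file wrapped in a tar stream. The daemon address, container ID, and file path are placeholders.

// copy_output_sketch.go - illustrative only: copy a build output out of a container
// using the Docker remote API archive endpoint (returns a tar stream).
package main

import (
	"archive/tar"
	"io"
	"log"
	"net/http"
	"net/url"
	"os"
)

const daemon = "http://127.0.0.1:2375" // assumed Docker daemon address

func main() {
	containerID := "pre-build-container-id" // placeholder
	path := "/root/file1"                   // an entry from the outputs list

	resp, err := http.Get(daemon + "/containers/" + containerID +
		"/archive?path=" + url.QueryEscape(path))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The endpoint wraps the requested file(s) in a tar archive; unpack it locally.
	tr := tar.NewReader(resp.Body)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		if hdr.Typeflag != tar.TypeReg {
			continue // the sketch only extracts regular files
		}
		out, err := os.Create(hdr.FileInfo().Name())
		if err != nil {
			log.Fatal(err)
		}
		if _, err := io.Copy(out, tr); err != nil {
			log.Fatal(err)
		}
		out.Close()
	}
}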

The build section builds the publish image based on the specified Dockerfile. In this Dockerfile you can add the executables output by the pre_build section to the publish environment; the pre_build and build steps thereby separate the software's build environment from its runtime environment. After the build completes, the image is pushed to the specified image registry. (A sketch of the build-and-push calls follows the configuration example below.)
# build section
# At least one of context_dir and dockerfile_name must be configured
build:
  context_dir: .                       # Dockerfile path; files the build depends on cannot be outside this directory; defaults to the repo root
  dockerfile_name: dockerfile_publish  # Dockerfile file name; defaults to Dockerfile
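
The build-and-push sequence can be illustrated with two Docker remote API endpoints, as in the sketch below; the daemon address, image name, registry credentials, and the pre-packed context.tar are placeholders, and a real implementation would also check the streamed status messages for errors.

// build_push_sketch.go - illustrative only: build an image from a tarred Dockerfile context
// and push it to a registry via the Docker remote API.
package main

import (
	"encoding/base64"
	"io"
	"log"
	"net/http"
	"os"
)

const daemon = "http://127.0.0.1:2375" // assumed Docker daemon address

func main() {
	// /build expects the build context (Dockerfile plus dependent files) as a tar stream.
	ctxTar, err := os.Open("context.tar") // placeholder: the packed context_dir
	if err != nil {
		log.Fatal(err)
	}
	defer ctxTar.Close()

	repo, tag := "registry.example.com/demo/app", "v1" // placeholder image name

	// Build: ?t= tags the result, ?dockerfile= selects the dockerfile_name inside the context.
	resp, err := http.Post(daemon+"/build?t="+repo+":"+tag+"&dockerfile=dockerfile_publish",
		"application/x-tar", ctxTar)
	if err != nil {
		log.Fatal(err)
	}
	io.Copy(os.Stdout, resp.Body) // the build output is streamed back as JSON messages
	resp.Body.Close()

	// Push: the registry credentials go in an X-Registry-Auth header (base64-encoded JSON).
	auth := base64.StdEncoding.EncodeToString([]byte(`{"username":"user","password":"secret"}`))
	req, _ := http.NewRequest("POST", daemon+"/images/"+repo+"/push?tag="+tag, nil)
	req.Header.Set("X-Registry-Auth", auth)
	resp, err = http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	io.Copy(os.Stdout, resp.Body)
	resp.Body.Close()
}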

The post_build section can define follow-up operations to run after the image is published. For example, a user can build an image containing the static resources the program depends on, start a container from that image in the post_build phase, and push the resources to a CDN by executing command lines.
# post_build section
# If at least one of context_dir and dockerfile_name is configured, the image is built with the Dockerfile first
# If neither context_dir nor dockerfile_name is configured, the build uses the container configuration below
post_build:
  context_dir: prebuild                  # Dockerfile path; files the build depends on cannot be outside this directory; defaults to the repo root
  dockerfile_name: dockerfile_postbuild  # Dockerfile file name; defaults to Dockerfile
  image: golang:v1.5.3                   # image name
  environment:                           # environment variables
    - key=value
  commands:                              # commands executed in order after the container starts
    - ls
    - pwd

When publishing completes, the deploy phase begins. The user configures the deployment plan in advance in the web interface, specifying the cluster, partition, application and container names. Circle calls the application deployment API provided by the cluster, deploys the newly built image into the application, and queries the deployment status; if the deployment fails, it is rolled back to the previous successful image.
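
The cluster deployment API itself is not shown in the article, so the sketch below only illustrates the rollback-on-failure logic around a hypothetical Deployer interface: deploy the new image, poll the status, and fall back to the last successful image if the deployment fails or times out.

// deploy_rollback_sketch.go - illustrative only: rollback-on-failure logic around a
// hypothetical cluster deployment API.
package main

import (
	"log"
	"time"
)

// Deployer is a hypothetical stand-in for the cluster-provided application deployment API.
type Deployer interface {
	Deploy(app, image string) error
	Status(app string) (string, error) // e.g. "running", "failed", "pending"
}

func deployWithRollback(d Deployer, app, newImage, lastGoodImage string) error {
	if err := d.Deploy(app, newImage); err != nil {
		return d.Deploy(app, lastGoodImage)
	}
	// Poll the deployment status for a bounded amount of time.
	for i := 0; i < 30; i++ {
		state, err := d.Status(app)
		if err == nil && state == "running" {
			return nil // new image is live
		}
		if state == "failed" {
			log.Printf("deploy of %s failed, rolling back to %s", newImage, lastGoodImage)
			return d.Deploy(app, lastGoodImage)
		}
		time.Sleep(10 * time.Second)
	}
	log.Printf("deploy of %s timed out, rolling back to %s", newImage, lastGoodImage)
	return d.Deploy(app, lastGoodImage)
}

// fakeDeployer lets the sketch run without a real cluster.
type fakeDeployer struct{ current string }

func (f *fakeDeployer) Deploy(app, image string) error    { f.current = image; return nil }
func (f *fakeDeployer) Status(app string) (string, error) { return "running", nil }

func main() {
	d := &fakeDeployer{}
	if err := deployWithRollback(d, "demo-app", "app:v2", "app:v1"); err != nil {
		log.Fatal(err)
	}
	log.Println("now running:", d.current)
}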

Some highlighted features

Multi-Docker-daemon builds for user and event isolation

Because the server does not handle only a single request at a time, event tasks running concurrently on the same node could interfere with each other. Circle uses multiple Docker daemons to isolate user events. A queue of Docker daemons runs on each node; a daemon is scheduled and assigned to a single event task, and leftover containers and images are cleaned up after use to keep the build environment clean. The number of queue elements is limited: when no Docker daemon is idle, an event task enters a queued waiting state, and a task that waits for more than 2 hours times out and fails. When the user cancels a build, only the Docker daemon executing that build task is killed, and a new Docker daemon is then started and added to the idle queue.
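
A minimal sketch of this queue-and-timeout behaviour is shown below, using a buffered channel as the pool of idle Docker daemon addresses; the addresses are placeholders, the 2-hour limit mirrors the text, and the clean-up inside release is only indicated by a comment.

// daemon_pool_sketch.go - illustrative only: schedule event tasks onto a fixed pool of
// Docker daemons, queueing when none is idle and failing after a 2-hour wait.
package main

import (
	"errors"
	"fmt"
	"time"
)

type daemonPool struct {
	idle chan string // addresses of idle Docker daemons
}

func newDaemonPool(addrs []string) *daemonPool {
	p := &daemonPool{idle: make(chan string, len(addrs))}
	for _, a := range addrs {
		p.idle <- a
	}
	return p
}

// acquire blocks until a daemon is idle, or fails once the task has queued too long.
func (p *daemonPool) acquire(timeout time.Duration) (string, error) {
	select {
	case addr := <-p.idle:
		return addr, nil
	case <-time.After(timeout):
		return "", errors.New("event task timed out waiting for an idle Docker daemon")
	}
}

// release cleans up and returns the daemon to the idle queue.
func (p *daemonPool) release(addr string) {
	// A real implementation would remove leftover containers and images here,
	// or restart the daemon if the build was cancelled.
	p.idle <- addr
}

func main() {
	pool := newDaemonPool([]string{"tcp://10.0.0.1:2375", "tcp://10.0.0.2:2375"}) // placeholder addresses

	addr, err := pool.acquire(2 * time.Hour) // queued tasks fail after 2 hours
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("running pipeline on", addr)
	pool.release(addr)
}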

Multi-module joint release for microservices

Microservices are getting more and more attention in blogs, social media, discussion groups, and conference presentations. A microservices architecture is a specific approach to designing software applications: large software is split into a suite of independently deployable services. To manage the dependencies among the modules of a microservice system and to integrate and publish multiple modules together, Circle implements a joint release feature.

First, create a service for each module's code repository; then set up a tree of dependencies among the services by drag-and-drop in the UI, and Circle converts the tree into a linear release sequence and stores it. When the user clicks joint release, Circle launches multiple CI&CD pipelines to test and build the modules separately (the integration + pre_build + build + post_build operations). After all module builds complete, the modules are deployed to the container cluster applications in the stored linear release order (the deploy operation).
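
One straightforward way to flatten such a dependency tree into a linear release order (dependencies released before the modules that depend on them) is a post-order walk, sketched below with placeholder module names; Circle's actual data model is not shown in the article.

// release_order_sketch.go - illustrative only: flatten a module dependency tree
// into a linear release order with dependencies released before dependents.
package main

import "fmt"

type module struct {
	name string
	deps []*module // modules this one depends on
}

// releaseOrder does a post-order walk so every dependency precedes its dependents.
func releaseOrder(root *module) []string {
	visited := map[*module]bool{}
	var order []string
	var walk func(m *module)
	walk = func(m *module) {
		if visited[m] {
			return
		}
		visited[m] = true
		for _, d := range m.deps {
			walk(d)
		}
		order = append(order, m.name)
	}
	walk(root)
	return order
}

func main() {
	// Placeholder modules: a web front end depending on two back-end services,
	// both of which depend on a shared library module.
	lib := &module{name: "shared-lib"}
	users := &module{name: "user-service", deps: []*module{lib}}
	orders := &module{name: "order-service", deps: []*module{lib}}
	web := &module{name: "web", deps: []*module{users, orders}}

	fmt.Println(releaseOrder(web)) // [shared-lib user-service order-service web]
}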

Image security scanning

There are two common methods of file analysis: static analysis and dynamic analysis. We use static analysis to examine the image's file system. Vulnerability information is obtained from the Common Vulnerabilities and Exposures (CVE) databases of the Linux distributions.
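
The paragraph above only names the approach, so the following is a very rough sketch of static analysis over an unpacked image file system: read the Debian package database and flag installed versions that appear in a CVE table. The rootfs path and the CVE entry are placeholders, and a real scanner would support multiple package formats and a full CVE feed.

// image_scan_sketch.go - illustrative only: statically inspect an exported image
// file system and flag installed package versions that appear in a CVE table.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// vulnerable maps "package version" to a CVE ID; real data would come from
// the distribution's CVE feed.
var vulnerable = map[string]string{
	"openssl 1.0.1e-2": "CVE-2014-0160", // placeholder entry
}

func main() {
	// Path inside an unpacked image root file system (e.g. produced by `docker export`).
	f, err := os.Open("rootfs/var/lib/dpkg/status")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var pkg string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		switch {
		case strings.HasPrefix(line, "Package: "):
			pkg = strings.TrimPrefix(line, "Package: ")
		case strings.HasPrefix(line, "Version: "):
			key := pkg + " " + strings.TrimPrefix(line, "Version: ")
			if cve, ok := vulnerable[key]; ok {
				fmt.Printf("%s is affected by %s\n", key, cve)
			}
		}
	}
}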

Future work

Operation and Maintenance

At present, Circle is actually deployed as a distributed multi-node service, and each node uses Docker Compose to run its containers. Upgrades and operations are troublesome, requiring an SSH connection to each node to run commands. We plan to deploy Circle onto the Caicloud ClaaS technology stack in the near future and use Kubernetes's powerful operational capabilities to improve Circle's productivity. This may require some modifications to the existing framework, along the following lines:

Each cube represents a pod. NGINX is used as the reverse proxy and for TLS encryption. Within circle-master, the api-server component provides the API service to the web, and the log-server component provides the real-time log service. When the API is called to start a build, a new build task is created and sent to the worker manager component; the worker manager records the task information in etcd and creates a new circle-worker pod to execute the task. circle-worker contains the existing CI&CD components: it starts containers to execute the integration, pre_build, build, post_build and deploy steps, pushes the intermediate process logs to Kafka, synchronizes the task status to etcd, and exits when the task finishes; information that the components need to persist is written to MongoDB.
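
To make the proposed flow more concrete, here is a sketch of a worker manager creating a one-off circle-worker pod with client-go; the namespace, image name, and environment variable are assumptions, and the Create call uses the context-aware signature of recent client-go releases.

// worker_manager_sketch.go - illustrative only: create a one-off circle-worker pod
// for a build task using client-go.
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // the worker manager itself runs in a pod
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	taskID := "build-12345" // placeholder task identifier, also recorded in etcd

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "circle-worker-" + taskID},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // the pod exits when the task finishes
			Containers: []corev1.Container{{
				Name:  "circle-worker",
				Image: "caicloud/circle-worker:latest", // placeholder image name
				Env:   []corev1.EnvVar{{Name: "TASK_ID", Value: taskID}},
			}},
		},
	}

	if _, err := clientset.CoreV1().Pods("circle").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("created pod for task", taskID)
}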

Concurrency

CI&CD tasks consume a lot of system resources, and Circle's server resources are limited, so how can we support many concurrent build tasks? Our idea is to let users add their own worker nodes to the Circle cluster: Circle keeps the scheduling and management logic of CI&CD tasks, while the users' own worker nodes carry the operational load of actually executing them. The CI&CD pipeline is split out and packaged into a circle-worker image based on the Docker-in-Docker image. After users install and run Docker on their own machines, they give Circle the Docker remote API address and the machine's resource configuration; Circle verifies that the machine is usable and pulls it into the cluster. When a user has a CI&CD task to run, Circle schedules the user's node to run the circle-worker container, which obtains the task information from Circle, executes it, and returns the result when finished.
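
A sketch of the verification step might look like the following: ping the user-supplied Docker remote API address and read the reported resources from /info before accepting the node. The address is a placeholder, and a real check would also validate TLS, the API version, and the declared resource figures.

// node_verify_sketch.go - illustrative only: check a user-supplied Docker remote API
// address and read its resources before accepting the node into the cluster.
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	addr := "http://203.0.113.10:2375" // placeholder address supplied by the user

	// 1. The daemon answers /_ping with "OK" if it is reachable and healthy.
	resp, err := http.Get(addr + "/_ping")
	if err != nil {
		log.Fatal("node unreachable: ", err)
	}
	body, _ := ioutil.ReadAll(resp.Body)
	resp.Body.Close()
	if string(body) != "OK" {
		log.Fatal("unexpected ping response: ", string(body))
	}

	// 2. /info reports the node's resources, which can be compared with what the user declared.
	var info struct {
		NCPU     int
		MemTotal int64
	}
	resp, err = http.Get(addr + "/info")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("accepting node: %d CPUs, %d bytes of memory\n", info.NCPU, info.MemTotal)
}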

Q&A

Q: Thanks for sharing. May I ask why you chose to build Circle instead of using Jenkins? Personally I feel Jenkins is quite powerful, especially with its rich plugins.

A: Our aim is to develop a container-cloud-native CI&CD platform, so that it can better support deployment and application management on container clusters in the future.
Q: Thanks for sharing. Since CI involves different environments (development, testing, etc.), automatic compilation and deployment may need different handling for different environments. Can these all be defined and identified with the same yml?

A: Yes, the whole CI process involves different environments; the problems caused by differing environments are exactly one of the reasons we adopted container technology. When the same container image is deployed in different environments, there is environment-specific configuration, such as the database address. This needs to be handled externally by configuration management: our complete container solution includes a configuration center, which solves such problems through environment variables and file mounts.
Q: If I use an environment image for compiling, will the final compiled image be large because of the many dependent packages?

A: I'm not sure what you mean by "environment image". The size of the final image basically depends on the programming language the application uses, the middleware it depends on, and other locally dependent libraries or files.
Q: Is the code package placed in the image or on the host? How do you solve the problem of slow image distribution?

A: The code is downloaded to the host and mounted into the container to compile the executable; the executable is then copied out of the container and packaged into the publish-environment image. We run a proxy registry on the host to speed up image pulls: the images that the host has pulled are cached in the local proxy registry, and subsequent pulls are served from the local cache.
The above content is organized from the group sharing on the evening of September 13, 2016. The speaker, Chen, holds a master's degree in software engineering from Shanghai Jiao Tong University, joined Caicloud in April 2016, and is a senior software engineer in charge of the development of CI&CD and other SaaS products. DockOne organizes weekly technology sharing sessions; to join the group, add WeChat: Liyingjiesz, and leave us a message about topics you would like to hear or share.
