DockOne WeChat Share (108): CI Workflows Based on Jenkins and Kubernetes

"Editor's word" Jenkins, as the most popular continuous integration tool, in combination with the use of container technology, kubernetes cluster based on how to play a new capacity, in the application of micro-service based on the provision of better CI way, it is worth each of our developers to continue to explore. This share mainly introduces how we use Jenkins Pipeline, container and Kubernetes deployment capabilities, by increasing the use of the text template engine, expand the Kubernetes config capability, Develop CI workflow for product development.

Jenkins and Kubernetes

As the most popular continuous integration tool, Jenkins has a large user base, strong extensibility, and a rich plugin ecosystem. Since the introduction of its Pipeline feature, Jenkins can implement a variety of complex workflows through a rich step library. And with the rise of Docker, Jenkins has added Docker support, allowing pipeline steps to run inside containers.

Kubernetes, meanwhile, is iterating faster and faster, its popularity in the container world keeps growing, and each release adds more new features. As the current mainstream container management platform, its powerful capabilities need no further introduction.

Containerization and Microservice Design Thinking

Containerization does not imply microservices: a traditional monolithic application can also be containerized, though it is then hard to enjoy the full benefits of containers. Nor do microservices require containers: after an application is split into microservices, the system can still be built and deployed through traditional operations. By combining microservices with containers, we can take full advantage of both, gaining elastic scaling, simplified deployment, easy expansion, technology-stack compatibility, and so on.

When splitting an application into microservices, we mainly consider function points, the objects being managed, the staffing of the development team, the product roadmap, and so on. For example, given the size of the existing team, its allocation, and each member's skill level, you can control the number of service modules accordingly and schedule the teams that develop them. For function points and near-term product planning, specific functions can be grouped into one service module, with the product developed over iterative versions by extending that module's capabilities; or some functions can be temporarily integrated into one module and split out into further modules as functionality grows.

As for module development, because container technology is used, the choice of language or framework can be left to each module's developers. Within the team we do not make this mandatory, but we do make recommendations, to avoid an oversized technology stack that would make later maintenance difficult.

Within our team, we focus on only two back-end development languages, with clear choices for the corresponding frameworks and major development libraries. Module APIs use REST and provide at least Level 2 of the REST maturity model.

Compilation and unit testing in a container environment

Our entire CI workflow is driven by Jenkins, using Jenkins Pipeline. First, a pipeline organizes a job into stages, so modules can reuse their common parts and stages can be added incrementally as development complexity grows. Second, the pipeline's Groovy script is kept in source control, so the process can be adjusted according to the code's function points and the script itself is version-managed.

With this tooling, each module's compilation and unit-test environments are also put into containers. Each module can prepare its own compilation, unit-test, and runtime environments according to its own characteristics and requirements; the corresponding Dockerfiles are kept in the code repository, and different Dockerfiles build the images for the different environments. Jenkins can now also use the Docker Pipeline plugin to run steps inside a container, so the actual compile-and-test process works like this:
    1. Build the compilation-environment image from its Dockerfile.
    2. Start a container from the compilation-environment image and compile inside it; the intermediate build artifacts stay in the workspace.
    3. Build the unit-test-environment image from its Dockerfile.
    4. Start a container from the unit-test-environment image and run the unit tests inside it; the test scripts come from the code repository and use the intermediate artifacts produced at compile time.
    5. Build the actual release image from the release Dockerfile and upload it to the image registry.
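The five steps above can be sketched roughly as follows; the Dockerfile names, script paths, module name, and registry are hypothetical stand-ins, and the details vary per module:

```shell
# 1. Build the compilation-environment image.
docker build -f Dockerfile.build -t mymodule-build .

# 2. Compile inside a container; intermediate artifacts land in the mounted workspace.
docker run --rm -v "$PWD":/workspace -w /workspace mymodule-build ./script/build.sh

# 3. Build the unit-test-environment image.
docker build -f Dockerfile.test -t mymodule-test .

# 4. Run unit tests inside a container, reusing the compiled artifacts.
docker run --rm -v "$PWD":/workspace -w /workspace mymodule-test ./script/test.sh

# 5. Build the release image and upload it to the registry.
docker build -f Dockerfile -t registry.example.com/dev/mymodule:"${BUILD_NUMBER}" .
docker push registry.example.com/dev/mymodule:"${BUILD_NUMBER}"
```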


Because the compilation and unit-test environments change infrequently, the two steps that build those images can also be extracted into independent CI jobs that are triggered manually when needed.

Service Deployment and Upgrade

In the CI process, after compiling and packaging, the service must be started and tested. We use Kubernetes Deployments and Services: after each CI run compiles and packages, the build number is used as the image tag to complete the image upload and archive; the same tag is then written into the Deployment already created in Kubernetes, and the Deployment's rolling update completes the upgrade.
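As a minimal sketch of this tag-and-upgrade step (the registry, Deployment, and container names here are hypothetical):

```shell
# Tag the freshly built image with the Jenkins build number and push it.
TAG="${BUILD_NUMBER:-manual}"
docker tag mymodule:latest registry.example.com/dev/mymodule:"${TAG}"
docker push registry.example.com/dev/mymodule:"${TAG}"

# Point the existing Deployment at the new tag; Kubernetes performs a rolling update.
kubectl set image deployment/mymodule mymodule=registry.example.com/dev/mymodule:"${TAG}"
kubectl rollout status deployment/mymodule
```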

Extending Kubernetes Service Templates and Configurations

In practice, we found that deploying services through Kubernetes Deployment upgrades still had many inconveniences: for example, the name-collision problem in Kubernetes, the image-tag change required for each Deployment upgrade, and other places the CI process is likely to change. With a name collision, a second Deployment with the same name cannot be created, so testing a version by starting a new Deployment requires changing the name; the same applies to the Service associated with the Deployment, because after starting a Deployment under a new name, a corresponding Service must also be started to expose it.

The image-tag change required by a Deployment upgrade must be made every time CI generates a new tag: each time, the tag in the corresponding YAML file has to be modified to the value generated by the actual CI run before the upgrade feature can complete the service upgrade.

To address these issues, we use a text template engine. The YAML file used for deployment or upgrade is itself written as a template; the places that may change, or that must follow the CI process, use template placeholders. The concrete template data is either obtained through Jenkins in the CI process, read from a specific configuration file, or taken from specific input parameters.

At deployment time, the template data is also written into Kubernetes as a ConfigMap, so the same data can be consumed by Kubernetes in environment variables or startup commands. After the text template engine merges template and data, the generated YAML file is used for the subsequent Kubernetes operations.
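As an illustration of the idea (not our actual engine), the template-and-configuration merge can be sketched with plain shell tools; the placeholder syntax {{NAME}}/{{IMAGE_TAG}} and the file names are hypothetical:

```shell
# A deployment template with placeholders for the fields that change per CI run.
cat > deploy.yaml.tpl <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{NAME}}
spec:
  template:
    spec:
      containers:
      - name: {{NAME}}
        image: registry.example.com/dev/{{NAME}}:{{IMAGE_TAG}}
EOF

# Configuration for this CI run (in our case generated during the Jenkins job).
NAME=mymodule
IMAGE_TAG=build-42

# Merge template and configuration: replace each placeholder with its value.
sed -e "s/{{NAME}}/${NAME}/g" \
    -e "s|{{IMAGE_TAG}}|${IMAGE_TAG}|g" \
    deploy.yaml.tpl > deploy.yaml

grep 'image:' deploy.yaml
```

The generated deploy.yaml is then fed to the usual Kubernetes operations; the same data can also be written into a ConfigMap.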

With this approach, we separate what is deployed into templates and configurations. The templates usually stay unchanged while the service architecture, image names, startup mode, and configuration parameters have no major changes; different configurations are then used flexibly to complete service upgrades, pull up new Deployments, point at different data stores, and modify each module's internal configuration.

This method extends what we can modify from what a ConfigMap alone covers (environment variables and startup commands) to the startup name, labels, image, and other fields inside the YAML file, solving the name-collision, image-modification, and label addition/change problems we encountered when using Kubernetes.

Automated testing

After compiling, packaging, and upgrading the deployed services through Jenkins, automated tests can be triggered. For the test framework we chose Robot Framework. The test scripts obtain each service's exposed port through the Kubernetes Service, then run the tests against the APIs in turn according to the test scripts.

The test scripts come partly from the module code repositories, as API tests submitted by each module's developers for their own modules, and partly from the testers, who submit the cross-module tests. Our automated testing is not yet very complete; we have only built a basic operating framework that connects with the overall process.

Release

Since the product itself consists of several images, releasing the product amounts to publishing images. After the tests pass, we can simply use the image-copy capability: the versions of the relevant images that passed testing are copied between registries, from the internal registry used for development and testing to the external release registry, which completes the release of the version. By controlling the tag during the copy, the images are published under the specified version number.
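Setting aside any registry-side copy feature, the same promotion can be sketched with plain pull/tag/push; the registry names, module name, and version here are illustrative:

```shell
VERSION=1.2.0

# Pull the tested image from the internal dev/test registry.
docker pull registry-dev.example.com/dev/mymodule:build-42

# Retag it under the release registry with the chosen version number, and push.
docker tag registry-dev.example.com/dev/mymodule:build-42 \
           registry-release.example.com/release/mymodule:"${VERSION}"
docker push registry-release.example.com/release/mymodule:"${VERSION}"
```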

Summary

Above, we described how we established a CI process in our own development work. There is nothing particularly novel in the process, and nothing special to highlight about how the tools are used; we simply built a CI flow adapted to our own development needs. This article introduces how our CI process was established, and it is also an opportunity for us to learn from others and exchange ideas with you.

Q&A

Q: Can you share a few plugins for Jenkins and Docker integration?

A: Docker Pipeline, Docker Plugin, and docker-build-step, among others.
Q: What is the general idea behind your image copy feature? At present our testing, pre-release, and release stages share one registry, implemented through an auxiliary tag scheme.

A: The image copy feature is, simply put, cloning images between different projects. In our registry design, images at different stages (development, testing, ready for release) are classified into different projects, and the copy feature lets a product release, i.e. image publication, move quickly between stages. This feature is available in AppHouse, which can be downloaded for trial.
Q: Are your internal and external registries synchronized in real time? Are your configuration files managed through a configuration center, or passed as environment variables between images?

A: They are not synchronized in real time; we use the replication capability of our company's registry product. The runtime configuration in the current development process is implemented through our own enhanced configuration-file capability, and the configuration that modifies the application's deployment YAML is also generated into ConfigMaps.
Q: Is there a simple sample we can follow as practice?

A: Our Kubernetes-based CI/CD product will be released soon, and a corresponding demo platform will be available; please stay tuned, thank you!
Q: How do you replace the placeholders in the YAML deployment template? And how are old-version containers handled?

A: We use a self-developed text template engine, similar in spirit to the template engines in web frameworks: it merges template and configuration, using the configuration's key-value pairs to replace the keyed placeholders in the template. As for old containers, because we use the Kubernetes Deployment rolling update, the old version's containers/Pods are deleted by Kubernetes itself once the upgrade completes.
Q: How can a traditional monolithic application be containerized, in stages?

A: There are many ways to carry out a containerization or microservice transformation: gradually refactoring and splitting the application, building new modules as containerized microservices, developing new modules to replace function points of the original application, and so on. Each team can choose the process that suits its own transformation.
Q: Can databases be containerized?

A: A database can be containerized by running it in a container, but this is still rare in real-world projects.
Q: There is a scenario where two services have a dependency: service A depends on service B. How do you guarantee the startup order of A and B?

A: A better design is for service A to handle its own detection of and connection to service B, rather than depending strongly on startup order.
Q: Where does the template for the Kubernetes Service come from: is it orchestrated by the user, or prepared in advance? And how is the configuration data stored?

A: Because the templates and configurations are currently only used to launch our own applications, the templates are written for our own applications. Configuration data is stored as files, but when the text engine merges template and configuration it can also accept parameters as configuration.
Q: Where is the boundary between CI and CD? Does that distinction make sense?

A: CI leans toward application compilation, code checking, unit testing, and so on; CD leans toward application deployment and the running process. In our development process, after compiling and packaging we actually run the application for testing, which can also be considered a CD for testing purposes.
Q: What are the main benefits of containerizing the Jenkins build process?

A: Different programs have different dependencies on the compilation environment. With plain Jenkins, environment preparation is done on the Jenkins node; with containers, environment preparation is packaged into images, which further reduces the dependency on the Jenkins node. At the same time, environment changes can be controlled by the developers themselves.
Q: Do multiple compilation environments mean different images? How does the pipeline handle the compilation environment?

A: Yes. Because each module of our product has its own development language and framework, each module maintains its own compilation-environment image. When compiling with the pipeline, the compilation environment is used by running a container from the image and compiling inside it.
Q: Is Jenkins itself also deployed in Docker? If Jenkins runs inside Docker, how do you use Docker inside Docker to perform CI?

A: Yes, we are also exploring running Jenkins itself in a container. In this case, the Jenkins container runs with root permissions and mounts docker.sock and the Docker data directory into the container.
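A sketch of such a launch command, assuming the official jenkins/jenkins image and default paths (the exact mounts depend on the host setup):

```shell
# Run Jenkins in a container with access to the host's Docker daemon.
docker run -d --name jenkins \
  -u root \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```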
Q: When the whole process is built with a pipeline, does building the environment image before compiling take a long time? Is there an optimization plan?

A: The build-environment image does not change often, so its build usually completes directly from the cache. We have also extracted the packaging of the compilation-environment image into a separate job, which removes the need to build the compilation environment during the actual compilation process.
Q: How are Jenkins and Kubernetes users managed? My expectation is that users can only see their own resources, with no access to other users'.

A: We only use these two tools in a development environment that is not exposed externally, so user management is not an issue for us. In the CI/CD product our company is developing, we have a design for this based on our own understanding.
Q: How is continuous integration implemented in Jenkins? For example, how are commits to different repositories such as GitHub and GitLab used as triggers, and how is the version number controlled?

A: A Jenkins CI process can be triggered in many ways: by code commits, on a timer, or manually. Version-number control also has many options, such as using the job's build number, Git's commit hash, timestamps, and so on.
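For instance, a tag-selection step along those lines might look like this (BUILD_NUMBER is Jenkins' built-in variable; the fallbacks are the other options just mentioned):

```shell
# Prefer the Jenkins build number; otherwise fall back to the Git commit
# hash, and finally to a timestamp.
TAG="${BUILD_NUMBER:-}"
if [ -z "$TAG" ]; then
  TAG="$(git rev-parse --short HEAD 2>/dev/null || date +%Y%m%d%H%M%S)"
fi
echo "image tag: $TAG"
```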
Q: After containerization, releases still go through Jenkins, and releasing with Docker does not feel more convenient than Jenkins alone. Besides portability, what reasons make containerizing a project worthwhile?

A: Containerizing an application is mostly about the capabilities gained once it runs on a container management platform: for example, horizontal scaling, service discovery, rolling upgrades, and so on, once it runs on Kubernetes.
Q: A Kubernetes rolling update requires building a new image. If only the ConfigMap is updated, is there a way to do a rolling update?

A: After our CI process completes, each module's image tag changes. We use the newly generated tag to generate the configuration, and since the deployment YAML is written as a template, the image's specific tag is merged into a different YAML file according to the configuration generated by each CI run. The merged YAML, i.e. with the tag changed to the latest version, is then used for the application's rolling upgrade.
Q: How is the in-container build implemented in the pipeline: with a ready-made Jenkins plugin, or with a Groovy script?

A: Jenkins' Docker plugin is used, invoked from the pipeline's Groovy script, for example:

    stage('Build') {
        docker.image('golang:1.7').inside {
            sh './script/build.sh'
        }
    }
The above content was organized from the group share on the night of February 28, 2017. The sharer, Huang Wenjun, is a senior system architect with 8 years of work experience, mainly responsible for the architecture and design of a container cloud platform product, with a background in enterprise storage and cloud computing solutions, focusing on microservice design thinking, development process optimization, and real-world applications of Docker and Kubernetes technology. DockOne organizes weekly technology shares; interested readers can add WeChat: Liyingjiesz to join the group, and leave us a message with topics you would like to hear or share.