Getting Started: Using the Jenkins Kubernetes Plugin for Continuous Build and Release


Directory

1. Jenkins CI/CD background
2. Environment and software preparation
3. Deploying Jenkins Server to Kubernetes
4. Configuring the Jenkins Kubernetes Plugin
5. Testing and validation
   5.1 Pipeline type support
   5.2 Container group type support
   5.3 Non-Pipeline type support
   5.4 Configuring a custom jenkins-slave image

1. Jenkins CI/CD Background

Continuous build and release is an essential part of our daily work. Most companies currently use a Jenkins cluster to build a CI/CD process that meets their requirements, but the traditional Jenkins master/slave architecture has some pain points:

- The Master is a single point of failure: when it goes down, the whole process is unavailable.
- Each Slave needs its own configured environment for compiling and packaging different languages, and these differentiated configurations make management very inconvenient and maintenance laborious.
- Resource allocation is uneven: jobs queue up on some Slaves while other Slaves sit idle.
- Resources are wasted: each Slave may be a physical machine or a VM, and when a Slave is idle its resources are not released.

Because of these pain points, we want a more efficient and reliable way to run the CI/CD process, and Docker container technology solves them well. The figure below is a simple sketch of a Jenkins cluster built on Kubernetes.

As the diagram shows, both the Jenkins Master and the Jenkins Slaves run as Docker containers on nodes of the Kubernetes cluster. The Master runs on one of the nodes and stores its configuration data on a Volume; the Slaves run on the various nodes and are not always running, because they are created and deleted dynamically on demand.

The workflow of this approach is roughly as follows: when the Jenkins Master receives a build request, it dynamically creates a Jenkins Slave running in a Docker container according to the configured Label and registers it with the Master; after the Job finishes, the Slave is deregistered and its Docker container is deleted automatically, and everything returns to its original state.

The benefits of this approach are many:

- Highly available services: when the Jenkins Master fails, Kubernetes automatically creates a new Jenkins Master container and attaches the Volume to it, so no data is lost and the cluster service remains highly available.
- Dynamic scaling and reasonable use of resources: each time a Job runs, a Jenkins Slave is created automatically; when the Job finishes, the Slave is deregistered and its container deleted, so the resources are released automatically. Kubernetes also schedules the Slave onto an idle node based on each node's resource usage, reducing the chance of a Job queuing on a node whose resources are already heavily used.
- Good scalability: when the Kubernetes cluster's resources are severely inadequate and Jobs queue up, it is easy to add a Kubernetes Node to the cluster and scale out.

2. Environment and Software Preparation

This demo was run on my local macOS machine and a CentOS 7 Linux virtual machine; the installed software and versions are:

Docker: version 17.09.0-ce
Oracle VirtualBox: version 5.1.20 r114628 (Qt5.6.2)
minikube: version v0.22.2
kubectl: Client version v1.8.1, Server version v1.7.5

Note: The Kubernetes node instance started by Minikube has to run inside a VM on this machine, so a hypervisor needs to be installed in advance; here I chose Oracle VirtualBox. Kubernetes runs on Docker containers underneath, so the machine also needs a working Docker environment. For installing Minikube and kubectl, refer to the previous article on deploying and running a Kubernetes instance locally with Minikube.
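Before moving on, it is worth confirming that the local cluster is actually up. A minimal check, assuming the VirtualBox driver is used (these are standard minikube/kubectl commands for releases of this era):

$ minikube start --vm-driver=virtualbox   # start the single-node cluster inside VirtualBox
$ minikube status                         # minikube, cluster and kubectl should all report as running/configured
$ kubectl cluster-info                    # verify kubectl can reach the API server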

3. Deploying Jenkins Server to Kubernetes

Before deploying, make sure minikube is working properly; if you are using an existing Kubernetes cluster instead, make sure it works. Next we need to prepare the YAML files for deploying Jenkins. You can refer to the jenkins.yaml and service-account.yaml files provided by the official GitHub jenkinsci/kubernetes-plugin repository. The official files use the more standard StatefulSet (stateful cluster service) deployment and also configure Ingress and RBAC service-account permissions. However, when I tested on my machine, the Volume mount failed and the log showed there was no permission to create the directory, so I simplified things and wrote a configuration using a Deployment and a Service (being a bit lazy here and not using RBAC authentication; a rough RBAC sketch is shown below for reference).
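For reference, if your cluster does enforce RBAC, a minimal service account along the lines of the official service-account file might look like the sketch below. This is an illustrative sketch only, not the configuration used in this demo; check the official file in the jenkinsci/kubernetes-plugin repository for the authoritative version. The Jenkins pod spec would then also need serviceAccountName: jenkins.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
# allow the plugin to create and manage slave pods
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log"]
  verbs: ["get", "list", "watch", "create", "delete", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins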

$ cat jenkins-deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: jenkins
  labels:
    k8s-app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: jenkins
  template:
    metadata:
      labels:
        k8s-app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        ports:
        - containerPort: 8080
          name: web
        - containerPort: 50000
          name: agent
      volumes:
      - name: jenkins-home
        emptyDir: {}

$ cat jenkins-service.yml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: jenkins
  name: jenkins
spec:
  type: NodePort
  ports:
  - port: 8080
    name: web
    targetPort: 8080
  - port: 50000
    name: agent
    targetPort: 50000
  selector:
    k8s-app: jenkins

Description: The Service here exposes two ports, 8080 and 50000. 8080 is the default port for accessing the Jenkins Server web page; 50000 is used by newly created Jenkins Slaves to connect to the Master, and if it is not exposed, a Slave cannot establish a connection with the Master. NodePort mode is used to expose the ports without specifying the node port numbers, which are then assigned automatically by Kubernetes; you can also specify particular port numbers (in the range 30000~32767).
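If you would rather pin the node ports than let Kubernetes pick them, the ports section of the Service could be written roughly as follows (the values 30080 and 30050 are just illustrative choices within the allowed range, not the ports used later in this article):

  ports:
  - port: 8080
    name: web
    targetPort: 8080
    nodePort: 30080    # fixed, externally reachable port for the web UI
  - port: 50000
    name: agent
    targetPort: 50000
    nodePort: 30050    # fixed port for slave agent connections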

Next, create the Jenkins Deployment and Service by executing the following kubectl commands.

$ kubectl create namespace kubernetes-plugin
$ kubectl config set-context $(kubectl config current-context) --namespace=kubernetes-plugin
$ kubectl create -f jenkins-deployment.yaml
$ kubectl create -f jenkins-service.yml

Description: Here we create a new namespace kubernetes-plugin and set the current context to that namespace, so that kubectl automatically switches to it and subsequent commands are easier to run.
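To confirm the context change took effect, one quick check (a standard kubectl command, shown here as a convenience sketch) is to print the namespace of the current context:

$ kubectl config view --minify --output 'jsonpath={..namespace}'
kubernetes-plugin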

$ kubectl get service,deployment,pod
NAME      TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)                          AGE
jenkins   NodePort   10.0.0.204   <none>        8080:30645/TCP,50000:31981/TCP   1m

NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
jenkins   1         1         1            1           1m

NAME                      READY     STATUS    RESTARTS   AGE
jenkins-960997836-fff2q   1/1       Running   0          1m

At this point the Jenkins Master service has started and the ports have been exposed as 8080:30645 and 50000:31981, so the Jenkins page can now be opened in a browser at http://<NodeIP>:30645 (for minikube, the IP of the minikube VM). Alternatively, the minikube service ... command can open the page automatically.

$ minikube service jenkins -n kubernetes-plugin
Opening kubernetes service kubernetes-plugin/jenkins in default browser...

Complete the Jenkins plugin initialization in the browser and configure the administrator account; the process is omitted here. After initialization the interface looks like this:

Note: During initialization you are asked to enter the initial password from /var/jenkins_home/secrets/initialAdminPassword. Because we configured emptyDir: {} rather than mounting an external path, you can exec into the container to read it.

$ kubectl exec -it jenkins-960997836-fff2q cat /var/jenkins_home/secrets/initialAdminPassword
4. Configuring the Jenkins Kubernetes Plugin

Log in to the Jenkins Master page with the admin account, then click "Manage Jenkins" -> "Manage Plugins" -> "Available", check "Kubernetes plugin", and install it.

After installation, click "Manage Jenkins" -> "Configure System" -> "Add a new cloud" -> select "Kubernetes" and fill in the Kubernetes and Jenkins configuration information.

Description:
- Name defaults to kubernetes and can be changed to something else. If you change it here, the cloud parameter of podTemplate() must be set to the corresponding name when the Job runs, otherwise the cloud will not be found. Default value: kubernetes.
- Kubernetes URL: I filled in https://kubernetes.default, the DNS record of the Kubernetes service, which resolves to the service's Cluster IP. Note that you can also fill in the full DNS record https://kubernetes.default.svc.cluster.local, since service names follow the <svc_name>.<namespace_name>.svc.cluster.local scheme, or fill in the external Kubernetes API address https://<IP>:<Port> directly.
- Jenkins URL: I filled in http://jenkins.kubernetes-plugin:8080. Similar to the above, this is the DNS record of the Jenkins service, but the 8080 port must be specified because that is the port we exposed. You can also use http://<NodeIP>:<NodePort>; for example, http://192.168.99.100:30645 also works here, where 30645 is the externally exposed NodePort.
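If you want to sanity-check those in-cluster DNS names before saving the configuration, one way (a sketch; the busybox image and the throwaway pod name dns-check are only illustrative) is to resolve them from a temporary pod:

$ kubectl run dns-check --rm -it --restart=Never --image=busybox -- nslookup kubernetes.default
$ kubectl run dns-check --rm -it --restart=Never --image=busybox -- nslookup jenkins.kubernetes-plugin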

After the configuration is complete, click the "Test Connection" button to test whether Jenkins can connect to Kubernetes. If "Connection test successful" is displayed, the connection works and the configuration is fine.

5. Testing and Validation

Now that Jenkins Master is installed on Kubernetes and the connection is configured, we can configure Jobs to verify that a Jenkins Slave running in a Docker container is created dynamically according to the configured Label and registered with the Master, and that after the Job finishes, the Slave is deregistered and its Docker container deleted automatically.

5.1 Pipeline Type Support

Create a Pipeline type Job, name it my-k8s-jenkins-pipeline, and fill in a simple test script under Pipeline script as follows:

def label = "Mypod-${uuid.randomuuid (). toString ()}"
Podtemplate (Label:label, Cloud: ' Kubernetes ') {
    node ( Label) {
        stage (' Run shell ') {
            sh ' sleep 130s '
            sh ' echo Hello World '
        }
}}

Execute a build. At first the task sits in the build queue and does not run, because the slave has not been initialized yet. Wait a moment and you will see that, besides master, a node jenkins-slave-jbs4z-xs2r8 has been created; shortly afterwards jenkins-slave-jbs4z-xs2r8 registers with the Master and starts executing the Job. Clicking on the slave node shows that it is associated through the label mypod-b538c04c-7c19-4b98-88f6-9e5bca6fc9ba, generated in the label format we defined. After the Job finishes, the jenkins-slave is deregistered automatically. Through the kubectl command line we can see the whole process of automatic creation and deletion.

# Before the jenkins-slave starts, only the Jenkins Master pod exists
$ kubectl get pods
NAME                      READY     STATUS    RESTARTS   AGE
jenkins-960997836-fff2q   1/1       Running   0          1d

# The jenkins-slave pod is created automatically
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
jenkins-960997836-fff2q     1/1       Running   0          1d
jenkins-slave-jbs4z-xs2r8   1/1       Running   0          56s

# Containers started on the Docker side
$ docker ps | grep jenkins
...            jenkins/jnlp-slave                         "jenkins-slave bd880..."   About a minute ago   Up About a minute   k8s_jnlp_jenkins-slave-jbs4z-xs2r8_kubernetes-plugin_25a91ed9-3337-11e8-a49f-08002744a3f1_0
d64deb0eaa20   gcr.io/google_containers/pause-amd64:3.0   "/pause"                   About a minute ago   Up About a minute   k8s_POD_jenkins-slave-jbs4z-xs2r8_kubernetes-plugin_25a91ed9-3337-11e8-a49f-08002744a3f1_0
995c1743552a   jenkins                                    "/bin/tini -- /usr/l..."   27 hours ago         Up 27 hours         k8s_jenkins_jenkins-960997836-fff2q_kubernetes-plugin_27b5c7b2-3256-11e8-a49f-08002744a3f1_0
024d43257e9d   gcr.io/google_containers/pause-amd64:3.0   "/pause"                   27 hours ago         Up 27 hours         k8s_POD_jenkins-960997836-fff2q_kubernetes-plugin_27b5c7b2-3256-11e8-a49f-08002744a3f1_0

# After the Job finishes, the jenkins-slave is deleted automatically
$ kubectl get pods
NAME                        READY     STATUS        RESTARTS   AGE
jenkins-960997836-fff2q     1/1       Running       0          1d
jenkins-slave-jbs4z-xs2r8   0/1       Terminating   0          2m

$ kubectl get pods
NAME                      READY     STATUS    RESTARTS   AGE
jenkins-960997836-fff2q   1/1       Running   0          1d

$ docker ps | grep jenkins
995c1743552a   jenkins                                    "/bin/tini -- /usr/l..."   27 hours ago   Up 27 hours   k8s_jenkins_jenkins-960997836-fff2q_kubernetes-plugin_27b5c7b2-3256-11e8-a49f-08002744a3f1_0
024d43257e9d   gcr.io/google_containers/pause-amd64:3.0   "/pause"                   27 hours ago   Up 27 hours   k8s_POD_jenkins-960997836-fff2q_kubernetes-plugin_27b5c7b2-3256-11e8-a49f-08002744a3f1_0

From the log above we can clearly see the whole process of the Jenkins Slave being created, deregistered and deleted automatically; the process is fully automated and requires no manual intervention.

5.2 Container Group Type Support

Create a Pipeline type Job, name it my-k8s-jenkins-container, and fill in a simple test script under Pipeline script as follows:

def label = "Mypod-${uuid.randomuuid (). toString ()}"
Podtemplate (Label:label, Cloud: ' Kubernetes ', containers: [
    Containertemplate (name: ' maven ', Image: ' Maven:3.3.9-jdk-8-alpine ', ttyenabled:true, command: ' Cat '),
  ] {

    Node (label) {
        stage (' Get a Maven Project ') {
            git ' https://github.com/jenkinsci/kubernetes-plugin.git '
            Container (' maven ') {
                stage (' Build a Maven project ') {
                    sh ' mvn-b clean install '
                }
        }}
}

Note: Here we use containers to define a containerTemplate, specifying its name maven and the image to use; further down, when executing a stage, container('maven') {...} indicates that the operations inside run in that container. For example, in this script the git clone runs in the jenkins-slave container, and we then enter the maven container to run the mvn -B clean install build. The advantage of this approach is that we only need to build a compile-environment image for each code type, and by specifying different containers we can carry out the build for each code type separately. For the detailed configuration of each template parameter, refer to the Pod and container template configuration documentation.
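As a sketch of that idea (the golang:1.10-alpine image and the stage names here are illustrative additions, not from the original article), one podTemplate can carry several build containers and each stage simply picks the one it needs:

def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, cloud: 'kubernetes', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.10-alpine', ttyEnabled: true, command: 'cat'),
  ]) {
    node(label) {
        stage('Java build') {
            container('maven') {
                sh 'mvn -version'     // runs inside the maven container
            }
        }
        stage('Go build') {
            container('golang') {
                sh 'go version'       // runs inside the golang container
            }
        }
    }
}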

Run the build of this Job. Similar to the Pipeline above, a new jenkins-slave is created and registered with the Master; the difference is that Kubernetes also starts the maven container template we configured, and the related commands are executed inside it.

$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
jenkins-960997836-fff2q     1/1       Running   0          1d
jenkins-slave-k2wwq-4l66k   2/2       Running   0          53s

$ docker ps
CONTAINER ID   IMAGE                                      COMMAND                    CREATED              STATUS              PORTS   NAMES
8ed81ee3aad4   jenkins/jnlp-slave                         "jenkins-slave 4ae74..."   About a minute ago   Up About a minute           k8s_jnlp_jenkins-slave-k2wwq-4l66k_kubernetes-plugin_90c2ee92-33ca-11e8-a49f-08002744a3f1_0
bd252f7e59c2   maven                                      "cat"                      About a minute ago   Up About a minute           k8s_maven_jenkins-slave-k2wwq-4l66k_kubernetes-plugin_90c2ee92-33ca-11e8-a49f-08002744a3f1_0
fe22da050a53   gcr.io/google_containers/pause-amd64:3.0   "/pause"                   About a minute ago   Up About a minute           k8s_POD_jenkins-slave-k2wwq-4l66k_kubernetes-plugin_90c2ee92-33ca-11e8-a49f-08002744a3f1_0
995c1743552a   jenkins                                    "/bin/tini -- /usr/l..."   44 hours ago         Up 44 hours                 k8s_jenkins_jenkins-960997836-fff2q_kubernetes-plugin_27b5c7b2-3256-11e8-a49f-08002744a3f1_0
024d43257e9d   gcr.io/google_containers/pause-amd64:3.0   "/pause"                   44 hours ago         Up 44 hours                 k8s_POD_jenkins-960997836-fff2q_kubernetes-plugin_27b5c7b2-3256-11e8-a49f-08002744a3f1_0
5.3 Non-Pipeline Type Support

In addition to Pipeline Jobs, we usually also run ordinary (freestyle) Jobs. If we want such a task to be built through the Kubernetes plugin, we need to click "Manage Jenkins" -> "Configure System" -> "Cloud" -> "Kubernetes" -> "Add Pod Template" and configure the "Kubernetes Pod Template" information.

Note: The Labels name is what the Job uses to select the node it runs on. For the name field under Containers: if name is set to jnlp, Kubernetes will replace the default jenkinsci/jnlp-slave image with the Docker image specified below; otherwise the Kubernetes plugin will still use the default jenkinsci/jnlp-slave image to establish the connection with the Jenkins Server, even if we specify another Docker image. Here I casually set it to jnlp-slave, which means the default jenkinsci/jnlp-slave image is used, since we have not yet built an image that could override the default one.

Create a new freestyle Job named my-k8s-jenkins-simple, check "Restrict where this project can be run", and fill in "Label Expression" with the Labels name we set in the template above, jnlp-agent, which means this Job is matched to a Slave carrying the jnlp-agent label.

After the build is executed, the result is the same as with the Pipeline above and matches our expectations.

5.4 Configuring a Custom jenkins-slave Image

The jenkinsci/jnlp-slave image provided for the Kubernetes plugin can perform some basic operations; it is extended from the openjdk:8-jdk image. For us, though, this image is too simple: if we want to run a Maven build or other commands, we hit problems. In that case we can build our own image with some software preinstalled, so that it not only fulfils the jenkins-slave function but also meets our own needs, which is quite nice. Building an image completely from scratch would be a bit of a hassle, but we can refer to the official jenkinsci/jnlp-slave and jenkinsci/docker-slave images; note that jenkinsci/jnlp-slave is itself built on top of jenkinsci/docker-slave. Here I simply demonstrate extending the jenkinsci/jnlp-slave:latest image by installing Maven into it, and then verify that it works.
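The article does not show the Dockerfile it used, so the following is only a minimal sketch of this kind of extension, assuming Maven 3.3.9 downloaded from the Apache archive and the default jenkins user of the base image; adjust the version, URL and repository name to your own setup:

FROM jenkinsci/jnlp-slave:latest

USER root

# Install Maven on top of the stock jnlp-slave image (version and URL are illustrative)
ENV MAVEN_VERSION=3.3.9
RUN curl -fsSL https://archive.apache.org/dist/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
      | tar -xzf - -C /usr/local \
 && ln -s /usr/local/apache-maven-${MAVEN_VERSION}/bin/mvn /usr/local/bin/mvn

# Drop back to the unprivileged user the base image normally runs as
USER jenkins

Build and push the result to a registry the Kubernetes nodes can pull from, for example docker build -t <your-repo>/jenkins-slave-maven:latest . followed by docker push <your-repo>/jenkins-slave-maven:latest.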

Create a Pipeline type Job, name it my-k8s-jenkins-container-custom, and fill in a simple test script under Pipeline script as follows:

def label = "Mypod-${uuid.randomuuid (). toString ()}"
Podtemplate (Label:label, Cloud: ' Kubernetes ', containers: [
    Containertemplate (
        name: ' JNLP ', 
        Image: ' Huwanyang168/jenkins-slave-maven:latest ', 
        alwayspullimage: False, 
        args: ' ${computer.jnlpmac} ${computer.name} ')
  {

    node (label) {
        stage (' Stage1 ') {
            Stage (' Show Maven version ') {
                sh ' mvn-version '
                sh ' sleep 60s '
            }
    }}

Description: Here the containerTemplate name attribute must be jnlp so that Kubernetes replaces the default jenkinsci/jnlp-slave image with the custom image specified in image. In addition, the args parameter passes the two arguments that jenkins-slave needs to start. One more point: there is no need to wrap the stages in container('jnlp') {...}, because this is already the container Kubernetes designates for execution, so the stages can run directly.

Execute the build and check the result.

$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
jenkins-960997836-fff2q     1/1       Running   0          2d
jenkins-slave-9wtkt-d2ms8   1/1       Running   0          12m

$ docker ps
CONTAINER ID   IMAGE                              COMMAND   CREATED   STATUS   PORTS   NAMES
b31be1de9563   huwanyang168/jenkins-slave-maven   ...
