16 Using Kubernetes to Generate Jenkins Slave Nodes for Continuous Delivery Projects #

In the previous section, we introduced the basic syntax of the Kubernetes plugin and examples of using it to dynamically generate slave nodes. In this section, we put that syntax into practice and cover some additional configuration details for the plugin.

For ease of understanding, this section is divided into multiple versions that gradually incorporate the pipeline syntax introduced earlier.

Basic Version #

Version Notes:

  1. The podTemplate configuration is defined directly in the pipeline script rather than in the Jenkins system configuration.

  2. The default Jenkins slave image is jnlp-slave:3.35-5-alpine, a JNLP-based agent image. It contains no tools for pulling code, compiling, or running Docker commands, so a new image needs to be built on top of it.

  3. The Dockerfile used to build the application image is provided through an NFS mount.

  4. The continuous delivery steps are implemented with plain shell commands so that the delivery process is easy to follow.

Based on the above conditions, the pipeline script written in script syntax is as follows:

podTemplate(cloud: 'kubernetes', namespace: 'default', label: 'pre-CICD',
  serviceAccount: 'default', containers: [
  containerTemplate(
      name: 'jnlp',
      image: "192.168.176.155/library/jenkins-slave:latest",
      args: '${computer.jnlpmac} ${computer.name}',
      ttyEnabled: true,
      privileged: true,
      alwaysPullImage: false,
    ),
  ],
  volumes: [
        // The NFS share provides the application Dockerfile under /tmp
        nfsVolume(mountPath: '/tmp', serverAddress: '192.168.177.43', serverPath: '/data/nfs', readOnly: true),
        // Mount the host's Docker socket so the docker CLI inside the container can build images
        hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
  ],
){
  node('pre-CICD') {
    stage('build') {

        container('jnlp') {
            stage('git-clone') {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
            }

            stage('Build a Maven project') {
                sh 'cd base-nop/fw-base-nop && mvn clean install -DskipTests -Denv=beta'
            }

            stage('build docker image'){
                sh 'cp /tmp/Dockerfile base-nop/fw-base-nop/target/'
                sh '/usr/bin/docker build -t 192.168.176.155/library/fw-base-nop:xxxx --build-arg jar_name="fw-base-nop.jar" base-nop/fw-base-nop/target/'
            }    

            stage('push registry') {
                sh '''
                    docker login -u admin -p da88e43d88722c2c9ca09da644eeb015 192.168.176.155
                    docker push  192.168.176.155/library/fw-base-nop:xxxx
                    docker rmi 192.168.176.155/library/fw-base-nop:xxxx
                '''
            }
        }

    }
 }
}

Explanation:

podTemplate(…): the configuration of the Pod template was explained in the syntax section, so it won’t be detailed again here. One thing does deserve attention, however: the name and image parameters under containerTemplate.

There are generally two types of Jenkins agents: SSH-based, where the master actively connects to the slave/agent node, and JNLP-based, which works over HTTP and where the agent/slave node actively connects to the master. Each agent requires a unique secret; see the official documentation for details.
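For reference, the startup arguments ${computer.jnlpmac} and ${computer.name} are the agent’s secret and its name. A simplified sketch (illustrative, not the exact entrypoint) of what the official JNLP agent image effectively runs at startup:

# The jenkins-agent entrypoint ultimately invokes the remoting client;
# the Jenkins URL, per-agent secret, and agent name are filled in by the Kubernetes plugin.
java -cp /usr/share/jenkins/agent.jar hudson.remoting.jnlp.Main \
    -headless -url http://<jenkins-master>:8080/ <secret> <agent-name>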

  1. If the value of the name parameter is jnlp, the container itself acts as the JNLP agent that connects to the Jenkins master. The specified image must therefore contain the command that starts the JNLP agent (jenkins-slave in older official images, jenkins-agent in newer ones), and the startup arguments ${computer.jnlpmac} ${computer.name} must be supplied. If the image lacks this command or the arguments are missing, the dynamic pod generation will fail.

  2. If the value of the name parameter is not jnlp but the image parameter specifies a JNLP agent image, the dynamic pod will also fail to generate. When the name is not jnlp, the generated pod starts two containers by default: one running the default JNLP agent image (jnlp-slave:3.35-5-alpine) to connect to the Jenkins master, and one running the custom image specified by the image parameter. Both JNLP agents then try to connect to the master on startup, but with the JNLP method each agent is tied to a single secret, so starting multiple jnlp-agent instances causes the Pod to fail to start.

  3. If the value of the name parameter is not jnlp and the image parameter specifies a non-JNLP-agent image (note that the startup arguments must be removed in this case), the generated pod will contain two containers: one named jnlp (started in the background from the default JNLP agent image) and one with the name set by the name parameter. Both containers can serve as the execution environment for the pipeline script and can be referenced with container('container_name'); a minimal sketch appears after the Dockerfile below.

  4. If the pod contains multiple containers and at least one of them is the jnlp container, there are no restrictions on the names and images of the others. In other words, a container whose image is a JNLP agent image does not have to be named jnlp.

  5. In the example above, the image specified by the image parameter is a custom image built on top of the default JNLP agent image. Its Dockerfile is as follows:

FROM jenkins/jnlp-slave:3.35-5-alpine

USER root

# Add the build toolchain on top of the JNLP agent image
RUN apk add maven git

# Ship a pre-configured Maven settings file and the docker client binary
COPY settings.xml /usr/share/java/maven-3/conf/settings.xml
COPY docker/docker /usr/bin/docker

The settings.xml file is the Maven configuration file; adjust its contents to your actual environment.
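To make items 3 and 4 above concrete, here is a minimal sketch (label and image are illustrative) of a pod template whose tool container is not named jnlp. The plugin injects the default jnlp container automatically, and either container can run pipeline steps:

podTemplate(cloud: 'kubernetes', label: 'maven-demo', containers: [
  // No container named 'jnlp' is declared, so the plugin adds the
  // default JNLP agent container alongside this one.
  containerTemplate(name: 'maven', image: 'maven:3-alpine', command: 'cat', ttyEnabled: true),
]){
  node('maven-demo') {
    container('maven') {   // run steps inside the 'maven' container
      sh 'mvn -version'
    }
  }
}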

When the job is built, the pod is created dynamically and the pipeline executes successfully.

Declarative Script #

The corresponding pipeline script in declarative syntax is as follows:

pipeline {
  agent {
    kubernetes {
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
  namespace: 'default'
spec:
  containers:
  - name: jnlp
    image: 192.168.176.155/library/jenkins-slave:latest
    args: ['\$(JENKINS_SECRET)', '\$(JENKINS_NAME)']
    tty: true
    privileged: true
    alwaysPullImage: false
    volumeMounts:
    - name: mount-nfs
      mountPath: /tmp
    - name: mount-docker
      mountPath: /var/run/docker.sock
  volumes: 
  - name: mount-nfs
    nfs:
      path: /data/nfs
      server: 192.168.177.43
  - name: mount-docker
    hostPath:
      path: /var/run/docker.sock
"""
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('jnlp') {
          checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
        }
      }
    }
    stage('build code') {
      steps {
        container('jnlp') {
          sh 'cd base-nop/fw-base-nop && mvn clean install -DskipTests -Denv=beta'
        }
      }
    }

    stage('build image') {
      steps {
        container('jnlp') {
          sh 'cp /tmp/Dockerfile base-nop/fw-base-nop/target/'
          sh '/usr/bin/docker build -t 192.168.176.155/library/fw-base-nop:xxxx --build-arg jar_name="fw-base-nop.jar" base-nop/fw-base-nop/target/'
        }
      }
    }

    stage('push to registry') {
      steps {
        container('jnlp') {
          sh '''
            docker login -u admin -p da88e43d88722c2c9ca09da644eeb015 192.168.176.155
            docker push  192.168.176.155/library/fw-base-nop:xxxx
            docker rmi 192.168.176.155/library/fw-base-nop:xxxx
          '''
        }
      } 
    }
  }
}

Note:

Declarative pipeline syntax is strictly formatted, so be patient when tracking down any syntax errors.

When using the Kubernetes plugin, scripted pipeline syntax is somewhat simpler than declarative syntax, so the scripted form is recommended.

Config File Provider Plugin #

There is another way to provide the Maven settings.xml file: the Config File Provider plugin in Jenkins. This plugin stores configuration files (properties, XML, JSON, and Groovy files, including Maven settings.xml content) inside Jenkins itself. Let’s see how to use it.

Click on “Manage Jenkins” -> “Managed files” -> “Add a new Config” -> “Global Maven settings.xml”, paste the content of the settings.xml file into the “Content” input box, and submit the changes.

Next, use the snippet generator’s configFileProvider step to generate a snippet based on the stored Maven settings.xml file, for example:
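The generated snippet looks roughly like the following (the fileId is the id Jenkins assigned to the stored file; the targetLocation form copies the file into the job workspace):

configFileProvider([configFile(fileId: 'f67fdaf1-4b17-4caa-86ad-e841f387ac7a', targetLocation: 'settings.xml')]) {
    // settings.xml is available in the workspace for the duration of this block
    sh 'mvn --settings ${WORKSPACE}/settings.xml clean install'
}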

After configuring, you can edit the pipeline script. To test the functionality of this plugin, I will directly use the official Maven image from Docker Hub. Here is the script:

podTemplate(cloud: 'kubernetes', label: 'pre-CICD', containers: [
  containerTemplate(
      name: 'maven',
      image: "maven:latest",
      command: 'cat',
      ttyEnabled: true,
      privileged: true,
      alwaysPullImage: false)
  ],
  volumes: [
        nfsVolume(mountPath: '/tmp', serverAddress: '192.168.177.43', serverPath: '/data/nfs', readOnly: false),
        hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
  ],
){
  node('pre-CICD') {

    stage('build') {
        container('maven') {
            stage('clone code') {
                sh "git clone http://root:[[email protected]](/cdn-cgi/l/email-protection)/root/base-nop.git"
            }
            configFileProvider([configFile(fileId: 'f67fdaf1-4b17-4caa-86ad-e841f387ac7a', targetLocation: 'settings.xml')]) {
                stage('Build project') {
                    sh 'cd base-nop/fw-base-nop && mvn clean install -DskipTests -Denv=beta --settings ${WORKSPACE}/settings.xml'
                }
            }

        }
    }

 }

}

Explanation:

When compiling the code with the mvn command, the --settings parameter specifies the Maven settings file. In the example above, the job’s workspace path is prefixed when referencing settings.xml: with the targetLocation form, the configFileProvider step copies the stored settings.xml into the job’s workspace directory, so omitting the path may cause a file-not-found error. An alternative form that avoids the hard-coded path is sketched below.
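The plugin’s variable form provisions the file and exposes its absolute path through an environment variable (MAVEN_SETTINGS below is an arbitrary name). A minimal sketch:

configFileProvider([configFile(fileId: 'f67fdaf1-4b17-4caa-86ad-e841f387ac7a', variable: 'MAVEN_SETTINGS')]) {
    sh 'cd base-nop/fw-base-nop && mvn clean install -DskipTests -Denv=beta --settings $MAVEN_SETTINGS'
}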

The basic version of the continuous delivery script is complete. This version is meant to familiarize everyone with the basic process of code compilation and image building. The next version will include some simple optimizations. Let’s continue reading.

Advanced Version #

Explanation (relative to the previous version):

  1. This version introduces variables to improve flexibility. Note that Groovy strings must use double quotes ("") for variable interpolation; see the sketch after this list.
  2. Some shell commands in the script are replaced with the corresponding methods of the Kubernetes and Docker Pipeline plugins.
  3. Shared directories are mounted to speed up compilation, specifically Maven’s .m2 local repository.
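As a quick illustration of the quoting rule in item 1 (a minimal sketch):

def project_name = 'fw-base-nop'
// Double quotes: Groovy interpolates ${project_name} before the shell sees it
sh "cd ${project_name} && mvn clean install -DskipTests"
// Single quotes: the literal ${project_name} reaches the shell, which then looks up
// an (undefined) shell variable instead of the Groovy one
sh 'cd ${project_name} && mvn clean install -DskipTests'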

First, use the snippet generator to generate the syntax for some command snippets:

checkout (Check out from version control): generates the snippet for pulling code; this was explained earlier and will not be repeated.

Use the withDockerRegistry snippet generator from the Docker Pipeline plugin to generate the syntax for authenticating with the Docker registry, as shown below:
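The generated wrapper looks like the following (credentialsId and url are the values used throughout this section); docker build and push steps placed inside the block authenticate against the registry automatically:

withDockerRegistry(credentialsId: 'auth_harbor', url: 'http://192.168.176.155') {
    // docker.build(...) / customImage.push() calls go here
}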

Here is the pipeline script:

def project_name = 'fw-base-nop'
def registry_url = '192.168.176.155'

podTemplate(cloud: 'kubernetes', namespace: 'default', label: 'pre-CICD',
  serviceAccount: 'default', containers: [
  containerTemplate(
      name: 'jnlp',
      image: "192.168.176.155/library/jenkins-slave:latest",
      args: '${computer.jnlpmac} ${computer.name}',
      ttyEnabled: true,
      privileged: true,
      alwaysPullImage: false,
    ),
  ],
  volumes: [
       nfsVolume(mountPath: '/tmp', serverAddress: '192.168.177.43', serverPath: '/data/nfs', readOnly: false),
       nfsVolume(mountPath: '/root/.m2', serverAddress: '192.168.177.43', serverPath: '/data/nfs/.m2', readOnly: false),
       hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),   
  ],
  nodeSelector: 'kubernetes.io/hostname=192.168.176.160',
){
  node('pre-CICD') {
    stage('build') {
        container('jnlp') {
            stage('clone code') {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                script {
                    imageTag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                }
                echo "${imageTag}"
            }

            stage('Build a Maven project') {
                sh "cd ${project_name} && mvn clean install -DskipTests -Denv=beta"
            }

            withDockerRegistry(credentialsId: 'auth_harbor', url: 'http://192.168.176.155') {
                stage('build and push docker image') {
                    sh "cp /tmp/Dockerfile ${project_name}/target/"
                    def customImage = docker.build("${registry_url}/library/${project_name}-${BUILD_NUMBER}:${imageTag}", "--build-arg jar_name=${project_name}.jar ${project_name}/target/")
                    echo "Pushing image"
                    customImage.push()
                }
                stage('delete image') {
                    echo "Deleting local image"
                    sh "docker rmi -f ${registry_url}/library/${project_name}-${BUILD_NUMBER}:${imageTag}"
                }
            }
        }
    }
  }
}

The agent will start automatically when the job is built.

Note that if you use this approach, the pipeline code above is not suitable for that job as-is; you would need to implement the continuous delivery steps with shell scripts or other tools (such as Ansible).

Sonarqube #

Since this is continuous delivery, code analysis is essential. The versions above only compile the code and build the image; a normal process also includes code quality analysis, which is where the previously built Sonarqube platform comes in. Sonarqube was introduced in detail in earlier articles, so it won’t be repeated here. Next, let’s look at how to use the sonar-scanner command in a pod.

The sonar-scanner command can either come from the Sonarqube Scanner tool configured in the Jenkins UI or from a custom Sonarqube Scanner container image. Both methods are introduced below.

First, let’s take a look at how to use the sonar-scanner tool configured in the Jenkins system (pod template configuration is the same as above) to perform code quality analysis.

def project_name = 'fw-base-nop'  // Project name
def registry_url = '192.168.176.155' // Image repository address

node('pre-CICD') {
    stage('build') {      
        stage('git-clone') {
            container('jnlp'){
                stage('clone code'){
                    checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                    script {
                        imageTag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                    }
                    echo "${imageTag}"
                }
                stage('Build a Maven project') {
                    echo "${project_name}"
                    sh "cd ${project_name} && mvn clean install -DskipTests -Pproduct -U"
                }
            }
        }

        stage('sonar'){
            def sonarqubeScannerHome = tool name: 'sonar-scanner-4.2.0'
            withSonarQubeEnv(credentialsId: 'sonarqube') {
                sh "${sonarqubeScannerHome}/bin/sonar-scanner -X "+
                "-Dsonar.login=admin " +
                "-Dsonar.language=java " + 
                "-Dsonar.projectKey=${JOB_NAME} " + 
                "-Dsonar.projectName=${JOB_NAME} " + 
                "-Dsonar.projectVersion=${BUILD_NUMBER} " + 
                "-Dsonar.sources=${WORKSPACE}/fw-base-nop " + 
                "-Dsonar.sourceEncoding=UTF-8 " + 
                "-Dsonar.java.binaries=${WORKSPACE}/fw-base-nop/target/classes " + 
                "-Dsonar.password=admin " 
            }
        }
        
        withDockerRegistry(credentialsId: 'auth_harbor', url: 'http://192.168.176.155') {
            stage('build and push docker image') {
                sh "cp /tmp/Dockerfile ${project_name}/target/"
                def customImage = docker.build("${registry_url}/library/${project_name}-${BUILD_NUMBER}:${imageTag}","--build-arg jar_name=${project_name}.jar ${project_name}/target/")
                echo "推送镜像"
                customImage.push()
            }
            stage('delete image') {
                echo "删除本地镜像"
                sh "docker rmi -f ${registry_url}/library/${project_name}-${BUILD_NUMBER}:${imageTag}"
            }
        }
    }
}

This example uses the tool step to look up the Sonarqube Scanner installation configured in Jenkins’ “Global Tool Configuration” and assigns its home directory to a variable. The withSonarQubeEnv step authenticates with Sonarqube and performs the code analysis. The individual sonar-scanner parameters are described in the “Basic Tool Configuration” section and are not repeated here.
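In addition, if the pipeline should fail when the analysis does not pass the quality gate, the SonarQube Scanner plugin provides the waitForQualityGate step. A minimal sketch, assuming a webhook from the Sonarqube server back to Jenkins has been configured:

stage('quality gate') {
    // Block until Sonarqube reports the analysis result via its webhook
    timeout(time: 10, unit: 'MINUTES') {
        def qg = waitForQualityGate()
        if (qg.status != 'OK') {
            error "Pipeline aborted: quality gate returned ${qg.status}"
        }
    }
}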

Using a Custom Sonarqube Scanner Image #

In addition to using the Sonarqube scanner tool configured in the Jenkins system, you can also customize a Sonarqube scanner image based on the tool and use this image to start a container for code analysis.

By default, a Sonarqube scanner image can be pulled from Docker Hub. However, to avoid unnecessary errors (some were encountered during testing), a custom image is used here. You can use the previously built jnlp-agent image as the base image so that a single image covers the whole continuous delivery process. If that makes the image feel bloated, you can instead use a small JDK-based image as the base, or customize it to your actual situation.

Regardless of which base image is used, you need to modify the use_embedded_jre parameter in the ${SONAR_SCANNER_HOME}/bin/sonar-scanner script. It defaults to true and needs to be changed to false, as shown below:

use_embedded_jre=false

Why modify this parameter? When use_embedded_jre is set to true, the sonar-scanner command uses the JRE bundled under $SONAR_SCANNER_HOME/jre by default instead of the JRE in the system environment. Since that bundled JRE is not shipped in this image, leaving the parameter at true results in a Java-not-found error like the following:

sonar-scanner:exec: line 73: xxx/sonar-scanner-4.2.0-linux/jre/bin/java:  not found
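When baking the scanner into an image, this edit can also be scripted instead of being made by hand; a hedged Dockerfile fragment (the path matches the layout used below):

# Flip use_embedded_jre so sonar-scanner falls back to the image's own JRE
RUN sed -i 's/use_embedded_jre=true/use_embedded_jre=false/' /opt/sonar-scanner-4.2.0/bin/sonar-scanner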

After modifying it, edit the Dockerfile. For example, use the previously built jnlp-agent image as the base image.

FROM 192.168.176.155/library/jenkins-slave:latest
# Copy the (already patched) scanner distribution into the image
COPY sonar-scanner-4.2.0/ /opt/sonar-scanner-4.2.0/

ENV SONAR_SCANNER_HOME /opt/sonar-scanner-4.2.0/
ENV SONAR_RUNNER_HOME ${SONAR_SCANNER_HOME}
# Put the sonar-scanner command on the PATH
ENV PATH $PATH:${SONAR_SCANNER_HOME}/bin

If you want to use an OpenJDK image as the base instead, change the base image to openjdk:8-jre-alpine3.7 and adjust the build steps accordingly, as sketched below.
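A minimal sketch of that OpenJDK variant (untested here; it assumes the same patched scanner distribution as above):

FROM openjdk:8-jre-alpine3.7
# git lets sonar-scanner pick up SCM information from the checkout
RUN apk add --no-cache git
COPY sonar-scanner-4.2.0/ /opt/sonar-scanner-4.2.0/
ENV SONAR_SCANNER_HOME /opt/sonar-scanner-4.2.0/
ENV PATH $PATH:${SONAR_SCANNER_HOME}/bin

Whichever base you choose, test the resulting image with the docker command: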

docker run -it --rm 192.168.176.155/library/jenkins-slave:sonar sonar-scanner

If the following information is output, it means the image has been successfully built.

$ docker run -it --rm a9592584d82d sonar-scanner
INFO: Scanner configuration file: /opt/sonar-scanner-4.2.0/conf/sonar-scanner.properties
INFO: Project root configuration file: NONE
INFO: SonarQube Scanner 4.2.0.1873
INFO: Java 1.8.0_212 IcedTea (64-bit)
INFO: Linux 5.4.13-1.el7.elrepo.x86_64 amd64
INFO: User cache: /root/.sonar/cache
INFO: SonarQube server 6.7.5

......

After building the image, make a simple modification to the pipeline script. To keep the output distinguishable, add a new containerTemplate entry that defines the sonar image, and add a container step that references it, as shown below:

def project_name = 'fw-base-nop'  //Project name
def registry_url = '192.168.176.155' //Image repository address

podTemplate(cloud: 'kubernetes',namespace: 'default', label: 'pre-CICD',
  serviceAccount: 'default', containers: [
  containerTemplate(
      name: 'jnlp',
      image: "192.168.176.155/library/jenkins-slave:latest",
      args: '${computer.jnlpmac} ${computer.name}',
      ttyEnabled: true,
      privileged: true,
      alwaysPullImage: false,
    ),
  containerTemplate(
      name: 'sonar',
      image: "192.168.176.155/library/jenkins-slave:sonar",
      ttyEnabled: true,
      privileged: true,
      command: 'cat',
      alwaysPullImage: false,
    ),
  ],
  volumes: [
       nfsVolume(mountPath: '/tmp', serverAddress: '192.168.177.43', serverPath: '/data/nfs', readOnly: false),
       hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
       nfsVolume(mountPath: '/root/.m2', serverAddress: '192.168.177.43', serverPath: '/data/nfs/.m2', readOnly: false),
  ],
){
  node('pre-CICD') {
    stage('build') {
        container('jnlp'){
            stage('clone code'){
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                script {
                    imageTag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                }
                echo "${imageTag}"
            }
            stage('Build a Maven project') {
                echo "${project_name}"
                sh "cd ${project_name} && mvn clean install -DskipTests -Pproduct -U"
            }
       }

        container('sonar'){
            stage('sonar test'){
                withSonarQubeEnv(credentialsId: 'sonarqube') {
                    sh "sonar-scanner -X "+
                    "-Dsonar.login=admin " +
                    "-Dsonar.language=java " + 
                    "-Dsonar.projectKey=${JOB_NAME} " + 
                    "-Dsonar.projectName=${JOB_NAME} " + 
                    "-Dsonar.projectVersion=${BUILD_NUMBER} " + 
                    "-Dsonar.sources=${WORKSPACE}/fw-base-nop " + 
                    "-Dsonar.sourceEncoding=UTF-8 " + 
                    "-Dsonar.java.binaries=${WORKSPACE}/fw-base-nop/target/classes " + 
                    "-Dsonar.password=admin " 
                }
            }
       }
       withDockerRegistry(credentialsId: 'auth_harbor', url: 'http://192.168.176.155') {
            stage('build and push docker image') {
                sh "cp /tmp/Dockerfile ${project_name}/target/"
                def customImage = docker.build("${registry_url}/library/${project_name}:${imageTag}-${BUILD_NUMBER}","--build-arg jar_name=${project_name}.jar ${project_name}/target/")
                echo "推送镜像"
                customImage.push()
            }
            stage('delete image') {
                echo "删除本地镜像"
                sh "docker rmi -f ${registry_url}/library/${project_name}:${imageTag}-${BUILD_NUMBER}"
            }
        }
    }
  }  
}

That’s it for configuring Jenkins continuous delivery integrated with Kubernetes. The next section explains how to continuously deploy the code to the Kubernetes cluster.