13 Practice of Dynamically Generating Jenkins Slave Using Docker Pipeline Plugin #

As mentioned earlier, the main purpose of using the Docker Pipeline plugin is to dynamically generate Jenkins slave nodes as the execution environment for the pipeline. In this section, we will deepen our understanding through a practical case study based on the syntax examples from the previous section.

Building the Image #

First, regardless of whether you use a single image or multiple images as the agent for the pipeline stages, and regardless of which syntax you choose, the first thing you need to do is build the image(s) that will execute the pipeline scripts.

From the earlier practice chapters, we can see that the tools used in this series mainly include JDK, Git, Maven, Docker, and Ansible. Therefore, we need to customize an image that contains one or more of these tools. The Dockerfile is as follows:

FROM alpine:latest

RUN apk update && apk add openjdk8 git maven

COPY settings.xml /usr/share/java/maven-3/conf/settings.xml
COPY docker/docker /usr/bin/docker

Where:

  • The settings.xml file is the Maven configuration file, which includes the address of and authentication information for the Maven private repository manager. Decide whether to include it based on your own situation. Other ways of customizing this file will be explained in later chapters.

  • docker/docker: the Docker binary. Note that if you build the Docker executable into the image, or mount it into the container at startup, you must use the statically linked binary from the official tar package rather than the one installed by yum. A yum-installed binary only works if the host OS and the OS inside the image are the same; otherwise the binary cannot be executed on the other platform, and the docker command will be reported as not found.

  • docker/docker refers to the docker directory extracted from the Docker tar package downloaded from the official website, as shown below.
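For reference, the static binary package can be fetched and unpacked roughly as follows (the version number is only an example; pick one that suits your environment):

$ wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
$ tar -zxf docker-19.03.9.tgz

The archive unpacks into a docker/ directory, and docker/docker inside it is the client binary copied by the Dockerfile above.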

Build the image using the build command:

$ docker build -t alpine-cicd:latest .

After building the image, test it with the following pipeline script:

pipeline {
    agent { 
        docker{
            image 'alpine-cicd:latest'
            args '-v /var/run/docker.sock:/var/run/docker.sock'
        }
    }

    stages {
        stage('Test'){
            steps {
                sh """
                docker version
                mvn --version
                git --version
                """
            }
        }
    }
}

If the pipeline executes successfully, it means that the image has been built successfully.

Now that we have built a unified image, you can use it even in scenarios that require multiple agents. Alternatively, you can build separate images for specific tools; this is not demonstrated here, but you can try it out yourself if you are interested.

Declarative Syntax #

After building the base image, we can now look at how to integrate the pipeline with the Docker plugin. In the previous chapters, we walked through a practical example of using a VM as a Jenkins slave node. Using a Docker container as a Jenkins slave node has many similarities with using a VM, such as using shared storage to save build artifacts and authenticating to private repositories within the pipeline. If you have studied the previous chapters, this chapter will be relatively easy.

Agent Proxy #

First, let’s review the basic syntax for using Docker containers as agent proxies.

Using a global agent:

pipeline {
    agent { docker 'maven:3-alpine' } 
    stages {
        stage('build') {
            steps {
                sh 'mvn clean install'
            }
        }
    }
}

Using a stage level agent:

pipeline {
    agent none 
    stages {
        stage('Build') {
            agent { docker 'maven:3-alpine' } 
            steps {
                echo 'Hello, Maven'
                sh 'mvn --version'
            }
        }
        stage('Test') {
            agent { docker 'openjdk:8-jre' } 
            steps {
                echo 'Hello, JDK'
                sh 'java -version'
            }
        }
    }
}

Having covered scripted syntax and the Docker Pipeline plugin syntax, you can also combine them to use multiple agents, launching different containers on different agent nodes:

pipeline {
    agent none 
    stages {
        stage('Build') {
            agent { node { label 'slave1' } } 

            steps {
                script{
                    docker.image('maven').inside(){
                        sh 'hostname'
                    }
                }
            }
        }
        stage('Test') {
           agent { node { label 'slave2' } } 
            steps {
                script{
                    docker.image('nginx').inside(){
                        sh 'hostname'
                    }
                }
            }
        }
    }
}

While this approach may not be common, it is still worth mentioning. Let's now go through the global-agent and multi-agent cases in turn.

Using a global agent

To use a Docker container as the agent, simply modify the configuration of the agent directive. The configuration is fairly simple, as shown in the following example:

pipeline {
    agent { 
        docker{
            image 'alpine-cicd:latest'
            args '-v /var/run/docker.sock:/var/run/docker.sock -v /data/:/data/ -v /root/.docker/config.json:/root/.docker/config.json -v /root/.m2:/root/.m2'
        }
    }

    environment {
        project_name = 'fw-base-nop'
        jar_name = 'fw-base-nop.jar'
        registry_url = '192.168.176.155'
        project_group = 'base'
    }

    stages {
        stage('Pull code and package'){
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                echo "Start packaging "
                sh "cd $project_name && mvn clean install -DskipTests -Denv=beta"
            }
        }

        stage('Build image'){
            steps {
                    script{
                        jar_file=sh(returnStdout: true, script: "find ${WORKSPACE} ./ -name $jar_name |head -1").trim()
                    }
                    sh """
                    cp $jar_file /data/$project_group/$project_name/
                    """
            }
        }
        stage('Upload image'){
            steps {
                script {
                    docker.withRegistry('http://192.168.176.155', 'auth_harbor') {
                        def customImage=docker.build("${registry_url}/${project_group}/${project_name}:${env.BUILD_ID}","/data/${project_group}/${project_name}/")
                        customImage.push()
                    }
                }
            }
        }
    }
    post {
        always {
            script{
                sh 'docker rmi -f $registry_url/$project_group/$project_name:${BUILD_ID}'

                if (currentBuild.currentResult == "FAILURE" || currentBuild.currentResult == "UNSTABLE" ){
                    emailext (
                        subject: "'${env.JOB_NAME} [${env.BUILD_NUMBER}]' Build Result",
                        body: """
                        Details:<br>
                        Jenkins build ${currentBuild.result} <br>
                        Project Name: ${env.JOB_NAME} <br>
                        Build ID: ${env.BUILD_NUMBER} <br>
                        URL: ${env.BUILD_URL} <br>
                        Build Log: ${env.BUILD_URL}console
                        """,
                        to: "[email protected]",
                        recipientProviders: [[$class: 'CulpritsRecipientProvider'],
                                             [$class: 'DevelopersRecipientProvider'],
                                             [$class: 'RequesterRecipientProvider']]
                    )
                }else{
                    echo "Build succeeded"
                }
            }
        }
    }
}

Instructions:

  • Compared with using a VM as the agent, the core pipeline code remains unchanged; the only modification is the agent configuration.

  • The “image” parameter specifies the name of the image built in the previous section.

  • The “args” section defines the parameters for launching the image:

    • -v /var/run/docker.sock:/var/run/docker.sock mounts the host machine's Docker service sock file into the container. If this file is not mounted, docker commands in the container will fail with an error saying the Docker daemon is not running, and the image cannot be built.

    • -v /data/:/data/ mounts the shared storage data directory into the container. This directory contains the project group folder, the project folder, and the Dockerfile (adjust the directory structure to your actual situation). The compiled artifact is placed in the designated project folder, and the project folder path is used as the context path when building the image.

    • -v /root/.docker/config.json:/root/.docker/config.json mounts the file in which the host machine's Docker process stores its private repository authentication into the corresponding path inside the container. Note that the default user inside the container during the image build is root.

During pipeline execution, Jenkins selects a node from the available nodes and starts the container there. Although the image is built and pushed to the private repository from inside the container, most of the supporting configuration comes from the host machine through these mounts.

Using Multiple Agents

Using multiple agents is not significantly different from using a global agent. The only difference is that the top-level agent is set to none, and a different image is specified for each stage with its own agent directive.

Here is an example:

pipeline {
agent none

environment {
    project_name = 'fw-base-nop'
    jar_name = 'fw-base-nop.jar'
    registry_url = '192.168.176.155'
    project_group = 'base'
}

stages {
    stage('Clone and Package Code'){
        agent { 
            docker {
                image 'alpine-cicd:latest'
                args '-v /root/.m2:/root/.m2'
            } 
        }
        steps {
            checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
            echo "Start packaging"
            sh "cd $project_name && mvn clean install -DskipTests -Denv=beta"
        }
    }

    stage('Build Image'){
        agent { 
            docker {
                image 'alpine-cicd:latest'
                args ' -v /var/run/docker.sock:/var/run/docker.sock -v /data/:/data/'
            } 

        }
        steps {
            script{
                jar_file=sh(returnStdout: true, script: "find ${WORKSPACE} ./ -name $jar_name |head -1").trim()
            }
            sh """
            cp $jar_file /data/$project_group/$project_name/
            """
        }
    }
    stage('Upload Image'){
        agent { 
            docker {
                image 'alpine-cicd:latest'
                args ' -v /data/:/data/ -v /var/run/docker.sock:/var/run/docker.sock'
            } 
        }
        steps {
            script {
                docker.withRegistry('http://192.168.176.155', 'auth_harbor') {
                    def customImage=docker.build("${registry_url}/${project_group}/${project_name}:${env.BUILD_ID}","/data/${project_group}/${project_name}/")
                    customImage.push()
                }
            }
        }
    }
}
post {
    always {
        node (null){
            script{
                sh 'docker rmi -f $registry_url/$project_group/$project_name:${BUILD_ID}'

                if (currentBuild.currentResult == "ABORTED" || currentBuild.currentResult == "FAILURE" || currentBuild.currentResult == "UNSTABLE" ){
                    emailext (
                        subject: "'${env.JOB_NAME} [${env.BUILD_NUMBER}]' Build Result",
                        body: """
                        Details:<br>
                        Jenkins build ${currentBuild.result} <br>
                        Project name: ${env.JOB_NAME} <br>
                        Project build id: ${env.BUILD_NUMBER} <br>
                        URL: ${env.BUILD_URL} <br>
                        Build log: ${env.BUILD_URL}console
                        """,
                        to: "[[email protected]](/cdn-cgi/l/email-protection)",  
                        recipientProviders: [[$class: 'CulpritsRecipientProvider'],
                                             [$class: 'DevelopersRecipientProvider'],
                                             [$class: 'RequesterRecipientProvider']]
                    )
                }else{
                    echo "Build succeeded"
                }

            }
        }

    }

}
}

Instructions:

  • There is one difference compared with using a global agent: in the post section, a node must be specified explicitly. The agent keyword cannot be used here, but you can specify a node with node('slave-label'); the label can also be null, in which case Jenkins automatically selects a node for the post operations. If you want to deploy the image at this point, you can run a deployment script directly on the host machine.

  • Also note that the code compilation and image building stages must mount the same data directory so that they work with the same build artifacts.

Private repository authentication #

In the previous section, private repository authentication was introduced using withCredentials() and withRegistry(). When using a Docker container as the agent in declarative syntax, you can also add parameters for private repository authentication directly in the agent directive, for example:

docker {
        image '192.168.176.155/library/maven:v2'
        args '-v $HOME/.m2:/root/.m2  -v /data/fw-base-nop:/data  -v /var/run/docker.sock:/var/run/docker.sock'
        registryUrl 'http://192.168.176.155'
        registryCredentialsId 'auth_harbor'
}

By adding the registryUrl and registryCredentialsId parameters to the container startup configuration, authentication against the private repository is performed after the container starts. This applies not only when pulling the working image but also when pushing the application image to the private repository after it is built.

The complete configuration is as follows:

pipeline{
    agent { 
        docker {
            image '192.168.176.155/library/jenkins-slave'
            args '-v $HOME/.m2:/root/.m2  -v /data/fw-base-nop:/data -v /var/run/docker.sock:/var/run/docker.sock'
            registryUrl 'http://192.168.176.155'
            registryCredentialsId 'auth_harbor'
        }
    }
    stages {
        stage('clone') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                script {
                  imageTag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                }
            }
        }

        stage("mvn") {
            steps {
                sh """
                    cd fw-base-nop && mvn clean install -DskipTests -Denv=beta
                    cp ${WORKSPACE}/fw-base-nop/target/fw-base-nop.jar /data/
                    hostname && pwd
                """
            }
        }
        stage("build and push to registry") {
            environment { 
                dockerfile = '/data/Dockerfile'
            }
            steps {
                script {
                    docker.withRegistry('http://192.168.176.155', 'auth_harbor') {
                        def customImage = docker.build("192.168.176.155/library/fw-base-nop:${imageTag}", "-f ${env.dockerfile} /data/.")
                        customImage.push()
                    }
                }
            }
        }
    }
}

That’s all for using the Docker plugin in declarative syntax. Now let’s focus on using the Docker pipeline plugin in scripted syntax.

Scripted Syntax #

Now let's use scripted syntax to dynamically generate slave (worker) nodes. Before we begin, let's review the syntax for starting a container with the Docker Pipeline plugin, as shown below:

node {
     docker.image('maven:3.3.3-jdk-8').inside {
        git '…your-sources…'
        sh 'mvn -B clean install'
     }
}

Specify an image with the image() method and start a container from it with the inside() method.
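The inside() method also accepts the same startup arguments as docker run, which is how the later examples mount volumes into the container. A minimal sketch (the cache path here is only illustrative):

node {
    docker.image('maven:3.3.3-jdk-8').inside('-v $HOME/.m2:/root/.m2') {
        sh 'mvn -B clean install'
    }
}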

Next, we’ll cover the use of this plugin in multiple steps:

  • Version 1: Compile the code inside the container, build the image on the host machine, and push it to the private repository.
  • Version 2: Compile and build within the container and upload to the private repository.

Version 1 #

With scripted syntax, starting containers on a specified slave node is much simpler than with declarative syntax. For example, to start a container on the jenkins-slave1 host:

node('jenkins-slave1') {
    docker.image('192.168.176.155/library/maven:v2').inside('-v $HOME/.m2:/root/.m2')  { 
        stage('checkout'){
            checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
        }
        stage('Compile'){
            sh 'cd fw-base-nop && mvn clean install -DskipTests -Denv=beta'
            sh 'hostname && pwd'    
        }
    }
}

Explanation:

  • By specifying the label of the slave node in node(), you can start a container to work on the specified node.
  • This example starts a Maven container on the jenkins-slave1 node using the inside() method. Note that the inside() call must be present even if no arguments are passed to it.

Execution logs are as follows:

From the Jenkins execution log, we can see that when the job runs, the required Maven image is pulled first if it is not available locally, and the container is then started. The command and parameters used to start the container can also be seen in the log output.

Execution results are as follows:

From the execution result, we can see that the container pulls the code and compiles it. Once compilation completes, the container is automatically destroyed, and the compiled artifacts disappear with it. This is where the shared storage introduced in the previous chapter becomes particularly useful: simply save the compiled package to the shared storage. A straightforward way to do this is to mount the shared storage directory on the host and map it to a specified directory inside the container.

Here is an example:

node('jenkins-slave1') {
    docker.image('192.168.176.155/library/maven:v2').inside('-v $HOME/.m2:/root/.m2 -v /data/base/fw-base-nop:/data')  { 
        stage('checkout'){
            checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
        }
        stage('Compile'){
            sh 'cd fw-base-nop && mvn clean install -DskipTests -Denv=beta'
            sh 'hostname && pwd'    
        }
        stage('Copy jar files'){
            sh 'cp ${WORKSPACE}/fw-base-nop/target/*.jar /data/fw-base-nop.jar'
        }
    }
}

Here, the host directory /data/base/fw-base-nop simulates the mounted shared storage. After compilation, copying the build artifacts to the /data directory inside the container is equivalent to copying them to /data/base/fw-base-nop on the host. A Dockerfile is created in that directory in advance so the image can be built from it.
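The Dockerfile placed in that directory is not shown in this section; a minimal sketch, assuming the artifact is an executable jar named fw-base-nop.jar sitting next to the Dockerfile, might look like this:

FROM openjdk:8-jre-alpine
COPY fw-base-nop.jar /app/fw-base-nop.jar
ENTRYPOINT ["java", "-jar", "/app/fw-base-nop.jar"]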

To build and push the image on the host machine, you can refer to the related operations in the previous chapter.

node('jenkins-slave1') {
    docker.image('192.168.176.155/library/maven:v2').inside('-v $HOME/.m2:/root/.m2 -v /data/base/fw-base-nop:/data')  { 
        stage('checkout'){
            checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
        }
        stage('maven'){
            sh 'cd fw-base-nop && mvn clean install -DskipTests -Denv=beta'
            sh 'hostname && pwd'    
        }
        stage('Copy jar files'){
            sh 'cp ${WORKSPACE}/xxx-management/xxx-admin/target/*.jar /data/admin.jar'
        }
    }
    stage('build and push'){
        docker.withRegistry('http://192.168.176.155','auth_harbor') {
            def dockerfile = '/data/base/fw-base-nop/Dockerfile'
            def customImage = docker.build("192.168.176.155/library/admin:v1", "-f ${dockerfile} /data/base/fw-base-nop/.")
            customImage.push()
        }
    }
}

The build results are not shown here; you can extend and test this yourself if you are interested. If your slave node is already authenticated against the private repository, you can omit the withRegistry method here, as shown in the sketch below.
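For example, assuming docker login to 192.168.176.155 has already been performed on the slave node, the last stage could be reduced to the following sketch:

    stage('build and push'){
        def dockerfile = '/data/base/fw-base-nop/Dockerfile'
        def customImage = docker.build("192.168.176.155/library/admin:v1", "-f ${dockerfile} /data/base/fw-base-nop/.")
        customImage.push()
    }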

Version 2 #

In the previous version, the code was compiled inside the container and then the image was built on the host machine. In the following version, the image building process will be moved inside the container.

Directly write the pipeline script:

node ('jenkins-slave1') {
    docker.withRegistry('http://192.168.176.155','auth_harbor') { 
        docker.image('alpine-cicd:latest').inside('-v $HOME/.m2:/root/.m2  -v /data/base/fw-base-nop:/data  -v /var/run/docker.sock:/var/run/docker.sock')    {  
            stage('checkout'){
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                script {
                  imageTag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                }
            }
            stage('package'){
                sh 'cd fw-base-nop && mvn clean install -DskipTests -Denv=beta'

                sh 'hostname && pwd'    
            }
            stage('copy jar file'){
                sh 'cp ${WORKSPACE}/fw-base-nop/target/fw-base-nop.jar /data/'
            }
            stage('build and push'){
                    def dockerfile = '/data/Dockerfile'
                    def customImage = docker.build("192.168.176.155/library/fw-base-nop:${imageTag}", "-f ${dockerfile} /data/.")
                    customImage.push()
            }

            stage('delete image'){
                sh "docker rmi 192.168.176.155/library/fw-base-nop:${imageTag}"
            }

        }
    }
}

Explanation

  • In version 1, the withRegistry method was placed under a specific stage, but it can also be placed at the top of the pipeline script.
  • The script block is used to execute a command to obtain the short ID of the commit of the submitted code, which will be used as the version number of the image.
  • Don’t forget to add the operation to delete the built image at the end of the script. This operation is necessary because the built image will be stored on the host machine, and each image has a different tag. If there are too many builds, the number of images will accumulate and occupy a lot of space. Additionally, this image has already been uploaded to the private registry, so it doesn’t need to be saved on the host machine.
  • One thing to note is the Docker version of the Jenkins slave node. In this example, the Docker version is docker-ce-18.06. For non-CE versions, the following issues may occur:
    • /usr/bin/docker: .: line 2: can't open '/etc/sysconfig/docker'
    • You don't have either docker-client or docker-client-latest installed. Please install either one and retry.

The execution result is as follows:

Using multiple agent proxies

Configuring multiple agents in the script syntax is more flexible. For example:

Example 1: Start multiple containers on the same node to execute the pipeline.

node ('jenkins-slave1') {
    docker.withRegistry('http://192.168.176.155','auth_harbor') {
        docker.image('xxx').inside('-v $HOME/.m2:/root/.m2  -v /data/base/fw-base-nop:/data ')    {  
            stage('checkout'){
                checkout(xxxx)
                script {
                  imageTag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                }
            }
            stage('compile'){
                sh 'xxx'    
            }
            stage('copy jar file'){
                sh 'cp ${WORKSPACE}/fw-base-nop/target/fw-base-nop.jar /data/'
            }
        }

        docker.image('xxx').inside('-v /data/base/fw-base-nop:/data -v /var/run/docker.sock:/var/run/docker.sock') {
           stage('build') {
            def dockerfile = '/data/Dockerfile'
            def customImage = docker.build("192.168.176.155/library/fw-base-nop:${imageTag}", "-f ${dockerfile} /data/.")
            customImage.push()
           }
        }
    }

}

Example 2: Perform different operations on different nodes, such as compiling code on slave1 and building and pushing images to a private registry on slave2 (this is usually achieved by using shared storage to share the working directory).

node {
    stage('working in slave1'){
        node('jenkins-slave1'){
            docker.image('alpine-cicd:latest').inside('-v $HOME/.m2:/root/.m2  -v /data/base/fw-base-nop:/data  -v /var/run/docker.sock:/var/run/docker.sock')    {
                stage('checkout in slave1'){
                    checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                    script {
                      imageTag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                    }
                }

                stage('build in slave1'){
                    sh 'cd fw-base-nop && mvn clean install -DskipTests -Denv=beta'

                    sh 'hostname && pwd'    
                }

                stage('copy jar file in slave1'){
                    sh 'cp ${WORKSPACE}/fw-base-nop/target/fw-base-nop.jar /data/'
                }
            }
        }
    }

    stage('working in slave169'){
        node('jenkins-slave169'){
            stage('build and push in slave169'){
                docker.withRegistry('http://192.168.176.155','auth_harbor') {
                    def dockerfile = '/data/base/fw-base-nop/Dockerfile'
                    def customImage = docker.build("192.168.176.155/library/fw-base-nop:${imageTag}", "-f ${dockerfile} /data/base/fw-base-nop/.")
                    customImage.push()
                }
            }
        }
    }
}

In this example, code compilation and image building are performed on two separate machines. It should be noted that the shared directories (e.g., the Maven private repository directory, the directory for storing build artifacts, and the directory for Dockerfiles) used by both slave nodes need to be mounted as shared storage to ensure data consistency.

The build results are not included here since they are similar in both examples.

Pipeline script from SCM #

In addition to writing pipeline scripts directly in Jenkins projects using the pipeline script method described above, you can also store the pipeline scripts in your source code repository using the Pipeline script from SCM method.

To do this, write the pipeline script in a file called Jenkinsfile, commit it to your source code repository, and then configure the Jenkins project to use Pipeline script from SCM, specifying the repository URL and the credentials, and save the configuration. When using this method, it is recommended to write the script in declarative syntax.
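As a rough sketch, a Jenkinsfile committed to the repository root could reuse the image built earlier; the stage content below is only a placeholder:

pipeline {
    agent {
        docker {
            image 'alpine-cicd:latest'
            args '-v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    stages {
        stage('build') {
            steps {
                sh 'mvn clean install -DskipTests'
            }
        }
    }
}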

Exception Handling #

Exception handling was covered in the previous chapter and in the pipeline practice chapter, and the same approach can be reused directly here.

First, let’s review the syntax for using the try/catch/finally statements in the scripted syntax:

node ('jenkins-slave1') {        
    try {
        stage('exec command'){
            sh 'hostname && pwd'
            currentBuild.result = 'SUCCESS'
        }
    } catch (e) {
        currentBuild.result = 'FAILURE'
        def errorMsg = "================== Error Message START ===================\n"
        errorMsg += "${e.toString()}|${e.message}\n"
        errorMsg += "==================== Error Message END ======================"
        echo "Error message is ${errorMsg}"
    } finally {
        if (currentBuild.result == 'SUCCESS') {
            echo "---currentBuild.result is:${currentBuild.result}---"
        }
        else {
            echo "---currentBuild.result is:${currentBuild.result}---"
        }
    }
}

Explanation

  • The try block contains the tasks to be executed and must be followed by a catch or finally block. Neither catch nor finally is mandatory on its own, but when a try block is present, at least one of them must follow it.
  • In this example, different operations are performed based on the result of the job.

Based on the syntax above, you can incorporate the continuous delivery code from above into this template, as shown below:

node ('jenkins-slave1') {

    docker.withRegistry('http://192.168.176.155','auth_harbor') {
        docker.image('192.168.176.155/library/jenkins-slave').inside('-v $HOME/.m2:/root/.m2  -v /data/fw-base-nop:/data  -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker')    {  
            stage('checkout'){
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false,  userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                script {
                  imageTag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                }
            }

            try {
                stage('maven'){
                    sh 'cd fw-base-nop && mvn clean install -DskipTests -Denv=beta'
                    sh 'hostname && pwd'
                    currentBuild.result = 'SUCCESS'
                }
            } catch (e) {
                currentBuild.result = 'FAILURE'
                def errorMsg = "================= Error Message START ======================\n"
                errorMsg += "${e.toString()}|${e.message}\n"
                errorMsg += "=================== Error Message END ======================"
                echo "${errorMsg}"
                emailext (
                    subject: "'${env.JOB_NAME} [${env.BUILD_NUMBER}]' Build Exception",
                    body: """
                    Details:\n
                    Failure: Job ${env.JOB_NAME} [${env.BUILD_NUMBER}]\n
                    Status: ${env.JOB_NAME} jenkins build exception\n
                    URL: ${env.BUILD_URL}\n
                    Project Name: ${env.JOB_NAME}\n
                    Build ID: ${env.BUILD_NUMBER}\n
                    Error Message: ${errorMsg}\n
                    
                    """,
                    to: "[[email protected]](/cdn-cgi/l/email-protection)",  
                    recipientProviders: [[$class: 'DevelopersRecipientProvider']]
                )
            }
            if (currentBuild.result == 'SUCCESS') {
                echo "---currentBuild.result is:${currentBuild.result}------"
                stage('Copy Jar'){
                    sh 'cp ${WORKSPACE}/fw-base-nop/target/fw-base-nop.jar /data/'
                }
                stage('Build and Push'){
                    def dockerfile = '/data/Dockerfile'
                    def customImage = docker.build("192.168.176.155/library/fw-base-nop:${imageTag}", "-f ${dockerfile} /data/.")
                    customImage.push()
                }
            } else {
                echo "---currentBuild.result is:${currentBuild.result}---"
            }
        }
    }
}

Explanation:

In the above code snippet, the emailext method is used to send an email notification in the specified step. Different email templates can be set for different steps to quickly locate any exceptions.

currentBuild is a global variable available by default in Jenkins pipelines and exposes a number of properties and methods. You can find them at ${YOUR_JENKINS_URL}/pipeline-syntax/globals#currentBuild.
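For example, a few commonly used properties can be printed like this (a simple illustration; consult the globals page on your own Jenkins instance for the full list):

node {
    echo "build number:   ${currentBuild.number}"
    echo "current result: ${currentBuild.currentResult}"
    echo "display name:   ${currentBuild.displayName}"
}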

With the try statement, you can catch the execution result of each stage in the pipeline and configure it flexibly. The above example is just for demonstration purposes.

This concludes the content on continuous delivery using the pipeline and Docker plugins. The next section will explain the configuration of integrating Jenkins with Kubernetes.