11 Pipeline Syntax for Continuous Delivery and Basic Practices #

In the previous two sections, we introduced the basic syntax of declarative and scripted pipelines. In this section, we will deepen that understanding through some basic practices. Since everyone's actual situation and level of understanding may differ, the author will start from the most basic level and, by continuously optimizing the code, try to exercise all the syntax introduced in the previous pipeline chapters through examples.

Continuous Delivery and Basic Practices using Declarative Syntax #

Agent #

Some companies may not yet meet the technical requirements for using containers as pipeline agents, but that does not prevent them from using pipeline scripts to compile code and build images on virtual machines. So in this section, the agents used are all virtual machines. Using containers as agents will be covered in later sections.

There is nothing special to note when configuring an agent. Simply specify the label or name of the node to use with the agent {} directive.

Let’s start this section with a basic example, like executing a shell command on the master host:

pipeline{
    agent { node { label 'master'} }
    stages{
        stage('test'){
            steps{
                sh "hostname"
            }
        }
    }
}

Note that when writing declarative scripts, the name argument in stage() is required. It can be any descriptive string, but if it is omitted, an error will occur.

The rest of the pipelines in this section run on the jenkins-slave1 node, so the agent is defined as follows:

agent { node { label 'jenkins-slave1' } }

Using the agent directive is relatively simple.
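For reference, the agent directive accepts several equivalent forms. Below is a minimal sketch of the common variants ('jenkins-slave1' is the label used throughout this section):

agent any                                    // run on any available node
agent { label 'jenkins-slave1' }             // shorthand label form
agent { node { label 'jenkins-slave1' } }    // long form, which also allows options such as customWorkspace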

Code Fetch and Compilation #

Next, create a Jenkins job of the pipeline type and write the following pipeline script:

pipeline {
    agent { node { label 'jenkins-slave1' } }
    stages {    
        stage('Code Fetch and Compilation'){
            steps {
                sh "git clone http://root:[[email protected]](/cdn-cgi/l/email-protection)/root/base-nop.git"
                echo "Compilation in progress"
                sh 'source /etc/profile && cd fw-base-nop && mvn clean install -DskipTests -Denv=beta'    
            }
        }
    }
}

This most basic version fetches and compiles the code with only two commands, and it uses a plaintext username and password for the clone, which obviously doesn't meet real-world requirements.

With the snippet generator introduced in the "Pipeline Syntax" section, the plaintext clone can be replaced with a checkout syntax snippet, as shown below:

checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])

Similarly, when compiling the code with the mvn command, you may hit "command not found" errors for Maven or the JDK. Apart from updating the environment variables in the shell command, you can use the tools directive to put these tools on the PATH.

For example:

tools {
      maven 'maven-3.5.4'
      jdk 'jdk-1.8'
}

For code compilation, if you don't want to use the cd command to enter a specific directory, the pipeline provides the ws directive to specify the working directory. So the compilation step above can also be written as follows:

ws('directory'){
    sh 'mvn clean install'
}

So the initial script can be modified as follows:

pipeline {
    agent { node { label 'jenkins-slave1' } }
    tools {
      maven 'maven-3.5.4'
      jdk 'jdk-1.8'
    }
    stages {    
        stage('Code Fetch and Compilation'){
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])

                echo "Start packaging"

                ws("$WORKSPACE/fw-base-nop"){
                    sh 'mvn clean install -DskipTests -Denv=beta' 
                }    
            }
        }
    }
}

Note:

  • The base path for the cd command is the current workspace (${WORKSPACE}) path.
  • The directory used in the ws directive needs to be an absolute path.
  • If you want to check which host or which path a stage is running on within the pipeline, use the hostname or pwd command, as shown below.
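A minimal sketch of such a check, which can be dropped into any pipeline in this section as an extra stage:

stage('debug'){
    steps {
        sh 'hostname'                         // which agent the stage runs on
        sh 'pwd'                              // current directory (the workspace by default)
        echo "WORKSPACE is ${env.WORKSPACE}"  // the workspace path Jenkins allocated
    }
}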

Using Sonar #

Since this is continuous delivery, code quality analysis is definitely necessary. The example above only compiles the code; a normal process also requires quality analysis, which is where the previously set up SonarQube platform comes into play. The chapter "Jenkins Plugin Installation" discussed how to use the code quality analysis tool SonarQube, so I won't repeat it here. Next, let's look at how to use SonarQube and sonar-scanner in a pipeline script.

In this chapter, we use virtual-machine-based nodes, so the sonar-scanner usage introduced here relies on the sonar-scanner command tool configured in the Jenkins UI.

In the previous chapters, we introduced the commands and parameters of sonar-scanner. When using it in a pipeline, we only need to generate the corresponding syntax snippet with the snippet generator. After selecting the credentials and generating the snippet, paste the sonar-scanner command and parameters from the previous chapters into the withSonarQubeEnv(){} block, as shown below:

stage('sonar'){
    steps{
        script{
            def sonarqubeScannerHome = tool name: 'sonar-scanner-4.2.0'
            withSonarQubeEnv(credentialsId: 'sonarqube') {
                sh "${sonarqubeScannerHome}/bin/sonar-scanner -X " +
                   "-Dsonar.login=admin " +
                   "-Dsonar.language=java " +
                   "-Dsonar.projectKey=${JOB_NAME} " +
                   "-Dsonar.projectName=${JOB_NAME} " +
                   "-Dsonar.projectVersion=${BUILD_NUMBER} " +
                   "-Dsonar.sources=${WORKSPACE}/fw-base-nop " +
                   "-Dsonar.sourceEncoding=UTF-8 " +
                   "-Dsonar.java.binaries=${WORKSPACE}/fw-base-nop/target/classes " +
                   "-Dsonar.password=admin "
            }
        }
    }
}

This configures SonarQube and sonar-scanner in the pipeline. The sonar-scanner command will run automatically after the code compilation step completes.
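As an optional extension (not used in the rest of this example), the SonarQube Scanner plugin also provides a waitForQualityGate step that fails the pipeline when the analysis fails the quality gate; it requires a webhook configured in SonarQube pointing back to Jenkins. A minimal sketch:

stage('Quality Gate'){
    steps {
        timeout(time: 10, unit: 'MINUTES') {
            // aborts the pipeline if SonarQube reports a failed quality gate
            waitForQualityGate abortPipeline: true
        }
    }
}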

Building and Pushing Docker Images #

Basic Version #

The previous example completed the code compilation using the pipeline script. Now let’s take a look at the steps to build the code into an image and push it to a private repository. First, let’s take a look at the basic code.

stage('Build Image'){
    steps {
        sh "cp $WORKSPACE/fw-base-nop/target/fw-base-nop.jar /data/fw-base-nop/"
        sh "cd /data/fw-base-nop && docker build -t fw-base-nop:${BUILD_ID} ."
    }
}

stage('Push Image'){
    steps {
        sh "docker tag fw-base-nop:${BUILD_ID} 192.168.176.155/library/fw-base-nop:${BUILD_ID}"
        sh "docker login 192.168.176.155 -u admin -p da88e43d88722c2c9ca09da644eeb015"
        sh "docker push 192.168.176.155/library/fw-base-nop:${BUILD_ID}"
    }
}

Although this basic code achieves our goal, the purely command-based operations leave room for improvement. Some steps can still be optimized. For example:

Building Image Stage:

  1. The build artifact (jar file) is copied with the cp command to a specified directory on the agent node. If we want to reuse this pipeline as a template for other projects, we have to keep modifying the jar file path and name, so this step needs to be simplified.

  2. The image can be built directly with docker build -t fw-base-nop:${BUILD_ID} /data/fw-base-nop, passing the build context on the command line instead of cd-ing into it first.

  3. The Dockerfile needs to be placed in the specified directory in advance (in this example, /data/fw-base-nop/). You can refer to the Dockerfile in Chapter 4.

Push Image Stage:

  1. Before pushing the image to the private repository, we have to tag the image built in the previous step. Instead, we can build the image directly with the desired registry-prefixed name, eliminating the separate docker tag step.

  2. Authentication against the private repository is also needed before pushing. These simple steps currently require several commands, as shown in the example.

For the problems listed above, some steps can be optimized with plugins or directives. For example, use the find command to locate the built artifact under the current path, and build the image directly with a name in the format registry address/repository name:tag. The code can be modified as follows:

stage('Build Image'){
    steps {
        script{
            jar_file=sh(returnStdout: true, script: "find ${WORKSPACE} ./ -name fw-base-nop.jar |head -1").trim()
        }
        sh """
            cp $jar_file /data/fw-base-nop/
            docker build -t 192.168.176.155/library/fw-base-nop:${BUILD_ID} /data/fw-base-nop/.
        """
    }
}

Use the withCredentials method to authenticate against the private repository:

stage('Push Image'){
    steps {
        withCredentials([usernamePassword(credentialsId: 'auth_harbor', passwordVariable: 'dockerHubPassword', usernameVariable: 'dockerHubUser')]) {
            sh """
                docker login -u ${env.dockerHubUser} -p ${env.dockerHubPassword} 192.168.176.155
                docker push 192.168.176.155/library/fw-base-nop:${BUILD_ID}
            """
        }
    }
}

Since images are built on the virtual machine host, a large number of job runs would leave many images accumulating on the host. To avoid this, the last step deletes the local image. For example:

stage('Delete Local Image'){
    steps{
        sh "docker rmi -f  192.168.176.155/library/fw-base-nop:${BUILD_ID}"
    }
}

With the above changes, the complete pipeline script is as follows:

pipeline {
    agent { node { label 'jenkins-slave1'}}
    tools {
        maven 'maven-3.5.4'
        jdk 'jdk-1.8'
    }
    stages {
        stage('Clone and Compile'){
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                echo "Start compilation"
                ws("${WORKSPACE}/fw-base-nop"){
                    sh 'mvn clean install -DskipTests -Denv=beta' 
                }    
            }
        }
        stage('Build Image'){
            steps {
                script{
                    jar_file=sh(returnStdout: true, script: "find ${WORKSPACE} ./ -name fw-base-nop.jar |head -1").trim()
                }
                sh """
                cp $jar_file /data/fw-base-nop/
                docker build -t 192.168.176.155/library/fw-base-nop:${BUILD_ID} /data/fw-base-nop/.
                """
            }
        }

        stage('Push Image'){
            steps {
                withCredentials([usernamePassword(credentialsId: 'auth_harbor', passwordVariable: 'dockerHubPassword', usernameVariable: 'dockerHubUser')]) {
                    sh "docker login -u ${env.dockerHubUser} -p ${env.dockerHubPassword} 192.168.176.155"
                    sh 'docker push 192.168.176.155/library/fw-base-nop:${BUILD_ID}'
                }
            }
        }

        stage('Delete Local Image'){
            steps{
                sh "docker rmi -f  192.168.176.155/library/fw-base-nop:${BUILD_ID}"
            }
        }
    }
}

The execution results show the clone-and-compile, image build, push, and delete stages completing in sequence (screenshots of the console output omitted).

This completes the basic version of the pipeline script.

Advanced Version #

Although the script is complete, reviewing the syntax introduced in the previous pipeline sections reveals a few more issues; some steps can be optimized further and their code reduced.

For example:

  1. In the image build step, the directory holding the Dockerfile and the jar package can be put on shared storage mounted at the same path on every agent node. If the Dockerfile or the jar copy path changes, it then only needs to be modified once; and since any slave node can access the shared storage, the job is no longer tied to a fixed agent node.

  2. For the "Upload Image" step, besides using the withCredentials method, you can authenticate to the private repository once, directly on the agent node. The login operation can then be skipped when uploading the image.

  3. The deletion in the "Delete Local Image" step can be moved into the "Upload Image" step.

Below are the optimizations made for the mentioned issues.

  1. Shared storage and pre-authentication to the private repository are not reflected in the code. Here, I'm using NFS to simulate shared storage, and the directory layout remains the same.

  2. Authentication is handled by running the docker login command once on the agent node. This way, the upload-image and delete-image stages can be combined into one. Additionally, to tidy up the stages, the image build step is also moved into the upload-image stage.

The modified script is as follows:

stage('Upload and Delete Local Image') {
    steps {
      sh """
      docker build -t 192.168.176.155/library/fw-base-nop:${BUILD_ID} /data/fw-base-nop/.
      docker push 192.168.176.155/library/fw-base-nop:${BUILD_ID}
      docker rmi -f 192.168.176.155/library/fw-base-nop:${BUILD_ID}
      """
    }
}

For cases with a large number of projects, where you want to manage them in groups and reuse the same pipeline script as a template, set the common configuration items as variables. Each time a new project is added, only the variable values need to change.

For example:

  • The project name, the name of the artifact built in the project, and the project group can be set as variables.
  • The address of the private repository and the project group can also be configured using variables.
  • The version of the image can also be configured using variables.

According to the optimization ideas listed above, the following is the modified script:

pipeline {
    agent { node { label 'jenkins-slave1' }}
    tools {
      maven 'maven-3.5.4'
      jdk 'jdk-1.8'
    }
    environment {
        project_name = 'fw-base-nop'
        jar_name = 'fw-base-nop.jar'
        registry_url = '192.168.176.155'
        project_group = 'base'
    }
    stages {    
        stage('Checkout and Build') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                echo "Start packaging"
                ws("${WORKSPACE}/fw-base-nop") {
                    sh 'mvn clean install -DskipTests -Denv=beta' 
                } 
            }
        }
        stage('Build Image') {
            steps {
                script {
                    jar_file = sh(returnStdout: true, script: "find ${WORKSPACE} ./ -name $jar_name |head -1").trim()
                }
                sh """
                cp $jar_file /data/$project_group/$project_name/
                docker build -t $registry_url/$project_group/$project_name:${BUILD_ID} /data/$project_group/$project_name/.
                """
            }
        }
        stage('Upload Image') {
            steps {
                sh """
                docker push $registry_url/$project_group/$project_name:${BUILD_ID}
                docker rmi -f $registry_url/$project_group/$project_name:${BUILD_ID}
                """
            }
        }
    }
}

The configuration is simple: just learn to use the environment directive.
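One detail worth knowing: variables declared with environment are also exported into the shell environment of sh steps, so both Groovy interpolation and shell expansion work. A minimal sketch:

environment {
    project_name = 'fw-base-nop'
}

// inside a stage:
steps {
    sh "echo ${project_name}"   // interpolated by Groovy before the shell runs
    sh 'echo $project_name'     // expanded by the shell itself
}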

In addition to the environment directive, you can also pass parameters at build time to replace some of the variables. This requires the parameters directive we learned before.

For example, add a string type parameter and a choice type parameter for the jar package name and project group, respectively.

The code below shows how to add parameters:

parameters {
  string defaultValue: 'fw-base-nop.jar', description: 'The name of the jar package, must end with .jar', name: 'jar_name', trim: false
  choice choices: ['base', 'open', 'tms'], description: 'Project group to which the service belongs', name: 'project_group'
}

Since these values are now passed as external parameters, the two corresponding variables defined with the environment directive are no longer necessary, as sketched below.
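A sketch of the trimmed environment block; build parameters are exposed both as params.* in Groovy and as environment variables of the same name:

environment {
    // jar_name and project_group now come from the parameters block above
    project_name = 'fw-base-nop'
    registry_url = '192.168.176.155'
}

// referencing a parameter explicitly:
echo "packaging ${params.jar_name} for group ${params.project_group}"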

This way, when executing the pipeline job, it becomes a parameterized build type job, as shown in the following figure:

Jenkins parameterized build

Just enter the desired parameter values during the build. Personally, I find using parameters a bit cumbersome, and it may not suit every scenario, so weigh the parameters directive against your actual situation.

Final Version #

This version makes no significant changes to the advanced example; only the Docker-related operations are now executed through the Docker Pipeline plugin. The Docker Pipeline plugin has not been introduced yet, so this example only gives a simple demonstration.

The code changes for the stages are as follows:

stages {    
    stage('Code Pull and Compile'){
        steps {
            checkout([$class: 'GitSCM', branches: [[name: "*/master"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])

            echo "Compiling"

            ws("${WORKSPACE}/fw-base-nop"){
                sh ' mvn clean install -DskipTests -Denv=beta' 
            } 

            script{
                jar_file=sh(returnStdout: true, script: "find ${WORKSPACE} ./ -name ${jar_name} |head -1").trim()
            }

            sh """
            cp $jar_file /data/${project_group}/$project_name/
            """
        }
    }

    stage('Build and Upload Image'){
        steps {
            script {
                def customImage = docker.build("$registry_url/${project_group}/$project_name:${BUILD_ID}", "/data/${project_group}/$project_name/.")
                customImage.push()
            }

            sh """
            docker rmi -f $registry_url/${project_group}/$project_name:${BUILD_ID}
            """
        }
    }
}

In this way, the build and push methods of the Docker Pipeline plugin easily implement image building and pushing. As the script above shows, plugin methods are wrapped in a script{} block, because most Jenkins plugin methods use scripted syntax; in declarative syntax, the script block declares the part that uses scripted syntax.

The usage syntax of the Docker Pipeline plugin will be explained in detail in future chapters, but for now, a simple demonstration is provided.

Usage of Multiple Agents #

Some agent nodes have a clear division of labor: for example, one node can only compile code, while another can only build Docker images. Or a single project may contain code in different languages that needs different build tools. In such cases, multiple agents can be used, performing different steps of the build on different nodes.

Using the above code as an example, the master node is used for code pulling and compilation, and the jenkins-slave1 node is used for image building and pushing. The code is as follows:

pipeline {
    agent none
    tools {
        maven 'maven-3.5.4'
        jdk 'jdk-1.8'
    }
    environment {
        project_name = 'fw-base-nop'
        jar_name = 'fw-base-nop.jar'
        registry_url = '192.168.176.155'
        project_group = 'base'
    }

    stages {
    stage('Code Pull and Package'){
        agent { node { label 'master'} }

        steps {
            checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
            echo "Start packaging"
            ws("${WORKSPACE}/fw-base-nop"){
                sh ' mvn clean install -DskipTests -Denv=beta' 
            }

            script{
                jar_file=sh(returnStdout: true, script: "find ${WORKSPACE} ./ -name $jar_name |head -1").trim()
            }
            sh "scp $jar_file root@jenkins-slave1:/data/$project_group/$project_name/"
        }
    }

    stage('Upload Image'){
        agent { node { label 'jenkins-slave1'}}
        steps {
            script {
                def customImage = docker.build("$registry_url/${project_group}/$project_name:${BUILD_ID}", "/data/${project_group}/$project_name/.")
                customImage.push()
            }

            sh """
            docker rmi -f $registry_url/${project_group}/$project_name:${BUILD_ID}
            """
        }
    }
    }
}

Note:

  1. When using multiple agent nodes for building, the top-level agent must be set to none.
  2. Since different agent nodes are used, the build artifact has to be copied from the master host to the jenkins-slave1 host (here with scp), which is more complex than using a single agent. With shared storage, or the built-in stash/unstash steps (see the sketch after this list), this is not a problem.
  3. Code compilation and image building use different tools for different types of operations, so only two stages are needed to complete all operations.
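A sketch of the stash/unstash alternative to scp: stash archives files from the workspace on one node, and unstash restores them into the workspace on another (the includes path assumes the jar location used in this section):

stage('Code Pull and Package'){
    agent { node { label 'master' } }
    steps {
        // ... checkout and mvn compile as above ...
        stash name: 'app-jar', includes: 'fw-base-nop/target/fw-base-nop.jar'
    }
}
stage('Upload Image'){
    agent { node { label 'jenkins-slave1' } }
    steps {
        unstash 'app-jar'   // restores the jar into this node's workspace
        // ... docker build and push as above ...
    }
}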

Exception Handling #

When executing tasks with a pipeline script, exceptions during execution are inevitable. After an exception occurs, the pipeline terminates and exits. To capture the result or status of the pipeline execution in real time, we need to add exception handling code to the script.

Previously, we introduced that in declarative syntax, the post directive performs follow-up operations based on the task's execution status. Let's look at how post applies to this example.

post {
    always {
        echo "xxx"
    }
    success {
        emailext (
            subject: "'${env.JOB_NAME} [${env.BUILD_NUMBER}]' Update Successful",
            body: """
            Details:<br>
            Jenkins build ${currentBuild.result} <br>
            Project name: ${env.JOB_NAME} <br>
            Project build ID: ${env.BUILD_NUMBER} <br>
            URL: ${env.BUILD_URL} <br>
            Build log: ${env.BUILD_URL}console
            """,
            to: "[[email protected]](/cdn-cgi/l/email-protection)",
            recipientProviders: [[$class: 'CulpritsRecipientProvider'],
                                 [$class: 'DevelopersRecipientProvider'],
                                 [$class: 'RequesterRecipientProvider']]
        )
    }
    failure {
        echo "Build log: ${env.BUILD_URL}console"
    }
    unstable {
        echo "Build log: ${env.BUILD_URL}console"
    }
    changed {
        echo "changed"
    }
}

Place the “post” directive under the global “stages” step, and this “post” directive sends email notifications to administrators for multiple status results of the pipeline script execution using the “emailext” plugin. The email content can be defined based on the actual situation and can refer to the example above.

If handling each status result separately feels cumbersome, you can group the handling by job execution status. For example, to divide the statuses into just success and failure, modify the script as follows:

post {
    always {
        script {
            sh 'docker rmi -f $registry_url/$project_group/$project_name:${BUILD_ID}'
            if (currentBuild.currentResult == "ABORTED" || currentBuild.currentResult == "FAILURE" || currentBuild.currentResult == "UNSTABLE"){
                emailext (
                    subject: "'${env.JOB_NAME} [${env.BUILD_NUMBER}]' Build Result",
                    body: """
                    Details:\n<br>
                    Jenkins build ${currentBuild.currentResult} '\n'<br>
                    Project name: ${env.JOB_NAME} "\n"
                    Project build ID: ${env.BUILD_NUMBER} "\n"
                    URL: ${env.BUILD_URL} \n
                    Build log: ${env.BUILD_URL}console
                    """,
                    to: "[[email protected]](/cdn-cgi/l/email-protection)",
                    recipientProviders: [[$class: 'CulpritsRecipientProvider'],
                                         [$class: 'DevelopersRecipientProvider'],
                                         [$class: 'RequesterRecipientProvider']]
                )
            } else {
                echo "Build succeeded"
            }
        }
    }
}

Use conditional statements to judge the execution result and handle each case separately. For more information about the currentBuild variable, refer to the "Global Variables" reference page; a few commonly used properties are sketched below.
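A minimal sketch of some commonly used currentBuild properties:

script {
    echo "build number:  ${currentBuild.number}"
    echo "result so far: ${currentBuild.currentResult}"   // SUCCESS, UNSTABLE or FAILURE
    echo "duration:      ${currentBuild.durationString}"
}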

Parallel Execution #

In actual work, you may need to execute multiple jobs in parallel. For example, multiple application services in the same project group may depend on one or more base services; to build the application services, the base services may need to be built first. This can be achieved with parallel builds.

For example, say we have the base services service1 and service2 and the application services app1, app2, and app3. The development team may submit code changes to the base services (service1 and/or service2) as well as to the application services that need to be built. Manually building service1 and service2 first and then manually building the three application services is cumbersome.

To solve this kind of problem, we can use the parallel keyword in the pipeline. Here is the code example:

pipeline {
    agent any
    parameters {
        extendedChoice description: 'Build basic services', descriptionPropertyValue: 'base_service1,base_service2', multiSelectDelimiter: ',', name: 'base_service', quoteValue: false, saveJSONParameterToFile: false, type: 'PT_CHECKBOX', value: 'service1,service2', visibleItemCount: 2
        extendedChoice description: 'Build application services', descriptionPropertyValue: 'app_service1,app_service2,app_service3', multiSelectDelimiter: ',', name: 'app_service', quoteValue: false, saveJSONParameterToFile: false, type: 'PT_CHECKBOX', value: 'app1,app2,app3', visibleItemCount: 3
    }

    stages {
        stage('deploy') {
            parallel {
                stage('deploy service1') {
                    when {
                        expression {
                            return params.base_service =~ /service1/
                        }
                    }
                    steps {
                        sh "echo build job base_service1"
                        // build job: 'service1'
                    }
                }
                stage('deploy service2') {
                    when {
                        expression {
                            return params.base_service =~ /service2/
                        }
                    }
                    steps {
                        sh "echo build job base_service2"
                        // build job: 'service2'
                    }
                }
            }
        }

        stage('build job') {
            parallel {
                stage('Build application service 1') {
                    when {
                        expression {
                            return params.app_service =~ /app1/
                        }
                    }

                    steps {
                        sh "echo build job app1"
                        // build job: 'app1'
                    }
                }
                stage('Build application service 2') {
                    when {
                        expression {
                            return params.app_service =~ /app2/
                        }
                    }

                    steps {
                        sh "echo build job app2"
                        // build job: 'app2'
                    }
                }
                stage('Build application service 3') {
                    when {
                        expression {
                            return params.app_service =~ /app3/
                        }
                    }

                    steps {
                        sh "echo build job app3"
                        // build job: 'app3'
                    }
                } 
            }
        }
    }
}

Explanation:

  1. In this example, the command that builds a job is replaced with a shell echo. In actual work, you would use the build step to trigger the jobs; the commented lines show it, and a fuller sketch follows this list.

  2. In addition to using the parameters directive to specify parameters, you can also customize parameters by checking the ‘This project is parameterized’ option in the ‘General’ step. This is explained in previous chapters and will not be repeated here.
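A sketch of the build step with parameters (the downstream job name 'service1' and the 'branch' parameter are hypothetical):

// trigger a downstream job, wait for it to finish, and fail this build if it fails
build job: 'service1',
      wait: true,
      propagate: true,
      parameters: [string(name: 'branch', value: 'master')]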

This concludes the practical application of using declarative syntax for continuous delivery and deployment.

Practical Application of Scripted Syntax for Continuous Delivery and Deployment #

Having worked through the declarative syntax, the process of code compilation and image building needs no repetition. This section mirrors the declarative practice, using a virtual machine as the agent node to execute the pipeline script. Repetitive content will be only briefly mentioned; if you have any questions, feel free to contact the author.

Node Proxy #

Declarative syntax selects an agent with the agent directive, while scripted syntax uses the node keyword.

Following the process and examples from the declarative syntax, the basic example in scripted syntax is as follows:

node('jenkins-slave1'){
    stage('Code Compilation'){
        checkout([$class: 'GitSCM',
                  branches: [[name: '*/master']],
                  doGenerateSubmoduleConfigurations: false,
                  extensions: [],
                  submoduleCfg: [],
                  userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772',
                                      url: 'http://192.168.176.154/root/base-nop.git']]])
        echo "Starting Compilation"
        ws("$WORKSPACE/fw-base-nop"){
            sh 'mvn clean install -DskipTests -Denv=beta' 
        } 
    }

    stage('Image Building'){
        jar_file = sh(returnStdout: true, script: "find ${WORKSPACE} ./ -name fw-base-nop.jar |head -1").trim()
        sh """
        cp $jar_file /data/base/fw-base-nop/
        docker build -t 192.168.176.155/library/fw-base-nop:${BUILD_ID} /data/base/fw-base-nop/.
        """
    }
}

Code Compilation #

In the basic example above, we implemented a simple pull-compile-build flow in scripted syntax. Based on the declarative examples, we will now implement parts of the code with scripted-style commands. Possible issues and related optimizations were discussed earlier and won't be repeated; just focus on the changes in the script.

Definition of Tools #

For example, to define environment variables for the tools used during compilation, scripted syntax can use the tool step and withEnv to set up the JDK and Maven environment variables, replacing the source /etc/profile command. The code is shown below:

    def jdk = tool name: 'jdk-1.8'
    env.PATH = "${jdk}/bin:${env.PATH}"
    stage('Code Compilation'){
        checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
        echo "开始打包 "
        withEnv(["PATH+MAVEN=${tool 'maven-3.5.4'}/bin"]) {
            ws("$WORKSPACE/fw-base-nop"){
                sh ' mvn clean install -DskipTests -Denv=beta' 
            } 
        }

    }

In this script, the def keyword and the env.PATH assignment define the JDK environment variable, and the withEnv step defines Maven's.

The above method of defining variables can also be written as follows:

node {
    def jdk=tool name: 'jdk-1.8'
    def mvn=tool name:'maven-3.5.4'
    env.PATH = "${jdk}/bin:${mvn}/bin:${env.PATH}"

    stage('Code Compilation'){
        ......
        echo "Start packaging"
        ws("$WORKSPACE/fw-base-nop"){
            sh ' mvn clean install -DskipTests -Denv=beta' 
        } 
    }
}

Building and Pushing Image #

In the image build and push stages, you can use the Docker Pipeline plugin to perform both operations.

The code is as follows:

stage('Image Build') {
    jar_file = sh(returnStdout: true, script: "find ${WORKSPACE} ./ -name fw-base-nop.jar |head -1").trim()
    sh """
    cp $jar_file /data/base/fw-base-nop/
    """
}

stage('Push Image to Repository') {
    def customImage = docker.build("192.168.176.155/base/fw-base-nop:${env.BUILD_ID}",'/data/base/fw-base-nop/')
    customImage.push()
}

Explanation:

  1. If the name of the Dockerfile is not "Dockerfile" or "dockerfile", you need to specify it with the -f parameter (see the sketch after this list).
  2. Pushing assumes the agent node has already authenticated with the private repository. If it has not, add the authentication method; refer to the "Private Repository Service Authentication" section below.
  3. Since the Docker Pipeline plugin has no method to delete images, you still need a shell command to remove the locally built image; the same command as in the declarative examples works, so it is not demonstrated here.
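A sketch of passing -f through docker.build: everything in the second argument before the build context is forwarded to the docker build command line (the file name Dockerfile.beta is hypothetical):

// scripted syntax; extra flags plus the build context go in the second argument
def customImage = docker.build(
    "192.168.176.155/base/fw-base-nop:${env.BUILD_ID}",
    "-f /data/base/fw-base-nop/Dockerfile.beta /data/base/fw-base-nop/"
)
customImage.push()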

Private Repository Service Authentication #

Private repository authentication is easier in scripted syntax than in declarative syntax: the withRegistry method of the Docker Pipeline plugin solves it effortlessly. Below is the code:

stage("Build and Push Image"){
    docker.withRegistry('http://192.168.176.155', 'auth_harbor') {
        def customImage = docker.build("${vars.registry_url}/${vars.project_group}/${vars.project_name}:${env.BUILD_ID}", "/data/${vars.project_group}/${vars.project_name}/")
        customImage.push()
    }
}

To use the same build and push methods of the Docker Pipeline plugin in declarative syntax, you again need to wrap the code in a script{} block. Here is an example:

stage('Build and Upload Image'){
    steps {
        script {
            docker.withRegistry('http://192.168.176.155', 'auth_harbor') {
                def customImage = docker.build("${registry_url}/${project_group}/${project_name}:${env.BUILD_ID}", "/data/${project_group}/${project_name}/")
                customImage.push()
            }
        }
    }
}

Using Variables #

In declarative syntax, variables are defined with the environment directive; in scripted syntax they can be defined with the def keyword, as shown below:

def vars=[project_name:'fw-base-nop',jar_name:'fw-base-nop.jar',registry_url:'192.168.176.155',project_group:'base']

To use the variables, reference the values using ${vars.project_name}. In this example, the configuration is as follows:

node('jenkins-slave1'){
    def jdk = tool name: 'jdk-1.8'
    env.PATH = "${jdk}/bin:${env.PATH}"
    def vars=[project_name:'fw-base-nop',jar_name:'fw-base-nop.jar',registry_url:'192.168.176.155',project_group:'base']

    stage('Code Compilation'){

        checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
        echo "开始打包 "
        withEnv(["PATH+MAVEN=${tool 'maven-3.5.4'}/bin"]) {
            ws("$WORKSPACE/fw-base-nop"){
                sh ' mvn clean install -DskipTests -Denv=beta' 
            } 
        }
    }

    stage('Image Build'){
        jar_file = sh(returnStdout: true, script: "find ${WORKSPACE} ./ -name ${vars.jar_name} |head -1").trim()
        sh """
        cp $jar_file /data/${vars.project_group}/${vars.project_name}/
        """
    }

    stage('Push Image to Repository'){
        docker.withRegistry('http://192.168.176.155', 'auth_harbor') {
            def customImage=docker.build("${vars.registry_url}/${vars.project_group}/${vars.project_name}:${env.BUILD_ID}","/data/${vars.project_group}/${vars.project_name}/")
            customImage.push()
        }
    }
}

Using Multiple Agents #

The use of multiple agents was briefly introduced in the syntax section. Let’s first review the basic syntax.

node() {
    stage('test-node'){
        node('jenkins-slave1'){
            stage('test1'){
                sh 'hostname'
            }   
        }
    }
    stage('test-node2'){
        node('jenkins-slave169'){
            stage('test2'){
                sh 'hostname'
            }
        }
    }
}

To implement the core script code, simply copy the code from the declarative syntax example and place it under the stage block. It’s still relatively simple.

Exception Handling #

The previous pipeline section introduced that scripted syntax uses the try/catch/finally keywords to catch and handle exceptions. The try directive is quite flexible: it can wrap all stages or just a specific stage. try{} must be followed by catch or finally, and may include both.

The code is as follows:

node('jenkins-slave1'){
    def jdk = tool name: 'jdk-1.8'
    env.PATH = "${jdk}/bin:${env.PATH}"
    def vars=[project_name:'fw-base-nop',jar_name:'fw-base-nop.jar',registry_url:'192.168.176.155',project_group:'base']

    try{
        stage('Code Compilation'){

            checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])

            echo "Start packaging "
            withEnv(["PATH+MAVEN=${tool 'maven-3.5.4'}/bin"]) {
                sh "cd ${vars.project_name}/ && mvn clean install -DskipTests -Denv=beta"
            }   
        }
        stage('Image Building'){
            jar_file = sh(returnStdout: true, script: "find ${WORKSPACE} ./ -name ${vars.jar_name} |head -1").trim()
            sh """
            cp $jar_file /data/${vars.project_group}/${vars.project_name}/
            """
        }

        stage('Push Image to Repository'){
            def customImage=docker.build("${vars.registry_url}/${vars.project_group}/${vars.project_name}:${env.BUILD_ID}","/data/${vars.project_group}/${vars.project_name}/")
            customImage.push()
        }
    }catch(all){
        currentBuild.result = 'FAILURE'
    }
    if(currentBuild.currentResult == "ABORTED" || currentBuild.currentResult == "FAILURE" || currentBuild.currentResult == "UNSTABLE" ) {
        echo "---Current build result is:${currentBuild.currentResult}"
    }
    else {
        echo "---Current build result is:${currentBuild.currentResult}"
    }
}

For the steps in the if and else branches, you can also use the emailext method to send the build result to the administrator; refer to the declarative syntax for the email configuration. A finally block also fits naturally here, as sketched below.
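A minimal sketch of combining catch with finally, so that cleanup runs regardless of the result (the rmi command assumes the image name variables from the example above):

try {
    // ... stages as above ...
} catch (err) {
    currentBuild.result = 'FAILURE'
    echo "caught: ${err}"
} finally {
    // cleanup that must run whether the build succeeded or failed
    sh "docker rmi -f ${vars.registry_url}/${vars.project_group}/${vars.project_name}:${env.BUILD_ID} || true"
}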

Deleting Workspace #

In some projects, the current workspace must be deleted after the build completes to prevent stale caches. One simple way is a shell command, but since we are using a pipeline, we can use the dir and deleteDir steps instead.

In the example above, add a new stage after the "Push Image to Repository" stage: use the dir step to enter the directory, then deleteDir to delete it. The specific code is as follows:

stage('Clean Workspace'){
    dir("${WORKSPACE}") {
        deleteDir()
    }
}

This way, the current workspace will be deleted when this stage is executed.
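An alternative sketch, assuming the Workspace Cleanup plugin is installed: the cleanWs step does the same in one call.

stage('Clean Workspace'){
    cleanWs()   // provided by the Workspace Cleanup plugin
}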

Code Deployment #

The previous content covered compiling and building the code, but not deployment. With simple modifications, you can reuse the playbook written in the earlier Ansible chapter to deploy your code. If you don't know how yet, don't worry: a dedicated chapter on deployment will follow.

This is the end of the basic practice of using pipeline syntax for continuous delivery and deployment. In the next section, I will introduce commonly used syntax of the Docker Pipeline plugin.