10 Implementing Automation Engine with Jenkins Pipeline Scripted Syntax #

In the previous chapters, we introduced the declarative pipeline syntax. In this section, we will introduce the scripted syntax by comparing it with the declarative one. Although both syntaxes are built on the same underlying pipeline subsystem, the scripted syntax is often more convenient when working with certain commonly used plugins. The scripted syntax retains only the stage directive and cannot use the stages and steps keywords from the declarative syntax. Let’s take a look at how some of the keywords covered in the declarative pipeline are implemented in the scripted syntax.

node #

In the declarative pipeline, the agent keyword is used to specify the agent on which the Jenkins pipeline is executed. In the script pipeline, we use the node keyword to specify the agent node.

Here is a basic example:

node('jenkins-slave1') {
    stage('test1') {
        sh 'hostname'
    }
}

In this example, jenkins-slave1 is the name or label of the Jenkins agent node. If the parentheses () are left empty, Jenkins will select any available slave node to run the job.

In addition to providing a global agent node for all stages, you can also provide different agent nodes for different stages.

Here is an example:

node() {
    stage('test-node'){
        node('jenkins-slave1'){
            stage('test1'){
                sh 'hostname'
            }   
        }
    }
    stage('test-node2'){
        node('jenkins-slave169'){
            stage('test2'){
                sh 'hostname'
            }
        }
    }
}

This example uses a different slave node for each stage.

You can also use containers as agents with the node keyword, like this:

node() {
    docker.image('maven').inside {
        stage('te') {
            sh "hostname"
        }
    }
    docker.image('nginx').inside {
        stage('te1') {
            sh "hostname"
        }
    }
}

In this configuration, different stage steps use different images to start containers as the execution environment for the scripts. docker.image().inside() is a method in the Docker Pipeline plugin used to specify an image and start a container. It will be explained in future chapters. For now, just understand that it can be used in this way.

Based on the example above, can we start containers on different nodes to perform different tasks? The answer is yes. Here’s an example:

node() {
    stage('test1'){
        node('slave1'){
            docker.image('maven').inside {
                stage('te') {
                    sh "hostname"
                }
            }
        }
    }
    stage('test2'){
        node('slave2'){
            docker.image('nginx').inside {
                stage('te1') {
                    sh "hostname"
                }
            }
        }
    }
}

This example starts containers as the execution environment for the pipeline on two different slave nodes.

If we can achieve this using the script syntax, can we also do it in the declarative syntax? The answer is yes, and it will be explained in future chapters.

tool #

In the declarative pipeline, the tools keyword exposes the tools configured in the Jenkins system through environment variables. In the scripted pipeline, we use def together with the tool step to reference these tools. For example:

node{
    def maven = tool name: 'maven-3.5.4'
    env.PATH = "${maven}/bin:${env.PATH}"
    stage('test'){
        sh 'mvn --version'
    }
}

This syntax fragment can also be generated using the Snippet Generator.

withEnv #

In the declarative pipeline, we define the environment variables used in the pipeline using environment. In the script pipeline, we use withEnv to define global or local environment variables.

For example, to define the Maven environment variable:

node() {
    withEnv(["PATH+MAVEN=${tool 'maven-3.5.4'}/bin"]) {
        sh 'mvn --version'
    }
}

This can also be written as:

def maven = 'maven-3.5.4'
withEnv(["PATH+MAVEN=${tool maven}/bin"]) {
    sh 'mvn --version'
}

You can also directly specify the actual value:

withEnv(['MYTOOL_HOME=/usr/local/']) {
    sh '$MYTOOL_HOME/bin/start'
}

parallel #

In the declarative pipeline, we can use parallel to execute multiple stages in parallel. parallel can be used not only with declarative syntax but also with script syntax.

The following example can be executed using script syntax:

node {
    stage('Test') {
        parallel slave1: {
            node('jenkins-slave1') {
                try {
                    sh 'hostname'
                }
                finally {
                    echo "test"
                }
            }
        },
        slave169: {
            node('jenkins-slave169') {
                sh 'hostname'
            }
        }
    }
}

This script executes shell commands on two nodes simultaneously.
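
The scripted parallel step also accepts a failFast option, which aborts the remaining branches as soon as one of them fails. A minimal sketch (the node labels are the same illustrative ones used above):

```groovy
node {
    stage('Test') {
        parallel(
            slave1: {
                node('jenkins-slave1') {
                    sh 'hostname'
                }
            },
            slave169: {
                node('jenkins-slave169') {
                    sh 'hostname'
                }
            },
            // abort the other branches as soon as one branch fails
            failFast: true
        )
    }
}
```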

properties #

Declarative pipelines define parameters with the parameters keyword, while the scripted syntax uses the properties step to define job properties and parameters. The syntax can be generated using a Snippet Generator (refer to the example below).

Example:

properties([
    buildDiscarder(
        logRotator(
            artifactDaysToKeepStr: '',
            artifactNumToKeepStr: '',
            daysToKeepStr: '2',
            numToKeepStr: '4'
        )
    ),
    parameters([
        string(
            defaultValue: 'v1',
            description: 'image tag',
            name: 'tag',
            trim: true
        )
    ])
])

This syntax can be placed either inside or outside of a node{} block.
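
Once defined, the parameter can be read through params in later builds. A minimal sketch, assuming the tag parameter defined above:

```groovy
node {
    stage('use-param') {
        // params holds the values entered when the build is triggered;
        // on the very first run the default value 'v1' is used
        echo "building image with tag: ${params.tag}"
    }
}
```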

Exception Handling #

When using Jenkins, it is inevitable to encounter job failures. In declarative syntax, you can handle failures using the post keyword, while in script syntax, you need to use a try/catch/finally block to handle failures. When a step fails, for any reason, it throws an exception.

Example:

node {
    stage('Example') {
        try {
            sh 'exit 1'
            currentBuild.result = 'SUCCESS'
        }
        catch (exc) {
            currentBuild.result = 'FAILURE'
            throw exc
        }
        finally {
            if (currentBuild.currentResult in ['ABORTED', 'FAILURE', 'UNSTABLE']) {
                echo "Build did not succeed, result is: ${currentBuild.currentResult}"
            }
            else {
                echo "Build succeeded, result is: ${currentBuild.currentResult}"
            }
        }
    }
}

Note that try statements must be followed by catch or finally statements, and both catch and finally statements can be used together or separately.
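
For example, finally can be used on its own when only cleanup needs to run regardless of the outcome (the shell command is illustrative):

```groovy
node {
    stage('cleanup-demo') {
        try {
            sh 'make test'   // hypothetical test command
        } finally {
            // runs whether the step above succeeds or fails
            echo 'cleaning up the workspace'
        }
    }
}
```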

Jenkinsfile #

After learning the two syntaxes of pipelines, you should have a basic understanding of how to write script-based and declarative pipeline scripts. In actual work, most Jenkinsfile scripts are written using declarative syntax (although script syntax is still possible). The choice of syntax depends on an individual’s familiarity with the two syntaxes. In some cases, such as when using plugins like the Docker plugin or Kubernetes plugin, script syntax can be easily integrated, while declarative syntax is more complex. Therefore, it is necessary to do some research before using pipelines.

The Jenkinsfile script is usually stored in the source code repository. When configuring a Jenkins pipeline project, it can be pulled, along with any other necessary files, using the Pipeline script from SCM method. Once pulled, the pipeline will be executed automatically.

For example, you can use the Pipeline Script method to define pipeline script usage:

node {
    checkout scm
}

Note:

  • checkout scm should be replaced with the actual command for fetching code from the repository, such as git clone, etc.
  • With this approach, the pipeline script in the Jenkinsfile does not need to be wrapped in a node{} block.
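
As a sketch of the first bullet, the checkout could be written with the git step (the repository URL and branch below are placeholders):

```groovy
node {
    // hypothetical repository; replace with your own URL and branch
    git url: 'https://example.com/demo/app.git', branch: 'master'
    sh 'ls -l'
}
```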

You can also use the Pipeline script from SCM method, as shown below:

(Screenshot: Pipeline script from SCM configuration)

Note that the file name specified in the script path refers to the file in the root directory of the fetched code from the Git repository. Once the file is fetched locally, the pipeline will be automatically executed.

Now that you know how to use a Jenkinsfile, let’s take a look at some instructions and keywords that can be used with both pipeline syntaxes.

Variables #

The pipeline script supports the use of variables. Variables in the pipeline can take on various forms. You can define custom variables, use Jenkins environment variables, or dynamically set the result obtained from a command as a variable. Let’s introduce these variables one by one.

Custom Variables

Jenkins uses the same rules as Groovy for variable assignment. Groovy supports declaring a string using single or double quotes. For example:

def singlyQuoted = 'Hello'
def doublyQuoted = "World"

When referencing variables, you can use single or double quotes; if no special characters are involved, the two are equivalent. If you need to execute a multi-line command, use triple single or double quotes. If special characters need to be interpolated, use double quotes; if a character needs to be escaped, use the backslash \ escape character.

All special characters between single quotes lose their special meaning. Most special characters between double quotes lose their special meaning, except for the following exceptional cases:

  • $ is used to extract the value of a variable
  • \ is used to escape characters

For example:

def username = 'Jenkins'
echo 'Hello Mr. ${username}'
echo "I said, Hello Mr. ${username}"

The result is:

Hello Mr. ${username}
I said, Hello Mr. Jenkins

Example of executing a command:

sh '''
    whoami
    pwd
    ls -ltra
'''

The result is:

+ whoami
root
+ pwd
/var/lib/jenkins/workspace/test-mytest
+ ls -ltra
total 4
drwxr-xr-x 2 root root    6 Mar 16 17:02 .
drwxr-xr-x 8 root root 4096 Mar 16 17:02 ..

Defining many variables with a separate def statement for each quickly becomes verbose. Groovy provides a more compact map-based form. The variables can be written like this:

def cc = [username:'jenkins', version:'v2.19.0']

Referencing the variables is also simple:

echo "$cc.username $cc.version"

If you want to use a variable defined with the def keyword in a declarative pipeline, you need to wrap the step that defines the variable in a script{} block.
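
A minimal declarative sketch of wrapping a def variable in a script block:

```groovy
pipeline {
    agent any
    stages {
        stage('vars') {
            steps {
                script {
                    // def variables must live inside script{} in declarative syntax
                    def cc = [username: 'jenkins', version: 'v2.19.0']
                    echo "$cc.username $cc.version"
                }
            }
        }
    }
}
```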

Environment Variables #

In Jenkins pipeline, you can define environment variables that are used throughout the entire pipeline. You can also use the built-in environment variables provided by Jenkins.

The method for defining environment variables depends on whether you are using declarative or scripted pipeline syntax. The methods for setting environment variables in declarative and scripted syntax have been mentioned earlier, so I won’t go into detail here. Let’s take a look at using the built-in environment variables in Jenkins.

Jenkins pipeline provides environment variables through the global variable env, which can be used anywhere in the Jenkinsfile. A complete list of environment variables accessible in a Jenkins pipeline is documented at ${YOUR_JENKINS_URL}/pipeline-syntax/globals#env.

For example:

  • BUILD_ID - The current build ID; identical to BUILD_NUMBER for builds created in Jenkins 1.597 and later (a timestamp in earlier versions).

  • BUILD_NUMBER - The current build number, for example, “153”.

  • BUILD_TAG - A string jenkins-${JOB_NAME}-${BUILD_NUMBER} that can be used to identify files such as source code or jar files.

  • BUILD_URL - The URL that can be used to navigate to the results of this build (e.g., http://buildserver/jenkins/job/MyJobName/17/).

  • EXECUTOR_NUMBER - A unique identifier for the executor that is running the current build (among all executors on the same machine). This is the number you see in the “Build Executor Status”, starting from 0 instead of 1.

  • JAVA_HOME - If your job is configured with a specific JDK tool, this variable is set to the JAVA_HOME of that JDK. When this variable is set, the PATH will also include the bin subdirectory of JAVA_HOME.

  • JENKINS_URL - The complete URL of the Jenkins server, for example, https://example.com:port/jenkins/. (Note: This is only available if the Jenkins URL is set in the “System Configuration”.)

  • JOB_NAME - The name of the project for this build.

  • NODE_NAME - The name of the node on which this build is running. For the master node, it is “master”.

  • WORKSPACE - The absolute path of the workspace. It is also the path of the job.

For more information on variables, refer to the pipeline-syntax.

Here is an example:

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo "Running ${env.BUILD_ID} on ${env.JENKINS_URL}"
            }
        }
    }
}

Dynamic Variable Setting #

Environment variables can be set at runtime, with their values obtained from shell commands (sh on Linux), Windows batch scripts (bat), or PowerShell scripts (powershell), and then used by the stages below. These steps can return either returnStatus or returnStdout.

Here’s an example of a declarative script using sh (shell), with both returnStatus and returnStdout:

pipeline {
    agent any 
    environment {
        // Using returnStdout
        CC = """
            ${sh(
                returnStdout: true,
                script: 'echo "clang"'
            ).trim()}
        """ 

        // Using returnStatus
        EXIT_STATUS = """
            ${sh(
                returnStatus: true,
                script: 'exit 1'
            )}
        """
    }
    stages {
        stage('Example') {
            environment {
                DEBUG_FLAGS = '-g'
            }
            steps {
                sh 'printenv'
            }
        }
    }
}

Note:

  • When using returnStdout, trailing whitespace (a newline) is appended to the returned string. You can use .trim() to remove it.
  • This directive can generate syntax snippets using the snippet generator.

In the scripted syntax this is much simpler, since no script{} wrapper is needed:

node {
    stage('s'){
        def cc = sh(returnStdout: true, script: 'hostname').trim()
        echo "$cc"
    }
}

Conditional Statements #

Jenkinsfile is executed serially from top to bottom, and during the execution, there are cases where conditional statements are used. In the declarative syntax, you can use the when keyword for some basic checks, but it cannot be used in the script syntax. The if/else statement can be used in both pipeline syntaxes and provides flexibility in the conditional checks.

Here’s an example of using the if/else statement in a pipeline:

node {
    stage('Example') {
        if (env.BRANCH_NAME == 'master') {
            echo 'I only execute on the master branch'
        } else {
            echo 'I execute elsewhere'
        }
    }
}

Or you can use it like this:

script {
    test_result = sh(script: "ls /tmp/uu.txt", returnStatus: true)
    echo "$test_result"
    if (test_result == 0) {
        echo "file exists"
    } else if (test_result == 2) {
        echo "file does not exist"
    } else {
        error("command failed, please check")
    }
}

fileExists #

This keyword is used to check whether the specified file exists in the current workspace. The file path is specified using a relative path based on the current workspace, and the result returned is a boolean value. The syntax for using this keyword can be generated using the snippet generator.

Here’s an example:

script {
    json_file = "${env.WORKSPACE}/testdata/test_json.json"
    if (fileExists(json_file)) {
        echo("json file exists")
    } else {
        error("Cannot find json file here")
    }
}

Note:

  • This example checks if the file specified by the json_file variable exists.
  • The error directive is used to define and return a custom error message.

dir #

The dir() method is used to change the current working directory. Simply specify the directory path to enter in the dir block.

Here’s an example:

stages {
    stage("dir") {
        steps {
            echo env.WORKSPACE
            dir("${env.WORKSPACE}/fw-base-nop") {
                sh "pwd"
            }
        }
    }
}

deleteDir #

The deleteDir() method recursively deletes the current directory (the WORKSPACE by default) and takes no arguments. It is often used in conjunction with the dir step to delete the contents of a specified directory.

Here’s an example:

stage("deleteDir") {
    steps{
        script{
            sh("ls -al ${env.WORKSPACE}")
            deleteDir()  // clean up current work directory
            sh("ls -al ${env.WORKSPACE}")
        }
    }
}

Or delete a specified directory:

node {
    stage('s'){
        dir('/base'){
            deleteDir()
        }
    }
}

script #

The script step is used inside a declarative pipeline’s steps block to embed a section of scripted-pipeline code. For most use cases it is unnecessary; it is mainly needed when a declarative pipeline has to run imperative Groovy code such as loops or conditionals.

Here is an example:

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'

                script {
                    def browsers = ['chrome', 'firefox']
                    for (int i = 0; i < browsers.size(); ++i) {
                        echo "Testing the ${browsers[i]} browser"
                    }
                }
            }
        }
    }
}

Note that a variable declared with def inside a script block is local to that block; to make a value available in subsequent stages, assign it without def so that it is stored in the script binding.
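
A variable assigned without the def keyword goes into the script binding and remains visible in later stages. A sketch (the variable name is illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('set') {
            steps {
                script {
                    // no 'def': the value is stored in the script binding
                    imageTag = sh(returnStdout: true, script: 'date +%Y%m%d').trim()
                }
            }
        }
        stage('use') {
            steps {
                echo "image tag is ${imageTag}"  // still visible here
            }
        }
    }
}
```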

stash/unstash #

stash is used to save files for other stages or steps in the same build to use. If the entire pipeline is executed on the same machine, stash is unnecessary. stash is generally used when files need to be passed between Jenkins nodes. The stash step stores files in a tar file, and stash operations for large files will consume computing resources from the Jenkins master. According to the Jenkins documentation, it is recommended to use alternative solutions when the file size is between 5 and 100MB.

The stash step has the following parameters:

  • name: A string that serves as a unique identifier for the collection of files to be saved.
  • allowEmpty: A boolean that allows an empty stash.
  • excludes: A string that excludes files. Use a comma to separate multiple files.
  • includes: A string that includes files to stash. Leave it blank to stash all files.
  • useDefaultExcludes: A boolean that, when set to true, uses Ant-style path default excluded files

Except for the name parameter, all other parameters are optional. excludes and includes use Ant-style path expressions.
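
For example, Ant-style patterns make it easy to select artifacts by path (the file names are illustrative):

```groovy
stash name: 'jars',
      includes: 'target/**/*.jar',
      excludes: 'target/**/*-sources.jar,target/**/*-javadoc.jar'
```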

The counterpart of stash is unstash. This step is used to retrieve files that were previously stashed. The unstash step only has one name parameter, which is the unique identifier used when stashing the files. Normally, the stash and unstash steps are used together. Here is an example:

pipeline {
    agent none
    stages {
        stage('stash') {
            agent { label "master" }
            steps {
                dir('target') {
                    stash(name: "abc", includes: "xx.jar")
                }
            }
        }
        stage("unstash") {
            agent { label "jenkins-slave1" }
            steps {
                script {
                    unstash("abc")
                    sh "cp xx.jar /data"
                    // ...
                }
            }
        }
    }
}

The unstash step retrieves files based on the unique identifier specified in the stash step. After the pipeline has finished executing, the files will be deleted.

archiveArtifacts #

The archiveArtifacts instruction in a Jenkins pipeline is used to save build artifacts. Unlike stashing, archiveArtifacts stores the files on the Jenkins master, and they are not removed after the pipeline has finished executing. The saved files are placed in the JENKINS_HOME/jobs/JOB_NAME/builds/BUILD_NO directory.

Here is an example:

pipeline {
    agent any
    stages {
        stage('Archive') {
            steps {
                archiveArtifacts artifacts: '**/target/*.jar', onlyIfSuccessful: true 
            }
        }
    }
}

The archiveArtifacts instruction has multiple parameters. The artifacts parameter is required, and the onlyIfSuccessful parameter is used to indicate whether the files should be saved only if the build is successful. Refer to the official documentation for other parameters.
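
As a sketch of two of those optional parameters, allowEmptyArchive and fingerprint:

```groovy
archiveArtifacts artifacts: 'logs/**/*.log',
                 allowEmptyArchive: true,  // do not fail the build if nothing matches
                 fingerprint: true         // record checksums to track artifact usage
```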

Snippet Generator #

The Jenkins built-in Snippet Generator tool is used to dynamically generate step syntax for plugins or keywords. The Snippet Generator interface can be accessed through the pipeline syntax link at the bottom of the pipeline project or directly through ${YOUR_JENKINS_URL}/pipeline-syntax.

Jenkins provides two types of Snippet Generators: one for generating snippets based on declarative pipeline syntax (Declarative Directive Generator menu), and another for generating snippets based on Jenkins built-in plugins and script syntax (Snippet Generator menu). Let’s go through both types:

The Declarative Directive Generator is used to generate syntax snippets when you do not know the syntax for a specific keyword. Click on the Declarative Directive Generator menu in the link mentioned above, and you will see a dropdown displaying the available directives. Once you select a directive and configure it, click on Generate Declarative Directive to generate the corresponding syntax snippet.

For plugins and script syntax using declarative and script steps, you can directly click on the Snippet Generator menu. Clicking on Sample Step will show a list of available plugins. Selecting a plugin, as shown in the screenshot below, will allow you to configure the required fields. Once configured, click on Generate Pipeline Script to generate the code snippet.

You can then copy and paste the generated snippet into your pipeline script.

This concludes the introduction to Jenkins script syntax. In the next section, we will practice what we have learned in this and the previous section through practical examples to enhance our understanding.