Implementing Automation Engine with Jenkins Pipeline Declarative Syntax #

Introduction #

Jenkins Pipeline is a suite of plugins available in Jenkins 2.x and above, and it is another powerful tool for implementing the Jenkins automation engine. It supports implementing and integrating continuous delivery pipelines into Jenkins. A pipeline definition can be entered directly in the Jenkins classic UI as a plain pipeline script, or it can be written in a text file (a Jenkinsfile) committed to the root directory of the project’s source code repository; the pipeline project is then configured to fetch the Jenkinsfile and execute it automatically.

Pipeline scripts can be written in two syntaxes: declarative and scripted. The two are fundamentally different, but both are built on top of Jenkins’ underlying pipeline subsystem. Declarative pipelines offer the following advantages:

  • Richer syntax features compared to the scripted pipeline syntax.
  • Code that is easier to write and read, and simpler to maintain, than scripted pipeline code.

Scripted pipelines give Jenkins users a great deal of flexibility and extensibility. However, the Groovy learning curve is not suitable for all team members, so the declarative syntax was created to provide a simpler and more structured way to write Jenkinsfiles. Declarative pipelines encourage a declarative programming model, while scripted pipelines follow a more imperative programming model. The main differences between them are syntax and flexibility.

Both syntaxes are based on “pipeline as code,” and writing the pipeline in a Jenkinsfile committed to the source code repository provides several benefits:

  • Pipeline builds are created automatically for all branches and pull requests.
  • The pipeline can be code reviewed and iterated on (along with the rest of the source code).
  • The pipeline has an audit trail.
  • The pipeline has a single source of truth that multiple members of the project can view and edit.

Next, let’s briefly introduce these two pipeline syntax templates.

Scripted Pipeline #

In scripted pipeline syntax, one or more node blocks specify the agent machine that executes the core work of the pipeline. The agent machine is selected by the labels configured in the Jenkins system, although specifying one is not required.

// If no slave machine is specified, the tasks run on the master machine by default
node {

    stage('Build') {
        // build steps go here
    }
    stage('Test') {
        // test steps go here
    }
    stage('Deploy') {
        // deploy steps go here
    }
}

Explanation

  • node is a specific syntax in Scripted Pipeline that instructs Jenkins to execute the pipeline (including any stages within it) on any available agent/node.

  • stage('Build') defines the “Build” stage. The stage block is optional in scripted pipeline syntax, but using stage blocks clearly displays each stage’s subset of tasks in the Jenkins UI when the pipeline runs, so they are recommended.

  • The string inside the parentheses ('Build' above) is a free-form stage name; it does not have to match the official example, so choose names that make the stages easy to tell apart.

Declarative Pipeline #

Here is an example:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'pwd'
            }
        }
        stage('Test') {
            steps {
                sh 'make check'
                junit 'reports/**/*.xml'
            }
        }
        stage('Deploy') {
            steps {
                sh 'deploy'
            }
        }
    }
}

Explanation

  • pipeline {} is the specific syntax of a declarative pipeline; it defines everything needed to execute the entire pipeline.
  • agent is a required keyword in a declarative pipeline; it specifies which executor or agent node runs the pipeline.
  • stage {} is a block of code describing a task to be executed; each stage block contains a steps block.
  • steps {} is declarative-pipeline-specific syntax describing the steps to run within the stage.
  • sh executes shell commands.

From this simple example, you can see that declarative pipelines enforce a stricter, predefined structure, which makes them an ideal choice for simpler continuous delivery pipelines. Scripted pipelines, on the other hand, impose fewer restrictions; the only limits are usually those of the Groovy subset itself rather than of any pipeline-specific system, which makes them ideal for power users and more complex requirements. That said, you will find in later lessons that scripted pipelines used together with certain plugins can be more flexible and convenient than declarative ones.

Basic Concepts #

The pipeline code defines the entire delivery and deployment process. During usage, a pipeline typically includes steps such as testing, building, and delivering/deploying applications. A complete pipeline script consists of a series of pipeline blocks. The main components of these blocks are as follows:

node/agent #

The keywords node and agent are used to specify the Jenkins agent node, which is a part of the Jenkins environment and can be used to execute pipelines. Node is a crucial part of the scripted pipeline syntax, while agent is a crucial part of the declarative pipeline syntax.

stages/stage #

The stages block defines the different subsets of tasks to be executed in the pipeline, such as the “Build”, “Test”, and “Deploy” stages; each subset is typically a code block. A stages block can contain multiple stage blocks, and you will encounter nested stages later on.

steps #

A single task that tells Jenkins what to do at a specific stage. It is usually placed under a stage and consists of a piece of code or a module.

Now that we understand the basic concepts, let’s have a brief introduction to the two types of pipeline syntax mentioned earlier.

Declarative Pipeline Syntax #

All valid declarative pipeline code must be enclosed within a pipeline block, like this:

pipeline {
    ....
}

In declarative pipelines, valid basic statements and expressions follow the same rules as Groovy syntax, with the following exceptions:

  • The top-level of the pipeline must be a block: pipeline {}.
  • Semicolons are not used as statement separators; each statement must be on its own line.

agent #

The agent directive specifies the slave node or other machine on which the pipeline script executes; it is the declarative pipeline directive for assigning agent nodes. The agent section can be defined at the top level of the pipeline block or inside a stage block.

Parameters

To accommodate various possible agent nodes, the agent block supports different types of parameters. These parameters can be applied at the top level of the pipeline block or within the stage directive.

any #

Execute the pipeline or stage on any available agent. For example: agent any.

none #

When agent none is specified at the top of the pipeline block, no global agent is allocated for the pipeline, and each stage section must define its own agent section. Conversely, if a global agent other than none is configured and a stage also defines an agent, the agent defined in the stage takes precedence.

label #

The label keyword can be used to specify a Jenkins node that has been set with a label. For example: agent { label 'jenkins-slave1' }, where jenkins-slave1 is a Jenkins slave node added in the system and labeled as jenkins-slave1.

node #

The effects of agent { node { label 'labelName' } } and agent { label 'labelName' } are the same, but node allows for additional options (such as customWorkspace, which will be discussed later).

docker #

Starts a container from the given image as the environment for the whole pipeline or for a single stage. The container is created dynamically on an available node, or on the node selected with the label keyword, to perform the pipeline work; it can be thought of as a dynamic slave node for Jenkins. For example: agent { docker 'maven:3-alpine' }.

The docker keyword can also add the args parameter, which may include parameters passed directly to docker run when called, such as mounting directories.

agent {
    docker {
        image 'maven:3-alpine'
        label 'my-defined-label'
        args  '-v /tmp:/tmp'
    }
}

dockerfile #

In addition to starting a container directly using a Docker image, you can also build a container using a Dockerfile. Typically, the Dockerfile is placed in the root directory of the source code repository: agent { dockerfile true }.

If the Dockerfile is not in the root directory, you can also build it in another directory by using the dir parameter:

agent { dockerfile { dir 'someSubDir' } }

If the Dockerfile has a different name, you can specify the filename using the filename option.

If you need to pass parameters to the Dockerfile during the build process, you can use the additionalBuildArgs option. For example:

agent { dockerfile { additionalBuildArgs '--build-arg foo=bar' } }

Combining all of the above, the Docker command docker build -f Dockerfile.build --build-arg version=1.0.2 ./build/ can be represented in pipeline code as:

agent {
    dockerfile {
        filename 'Dockerfile.build'
        dir 'build'
        label 'my-defined-label'
        additionalBuildArgs  '--build-arg version=1.0.2'
    }
}

kubernetes #

This keyword is used to dynamically start a pod as a Jenkins slave node to perform pipeline operations within a Kubernetes cluster. After the pipeline execution is complete, the pod is automatically destroyed. An example provided by the official website is as follows:

agent {
  kubernetes {
    label 'podlabel'
    yaml """
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
      - name: aws-secret
        mountPath: /root/.aws/
      - name: docker-registry-config
        mountPath: /kaniko/.docker
  restartPolicy: Never
  volumes:
    - name: aws-secret
      secret:
        secretName: aws-secret
    - name: docker-registry-config
      configMap:
        name: docker-registry-config
"""
  }
}

It may look a bit complicated and messy, but using the kubernetes agent in practice can be much simpler, especially with scripted-style syntax.
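For instance, if a pod template has already been configured in the Kubernetes plugin’s cloud settings, a pipeline only needs to request its label and a pod is provisioned automatically. A minimal sketch, where jnlp-agent is a hypothetical pod template label:

pipeline {
    agent {
        // 'jnlp-agent' is assumed to be the label of a pod template
        // defined in the Kubernetes plugin's cloud configuration
        label 'jnlp-agent'
    }
    stages {
        stage('build') {
            steps {
                sh 'hostname'   // runs inside the dynamically created pod
            }
        }
    }
}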

Some common options for the agent

A few options can be applied to more than one of the agent types described above.

label #

This option applies to the global pipeline or to an individual stage. It is valid for node, docker, and dockerfile, and it is required when using node.

customWorkspace #

The customWorkspace option lets you define a custom workspace to be used by the whole pipeline, or by a specific stage, within the agent. The path can be given relative to the node’s workspace root directory or as an absolute path. For example:

agent {
    node {
        label 'my-defined-label'
        customWorkspace '/some/other/path'
    }
}

This option is useful for node, docker, and dockerfile.

Examples

Building a service on a specified slave node:

pipeline {
    agent none 
    stages {
        stage('Build') {
            agent { node { label 'master' } } 
            steps {
                sh 'hostname'
            }
        }
        stage('Test') {
            agent { node { label 'slave1' } } 
            steps {
                sh 'hostname'
            }
        }
    }
}

Starting a container using a given image (maven:3-alpine) and executing all steps defined in the pipeline:

pipeline {
    agent { docker 'maven:3-alpine' } 
    stages {
        stage('build') {
            steps {
                sh 'mvn clean install'
            }
        }
    }
}

Starting a container based on a given image in a specified stage and executing the steps defined in the pipeline:

pipeline {
    agent none 
    stages {
        stage('Build') {
            agent { docker 'maven:3-alpine' } 
            steps {
                echo 'Hello, Maven'
                sh 'mvn --version'
            }
        }
        stage('Test') {
            agent { docker 'openjdk:8-jre' } 
            steps {
                echo 'Hello, JDK'
                sh 'java -version'
            }
        }
    }
}

Note

Defining agent none at the top level of the pipeline forces each stage to define its own agent section.

stages #

Contains one or more stage directives, and all the work done in the pipeline will be encapsulated within one or more stage directives. It is recommended that the stages section contain at least one stage directive, as shown below:

pipeline {
    agent any
    stages { 
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
}

Note that the stages directive, like the steps directive described below, is specific to declarative syntax; scripted pipelines use only the stage directive.

Steps #

The steps section defines one or more steps (code) within the given stage. It contains a complete list of steps, and within each step, you can define environment variables, execute scripts, and perform other operations. Here’s an example:

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
                script {
                    sh 'hostname'
                }
            }
        }
    }
}

It’s worth noting that in declarative syntax the steps block is required, and a stage may contain only one steps block.

options #

The options directive configures options for the global pipeline or for a specific stage within it. The pipeline provides many such options; buildDiscarder, for example, sets the number of build records to keep for a job and how long to keep them. The directive accepts both built-in options and options provided by plugins, such as podTemplate.

Global Options #

buildDiscarder

Sets how many build records to keep (and show in the UI) and/or for how many days. For example, options { buildDiscarder(logRotator(numToKeepStr: '1')) } keeps only the most recent build.

disableConcurrentBuilds

Prevents concurrent execution of the pipeline. Can be used to prevent simultaneous access to shared resources, etc. For example: options { disableConcurrentBuilds() }.

skipDefaultCheckout

Skips the step of checking out code from the source code repository in the agent directive. For example: options { skipDefaultCheckout() }.

skipStagesAfterUnstable

Skips the remaining stages once the build status becomes unstable. For example: options { skipStagesAfterUnstable() }.

checkoutToSubdirectory

Saves the code to a subdirectory of the workspace. For example: options { checkoutToSubdirectory('foo') }.

timeout

Sets the timeout for the pipeline run. If the timeout is exceeded, Jenkins will abort the pipeline. For example: options { timeout(time: 1, unit: 'HOURS') }.

retry

Specifies the number of times to retry the whole pipeline in case of failure. For example: options { retry(3) }.

timestamps

Displays the current time when each step of the pipeline is executed in the console output. For example: options { timestamps() }.

Example:

pipeline {
    agent any
    options {
        timeout(time: 1, unit: 'HOURS') 
    }
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
}

This example sets a one-hour timeout for the entire pipeline run.

stage #

In addition to global options, the options directive can define stage-level options. Stage-level options may only include options such as retry, timeout, or timestamps, or declarative options relevant to a stage, such as skipDefaultCheckout.

Options defined in a stage-level options directive are invoked before entering the agent and before any when conditions are evaluated.

Optional stage options

skipDefaultCheckout

Skip default checkout of source code from the source code repository in the agent directive. For example: options { skipDefaultCheckout() }.

timeout

Sets the timeout for this stage. If it exceeds the specified time, Jenkins will terminate the stage. For example: options { timeout(time: 1, unit: 'HOURS') }.

retry

Retry the stage execution in case of job failure. For example: options { retry(3) }.

timestamps

Used to display the current time when each step in the pipeline is executed in the console output. For example: options { timestamps() }.

Example

pipeline {
    agent any
    stages {
        stage('Example') {
            options {
                timeout(time: 1, unit: 'HOURS') 
            }
            steps {
                echo 'Hello World'
            }
        }
    }
}

environment #

The environment directive is used to define one or more environment variables that consist of key-value pairs (key=value) to be used by all stages. Whether the environment variables are global or specific to a particular stage depends on the position where the environment directive is defined within the pipeline.

Here is an example:

pipeline {
    agent any
    environment {
        unit_test = 'true'
        version = 'v1'
        JAVA_HOME = '/data/jdk'
    }
    stages {
        stage('Example') {
            steps {
                script {
                    // environment variables are strings, so compare explicitly
                    if (unit_test == 'true') {
                        echo version
                    }
                }
                echo "$JAVA_HOME"
            }
        }
    }
}

This example defines three global environment variables.

The environment directive can also reference credentials previously created in Jenkins using the credentials() method. The supported types for this method are as follows:

Secret Text

The specified environment variable will be set to the secret text content.

Secret File

The specified environment variable will be set to the path of a temporarily created file that holds the secret’s content.

Username and Password

The specified environment variable will be set to username:password, and two additional environment variables will be defined automatically: MYVARNAME_USR and MYVARNAME_PSW.

SSH with Private Key

The specified environment variable will be set to the path of a temporarily created SSH key file, and two additional environment variables may be defined automatically: MYVARNAME_USR and MYVARNAME_PSW.
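Secret text and secret file credentials are referenced in the same way. A minimal sketch, assuming a secret-file credential with the hypothetical ID kubeconfig-file has been created in Jenkins:

environment {
    // KUBECONFIG is set to the path of a temporary file containing
    // the content of the 'kubeconfig-file' credential
    KUBECONFIG = credentials('kubeconfig-file')
}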

Here is an example:

pipeline {
    agent any
    environment {
        CC = 'clang'
    }
    stages {
        stage('test-secret') {
            environment {
                SERVICE_CREDS = credentials('3d0e85b7-30cd-451d-bf9e-e6d87f6c9686')
                SSH_CREDS = credentials('160-ssh')
            }
            steps {
                sh 'printenv'
                sh 'echo "Service user is $SERVICE_CREDS_USR"'
                sh 'echo "Service password is $SERVICE_CREDS_PSW"'
                sh 'echo "SSH user is $SSH_CREDS_USR"'
                sh 'echo "SSH passphrase is $SSH_CREDS_PSW"'
            }
        }
        stage('test-username-password') {
            environment {
                SERVICE_CREDS = credentials('auth_harbor')
                SSH_CREDS = credentials('160-ssh')
            }
            steps {
                sh 'echo "Service user is $SERVICE_CREDS_USR"'
                sh 'echo "Service password is $SERVICE_CREDS_PSW"'
                sh 'echo "SSH user is $SSH_CREDS_USR"'
                sh 'echo "SSH passphrase is $SSH_CREDS_PSW"'
            }
        }
    }
}

parameters #

The parameters directive provides a list of parameters that should be provided when triggering a pipeline. The values for these specified parameters can be provided to the pipeline steps through the params object. The available parameter types are as follows:

string

A parameter of type string, for example: parameters { string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: '') }

text

A parameter of type text, which can contain multiple lines of content, for example: parameters { text(name: 'DEPLOY_TEXT', defaultValue: 'One\nTwo\nThree\n', description: '') }

booleanParam

A parameter of type boolean, for example: parameters { booleanParam(name: 'DEBUG_BUILD', defaultValue: true, description: '') }

choice

A parameter of type choice, for example: parameters { choice(name: 'CHOICES', choices: ['one', 'two', 'three'], description: '') }

password

A parameter of type password, for example: parameters { password(name: 'PASSWORD', defaultValue: 'SECRET', description: 'A secret password') }

Here is an example:

pipeline {
    agent any
    parameters {
        string(name: 'version', defaultValue: 'latest', description: 'the image version')
        text(name: 'BIOGRAPHY', defaultValue: '', description: 'Enter some information about the person')
        booleanParam(name: 'test', defaultValue: true, description: 'Toggle this value')
        choice(name: 'build_number', choices: ['One', 'Two', 'Three'], description: 'the build_number')
        password(name: 'PASSWORD', defaultValue: 'mypassword', description: 'Enter a password')
    }
    stages {
        stage('print') {
            steps {
                echo " ${params.version}"
                echo " ${params.BIOGRAPHY}"
                echo "${params.test}"
                echo "${params.build_number}"
                echo "${params.PASSWORD}"
            }
        }
    }
}

Notes:

name is the name of the defined parameter; the parameter is referenced by this name (via the params object) when it is used.

The parameters syntax snippet can also be generated with the snippet generator: select the parameter type, set its values, and click Generate Declarative Directive.

triggers #

The triggers directive defines the automation methods for triggering a pipeline. For pipelines integrated with source code (where the pipeline script is placed in the source code repository), the triggers directive may not be required. The currently available triggers are cron, pollSCM, and upstream.

cron

Accepts a crontab-format (Linux crontab) string that defines the schedule on which the pipeline is triggered.

Here’s an example:

pipeline {
    agent any
    triggers {
        cron('0 */4 * * 1-5')
    }
    stages {
        stage('test') {
            steps {
                echo 'Hello World'
            }
        }
    }
}

This pipeline will be triggered at minute 0 of every fourth hour, Monday through Friday.

pollSCM

Accepts a cron-style string that defines the schedule on which Jenkins checks the source code repository for updates. If the source code has changed, the pipeline is triggered again.

For example:

pipeline {
    agent any
    triggers {
        pollSCM('5 10 * * *')
    }
    stages {
        stage('test-code') {
            steps {
                echo "Polling the repository for code changes"
            }
        }
    }
}

upstream

Accepts a comma-separated list of job names. The pipeline is re-triggered when any of the listed jobs finishes with the given threshold result or better.

For example:

pipeline {
    agent any
    triggers {
        upstream(upstreamProjects: 'job1,job2', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('test-upstream') {
            steps {
                echo "Triggered when job1 or job2 succeeds"
            }
        }
    }
}

The hudson.model.Result can include the following attributes:

  • ABORTED: The task was manually aborted
  • FAILURE: The build failed
  • SUCCESS: The build succeeded
  • UNSTABLE: There were some errors, but the build didn’t fail
  • NOT_BUILT: In multi-stage builds, a previous stage issue prevented subsequent stages from executing.

tools #

The tools directive defines tools that are installed automatically or already configured in the Jenkins system. This directive is ignored if agent none is specified.

Supported tools include Maven, JDK, Gradle, and so on. An example is shown below:

pipeline {
    agent any
    tools {
        maven 'apache-maven-3.5'
        jdk   'jdk1.8'
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}

Please note that apache-maven-3.5 is the name of the Maven tool defined in Global Tool Configuration.

when #

The when directive lets the pipeline decide, based on the given conditions, whether a stage should be executed. The when directive must contain at least one condition. If it contains multiple conditions, all of them must return true for the stage to execute; this is equivalent to nesting the sub-conditions under an allOf condition.

The when directive includes several built-in conditions. More complex condition structures can be constructed using nested conditions such as not, allOf, or anyOf. The built-in conditions are as follows:

branch #

Execute this stage when the branch being built matches the given branch. For example:

when { branch 'master' }

Note that this only applies to multi-branch pipelines.

pipeline {
    agent any
    stages {
        stage('Example Deploy') {
            when {
                branch 'production'
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}

environment #

Execute this stage when the specified environment variable has the given value. For example:

when { environment name: 'DEPLOY_TO', value: 'production' }

pipeline {
    agent any
    stages {
        stage('Example Deploy') {
            when {
                branch 'production'
                environment name: 'DEPLOY_TO', value: 'production'
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}

In this example, the pipeline will only execute the Example Deploy stage if the branch is production and the environment variable DEPLOY_TO is set to production.

expression #

When the specified Groovy expression evaluates to true, execute this stage. For example:

when { expression { return params.DEBUG_BUILD } }

pipeline {
    agent any
    stages {
        stage('Example Deploy') {
            when {
                expression { BRANCH_NAME ==~ /(production|staging)/ }
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}

not #

Execute this stage when the nested condition is false. It must contain one condition. For example:

when { not { branch 'master' } }
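A full example in the same pattern as the conditions above:

pipeline {
    agent any
    stages {
        stage('Example Deploy') {
            when {
                // run the stage on every branch except master
                not { branch 'master' }
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}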

allOf #

Execute this stage when all nested conditions are met. At least one condition must be included. For example:

when { allOf { branch 'master'; environment name: 'DEPLOY_TO', value: 'production' } }

pipeline {
    agent any
    stages {
        stage('Example Deploy') {
            when {
                allOf {
                    branch 'production'
                    environment name: 'DEPLOY_TO', value: 'production'
                }
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}

anyOf #

Execute this stage when at least one nested condition is true. Must include at least one condition, for example:

when { anyOf { branch 'master'; branch 'staging' } }

pipeline {
    agent any
    stages {
        stage('Example Deploy') {
            when {
                branch 'production'
                anyOf {
                    environment name: 'DEPLOY_TO', value: 'production'
                    environment name: 'DEPLOY_TO', value: 'staging'
                }
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}

equals #

Execute this stage when the expected value is equal to the actual value. For example:

when { equals expected: 2, actual: currentBuild.number }
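For instance, a minimal sketch that runs a stage only on the second build of the job:

pipeline {
    agent any
    stages {
        stage('Example Deploy') {
            when {
                // true only when this is build #2 of this job
                equals expected: 2, actual: currentBuild.number
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}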

tag #

Execute this stage if the TAG_NAME variable matches the given value. For example:

when { tag "release-*" }.

If an empty value is provided, the stage will be executed when the TAG_NAME variable exists (same as buildingTag()).
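A full example in the same pattern, assuming releases are tagged release-*:

pipeline {
    agent any
    stages {
        stage('Release') {
            when {
                // only runs when the build is for a matching tag
                tag "release-*"
            }
            steps {
                echo 'Building a release'
            }
        }
    }
}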

beforeAgent #

By default, if an agent is defined for a stage, that stage’s when condition is evaluated after entering the agent. The beforeAgent parameter in the when block changes this: if beforeAgent is set to true, the when condition is evaluated first, and the agent is only entered if the condition is true.

pipeline {
    agent none
    stages {
        stage('deploy-pro') {
            agent {
                label "some-label"
            }
            when {
                beforeAgent true
                branch 'master'
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}

In this example, the deploy-pro stage’s agent is only entered (and the code fetched) when the branch is master, which improves the efficiency of the pipeline run.

Parallel #

Declarative pipeline stages can declare multiple nested stages inside them, and under certain conditions, they can be executed in parallel.

Note that a stage must contain exactly one of steps, stages, or parallel. The nested stages inside a parallel block cannot themselves contain further parallel stages, and any stage containing parallel cannot declare agent or tools, because without a steps block there is nothing for them to apply to.

Additionally, by adding failFast true to the stage containing parallel, when one of the processes fails, all the parallel stages will be forcibly terminated.

Here’s an example:

pipeline {
    agent any
    stages {
        stage('Non-Parallel Stage') {
            steps {
                echo 'This stage will be executed first.'
            }
        }
        stage('Parallel Stage') {
            when {
                branch 'master'
            }
            failFast true
            parallel {
                stage('Branch A') {
                    agent {
                        label "for-branch-a"
                    }
                    steps {
                        echo "On Branch A"
                    }
                }
                stage('Branch B') {
                    agent {
                        label "for-branch-b"
                    }
                    steps {
                        echo "On Branch B"
                    }
                }
                stage('Branch C') {
                    agent {
                        label "for-branch-c"
                    }
                    stages {
                        stage('Nested 1') {
                            steps {
                                echo "In stage Nested 1 within Branch C"
                            }
                        }
                        stage('Nested 2') {
                            steps {
                                echo "In stage Nested 2 within Branch C"
                            }
                        }
                    }
                }
            }
        }
    }
}

Apart from failFast true shown above, you can also set the parallelsAlwaysFailFast() option in the pipeline’s options block:

pipeline {
    agent any
    options {
        parallelsAlwaysFailFast()
    }
    stages {
        stage('Non-Parallel Stage') {
            steps {
                echo 'This stage will be executed first.'
            }
        }
        stage('Parallel Stage') {
            when {
                branch 'master'
            }
            parallel {
                stage('Branch A') {
                    ......
                }
                stage('Branch B') {
                    ......
                }
                stage('Branch C') {
                    agent {
                        label "for-branch-c"
                    }
                    stages {
                        stage('Nested 1') {
                            steps {
                                echo "....."
                            }
                        }
                        stage('Nested 2') {
                            steps {
                                echo "...."
                            }
                        }
                    }
                }
            }
        }
    }
}

post #

The post directive can define one or multiple steps, which are executed based on the completion status of the pipeline. The conditions supported by post are as follows:

always

Regardless of the completion status of the pipeline or stage, the steps defined in the post section will be executed.

changed

The steps defined in the post section will only be executed if the completion status of the current pipeline or stage is different from its previous run.

failure

The steps defined in the post section will only be executed if the completion status of the current pipeline or stage is “failure”. Usually shown as red in the web UI.

success

The steps defined in the post section will only be executed if the completion status of the current pipeline or stage is “success”. Usually shown as blue or green in the web UI.

unstable

The steps defined in the post section will only be executed if the completion status of the current pipeline or stage is “unstable”. This is usually caused by test failures, connection failures when copying files to remote servers, etc. Usually shown as yellow in the web UI.

aborted

The steps defined in the post section will only be executed if the completion status of the current pipeline or stage is “aborted”. This usually happens when the pipeline is manually terminated. Usually shown as gray in the web UI.

Now that we have understood the conditions for post execution, let’s look at an example:

pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                echo 'Hello World'
            }
        }
    }
    post {
        always {
            echo 'I will always say Hello again!'
        }
    }
}

This example means that the post section defined will be executed regardless of the execution results of the stages.

You can also add conditions based on the execution results, for example:

post {
    always {
        script {
            if (currentBuild.currentResult == "ABORTED" || currentBuild.currentResult == "FAILURE" || currentBuild.currentResult == "UNSTABLE") {
                emailext(
                    subject: "Jenkins build is ${currentBuild.result}: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                    mimeType: "text/html",
                    body: """<p>Jenkins build is ${currentBuild.result}: ${env.JOB_NAME} #${env.BUILD_NUMBER}:</p>
                             <p>Check console output at <a href="${env.BUILD_URL}console">${env.JOB_NAME} #${env.BUILD_NUMBER}</a></p>""",
                    recipientProviders: [[$class: 'CulpritsRecipientProvider'],
                                         [$class: 'DevelopersRecipientProvider'],
                                         [$class: 'RequesterRecipientProvider']]
                )
            }
        }
    }
}

This example sends an email to the specified recipients if the pipeline execution fails or is forcibly terminated.

That’s all for the introduction to declarative syntax. In the next section, we will cover the scripted syntax and compare the two.