12 Jenkins Docker Pipeline Plugin: Dynamic Generation of Slave Nodes, Syntax Analysis #

More and more companies and users are introducing Jenkins into their actual work. In the process of using Jenkins for automated build and deployment, the use of dynamic containers as the execution environment for build and deployment tasks is gaining increasing adoption. This not only improves resource utilization, but also lowers configuration and maintenance costs.

The section on pipeline syntax mentioned that Docker containers can be used as the execution environment for a pipeline's agent. Build steps that can run on Linux can also run in containers; each pipeline project only needs to select an image that contains the necessary tools and libraries. Beyond general container operations, the main purpose of the Docker Pipeline plugin is to start containers from specified images as the execution environment for individual stages or for the entire pipeline.

To facilitate learning, the two types of syntax will be introduced separately.

Scripted Syntax #

To begin with, let’s introduce the usage of the Docker Pipeline plugin, since both the declarative and the scripted syntax revolve around it for launching agent nodes. The plugin is relatively simple to use in scripted pipelines, so let’s start with the basic syntax there.

After installing the Docker Pipeline plugin, a set of global variables becomes available. For detailed information on how to use these variables, open “Pipeline Syntax” in any pipeline job and click the “Global Variable Reference” menu on the page it opens.


For the full list of variables, refer to the Jenkins pipeline documentation. Below is a brief introduction to some frequently used variables and methods.

image #

The image method defines an image object from an image name or image ID; different operations are then performed by calling the methods of that object.

For example, define an object from an image name and start a container using the inside method:

node {
    docker.image('maven:3.3.3-jdk-8').inside {
        git '…your-sources…'
        sh 'mvn -B clean install'
    }
}

Explanation

  • The image method pulls the image from Docker Hub based on the image name and defines the object; the container is then started with the inside method. By default, the container runs on the same host as Jenkins. To run it on a different node, specify the node name in the node block, as shown below:

    node('jenkins-slave1') {
        docker.image('maven:3.3.3-jdk-8').inside('-v $HOME/.m2:/root/.m2') {
            git '…your-sources…'
            sh 'mvn -B clean install'
        }
    }
    

    In this example, Jenkins looks for a node whose label or name matches jenkins-slave1 in the system configuration, then pulls the image and starts the container on that node.

    • The inside method starts the container; parameters can be added at launch, or it can be called without any parameters. In the example above, a volume mount parameter is added.
    • Images can also be pulled from private repositories (see the sketch below), and a container can even be started from an image ID, although that is rarely done.
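For illustration, pulling from a private registry only requires a registry-qualified image name. A minimal sketch, where registry.example.com is a placeholder host:

node {
    // 'registry.example.com' is a hypothetical private registry host
    docker.image('registry.example.com/library/maven:3.3.3-jdk-8').inside {
        sh 'mvn -v'
    }
}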

Apart from calling inside directly, you can also assign the object returned by the image method to a variable and use it later, like this:

node('jenkins-slave1') {
  def maven = docker.image('maven:latest')
  maven.pull()
  maven.inside {
    sh 'hostname'
  }
}

Explanation

  • The pull method pulls the image again, ensuring the local copy matches the latest version in the registry.
  • The inside method starts the container.

build #

The previous chapter on declarative pipeline syntax introduced starting a container with docker and dockerfile. In the scripted syntax, besides starting a container from an image name or image ID, you can also build an image from a Dockerfile and start a container from it. For this, the Docker Pipeline plugin provides a build() method that builds a new image from the Dockerfile in the source code repository before running the rest of the pipeline.

You can build an image with the syntax docker.build("my-image-name"). The advantage of assigning the result to a variable is that you can call the other properties and methods of that object later.

For example:

node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.inside {
        sh 'make test'
    }
}

By default, the build() method builds the image based on the Dockerfile in the current directory of the fetched source code.

The build() method can also take a directory path that contains the Dockerfile as the second parameter to build the image in the specified directory.

For example:

node {
    checkout scm
    def testImage = docker.build("test-image", "./dockerfiles/test") 

    testImage.inside {
        sh 'make test'
    }
}

Note:

  • The test-image is built based on the Dockerfile found in the ./dockerfiles/test/ directory.
  • checkout scm retrieves the Dockerfile from the source code repository; it can live on its own or alongside the application code. Note that checkout scm can be replaced with an explicit git step pointing at the repository URL (this applies to every checkout scm in the examples below); see the sketch after this list.
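For illustration, a minimal sketch of replacing checkout scm with an explicit git step, where the repository URL is a placeholder:

node {
    // Explicit git step instead of checkout scm; the URL is hypothetical
    git 'https://github.com/example/repo.git'
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.inside {
        sh 'make test'
    }
}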

The docker.build() method can also override the default Dockerfile via the -f flag, passed in the second parameter together with any other build arguments. When passing arguments this way, the last value in the string must be the build context path (the directory the build runs in).

For example:

node {
    checkout scm
    def dockerfile = 'Dockerfile.test'
    def customImage = docker.build("my-image:${env.BUILD_ID}", "-f ${dockerfile} ./dockerfiles") 
}

Note:

  • The my-image:${env.BUILD_ID} is built based on the Dockerfile.test found in the ./dockerfiles/ directory.

In the examples above, the image object returned by build() is assigned to a variable with the def keyword. That variable can then be used to push the image to a private registry with the push() method.

For example:

node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.push()
}

Note:

  • The push() method pushes the image built in the pipeline, with the compiled application packaged into it, to the image registry.

The push() method can also customize the tag of the image when pushing.

For example:

node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    // Pushes the image with its original tag, my-image:${env.BUILD_ID}
    customImage.push()

    // Pushes the same image again, retagged as my-image:latest
    customImage.push('latest')
}

Using Multiple Containers #

In the previous Pipeline chapter, there was an introduction on how to use multiple agents in the scripted pipeline syntax. Let’s review it again:

node('jenkins-slave1') {

    stage('Back-end') {
        docker.image('maven:3-alpine').inside {
            sh 'mvn --version'
        }
    }

    stage('Front-end') {
        docker.image('node:7-alpine').inside { 
            sh "node --version"
        }
    }
}

Combining the knowledge from the previous two sections, we can extend the syntax above.

For example, we can use different slave nodes to start different containers as the execution environment for the pipeline at different stages.

Here is an example:

node() {

    stage('Back-end') {
        node('slave1') {
            stage("test1") {
                docker.image('maven:3-alpine').inside {
                    sh 'mvn --version'
                }
            }
        }
    }

    stage('Front-end') {
        node('slave2') {
            stage('test2') {
                docker.image('node:7-alpine').inside { 
                    sh "node --version"
                }
            }
        }

    }
}

With this approach, different containers will be started on different slave nodes to execute the pipeline for different stages.

Using a Remote Docker Server #

In addition to the Docker daemon on the Jenkins host machine, you can also use the Docker daemon on another server. The Docker Pipeline plugin provides the withServer() method to connect to a remote Docker server.

To use the daemon on a remote Docker server, you need to expose it on TCP port 2375 on that server (note that plain TCP on 2375 is unauthenticated and unencrypted, so restrict it to trusted networks), as shown below:

Modify the configuration file /usr/lib/systemd/system/docker.service:

# Append -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock after ExecStart=/usr/bin/docker daemon

# For example:
ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock

# After modifying the file, reload the daemon and restart Docker
$ systemctl daemon-reload
$ systemctl restart docker

# Check the process
$ netstat -ntlp|grep 2375
tcp6       0      0 :::2375                 :::*                    LISTEN      5554/dockerd 

After configuring the server, modify the pipeline script as follows:

node {
    docker.withServer('tcp://192.168.176.160:2375') {
        docker.image('maven:latest').inside {
           sh "mvn --version"
        }
    }
}

This way, the container will be started on the remote server, and you can execute any desired command, which is quite simple.
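The withServer() method also accepts a credentials ID as a second parameter, which is useful for a TLS-protected daemon. A minimal sketch, assuming docker-server-tls is a “Docker Server Certificate Authentication” credential configured in Jenkins:

node {
    // 2376 is the conventional TLS port; 'docker-server-tls' is a hypothetical credentials ID
    docker.withServer('tcp://192.168.176.160:2376', 'docker-server-tls') {
        docker.image('maven:latest').inside {
            sh 'mvn --version'
        }
    }
}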

withRegistry (Using a Private Registry) #

By default, the Docker Pipeline plugin works against Docker Hub: images pulled directly with the image method are normally pulled from Docker Hub. The chapter on declarative pipeline syntax introduced several methods for authenticating with a private registry during pipeline execution; for scripted pipelines, this becomes much simpler with the withRegistry() method.

For example:

node {
    checkout scm
    docker.withRegistry('https://registry.example.com') {

        docker.image('my-custom-image').inside {
            sh 'make test'
        }
    }
}

In this case, there is no authentication with the image registry service.

But what if you need to authenticate with a custom registry? When using a virtual machine as the agent, you can rely on credentials and mounted directories, as in the declarative pipeline syntax. With the Docker plugin, however, it is much simpler: the withRegistry() method authenticates with the private registry using credentials configured in the Jenkins system.

For example, to authenticate with a Docker private registry requiring credentials, add a “Username/Password” credential in Jenkins, and then use the generated snippet with the credentials ID as the second parameter of withRegistry().

For example:

node {
    checkout scm

    docker.withRegistry('https://registry.example.com', 'credentials-id') {

        def customImage = docker.build("my-image:${env.BUILD_ID}")

        customImage.push()
    }
}

Declarative Syntax #

Let’s first review how Docker is used as an agent in the declarative pipeline syntax, and then make some additions.

First, let’s take a look at an example of using Docker as the agent in a declarative script:

pipeline {
    agent {
        docker { image 'node:7-alpine' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}

When executing the above pipeline script, Jenkins will automatically start the specified image (if it doesn’t exist, it will be downloaded automatically) and execute the specified stage inside the container. Once the stage is completed, the container will be destroyed.
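The docker agent block also accepts options such as label (to choose the node the container starts on) and args (extra docker run options). A minimal sketch, assuming a node labeled jenkins-slave1 exists:

pipeline {
    agent {
        docker {
            image 'node:7-alpine'
            label 'jenkins-slave1'   // hypothetical node label
            args '-u root'           // extra docker run options
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}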

In addition to using this method to generate containers, you can also use the docker pipeline plugin, for example:

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    docker.image('maven:latest').inside(){
                        sh 'mvn --version'
                    }
                }
            }
        }
    }
}

Note that the Docker Pipeline plugin’s steps must be enclosed in a script {} block when used in a declarative pipeline.

Using Dockerfile #

In addition to starting a container by specifying a container image, you can also start a container using a custom Dockerfile. In this case, you need to put the Jenkinsfile and Dockerfile in the source code repository.

The pipeline supports building and running containers from a Dockerfile in the source code repository. With the syntax agent { dockerfile true }, an image is built from that Dockerfile and a container is launched from it, instead of an image being pulled from Docker Hub or a private repository.

Here is an example:

$ cat Dockerfile
FROM maven:latest
RUN apt-get update \
    && apt-get install lsof

$ cat Jenkinsfile
pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh 'lsof'
            }
        }
    }
}

Put the above Dockerfile and Jenkinsfile in the source code repository, then configure the Jenkins project using the Pipeline script from SCM method and specify the address of the source code repository.


Note:

  • The Script Path field must match the name (and path) of the Jenkinsfile in the repository.
  • The other parameters accepted by dockerfile are described under the agent directive in the pipeline section and are not repeated here; a short sketch of the most common ones follows.
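A minimal sketch of the most common dockerfile options, assuming the repository contains a dockerfiles/Dockerfile.test:

pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.test'                  // hypothetical Dockerfile name
            dir 'dockerfiles'                           // directory containing it
            additionalBuildArgs '--build-arg ENV=test'  // extra docker build options
            args '-v /tmp:/tmp'                         // extra docker run options
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'lsof'
            }
        }
    }
}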

Caching Data in Containers #

When containers are used as the execution environment for pipelines, they are destroyed as soon as the pipeline finishes, and any artifacts produced during the run are lost with them. When compiling code with the Maven build tool, for example, external dependencies are downloaded and cached in the local .m2 repository for subsequent compilations of this or other projects. Sometimes it is also necessary to keep a backup of the compiled service package. It is therefore important to cache such dependencies locally for reuse.

Pipelines support adding custom parameters to Docker, allowing users to customize the mounting of volumes during container startup for data caching.

The following example demonstrates caching the Maven repository directory (~/.m2) during the pipeline runtime, thereby avoiding the inefficient process of downloading dependencies again when rebuilding the project.

pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -version'
            }
        }
    }
}

Explanation:

  • The pipeline above uses the args directive to pass runtime parameters to the container; the -v option mounts a host directory into it. If there are multiple build nodes, a shared storage directory can be used instead of a per-host directory, as sketched below.
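A minimal sketch, assuming shared storage (for example an NFS export) is mounted at /data/shared/.m2 on every build node:

pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            // /data/shared/.m2 is a hypothetical shared-storage path present on all nodes
            args '-v /data/shared/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
    }
}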

Using Multiple Containers #

When building a project that contains code written in multiple languages, different compilation tools are needed to compile the code. Similarly, when different stages of a pipeline script require different images, multiple container images can be launched in a single pipeline script.

For example, if an application has both a backend API implementation in Java and a frontend implementation in JavaScript, you can use the agent {} directive to use different containers for compilation in different stages.

Here is an example:

pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:7-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}

Explanation:

  • In this example, the top-level agent directive is set to none, so no global agent is allocated and each stage must declare its own agent.
  • Different images are used to start the build environment for the backend and frontend respectively.

If you want to start containers on a specific node or start different containers on different nodes, you can use the Docker Pipeline plugin.

Here is an example of starting two containers on the slave1 node:

pipeline {
    agent { node { label 'slave1' } }

    stages {
        stage('Build') {
            steps {
                echo 'Building..'
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
                script {
                    docker.image('selenium:latest').inside {
                        // ...
                    }
                }
            }
        }
        stage('build') {
            steps {
                script {
                    docker.image('maven:latest').inside {
                        sh 'docker build .'
                    }
                }
            }
        }
    }
}

To use multiple nodes and start multiple containers, use the following configuration:

pipeline {
    agent none

    stages {
        stage('Test') {
            agent { node { label 'jenkins-slave1'} }
            steps {
                echo 'Testing..'
                script {
                    docker.image('selenium:latest').inside {
                        // ...
                    }
                }
            }
        }
        stage('build') {
            agent { node { label 'jenkins-slave169'} }
            steps {
                script {
                    docker.image('maven:latest').inside {
                        sh 'docker build .'
                    }
                }
            }
        }
    }
}

Although this implementation is possible, it may not be commonly used. Feel free to experiment with it if you are interested.

Using Private Repositories #

After compiling and building the application code into an image, it needs to be pushed to a specified private repository. This process involves authentication with the private repository service. So how can we authenticate with the private repository service in declarative pipeline syntax? You can refer to the following methods:

Using the docker login command

If you want a simple solution and are confident in Jenkins’ security settings, you can choose to use the docker login command with the plain text password. For example:

stage('Docker Push') {
   steps {
       sh "docker login -u xxx -p xxx registry_url"
       sh 'docker push 192.168.176.154/base/nop:latest'
   }
}

In this example, the docker login command is used directly in the pipeline to authenticate with the private repository service.

If the plain text password feels too exposed, you can instead add a password-type parameter via the “Parameterized Build” option and pass that variable to -p when logging in, as sketched below. However, this requires entering the password on every build, which can be cumbersome; this is where the credentials covered in the previous section come in handy.
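A minimal sketch, assuming a password parameter named REGISTRY_PASSWORD and a placeholder username admin, with the registry from the example above:

pipeline {
    agent any
    parameters {
        // Hypothetical password-type parameter prompted at build time
        password(name: 'REGISTRY_PASSWORD', defaultValue: '', description: 'Private registry password')
    }
    stages {
        stage('Docker Push') {
            steps {
                sh "docker login -u admin -p ${params.REGISTRY_PASSWORD} 192.168.176.154"
                sh 'docker push 192.168.176.154/base/nop:latest'
            }
        }
    }
}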

In addition to providing parameters using the “Parameterized Build” option, you can also use Jenkins credentials to provide the username and password for authentication with the private repository service.

stage('Docker Push') {
    steps {
        withCredentials([usernamePassword(credentialsId: 'dockerregistry', passwordVariable: 'dockerHubPassword', usernameVariable: 'dockerHubUser')]) {
            // Log in to the private registry with the bound credentials, then push
            sh "docker login -u ${env.dockerHubUser} -p ${env.dockerHubPassword} 192.168.176.155"
            sh 'docker push 192.168.176.155/library/base-nop:latest'
        }
    }
}

Note:

  • This code snippet can be generated using the “Snippet Generator”.

You can also directly specify the repository URL and credentials ID using parameters, as shown below:

agent { 
       docker {
              image '192.168.176.155/library/jenkins-slave'
              args '-v $HOME/.m2:/root/.m2  -v /data/fw-base-nop:/data  -v /var/run/docker.sock:/var/run/docker.sock'
              registryUrl 'http://192.168.176.155'
              registryCredentialsId 'auth_harbor'
       }
}

This achieves the same effect as the withCredentials method mentioned earlier. However, it is worth noting that when using the Docker plugin, it is recommended to use the scripted syntax as it integrates better with the plugin.

Using directory mounting

If you prefer not to store credentials in Jenkins, you can instead mount the Docker configuration file. When you log in to the private registry with the docker command, the authentication information is saved in a file on the Docker host. You can then use the args parameter to mount that file to the corresponding location inside the container.

For example:

agent { 
       docker {
              image '192.168.176.155/library/jenkins-slave'
              args '-v $HOME/.m2:/root/.m2  -v /data/fw-base-nop:/data  -v /var/run/docker.sock:/var/run/docker.sock -v /root/.docker/config.json:/root/.docker/config.json'
       }
}

Note:

  • The location of the Docker authentication file may vary depending on how Docker was installed. For Docker installed via yum, the file that stores the authentication information is /root/.docker/config.json. To find the exact path, run docker login registry_url; on a successful login it prints the path where the credentials are stored, as shown below.
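For reference, a typical login session looks like this (output abridged; the exact wording varies by Docker version):

$ docker login 192.168.176.155
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.

Login Succeeded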

Using the Docker pipeline plugin

When using the Docker pipeline plugin, it is relatively straightforward to apply it in scripted syntax. However, it can also be used in declarative pipelines, but you need to enclose the plugin’s methods in a script{} block.

For example, to use a private repository in a declarative pipeline, you can write it like this:

pipeline {
    agent ...
    stages {
        stage('Build') {
            steps {
                ...
            }
        }
        stage('Push') {
            steps {
                script {
                    docker.withRegistry('https://registry.example.com', 'credentials-id') {
                        def customImage = docker.build("my-image:${env.BUILD_ID}")
                        customImage.push()
                    }
                }
            }
        }
    }
}

This approach can also be applied to other properties and methods of the Docker plugin.

That’s it for the syntax explanations for integrating Pipeline and Docker. In the following sections, examples will be provided to demonstrate the process of continuous delivery and deployment using Jenkins and Docker integration.