18 Detailed Syntax of Ansible Plugin and Continuous Deployment Services to Kubernetes Clusters #

In real-world work, running containers under a container orchestration system has become the norm for using container technology. The earlier Ansible chapter only covered deploying services to standalone Docker containers and said nothing about orchestration systems; it also only demonstrated bash/shell commands and plugin configuration in the Jenkins UI. Building on that, this section explains how to use Ansible in a Pipeline and demonstrates it with examples.

This section mainly covers two aspects:

  1. Syntax introduction and usage examples of using the Ansible plugin in the pipeline script.
  2. Continuous deployment of built images to the container orchestration system Kubernetes.

This chapter comes at the end of the course because it focuses mainly on continuous deployment; the continuous delivery steps were introduced in previous chapters, so we can reuse them directly here.

Ansible Plugin Syntax Explanation #

In a Pipeline-type job, you can either generate syntax fragments with the Ansible snippet generator or execute shell commands directly in the pipeline. Here is a brief explanation of how to use the generated Ansible syntax fragments.

Go to the Pipeline Syntax page and click the “Snippet Generator” menu. On the page you are redirected to, the option we want in the “Steps” dropdown is the first one: “ansiblePlaybook: Invoke an ansible playbook”.

The step’s parameters are as follows:

Ansible tool: a dropdown used to select which Ansible command to run (in my case only the ansible and ansible-playbook commands are configured). Note that the tools listed in this dropdown must be defined in Jenkins’ “Global Tool Configuration” menu.

Playbook file path in workspace: This parameter is used to specify the playbook file to be executed. By default, the file is searched for in the current workspace ($WORKSPACE) path. Please note that this parameter can only specify a specific file name and cannot use regular expressions or specify multiple files.

Inventory file path in workspace: This parameter is used to specify the inventory file of hosts. The default path is also the current workspace directory. If this parameter specifies a directory, it will use the hosts defined in all files in that directory.

SSH connection credentials: This specifies the credentials for connecting to the server, the same as the credentials used in the previous chapter’s “publish over ssh” plugin. If the machine executing the ansible-playbook command has already set up server passwordless authentication, this field can be left blank. Also, if you want to use this parameter, make sure that the sshpass command is available on the machine where the ansible-playbook command is executed.

Vault credentials: for playbook files encrypted with the ansible-vault command, you can set this parameter (to specify a credential) so the playbook is decrypted automatically when it is executed.

Use become: whether to run the tasks with privilege escalation (become).

Become username: Used in conjunction with the Use become parameter to specify the user to run the task.

Host subset: A subset of hosts. If the inventory (hosts) file defined in the playbook is a group, and the group has multiple hosts (IP or domain name), this parameter can be used to specify which hosts in the group to execute the playbook task on. If there are multiple hosts, they should be separated by commas (",").

Tags: Tags for tasks in the playbook. If this parameter is set, only tasks in the playbook that match the value of this parameter will be executed.

Tags to skip: Tags of tasks to skip. If this parameter is set, tasks in the playbook that match the value of this parameter will not be executed.

Task to start at: Used to start executing the playbook from the specified task. The value of this parameter is used to match the names of tasks defined in the playbook.

Number of parallel processes to use: Corresponds to the -f parameter in ansible, specifying the number of concurrent threads.

Disable the host SSH key check: when connecting to a host over SSH for the first time, SSH checks whether a record for that host exists in the local known_hosts file and asks for manual confirmation if it does not. At an interactive console you can confirm manually, but the ansible command cannot, so you would normally set host_key_checking = False in the ansible.cfg configuration file. In Jenkins, checking this parameter disables the check.

Colorized output: whether to colorize the console output; of limited practical use.

Extra parameters: additional command-line options appended to the command, typically variables passed with “-e” (see the example after this list).
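To make a few of these parameters concrete, here is a hypothetical invocation; the tool name, credential ID, tags, host addresses, and playbook name below are placeholders for illustration, not values from this course’s environment:

```groovy
// A sketch that exercises several of the parameters described above.
ansiblePlaybook(
    installation: 'ansible-playbook',          // tool name defined in "Global Tool Configuration"
    inventory: './ansible/hosts',              // inventory file in the workspace
    playbook: './ansible/site.yaml',           // playbook file in the workspace
    credentialsId: 'deploy-ssh-key',           // SSH credential stored in Jenkins (requires sshpass on the agent)
    limit: '192.168.176.148,192.168.176.149',  // host subset: run only on these hosts
    tags: 'deploy',                            // run only tasks tagged "deploy"
    skippedTags: 'debug',                      // skip tasks tagged "debug"
    startAtTask: 'Start the container',        // begin execution at the task with this name
    disableHostKeyChecking: true,              // skip the known_hosts confirmation
    extras: '-e env=beta'                      // extra variables passed with -e
)
```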

After introducing the basic parameters of the Ansible plugin, let’s take a look at how to use this plugin.

Here is an example of the syntax generated by the snippet generator mentioned earlier:

ansiblePlaybook(
    become: true, 
    colorized: true, 
    credentialsId: '', 
    disableHostKeyChecking: true, 
    extras: '..', 
    installation: 'ansible-playbook', 
    inventory: '', 
    limit: '', 
    playbook: '', 
    skippedTags: '', 
    startAtTask: '', 
    sudo: true, 
    tags: '', 
    vaultCredentialsId: ''
)
This syntax block is used the same way in both declarative and scripted syntax. In practice you will rarely need every parameter; pick the ones that fit your needs.
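In scripted syntax the step call itself is identical; only the surrounding structure changes. A minimal sketch (the node label, tool name, and paths are the same kind of placeholders as above):

```groovy
node('master') {
    stage('ansible') {
        // The same ansiblePlaybook step as in declarative syntax, just wrapped in node/stage blocks.
        ansiblePlaybook(
            installation: 'ansible-playbook',
            inventory: './ansible/hosts',
            playbook: './ansible/test.yaml',
            disableHostKeyChecking: true
        )
    }
}
```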

Here is a simple example. Suppose I want to execute the tasks in `test.yaml` only on hosts `192.168.176.149` and `192.168.176.148`.

The code is as follows:

```groovy
pipeline {
    agent { node { label 'master' } }
    stages {
        stage('Test') {
            steps {
                sh "cp -R /data/nfs/ansible $WORKSPACE/"
                ansiblePlaybook (
                    become: true, 
                    disableHostKeyChecking: true,
                    limit: '192.168.176.149,192.168.176.148',
                    installation: 'ansible-playbook',
                    inventory: './ansible/hosts',
                    playbook: './ansible/test.yaml',
                    extras: ''
                )
            }
        }
    }
}
```

Explanation:

cp -R /data/nfs/ansible $WORKSPACE/: Why execute this command? Because when using the Docker pipeline plugin or Kubernetes plugin, the dynamically generated containers do not have these files and directories by default. At this time, you need to mount the required directories into the container using shared storage. Here, we simulate running the playbook inside the container. Of course, if the files used are placed in the source code repository, you can ignore this step.

The limit parameter is used to limit the nodes where the playbook task is executed. This example indicates that the task is executed on the nodes 192.168.176.149 and 192.168.176.148.

installation represents the use of the ansible-playbook command.

inventory and playbook respectively specify the path of the inventory file and playbook file using relative paths.

The contents of the hosts and test.yaml files are as follows:

$ cat hosts
[k8s_master]
192.168.176.148
192.168.176.149
192.168.176.150

$ cat test.yaml
---
- hosts: k8s_master
  gather_facts: no
  tasks:
  - name: test
    shell: hostname
    register: host_name
  - name: debug
    debug:
      msg: "{{ host_name.stdout }}"

The execution result is as follows:

Execution Result

From the execution result, we can see that the command was executed on the specified hosts.

With the example above, you should have a preliminary understanding of how to use the Ansible plugin. Next, I will explain how to integrate this plugin’s syntax block into the pipeline script in the previous chapters and how to use it when the agent is a virtual machine, a container, or orchestrated through Kubernetes.

Using Ansible Plugin When the Agent Is a Virtual Machine #

When the agent is a virtual machine, whether it is the master node or a slave node, make sure the ansible tool is installed on that node.

In the initial practice chapter of the pipeline, the code has been compiled and the image has been built and pushed to the private repository. We only need to modify the playbook script used in the "Continuous Delivery and Deployment with Ansible" chapter under the "Deploy Services to Containers" section, keeping only the operations related to containers and images.

The modified playbook script is as follows:

```yaml
$ cat deploy.yaml 

- hosts: "{{ target }}"
  remote_user: root
  gather_facts: False

  vars:
    dest_dict: "/data/{{ project_name }}/"

  tasks:
    - name: Check if the directory exists
      shell: ls {{ dest_dict }}
      register: dict_exist
      ignore_errors: true

    - name: Create relevant directories
      file: dest="{{ item }}" state=directory  mode=755
      with_items:
        - "{{ dest_dict }}"
        - /data/logs/{{ project_name }}
      when: dict_exist is failure

    - name: Check if the container exists
      shell: "docker ps -a --filter name={{ project_name }} |grep -v COMMAND"
      ignore_errors: true
      register: container_exists

    - name: Check the container status
      shell: "docker ps -a --filter name={{ project_name }} --format '{{ '{{' }} .Status {{ '}}' }}'"
      ignore_errors: true
      register: container_state
      when: container_exists.rc == 0

    - name: Stop the container
      shell: "docker stop {{ project_name }}"
      when: "('Up' in container_state.stdout)"
      ignore_errors: true

    - name: Delete the container
      shell: "docker rm {{ project_name }}"
      when: container_exists.rc == 0
      ignore_errors: true

    - name: Check if the image exists
      command: "docker images --filter reference={{ registry_url }}/{{ project_group }}/{{ project_name }}* --format '{{ '{{' }} .Repository {{ '}}' }}:{{ '{{' }}.Tag {{ '}}' }}'"
      register: image_exists
      ignore_errors: true

    - name: Delete the image
      shell: "docker rmi -f {{ item }}"
      loop: "{{ image_exists.stdout_lines }}"
      ignore_errors: true
      when: image_exists.rc == 0

    - name: Start the container
      shell: 'docker run {{ run_option }} --name {{ project_name }} {{ image_name }}'
```

The main modification is the command to get the image list in the “Check if the image exists” task, adding two variables to match the image URL (which needs to be modified according to your actual situation). These two variables need to be added when executing the playbook.

For the variables used in executing the playbook, we can define them directly in the pipeline.

First, let’s take a look at deploying using the ansible-playbook command in the pipeline.

stage('Deploy Image') {
    environment {
        container_run_option="--network host -e TZ='Asia/Shanghai' -d -v /etc/localtime:/etc/localtime:ro -m 2G --cpu-shares 512 -v /data/logs/${project_name}:/data/logs/${project_name}"
    }

    steps{
        sh """
        cp -R /data/nfs/ansible $WORKSPACE/
        echo $container_run_option

        ansible-playbook -i $WORKSPACE/ansible/hosts $WORKSPACE/ansible/deploy.yaml -e "workspace='${WORKSPACE}' registry_url=$registry_url project_group=$project_group target='192.168.176.160' project_name='$project_name' run_option='$container_run_option' image_name='${registry_url}/${project_group}/${project_name}:${BUILD_ID}'"
        """
    }
}

Note: for the ansible-playbook command above, I have already set up passwordless (public key) authentication between the Ansible host and the target host. If passwordless authentication is not set up, you can specify a Jenkins credential for remote server authentication by adding the credentialsId parameter (when using the plugin step, as shown below).

Since we have introduced the syntax of the ansible plugin, why do we still use the ansible-playbook command? It is to test whether the playbook script can be executed correctly, whether the passed variables are valid, and whether the entire pipeline can be executed successfully, among others. After all, when executing the playbook, many variables are passed in. If you directly use the ansible plugin, you may encounter unexpected issues. Sometimes, a seemingly trivial double quote or single quote can cause the playbook execution to fail.

If we switch to using the ansible plugin, the pipeline syntax is as follows:

stage('Deploy Image') {
    environment {
        container_run_option="\'--network host -e TZ=Asia/Shanghai -d -v /etc/localtime:/etc/localtime:ro -m 2G --cpu-shares 512 -v /data/logs/${project_name}:/data/logs/${project_name}\' "
    }

    steps{
        sh """
        cp -R /data/nfs/ansible $WORKSPACE/
        echo $container_run_option
        """
        ansiblePlaybook (
            become: true, 
            disableHostKeyChecking: true, 
            installation: 'ansible-playbook', 
            inventory: './ansible/hosts', 
            playbook: './ansible/deploy.yaml',
            extras: " -e \"workspace='${WORKSPACE}' target='192.168.176.160' registry_url=$registry_url project_group=$project_group project_name=$project_name run_option=$container_run_option image_name='${registry_url}/${project_group}/${project_name}:${BUILD_ID}'\""

        )
    }
}

When using the ansible plugin, be careful with how variables are referenced. Variables are passed through the extras parameter, and its value must include the -e option. The options used at docker run time must be enclosed in single quotes (') and escaped where necessary, so that they are not split apart when expanded to their actual values.

For example, if you do not add the single quotes and escapes to the container_run_option runtime parameter, the value actually received through extras will look like this:

run_option= --network host -e TZ=Asia/Shanghai -d -v /etc/localtime:/etc/localtime:ro -m 2G --cpu-shares 512 -v /data/logs/fw-base-nop:/data/logs/fw-base-nop

The corresponding docker startup command passed to the playbook will be:

docker run --network --name fw-base-nop 192.168.176.155/base/fw-base-nop:289

This will cause the container startup to fail.

However, if you add quotes and escapes, it will look like this:

run_option='--network host -e TZ=Asia/Shanghai -d -v /etc/localtime:/etc/localtime:ro -m 2G --cpu-shares 512 -v /data/logs/fw-base-nop:/data/logs/fw-base-nop'

This is the correct docker runtime parameter.

This example only demonstrates the deployment of container-related operations using ansible. The code compilation and image building are still implemented using pipeline syntax. If you don’t want to implement these parts using pipeline scripts, you can also implement these operations using playbook scripts. The details of how to write this are not further discussed here.

Using Ansible Plugin When the Agent Is a Container #

When the agent is a container, there are two scenarios to consider. Based on the previous chapters, you should already have an idea. Yes, these two scenarios are: using the Docker Pipeline plugin to generate containers on virtual machines, and using the Kubernetes plugin to orchestrate containers. When using the Docker Pipeline plugin, you can choose to use the Ansible plugin inside the container, or use the Ansible plugin on the virtual machine. However, when using the Kubernetes plugin, you can only use the Ansible plugin inside the container. In both scenarios, make sure that the ansible-playbook command is available on the agent.

Using the Ansible plugin on a virtual machine was discussed in the previous section. Now, let’s focus on how to use the Ansible plugin inside a container.

To use Ansible inside a container, you need a dedicated Ansible image; you cannot simply mount the executable into the container the way we did with the docker binary, because Ansible depends on a Python environment and a number of modules, so copying a single binary into an image does not work. Fortunately, various versions of Ansible images are available on Docker Hub, so we can use one of them directly.

First, let’s perform a simple test using the image. We will reuse the example from the previous section, but this time we will use a container as the agent to execute the playbook.

The code is as follows:

pipeline {
    agent { 
        docker{
            image 'gableroux/ansible:2.7.10'
            args '-v /data/:/data/ -v /root/.ssh:/root/.ssh'
        }
    }
    stages {
        stage('test-ansible'){
            steps{
                sh "ansible-playbook -i /data/nfs/ansible/hosts /data/nfs/ansible/test.yaml "
            }
        }
    }
}

This example automatically starts a container as the agent on an available node and executes the playbook inside it.

Here are some important points to note:

  1. When executing ansible-playbook tasks inside the container, you need to connect to remote hosts, and the ansible host (the container) has no authentication to those servers by default. Besides adding connection parameters (user and password) to the inventory file (hosts), you can also mount the host machine’s SSH directory into the container. In this example we run as root, so mounting /root/.ssh lets the container connect to the remote servers without separate authentication; this approach requires that the host machine itself can reach the target servers with public key authentication.

  2. In addition, when using the ansiblePlaybook method of the Ansible plugin, you can also authenticate to the remote server using Jenkins credentials through the credentialsId parameter. In this case, there are two things to consider:

    • If you are not mounting the .ssh directory (or adding the ansible user and password parameters in the inventory file), you also need to adjust the ansible configuration file ansible.cfg. Each dynamically started container is effectively a brand-new machine, and when connecting to a remote host ansible checks whether the local known_hosts file contains that host’s fingerprint; if it does not, ansible asks for manual confirmation, which an ansible task cannot provide. To solve this, set host_key_checking = False in the ansible configuration file, or pass the disableHostKeyChecking parameter when using the Ansible plugin (see the sketch after this list).

    • If you are not using the method of mounting the .ssh directory and use the credentialsId parameter, the ansiblePlaybook method will authenticate to the remote host using the sshpass command (as seen in the Jenkins console log when executing the task). By default, the image used does not have this command, so you need to install it.
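For example, the test pipeline above can be switched from the sh command to the plugin step. This is only a sketch; it assumes ansible-playbook is on the container’s PATH (otherwise point the installation parameter at the executable, as in the Kubernetes example later):

```groovy
pipeline {
    agent {
        docker {
            image 'gableroux/ansible:2.7.10'
            // Mount the playbook directory and the host's SSH keys, as in the sh-based example.
            args '-v /data/:/data/ -v /root/.ssh:/root/.ssh'
        }
    }
    stages {
        stage('test-ansible') {
            steps {
                ansiblePlaybook(
                    disableHostKeyChecking: true,   // a fresh container has an empty known_hosts file
                    inventory: '/data/nfs/ansible/hosts',
                    playbook: '/data/nfs/ansible/test.yaml'
                )
            }
        }
    }
}
```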

Custom Ansible Image

Based on the ansible image used in the previous example, here is a Dockerfile for building a custom image that installs sshpass and disables host key checking, so the image can be used with the plugin’s credentialsId parameter without mounting the .ssh directory:

FROM gableroux/ansible:2.7.10 

RUN echo "http://mirrors.aliyun.com/alpine/latest-stable/main/" > /etc/apk/repositories \
    && echo "http://mirrors.aliyun.com/alpine/latest-stable/community/" >> /etc/apk/repositories \
    && apk add --update --no-cache openssh sshpass \
    && echo "[defaults]" > ~/.ansible.cfg \
    && echo "host_key_checking = False" >> ~/.ansible.cfg

Note:

If you don’t want to define host_key_checking = False in an ansible.cfg configuration file, you can disable the check through an environment variable when executing Ansible, for example:

ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook xxx.yaml

To build the image, run:

docker build -t 192.168.176.155/library/ansible:v2.7.10 .
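With sshpass and the relaxed host key checking baked in, this custom image can be used together with the plugin’s credentialsId parameter instead of mounting the .ssh directory. A minimal sketch, assuming a Jenkins SSH credential with the hypothetical ID 'ansible-ssh':

```groovy
pipeline {
    agent {
        docker {
            image '192.168.176.155/library/ansible:v2.7.10'
            args '-v /data/:/data/'              // only the playbook directory is mounted; no .ssh mount
        }
    }
    stages {
        stage('test-ansible') {
            steps {
                ansiblePlaybook(
                    credentialsId: 'ansible-ssh',    // hypothetical Jenkins credential for the target hosts
                    disableHostKeyChecking: true,
                    inventory: '/data/nfs/ansible/hosts',
                    playbook: '/data/nfs/ansible/test.yaml'
                )
            }
        }
    }
}
```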

If you want to execute it on a specific node, use the following code:

pipeline {
    agent { node { label 'slave-43' } }
    stages {
        stage('test-ansible'){
            steps{
                script{
                    sh "hostname"
                    docker.image('gableroux/ansible:2.7.10').inside('-v /data/:/data/ -v /root/.ssh:/root/.ssh '){
                        sh "ansible-playbook -i /data/nfs/ansible/hosts /data/nfs/ansible/test.yaml "
                    }
                }
            }
        }
    }
}

This example will start a container on the slave-43 node to execute the playbook.

With the basic demonstration completed, using it in actual scripts should be much easier.

There are many methods to achieve this, and here is one example using multiple agents:

pipeline {
    agent { node { label 'master' } }

    environment {
        project_name = 'fw-base-nop'
        jar_name = 'fw-base-nop.jar'
        registry_url = '192.168.176.155'
        project_group = 'base'
}
    stages {
    stage('Pull Code and Compile'){
        steps {
            script {
                docker.image('alpine-cicd:latest').inside('-v /root/.m2:/root/.m2'){
                    checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                    echo "Start building"
                    sh "cd $project_name && mvn clean install -DskipTests -Denv=beta"
                }
            }

        }
    }

    stage('Build Image'){
        steps {
            script{
                jar_file=sh(returnStdout: true, script: "find ${WORKSPACE} ./ -name $jar_name |head -1").trim()

                docker.image('alpine-cicd:latest').inside('-v /root/.m2:/root/.m2 -v /data/:/data/'){
                    sh "cp $jar_file /data/$project_group/$project_name/"
                }
            }

        }
    }
    stage('Upload Image'){
        steps {
            script {
                docker.withRegistry('http://192.168.176.155', 'auth_harbor') {
                    docker.image('alpine-cicd:latest').inside('-v /data/:/data/ -v /var/run/docker.sock:/var/run/docker.sock'){
                        def customImage=docker.build("${registry_url}/${project_group}/${project_name}:${env.BUILD_ID}","/data/${project_group}/${project_name}/")
                        customImage.push()
                    }

                }
                sh "docker rmi -f ${registry_url}/${project_group}/${project_name}:${env.BUILD_ID}"
            }
        }
    }
    stage('Deploy Image'){
        environment{
            container_run_option="--network host -e TZ='Asia/Shanghai' -d -v /etc/localtime:/etc/localtime:ro -m 2G --cpu-shares 512 -v /data/logs/${project_name}:/data/logs/${project_name}"
        }
        steps{
            script{
                docker.image('gableroux/ansible:2.7.10').inside('-v /data/:/data/ -v /root/.ssh:/root/.ssh'){
                    sh """
                    ansible-playbook -i /data/nfs/ansible/hosts /data/nfs/ansible/deploy.yaml -e "workspace='${WORKSPACE}' registry_url=$registry_url project_group=$project_group target='192.168.176.160' project_name='$project_name' run_option='$container_run_option' image_name='${registry_url}/${project_group}/${project_name}:${BUILD_ID}'"
                    """

                }
            }

        }

    }

    }
}

When the Agent Is Orchestrated Through Kubernetes #

When the agent is orchestrated by the Kubernetes system, the deployment process is basically the same as when Docker containers serve as the agent. The difference is that, with Kubernetes in use, the application container is no longer deployed standalone; it has to be deployed into the Kubernetes cluster, so the ansible-playbook script needs to be rewritten.

We simply need to copy the k8s resource object file deployment.yaml used in the previous section to the specified directory on the k8s master host and apply it.

Just like in the previous two sections, before starting, we can test the process using the ansible-playbook command:

def project_name = 'fw-base-nop'  // Project name
def registry_url = '192.168.176.155' // Image repository address
def project_group = 'base'

podTemplate(cloud: 'kubernetes',namespace: 'default', label: 'pre-CICD',
  serviceAccount: 'default', containers: [
  containerTemplate(
      name: 'jnlp',
      image: "192.168.176.155/library/jenkins-slave:sonar",
      args: '${computer.jnlpmac} ${computer.name}',
      ttyEnabled: true,
      privileged: true,
      alwaysPullImage: false,
    ),
  containerTemplate(
      name: 'ansible',
      image: "192.168.176.155/library/ansible:v2.7.10",
      ttyEnabled: true,
      privileged: true,
      command: 'cat',
      alwaysPullImage: false,
    ),
  ],
  volumes: [
       nfsVolume(mountPath: '/tmp', serverAddress: '192.168.177.43', serverPath: '/data/nfs', readOnly: false),
       hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
       nfsVolume(mountPath: '/root/.m2', serverAddress: '192.168.177.43', serverPath: '/data/nfs/.m2', readOnly: false),
  ],
){
  node('pre-CICD') {
    stage('deploy'){
            container('ansible'){
                sh """
                ansible-playbook -i /tmp/ansible/hosts /tmp/ansible/deploy-kubernetes.yaml -e "project_group=$project_group project_name=$project_name"
                """
            }
    }
  }
}

Explanation:

In the stage('deploy') stage, we simply test the deployment to the Kubernetes cluster using the ansible-playbook command. The contents of deploy-kubernetes.yaml are as follows:

---
  - hosts: k8s_master
    gather_facts: False
    tasks:
    - name: copy deployment
      copy: src=/tmp/k8s_yaml/{{ project_group }}/{{ project_name }}/deployment.yaml dest=/data/{{ project_group }}/{{ project_name }}/

    - name: exec 
      shell: kubectl apply -f /data/{{ project_group }}/{{ project_name }}/deployment.yaml

The execution result is as follows:

Using the ansible command can complete the deployment, which proves that both the playbook file and the ansible-playbook command are correct. Now let’s test it using the ansible plugin.

Since we are using script-style syntax, the pipeline code is as follows:

stage('deploy'){
    container('ansible'){
        ansiblePlaybook (
        become: true, 
        disableHostKeyChecking: true, 
        installation: '/usr/local/bin/ansible-playbook', 
        inventory: '/tmp/ansible/hosts',
        credentialsId: '160-ssh',
        limit: '192.168.176.148',
        playbook: '/tmp/ansible/deploy-kubernetes.yaml',
        extras: " -e 'project_group=$project_group project_name=$project_name'"
        )
    }
}

It is basically the same as the configuration in Docker.

The complete code is as follows:

def project_name = 'fw-base-nop'  // Project name
def registry_url = '192.168.176.155' // Image repository address
def project_group = 'base'

podTemplate(cloud: 'kubernetes',namespace: 'default', label: 'pre-CICD',
  serviceAccount: 'default', containers: [
  containerTemplate(
      name: 'jnlp',
      image: "192.168.176.155/library/jenkins-slave:sonar",
      args: '${computer.jnlpmac} ${computer.name}',
      ttyEnabled: true,
      privileged: true,
      alwaysPullImage: false,
    ),
  containerTemplate(
      name: 'ansible',
      image: "192.168.176.155/library/ansible:v2.7.10",
      ttyEnabled: true,
      privileged: true,
      command: 'cat',
      alwaysPullImage: false,
    ),
  ],
  volumes: [
       nfsVolume(mountPath: '/tmp', serverAddress: '192.168.177.43', serverPath: '/data/nfs', readOnly: false),
       hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
       nfsVolume(mountPath: '/root/.m2', serverAddress: '192.168.177.43', serverPath: '/data/nfs/.m2', readOnly: false),
  ],
){
  node('pre-CICD') {
    stage('build') {
        try{
            container('jnlp'){
                stage('clone code'){
                    checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                    script {
                        imageTag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                    }
                    echo "${imageTag}"
                    currentBuild.result = 'SUCCESS'
                }

                stage('Build a Maven project') {
                    sh "cd ${project_name} && mvn clean install -DskipTests -Pproduct -U"
                    currentBuild.result = 'SUCCESS'
                } 
            }
        }catch(e){
            currentBuild.result = 'FAILURE'
        }
        if (currentBuild.result == null || currentBuild.result == 'SUCCESS') {
             container('jnlp'){
                 stage('sonar test'){
                     withSonarQubeEnv(credentialsId: 'sonarqube') {
                         sh "sonar-scanner -X "+
                         "-Dsonar.login=admin " +
                         "-Dsonar.language=java " + 
                         "-Dsonar.projectKey=${JOB_NAME} " + 
                         "-Dsonar.projectName=${JOB_NAME} " + 
                         "-Dsonar.projectVersion=${BUILD_NUMBER} " + 
                         "-Dsonar.sources=${WORKSPACE}/fw-base-nop " + 
                         "-Dsonar.sourceEncoding=UTF-8 " + 
                         "-Dsonar.java.binaries=${WORKSPACE}/fw-base-nop/target/classes " + 
                         "-Dsonar.password=admin " 
                     }
                }
             }
            withDockerRegistry(credentialsId: 'auth_harbor', url: 'http://192.168.176.155') {
                stage('build and push docker image') {
                    sh "cp /tmp/Dockerfile ${project_name}/target/"
                    def customImage = docker.build("${registry_url}/library/${project_name}:${imageTag}-${BUILD_NUMBER}","--build-arg jar_name=${project_name}.jar ${project_name}/target/")
                    echo "Pushing the image"
                    customImage.push()
                }
                stage('delete image') {
                    echo "Deleting the local image"
                    sh "docker rmi -f ${registry_url}/library/${project_name}:${imageTag}-${BUILD_NUMBER}"
                }
            }
        }else {
            echo "---currentBuild.result is:${currentBuild.result}"
            emailext (
                subject: "'${env.JOB_NAME} [${env.BUILD_NUMBER}]' Build Exception",
                body: """
                Details:<br>
                    Failure: Job ${env.JOB_NAME} [${env.BUILD_NUMBER}] <br>
                    Status: ${env.JOB_NAME} jenkins build exception <br>
                    URL: ${env.BUILD_URL} <br>
                    Project Name: ${env.JOB_NAME} <br>
                    Build ID: ${env.BUILD_NUMBER} <br>
                    Information: Compilation of the code failed
                """
            )
        }
    }
    stage('deploy'){
        container('ansible'){
            ansiblePlaybook (
                become: true,
                disableHostKeyChecking: true,
                installation: '/usr/local/bin/ansible-playbook',
                inventory: '/tmp/ansible/hosts',
                credentialsId: '160-ssh',
                limit: '192.168.176.148',
                playbook: '/tmp/ansible/deploy-kubernetes.yaml',
                extras: " -e 'project_group=$project_group project_name=$project_name'"
            )
        }
    }
  }
}
The Deployment resource file applied by the playbook can also be rendered from a Jinja2 template (for example with Ansible’s template module) instead of being copied as a ready-made file. A simple template looks like this:

$ cat templates/deployments.yaml.j2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-{{server_name}}
  namespace: {{ns}}
  labels:
    app: {{server_name}}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{server_name}}
  template:
    metadata:
      labels:
        app: {{server_name}}
    spec:
      containers:
        - name: {{server_name}}
          image: registry.example.com/{{project_group}}/{{project_name}}:{{imageTag}}
          imagePullPolicy: IfNotPresent
          volumeMounts:
          ports:
            - containerPort: {{port}}
          env:
          - name: PROJECT_NAME
            value: "{{project_name}}"
          - name: LOGPATH
            value: "{{log_path}}"
          resources:
            limits:
              cpu: "{{cpu_limit}}"
              memory: "{{memory_limit}}"
            requests:
              cpu: "{{cpu_request}}"
              memory: "{{memory_request}}"
      volumes:
      - name: config-volume
        emptyDir: {}

Explanation:

This template file is the YAML for a Deployment resource object, written with Jinja2 template syntax. Configuring the file is not the end of the work: you still need to add the corresponding steps to the pipeline script, such as downloading deployment dependencies or compiling and uploading the image to the image registry (the compile and image-upload stages in the pipeline above), and you can make this as elaborate as your situation requires.
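As a rough illustration, such a deploy stage could feed the variables this template expects to a playbook through the plugin. Everything below is a placeholder sketch: the playbook name deploy-template.yaml (assumed to render the template with Ansible’s template module and apply it), the port, and the resource values are not taken from this course’s environment:

```groovy
stage('deploy') {
    container('ansible') {
        ansiblePlaybook(
            disableHostKeyChecking: true,
            inventory: '/tmp/ansible/hosts',
            credentialsId: '160-ssh',
            playbook: '/tmp/ansible/deploy-template.yaml',   // hypothetical playbook that renders and applies the template
            extras: " -e 'server_name=${project_name} ns=${project_group}" +
                    " project_group=${project_group} project_name=${project_name}" +
                    " imageTag=${imageTag}-${BUILD_NUMBER} port=8083 log_path=/data/logs/${project_name}" +
                    " cpu_limit=1 memory_limit=2Gi cpu_request=500m memory_request=1Gi'"
        )
    }
}
```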

Below is a fuller template; the variable explanation and the command that follow refer to this version:

$ cat deployments.yaml.j2
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dev-{{server_name}}
  namespace: {{ns}}
spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        name: dev-{{server_name}}
    spec:
      hostNetwork: true
      terminationGracePeriodSeconds: 60
      imagePullSecrets:
      - name: mysecret
      containers:
      - name: dev-{{server_name}}
        image: 192.168.176.155/base/{{server_name}}:{{ tag }}
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command: ["rm","-r","-f","/data/logs/{{server_name}}/"]
        env:
        - name: TZ
          value: "Asia/Shanghai"
{% if java_opts %}
        - name: JAVA_OPTS
          value: {{java_opts}}
        - name: _JAVA_OPTIONS
          value: {{java_opts}}
{% else %}
        - name: JAVA_OPTS
          value: "-Xms1024m -Xmx1024m"
        - name: _JAVA_OPTIONS
          value: "-Xms1024m -Xmx1024m"
{% endif %}
        ports:
        - containerPort: {{port}}
        resources:
          limits:
            cpu: {{ cpu|int }}m
            memory: {{ mem|int }}Mi
          requests:
            cpu: {{ cpu|int }}m
            memory: {{ mem|int }}Mi
        volumeMounts:
{% if server_name == "test-enter" %}
        - name: dev-{{server_name}}-logs
          mountPath: /data/logs/test-enter
{% else %}
        - name: dev-{{server_name}}-logs
          mountPath: /data/logs/{{server_name}}
{% endif %}
        - name: host-resolv
          mountPath: /etc/resolv.conf
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: host-resolv
        hostPath:
          path: /etc/resolv.conf
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: dev-{{server_name}}-logs
        emptyDir: {}

Explanation:

This resource object fills in some variables:

  • server_name: represents the project name, corresponding to ${project_name} in the continuous delivery step.
  • ns: represents the namespace to deploy the resource object to, corresponding to ${project_group} in the continuous delivery step.
  • tag: represents the tag of the image, corresponding to ${imageTag}-${BUILD_NUMBER} in the continuous delivery step.
  • java_opts: represents JVM configuration, with the variable value passed in from external sources.
  • cpu/mem: CPU and memory settings, passed in from external variables.
  • port: service port, passed in from external variables.

The command being executed is:

values="server_name=${project_name} port=8083 cpu=1024 mem=2048 java_opts=${java_opts} ns=$project_group tag=${imageTag}-${BUILD_NUMBER}"

ansible-playbook /etc/ansible/dev-common.yaml -e "$values"
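Inside the pipeline, the same command can be issued from the deploy stage’s ansible container with a sh step. This is a sketch that assumes the variables from the pipeline above plus a java_opts value defined beforehand, and that dev-common.yaml and its inventory already exist on the agent as described:

```groovy
stage('deploy') {
    container('ansible') {
        // Assemble the -e value string and hand it to the playbook, mirroring the command above.
        sh """
        values="server_name=${project_name} port=8083 cpu=1024 mem=2048 java_opts='${java_opts}' ns=${project_group} tag=${imageTag}-${BUILD_NUMBER}"
        ansible-playbook /etc/ansible/dev-common.yaml -e "\$values"
        """
    }
}
```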

This example template is only for reference purposes and can be modified according to your actual needs in your work.

Continuous delivery and continuous deployment can be implemented in various ways. The most important thing is to learn to use these toolchains together and find the approach that suits you best.