08 Continuous Deployment of Services to Docker Containers with Jenkins Integrated with Ansible #

In the previous chapters, we briefly introduced how to use Jenkins’ built-in plugins to deploy application code. In this chapter, we will refactor and optimize the examples from the earlier Jenkins foundations chapter using the Ansible knowledge introduced in the previous chapter.

Installation and Configuration #

We have already covered the basics of Ansible in the earlier chapters, so let’s focus on how to integrate Jenkins with Ansible.

There are three main ways to integrate Jenkins with Ansible:

  • The first is to execute Ansible commands directly in the “Exec shell” section of a Jenkins job step.
  • The second is to use the Ansible plugin installed in Jenkins.
  • The third is to use the Ansible plugin in a pipeline, which we won’t cover in this chapter.

Now let’s go through the usage methods for the first two ways, using a freestyle job as an example.

Using Exec shell #

With “Exec shell”, enter the following command in the input box:

ansible localhost -m shell -a "hostname"

Explanation:

This command executes the hostname command on the local machine. Here, localhost is a built-in group in Ansible, representing the Ansible local machine itself.

In addition to using Ansible Ad-hoc commands, you can also use the ansible-playbook command to run playbooks. Feel free to give it a try if you are interested.
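For example, a minimal sketch of running a playbook from the “Exec shell” input box (the playbook path and its host variable are hypothetical):

# Run a playbook against the default inventory; -e passes extra variables
ansible-playbook /home/demo-playbook.yaml -e "host=localhost"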

Using Ansible Plugin #

In the “Build” step, click “Invoke Ansible Ad-hoc command” and configure as follows:

Where:

Ansible installation: Select the Ansible installation to use. The dropdown list is populated from the Ansible entries configured in the global tools section of Jenkins, which we covered in previous chapters. If the list is empty, you need to add the Ansible environment variables there first. For example, here is the Ansible configuration I set up in Jenkins:

Host pattern: Specify the list of hosts to operate on, which can be an IP address or a group name. Either way, make sure the host IP or group name exists in the Ansible inventory file; “localhost” and “all” are exceptions, as they are built into Ansible.

Inventory: The inventory list of hosts. You can define it yourself, specifying whether to use a file or enter it manually. By default, it uses the file /etc/ansible/hosts.
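For reference, a minimal sketch of an inventory file (the group name and the second IP are illustrative):

# /etc/ansible/hosts
[web_servers]
192.168.176.160
192.168.176.161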

Module: Specify the name of the Ansible module to use (the value specified by the -m parameter in the Ansible command). Ansible achieves batch management of hosts by using modules. If left blank, the default module is “command”.

Playbook path: This parameter is only available when you select the “Invoke Ansible Playbook” option. It specifies the path of the playbook file to run.

Module arguments or command to execute: Specify the command to be executed on the target host, which is the value specified by the -a parameter in the Ansible command.

Credentials: Specify the credentials used to authenticate with the target host when connecting via Ansible. You can leave it blank if you have configured it in the Ansible inventory file.

Vault Credentials: The credentials used to decrypt vault-encrypted content; only credentials of type “file” and “text” are supported.

become: Run the command as a specified user on the target host. Only a user on the target host with passwordless sudo is supported.

sudo: Elevates the user’s privilege to root.

In addition to using the Ad-hoc command, you can also use the ansible-playbook command and ansible-vault command, which correspond to the options “Invoke Ansible Playbook” (used to execute playbook scripts) and “Invoke Ansible Vault” (used to encrypt the playbook content) respectively.
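For reference, a short sketch of the corresponding commands run outside of Jenkins (the playbook path is hypothetical):

# Encrypt a playbook; you will be prompted for a vault password
ansible-vault encrypt /home/demo-playbook.yaml
# Run the encrypted playbook, supplying the vault password interactively
ansible-playbook /home/demo-playbook.yaml --ask-vault-pass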

After saving the configuration, run the job and the result will be as follows:

In practical work, when integrating Jenkins with Ansible, the ansible ad-hoc command is rarely used. Most of the time, code publishing and deployment are done by executing playbooks.

Problems Encountered #

Whether you use the ansible ad-hoc command or the ansible-playbook command, you may encounter the following error after executing a job (ansible command not found):

Or this one (cannot connect to the remote host):

There are two possible causes for these errors. The first is that the command genuinely cannot be found (or the host network is unreachable, or host authentication fails); the second is a permissions issue with the Jenkins user. The first case is less common, because installing Ansible configures its global environment variables by default, and host reachability and authentication are easy to verify by running a command in the server terminal, so I won’t dwell on them here. I mainly want to discuss the Jenkins permissions issue.

By default, Jenkins executes job tasks as the jenkins user, and that user has no permission to run ansible or to connect to remote servers using root’s SSH key. You therefore need to ensure that the jenkins user can execute these commands and read these files. Here are two methods to solve this problem.

Method 1

Modify the Jenkins configuration file.

$ vim /etc/sysconfig/jenkins
# Modify JENKINS_USER and uncomment the current line
JENKINS_USER="root"

Modify the user permissions of the Jenkins related folders (modify according to the actual situation, this step can also be skipped).

chown -R root:root /var/lib/jenkins
chown -R root:root /var/cache/jenkins
chown -R root:root /var/log/jenkins

Restart Jenkins (the restart method may be slightly different if Jenkins is installed in another way).

systemctl restart jenkins

Method 2

Configure a login shell for the jenkins user by changing its shell to bash, then verify the change:

cat /etc/passwd
jenkins:x:989:985:Jenkins Automation Server:/var/lib/jenkins:/bin/bash
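If the shell is not yet bash, a minimal sketch of making the change on a standard Linux system:

# Change the jenkins user's login shell to bash
usermod -s /bin/bash jenkins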

Configure the jenkins user to connect to the target host over SSH without a password.

[root@ansible ]# su jenkins
bash-4.2$  ssh-keygen -t rsa

bash-4.2$ ssh-copy-id root@ip

Both methods can be used, but it is recommended to use Method 1. After configuring it, you can re-execute the job.

Basic Example #

After learning how to integrate Jenkins with Ansible, let’s demonstrate how to use ansible-playbook to deploy services.

Deploying Service to Tomcat #

Take the test-helloworld project deployed in the “Jenkins Basic Practices” chapter as an example. Although the built-in plugin used in that chapter achieved automated project deployment, the publish over ssh plugin struggles to meet the requirements when deploying a project to a large number of hosts. So let’s take a look at how to deploy this project with a playbook.

The script is as follows:

$ cat /home/single-playbook-helloworld.yaml

- hosts: "{{ host }}"
  gather_facts: False
  vars:
    war_file: "{{ workspace }}/target/Helloworldwebapp.war"
    project_dir: "/usr/share/tomcat/webapps/{{ project_name }}"

  tasks:
  - name: Check if directory exists
    shell: ls {{ project_dir }}
    register: dict_exist
    ignore_errors: true
  
  - name: Stop tomcat
    shell: systemctl stop tomcat
  
  - name: Backup old code
    shell: chdir={{ project_dir }}/ tar -czf /bak/{{ project_name }}-{{ build_num }}-$(date -d "today" +"%Y%m%d_%H%M%S").tar.gz {{ project_dir }}/ 
    when: dict_exist is succeeded
    ignore_errors: true
  
  - name: Delete old version configuration files
    file:
      state: absent
      dest: "{{ project_dir }}"
    when: dict_exist is succeeded
  
  - name: Clean cache
    shell: chdir={{ project_dir }}/../../ nohup rm -rf work/
  
  - name: Create directory
    file:
      state: directory
      dest: "{{ project_dir }}"
      mode: 0755
  
  - name: Unzip war package
    unarchive:
      src: "{{ war_file }}"
      dest: "{{ project_dir }}/"
      copy: yes
      mode: 0755
  
  - name: Start tomcat
    shell: systemctl restart tomcat

Procedure Description:

This script is suitable for both new and existing project service deployment.

To improve the playbook’s flexibility and reduce maintenance costs, variables are used to specify the target host, the directory of the war package, the deployment path on the remote server, and the backup path of the service.

At the beginning of the playbook, it checks if the project deployment directory exists to differentiate between a new project and an existing running project.

Whether it is a new project or an existing project, the tomcat service needs to be stopped during application deployment.

The “Backup old code” task determines whether to execute the current task based on the execution result of the “Check if directory exists” task.

You can run the above script either through the Ansible plugin configured in the job or by executing the command directly in the “Exec shell” section. Here, for simplicity, the commands are entered directly in “Exec shell”:

/opt/apache-maven-3.5.4/bin/mvn install
ansible-playbook -i /etc/ansible/hosts /home/single-playbook-helloworld.yaml -e "workspace=${WORKSPACE} host=192.168.176.160 project_name=hello_world build_num=$BUILD_ID"

Here is the execution result:

This is how the service is deployed to the VM using Ansible.

Now let’s look at another example.

Deploying Services to Containers #

Use Ansible to deploy microservices to containers. The general process is similar to the example above, but there are some advanced techniques involved in using containers and Ansible.

Referring to the example in the “Practical Jenkins” chapter under “Deploying Services to Containers,” let’s briefly explain the limitations of deploying services to containers using the built-in plugin compared to the previous Ansible example:

If you deploy a service as a standalone container and want to meet load balancing and high availability requirements, you need to deploy the image to multiple servers. With container orchestration tools like Swarm, Kubernetes, or Mesos, that is as simple as updating the image name with a single command, provided there are no special configurations. However, if you need to deploy containers to multiple virtual hosts fronted by load balancing software such as HAProxy or Nginx, this plugin-based method is not suitable.

In addition, if you have a large number of projects, it is not practical to configure the “publish over ssh” plugin for each project in Jenkins. If you need to modify certain configurations for some projects along the way, you would have to modify them one by one, which is no different from manual operations and would waste time and decrease efficiency.

Furthermore, if multiple projects are built simultaneously on the same Jenkins host, it would put a heavy load on Jenkins and may cause system or Jenkins service crashes, affecting all project build and deployment operations.

First, let’s go through the process of deploying the service to containers using Ansible, as shown below:

[root@ansible ansible]# cat /tmp/dev-test.yaml 
---
- hosts: 192.168.176.160
  remote_user: root
  gather_facts: no
  vars:
     jar_name: "{{ workspace }}/fw-base-nop/target/fw-base-nop.jar"
     remote_dest: /data/base/nop

  tasks:
  - name: copy jar file
    copy: src={{ jar_name }} dest={{ remote_dest }}

  - name: stop container
    shell: docker stop base-nop

  - name: delete container
    shell: docker rm base-nop

  - name: delete image
    shell: docker rmi base/nop

  - name: build_image
    shell: "cd {{ remote_dest }}  && docker build -t base/nop ."

  - name: run container
    shell: docker run --network host -e TZ="Asia/Shanghai" -d -v /etc/localtime:/etc/localtime:ro -m 2G --cpu-shares 512 -v /data/logs/fw-base-nop:/data/logs/fw-base-nop --name base-nop base/nop

In the Jenkins command line configuration, change the previous “Send files or execute commands over ssh” to using “execute shell.”
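For example, a sketch of the “execute shell” content (the playbook only needs the workspace variable passed in; the build step producing the jar is configured separately in the job):

# Deploy with the playbook; the job's build step has already produced the jar
ansible-playbook -i /etc/ansible/hosts /tmp/dev-test.yaml -e "workspace=${WORKSPACE}"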

Then, execute the command, and the result is as follows:

Explanation:

In the above example, we have rewritten the commands to be executed in Jenkins using the Ansible playbook. The steps of pulling code and configuring the code compilation remain the same.

This script has added two variables. The workspace variable is passed from outside and is the path of the job. The remote_dest variable specifies the directory on the remote server where the jar package and Dockerfile are stored.

As seen in the screenshots above, this playbook executed successfully. However, there are several prerequisites for the successful execution of this script:

  • The remote_dest path on the remote server must already exist (for this project it was created previously). For new projects, you still need to create it manually.

  • The Dockerfile must also exist on the target host in advance. Likewise, for new projects it has to be created manually. A sketch of this manual preparation follows the list.
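A minimal sketch of that one-time preparation, run from the Ansible host (the Dockerfile source path is an assumption for illustration):

# Create the deployment directory expected by the playbook
ssh root@192.168.176.160 "mkdir -p /data/base/nop"
# Copy the project's Dockerfile to the target host ahead of time
scp /etc/ansible/Dockerfile root@192.168.176.160:/data/base/nop/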

If any task in the list fails, the whole playbook deployment fails and exits. For example, for a newly added Jenkins project no container has been started and no image has been built, so the “stop container” task fails; the same happens if an existing project’s container exited abnormally after running for a period of time. In either case the deployment fails, so this playbook is not really viable as it stands.

If there are new Jenkins projects, you will need to modify the content of the playbook again, which is obviously a frequent and troublesome operation.

Therefore, based on the possible problems mentioned above, let’s further optimize this playbook as follows:

- hosts: "{{ target_host }}"
  remote_user: root
  gather_facts: False

  vars:
    jar_src: "{{ jar_file }}"
    dest_dict: "/data/{{ project_name }}/"

  tasks:
    - name: Check if the directory exists
      shell: ls {{ dest_dict }}
      register: dict_exist
      ignore_errors: true

    - name: Create relevant directories
      file: dest="{{ item }}" state=directory  mode=755
      with_items:
        - "{{ dest_dict }}"
        - /data/logs/{{ project_name }}
      when: dict_exist is failure

    - name: Copy jar package and Dockerfile to the target host
      copy: src={{ item }}  dest={{ dest_dict }}/
      with_items:
        - '{{ jar_file }}'
        - '/etc/ansible/Dockerfile'

    - name: Check if the container exists
      shell: "docker ps -a --filter name={{ project_name }} |grep -v COMMAND"
      ignore_errors: true
      register: container_exists

    - name: Check the container status
      shell: "docker ps -a --filter name={{ project_name }} --format '{{ '{{' }} .Status {{ '}}' }}'"
      ignore_errors: true
      register: container_state
      when: container_exists.rc == 0
    
    - name: Stop container
      shell: "docker stop {{ project_name }}"
      when: "('Up' in container_state.stdout)"
      ignore_errors: true
    
    - name: Remove container
      shell: "docker rm {{ project_name }}"
      when: container_exists.rc == 0
      ignore_errors: true
    
    - name: Check if image exists
      command: "docker images --filter reference={{ project_name }}* --format '{{ '{{' }} .Repository {{ '}}' }}:{{ '{{' }}.Tag {{ '}}' }}'"
      register: image_exists
      ignore_errors: true
    
    - name: Delete image
      shell: "docker rmi -f {{ item }}"
      loop: "{{ image_exists.stdout_lines }}"
      ignore_errors: true
      when: image_exists.rc == 0
    
    - name: Build image
      shell: "cd {{ dest_dict }}  && docker build -t {{ image_name }} --build-arg project_name={{ project_name }} ."
    
    - name: Start container
      shell: 'docker run {{ run_option }} --name {{ project_name }} {{ image_name }}'
            

Description:

The host pattern passed in via the `target_host` variable must exist in the inventory file. You can provide either a host IP or a host group name.

In the playbook under the `vars` parameter, two variables are defined: the path where the jar package is located and the directory where the jar package and Dockerfile are copied to the remote server. To differentiate each project, a directory is created for each project based on the project name.

The task "**Check if container exists**" defines a variable `container_exists`. If the value is 0, it means that there was output when executing the command above (i.e., the container exists, whether it is in a running or stopped state); if it is non-zero, it means there was no output when executing the command, indicating that no container exists in this example.

The tasks "**Check container status**" and "**Check image status**" have similar syntax, but there is a slight difference. When executed in Ansible playbook, `{{ }}` is escaped by default. Therefore, special characters need to be handled accordingly, as shown in the example above. If you are not familiar with this, you can refer to the "Escaping" section in the earlier Ansible Basics chapter.

In the "**Build image**" task list, the build is performed based on the project name passed as a parameter. The image name is also customized based on the project name and `git short id` (as shown below).

When executing in Jenkins, you need to pass the following parameters:

```bash
#!/bin/bash

# Get the git commit's short id
git_id=`git rev-parse --short HEAD`

# Define the project name
project_name=fw-base-nop

# Define the image
image_name="${project_name}:${git_id}"

# Find the jar package path
cd ${WORKSPACE}/ && jar_file=$(find "$(pwd)" -name "${project_name}.jar" | head -1)

# Define the runtime parameters for the container
container_run_option="--network host -e TZ=Asia/Shanghai -d -v /etc/localtime:/etc/localtime:ro -m 2G --cpu-shares 512 -v /data/logs/${project_name}:/data/logs/${project_name}"

# Execute the playbook with parameters
ansible-playbook -i /etc/ansible/hosts /root/dev-deploy.yaml -e "{'jar_file':'${jar_file}','project_name':'${project_name}','image_name':'${image_name}','run_option':'${container_run_option}','target_host':'192.168.176.160'}"
```

Note:

You only need to modify the project name when adding a new project. If you have specific requirements for the container’s runtime parameters, you can also modify them.

Since my project name is the same as the jar package generated by Maven, I did not define the jar package name here. You can customize it according to your needs.

The definition of the target host in this script uses a variable to pass an IP, but you can also pass a host group name, which allows you to deploy services on multiple hosts.

The purpose of this document is to demonstrate the process of continuous delivery and deployment, so the playbook relies on the shell module in many places. The container operation tasks could also be implemented with Ansible’s docker_container module; a sketch follows. If you are interested, you can try it yourself.
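For example, a minimal sketch of the “Start container” task rewritten with the docker_container module (this assumes the Python Docker SDK is installed on the target host; on newer Ansible releases the module is part of the community.docker collection):

    - name: Start container via the docker_container module
      docker_container:
        name: "{{ project_name }}"
        image: "{{ image_name }}"
        state: started
        network_mode: host
        memory: 2G
        cpu_shares: 512
        env:
          TZ: "Asia/Shanghai"
        volumes:
          - /etc/localtime:/etc/localtime:ro
          - "/data/logs/{{ project_name }}:/data/logs/{{ project_name }}"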

Since variables are used during image building, the Dockerfile also needs to be modified accordingly.

  FROM fabric8/java-alpine-openjdk8-jre
  ARG project_name 
  ENV JAVA_APP_JAR ${project_name}.jar

  COPY $JAVA_APP_JAR /deployments/
  CMD java -jar /deployments/$JAVA_APP_JAR

Using the optimized playbook, you can meet the requirements for service deployment in a microservices architecture. However, there are still some limitations:

  • The Dockerfile is not parameterized for services that use different base images; this can be achieved by setting a variable (see the sketch after this list).

  • Services whose project name differs from the jar package name are not handled; adding a variable solves this problem too.
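For example, a minimal sketch of parameterizing the base image (the base_image build argument is an assumption, passed with --build-arg just like project_name; note that an ARG declared before FROM must be re-declared after it to stay in scope):

  ARG base_image=fabric8/java-alpine-openjdk8-jre
  FROM ${base_image}
  ARG project_name
  ENV JAVA_APP_JAR ${project_name}.jar

  COPY $JAVA_APP_JAR /deployments/
  CMD java -jar /deployments/$JAVA_APP_JAR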

If you don’t want to define the values of these variables in the “Exec shell” input box, you can define the values of these variables through the “Parameterize the build process” option. Alternatively, you can place some parameters that can automatically obtain values in the playbook file, leaving only the parameters that need to be manually entered.

Having addressed the first two issues, let’s move on to the third: the load that building and packaging place on the Jenkins host. When multiple projects are built simultaneously on the same Jenkins host, Jenkins may hang or the service may crash (I’ve encountered this before). In that case, adding a slave node is the solution.

Jenkins slave nodes can be deployed on virtual machines or in containers, and they can also be provisioned dynamically through configuration. I covered how to add and use slave nodes in a previous article, and it is relatively simple. Just remember to install the necessary tools on the slave node.
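For reference, a sketch of the tooling a slave node needs for the jobs in this chapter, assuming a CentOS-style node with the EPEL repository enabled (package names and repositories may differ on your distribution):

# Tools used by the builds and playbooks in this chapter:
# git for checkout, a JDK for building, Ansible for deployment,
# and Docker on hosts that run containers
yum install -y git java-1.8.0-openjdk ansible docker
# Maven may need to be installed from a tarball, as in the examples
# above (/opt/apache-maven-3.5.4)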

This concludes the content of this section. To summarize, I have demonstrated how to deploy services to a VM with Tomcat and a Docker container. The deployment to a container orchestration system will be covered at the end of the course.