Using Kubernetes Plugins for Continuous Deployment of Services to Kubernetes Clusters #
Another use of the Kubernetes-related plugins is to continuously deploy pre-built code to a Kubernetes cluster. The main plugins involved in this process are:
- Kubernetes CLI
- Kubernetes Continuous Deploy (kubernetes cd)
Let’s take a closer look at these two plugins.
Kubernetes CLI #
The main function of this plugin is to let pipeline scripts interact with the Kubernetes cluster through the `kubectl` command, using the credentials configured in the Jenkins system.
The core method of this plugin is `withKubeConfig()`, which takes the following parameters (the first two are required):
- `credentialsId`: The ID of the credential, i.e. the one created when configuring the connection to the Kubernetes cluster in Jenkins.
- `serverUrl`: The address of the Kubernetes API server, i.e. the address configured for the Kubernetes integration in Jenkins.
- `caCertificate`: The certificate used to verify the API server. If not provided, verification is skipped. Optional.
- `clusterName`: The name of the generated cluster configuration. The default value is `k8s`.
- `namespace`: The default namespace for the context.
- `contextName`: The name of the generated context configuration. The default value is `k8s`.
Now that we understand the parameters above, let's take a look at the syntax of `withKubeConfig()`:
```groovy
node {
    stage('List pods') {
        withKubeConfig([credentialsId: '<credential-id>',
                        caCertificate: '<ca-certificate>',
                        serverUrl: '<api-server-address>',
                        contextName: '<context-name>',
                        clusterName: '<cluster-name>',
                        namespace: '<namespace>'
        ]) {
            sh 'kubectl get pods'
        }
    }
}
```
You can also generate the corresponding syntax fragment with the snippet generator by selecting `withKubeConfig: Setup Kubernetes CLI`. I won't demonstrate it here; feel free to try it on your own if you're interested.
Since the optional parameters of this method are less frequently used, I won't explain them in detail; instead, I will focus on the two required parameters.
Let’s demonstrate with an example:
```groovy
node {
    stage('Get nodes') {
        withKubeConfig([credentialsId: 'afffefbf-9216-41dc-b803-7791cdb87134', serverUrl: 'https://192.168.176.156:6443']) {
            sh 'kubectl get nodes'
        }
    }
}
```
Note:
To use this plugin, make sure that the agent executing the pipeline has the `kubectl` binary and the appropriate permissions. If you encounter an error like the following while running the pipeline:

```
java.io.IOException: error=2, No such file or directory
	at java.lang.UNIXProcess.forkAndExec(Native Method)
......
Caused: java.io.IOException: Cannot run program "kubectl": error=2, No such file or directory
......
Finished: FAILURE
```

it means that the `kubectl` command is either not installed or cannot be found via the global environment variables.
To resolve this issue, edit the `export PATH=` line in the `/etc/profile` file, adding the directory that contains the commonly used executables, or place the `kubectl` binary in a directory already listed there. For example:

```shell
export PATH=/root/local/bin:/usr/bin:/usr/sbin:$PATH
```

This lets `kubectl` be found in those paths. After modifying the file, run `source /etc/profile` to make the change take effect.
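The effect of extending `PATH` can be checked in isolation. Below is a minimal sketch that uses a stand-in script instead of the real `kubectl` binary (the `/tmp/fake-bin` path and the fake output are purely illustrative):

```shell
# Create a stand-in executable (in practice this would be the real kubectl).
mkdir -p /tmp/fake-bin
printf '#!/bin/sh\necho fake-kubectl\n' > /tmp/fake-bin/kubectl
chmod +x /tmp/fake-bin/kubectl

# Prepend its directory to PATH, exactly as /etc/profile would after sourcing.
export PATH=/tmp/fake-bin:$PATH

command -v kubectl   # -> /tmp/fake-bin/kubectl
kubectl              # -> fake-kubectl
```

If `command -v kubectl` prints nothing, the binary is still not on the `PATH` and the Jenkins agent will raise the same `Cannot run program "kubectl"` error.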
In Jenkins, it is not common to run ad-hoc `kubectl` commands against the cluster directly. Most scenarios deploy code by combining this command with predefined resource object files.
Using the kubectl Container #
Another way to use the `kubectl` command is through a dedicated `kubectl` container: build a small image that can interact with the Kubernetes cluster, then use that image for every deployment. Here is an example Dockerfile:
```dockerfile
FROM alpine
USER root
COPY kubectl /usr/bin/kubectl
RUN chmod +x /usr/bin/kubectl && mkdir -p /root/.kube/
COPY config /root/.kube/config
```
Note:
- The `kubectl` file is the binary executable matching the cluster version.
- The `config` file is the authentication configuration used by the `kubectl` client command to interact with the Kubernetes cluster. Its default location is `/root/.kube/config`.
Build the image based on this Dockerfile:
```shell
docker build -t alpine-kubectl:1.14 -f dockerfile .
```
In this case, I tagged the image as `1.14` since my Kubernetes cluster version is 1.14.
After building the image, you can verify its effectiveness with the following command:
```shell
$ docker run -it --rm alpine-kubectl:1.14 kubectl get nodes
NAME              STATUS   ROLES    AGE    VERSION
192.168.176.151   Ready    <none>   171d   v1.14.1
192.168.176.152   Ready    <none>   171d   v1.14.1
......
```
If the command is successfully executed, it means that the custom image has been built successfully, and you can now use this image to interact with the Kubernetes cluster.
Here is an example of how to use this image:
```groovy
podTemplate(cloud: 'kubernetes', namespace: 'default', label: 'TEST-CD',
    containers: [
        containerTemplate(
            name: 'test-cd',
            image: "192.168.176.155/library/alpine-kubectl:1.14",
            ttyEnabled: true,
            privileged: true,
            alwaysPullImage: false)
    ],
) {
    node('TEST-CD') {
        stage('Test CD') {
            container('test-cd') {
                sh 'kubectl get nodes'
            }
        }
    }
}
```
Please note that this is just a reference example. In real-world scenarios, you can define how to achieve continuous deployment to the cluster based on specific requirements.
Kubernetes Continuous Deploy #
Abbreviated as `kubernetes cd`, this plugin deploys Kubernetes resource objects to a Kubernetes cluster.
It provides the following features:
- Fetches cluster credentials from the master node via SSH; they can also be configured manually.
- Variable substitution in resource configurations, allowing dynamic resource deployment.
- Manages login credentials for private Docker registries.
- Does not require the `kubectl` tool to be installed on the Jenkins agent node.
The supported resource object types and their API groups are as follows:
- ConfigMap (v1)
- Daemon Set (apps/v1, extensions/v1beta1, apps/v1beta2)
- Deployment (apps/v1, apps/v1beta1, extensions/v1beta1, apps/v1beta2)
- Ingress (extensions/v1beta1, networking.k8s.io/v1beta1)
- Job (batch/v1)
- Namespace (v1)
- Pod (v1)
- Replica Set (apps/v1, extensions/v1beta1, apps/v1beta2)
- Replication Controller (v1) - Does not support rolling updates. If needed, please use Deployment.
- Secret (v1) - Secret configurations are also supported.
- Service (v1)
- Stateful Set (apps/v1, apps/v1beta1, apps/v1beta2)
- Cron Job (batch/v1beta1, batch/v2alpha1)
- Horizontal Pod Autoscaler (autoscaling/v1, autoscaling/v2beta1, autoscaling/v2beta2)
- Network Policy (networking.k8s.io/v1)
- Persistent Volume (v1)
- Persistent Volume Claim (v1)
- ClusterRole (rbac.authorization.k8s.io/v1)
- ClusterRoleBinding (rbac.authorization.k8s.io/v1)
- Role (rbac.authorization.k8s.io/v1)
- RoleBinding (rbac.authorization.k8s.io/v1)
- ServiceAccount (v1)
Those familiar with Kubernetes will know what these resource objects are used for. For continuous deployment of application code with this plugin, understanding the Deployment, Service, ConfigMap, and Ingress resource objects is sufficient. Almost all continuous deployments use Deployment as the resource object; the other types are generally not managed through Jenkins. Since this series of articles is not a Kubernetes course, we won't go into detail about the listed resource objects.
To interact with the Kubernetes cluster, the cd plugin needs a credential that gives Jenkins permission to operate on the cluster through the `kubectl` command. Unlike the credentials created when integrating Jenkins with Kubernetes, this credential is of type Kubernetes configuration (kubeconfig): it holds the kubeconfig file that `kubectl` itself uses to talk to the cluster (default path `/root/.kube/config`), rather than an authentication certificate generated from the file's content.
There are multiple ways to create a credential using the kubeconfig file in Jenkins:
- Directly entering the kubeconfig content.
- Setting the kubeconfig path on the Jenkins master.
- Fetching the kubeconfig file from a remote SSH server.
Here is an example of creating the credential by directly using the kubeconfig file content.
Click “Credentials” -> “System” -> “Global Credentials” -> “Add Credentials”, as shown in the following image:
Simply paste the content of the kubeconfig file into the “Content” box, and then set an ID name. When writing the pipeline script, use the ID name of this credential for reference. Save all the configurations when finished.
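For reference, the pasted content is a standard kubeconfig file, which typically has the following shape (every value below is a placeholder, not a real cluster setting):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: k8s
  cluster:
    server: https://<api-server-address>:6443
    certificate-authority-data: <base64-encoded-ca-certificate>
contexts:
- name: k8s
  context:
    cluster: k8s
    user: admin
current-context: k8s
users:
- name: admin
  user:
    client-certificate-data: <base64-encoded-client-certificate>
    client-key-data: <base64-encoded-client-key>
```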
Now that the credential has been created, you can use the plugin. The `kubernetes cd` plugin can be configured and used both in regular Jenkins job types (such as Freestyle projects and Maven projects) and in pipeline scripts.
In a normal type of job (e.g., Freestyle project), when configuring the job, select “Add build step” and choose “Deploy to kubernetes” from the list of options.
For example, in a Freestyle project, I configure to create a namespace in the Kubernetes cluster as shown below:
Note:
- kubeconfig: select the credential created above.
- Config Files: the defined Kubernetes resource object files. The base path is the `${WORKSPACE}` of the Jenkins project. Multiple resource object files are separated by commas (`,`). Absolute paths such as `/root/ns.yaml` are not supported, but paths relative to the workspace are, e.g. `job/ns.yaml` (which resolves to `${WORKSPACE}/job/ns.yaml`). For other ways to write this file path, refer to the plugin's documentation.
- ns.yaml: a namespace resource object file. Its content is relatively simple, as shown below:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-deploy-plugin
```
I won’t paste the execution results here. If you want to see the effect, you can try it yourself.
When using the cd plugin in a pipeline job, you need to write a pipeline script. Before you start, you need to understand the plugin's syntax.
The `kubernetes-cd` plugin provides the `kubernetesDeploy` method for pipeline scripts. You can generate its syntax snippet with the `kubernetesDeploy: Deploy to Kubernetes` snippet generator, as shown below:
Beautified, the syntax looks like this:

```groovy
kubernetesDeploy(kubeconfigId: 'k8s_auth',                     // Required
    configs: 'xx-deploy.yaml,xx-service.yaml,xx-ingress.yaml', // Required
    enableConfigSubstitution: false,
    secretNamespace: '<secret-namespace>',
    secretName: '<secret-name>',
    dockerCredentials: [
        [credentialsId: '<credentials-id-for-docker-hub>'],
        [credentialsId: 'auth_harbor', url: 'http://192.168.176.154'],
    ]
)
```
Explanation:
- `kubeconfigId`: The ID of the kubeconfig credential stored in Jenkins. Required.
- `configs`: The resource object files, separated by commas. Required.
- `enableConfigSubstitution`: Enables variable substitution in the configuration. Optional.
- `secretNamespace`: The namespace containing the secret used to authenticate against the private Docker registry, usually when pulling images from it. Optional.
- `secretName`: The name of the secret used to authenticate against the private Docker registry. The secret holds the registry address and the username and password for logging in.
When deploying pods with a Deployment resource object, you usually need to pull the application image from a private Docker registry. For registries that require authentication, there are two ways to authenticate the pull: either the host running the pod has already logged in to the registry, in which case the image can be pulled without adding anything to the existing resource object files; or the host has not logged in, in which case the pod spec must define the `imagePullSecrets` parameter, and the secret it names is used to authenticate against the registry address when pulling the image. Either way, note that the private registry's address must be listed under the `insecure-registries` parameter in the Docker configuration file on the host.
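As a reference, the `insecure-registries` setting mentioned above lives in the Docker daemon configuration file, conventionally `/etc/docker/daemon.json` (restart the Docker daemon after changing it). The address below is the registry used in this article's examples:

```json
{
  "insecure-registries": ["192.168.176.155"]
}
```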
If this parameter is set, its value is exposed to the resource files as the environment variable `KUBERNETES_SECRET_NAME`, which you can reference in the Deployment resource object as shown below:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sample-k8s-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sample-k8s-app
    spec:
      containers:
      - name: sample-k8s-app-container
        image: <username or registry URL>/<image_name>:<tag(maybe $BUILD_NUMBER)>
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: $KUBERNETES_SECRET_NAME
```
As for how to create this secret, it will be demonstrated in the practice section below.
- `dockerCredentials`: The credentials used for private Docker registry authentication, usually for pulling images from the private registry. Optional.
That’s it for the usage of the Kubernetes plugin for continuous deployment to the Kubernetes cluster. Next, a simple practice will be performed based on these two plugins.
Using the kubernetesDeploy Plugin #
Based on the continuous delivery setup from the previous section, add a new stage:
```groovy
stage('deploy to k8s') {
    sh """
        cd /tmp/k8s_yaml/${project_name}/
        sed -i 's#fw-base-nop:.*-*[0-9]\$#fw-base-nop:${imageTag}-${BUILD_NUMBER}#' deployment.yaml
        cp * ${WORKSPACE}/
    """
    kubernetesDeploy (
        kubeconfigId: "k8s_auth",
        configs: "*.yaml",
    )
}
```
Explanation:
The shell commands in this stage rewrite the `image` value in the Deployment resource object to the name of the newly built image, then copy all resource object files into the workspace directory, where the plugin finds and matches the yaml files when the pipeline runs.
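The `sed` expression can be exercised on its own. Below is a sketch with hypothetical tag values; the one-line file stands in for the real deployment.yaml:

```shell
imageTag=bc29b5e
BUILD_NUMBER=38

# Stand-in for deployment.yaml with an image built from an older commit.
printf 'image: 192.168.176.155/base/fw-base-nop:abc1234-37\n' > deployment.yaml

# Rewrite everything after "fw-base-nop:" to the new <commit>-<build> tag.
sed -i "s#fw-base-nop:.*-*[0-9]\$#fw-base-nop:${imageTag}-${BUILD_NUMBER}#" deployment.yaml

cat deployment.yaml   # -> image: 192.168.176.155/base/fw-base-nop:bc29b5e-38
```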
The above is a basic example meant to demonstrate the process. In real projects you can keep these resource object files in a designated folder in the source code repository, such as a `kubernetes-deploy` directory in the project root. The pipeline script above can then be modified as follows:
```groovy
stage('deploy to k8s') {
    sh """
        sed -i 's#fw-base-nop:.*-*[0-9]\$#fw-base-nop:${imageTag}-${BUILD_NUMBER}#' kubernetes-deploy/deployment-${project_name}.yaml
    """
    kubernetesDeploy (
        kubeconfigId: "k8s_auth",
        configs: "kubernetes-deploy/*.yaml",
    )
}
```
Note:
- The kubernetes-cd plugin parses the Kubernetes resource object files by default; if there is any syntax error, it reports the error and aborts execution.
- `configs: "*.yaml"` can specify a single file or multiple files. The default base directory is the job's workspace.
- When defining resource object files, pay attention to the `deployment.yaml` file: if it includes a rolling update configuration (so far, this is the only part that needs modification), `maxSurge` and `maxUnavailable` must be written as percentages.
```yaml
strategy:
  rollingUpdate:
    maxSurge: 100%
    maxUnavailable: 0%
```
If they are written as integers, the Jenkins plugin will report an error while parsing the file.
Error handling was mentioned in the previous sections. In a complete pipeline script, the step most likely to fail is usually code compilation, so an exception handling operation should be added at this point as well. Scripted pipeline still uses the `try/catch/finally` syntax, as shown below:
```groovy
try {
    container('jnlp') {
        stage('clone code') {
            checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
            script {
                imageTag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
            }
            echo "${imageTag}"
        }
        stage('Build a Maven project') {
            echo "${project_name}"
            sh "cd ${project_name} && mvn clean install -DskipTests -Pproduct -U"
            currentBuild.result = 'SUCCESS'
        }
    }
} catch (e) {
    currentBuild.result = 'FAILURE'
}
```
Explanation:
The `try` block can contain one or more stages. Regardless of which stages it wraps, the `catch` block must immediately follow the `try` block.
In this example, the `try` block wraps the stages that pull and compile the code. You can place the `try` block around any step whose exceptions you want to catch; adjust it to your actual situation.
Note that the `catch` block usually catches exceptions such as a failed command or a misconfigured plugin method. It will not catch exceptions raised internally by Jenkins (for example, a property that does not exist or an instruction not applicable to pipeline scripts). In that case, even though the job fails, the email-sending operation will not run if a build-status check was performed beforehand.
Here is the complete code example:
```groovy
def project_name = 'fw-base-nop'     // Project name
def project_group = 'base'           // Project group (referenced in the deploy stage)
def registry_url = '192.168.176.155' // Image registry address

podTemplate(cloud: 'kubernetes', namespace: 'default', label: 'pre-CICD',
    serviceAccount: 'default', containers: [
    containerTemplate(
        name: 'jnlp',
        image: "192.168.176.155/library/jenkins-slave:latest",
        args: '${computer.jnlpmac} ${computer.name}',
        ttyEnabled: true,
        privileged: true,
        alwaysPullImage: false,
    ),
    containerTemplate(
        name: 'sonar',
        image: "192.168.176.155/library/jenkins-slave:sonar",
        ttyEnabled: true,
        privileged: true,
        command: 'cat',
        alwaysPullImage: false,
    ),
    ],
    volumes: [
        nfsVolume(mountPath: '/tmp', serverAddress: '192.168.177.43', serverPath: '/data/nfs', readOnly: false),
        hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
        nfsVolume(mountPath: '/root/.m2', serverAddress: '192.168.177.43', serverPath: '/data/nfs/.m2', readOnly: false),
    ],
) {
    node('pre-CICD') {
        stage('build') {
            try {
                container('jnlp') {
                    stage('clone code') {
                        checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                        script {
                            imageTag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                        }
                        echo "${imageTag}"
                    }
                    stage('Build a Maven project') {
                        sh "cd ${project_name} && mvn clean install -DskipTests -Pproduct -U"
                        currentBuild.result = 'SUCCESS'
                    }
                }
            } catch (e) {
                currentBuild.result = 'FAILURE'
            }
            if (currentBuild.result == null || currentBuild.result == 'SUCCESS') {
                container('sonar') {
                    stage('sonar test') {
                        withSonarQubeEnv(credentialsId: 'sonarqube') {
                            sh "sonar-scanner -X " +
                               "-Dsonar.login=admin " +
                               "-Dsonar.language=java " +
                               "-Dsonar.projectKey=${JOB_NAME} " +
                               "-Dsonar.projectName=${JOB_NAME} " +
                               "-Dsonar.projectVersion=${BUILD_NUMBER} " +
                               "-Dsonar.sources=${WORKSPACE}/fw-base-nop " +
                               "-Dsonar.sourceEncoding=UTF-8 " +
                               "-Dsonar.java.binaries=${WORKSPACE}/fw-base-nop/target/classes " +
                               "-Dsonar.password=admin "
                        }
                    }
                }
                withDockerRegistry(credentialsId: 'auth_harbor', url: 'http://192.168.176.155') {
                    stage('build and push docker image') {
                        sh "cp /tmp/Dockerfile ${project_name}/target/"
                        def customImage = docker.build("${registry_url}/base/${project_name}:${imageTag}-${BUILD_NUMBER}", "--build-arg jar_name=${project_name}.jar ${project_name}/target/")
                        echo "Pushing image"
                        customImage.push()
                    }
                    stage('delete image') {
                        echo "Deleting local image"
                        sh "docker rmi -f ${registry_url}/base/${project_name}:${imageTag}-${BUILD_NUMBER}"
                    }
                }
            } else {
                echo "---currentBuild.result is:${currentBuild.result}"
                emailext (
                    subject: "'${env.JOB_NAME} [${env.BUILD_NUMBER}]' build failed",
                    body: """
                        Details:\n
                        Failure: Job ${env.JOB_NAME} [${env.BUILD_NUMBER}] \n
                        Status: ${env.JOB_NAME} Jenkins build failed \n
                        URL: ${env.BUILD_URL} \n
                        Project name: ${env.JOB_NAME} \n
                        Project build id: ${env.BUILD_NUMBER} \n
                        Information: code compilation failed
                    """,
                    to: "[email protected]",
                    recipientProviders: [[$class: 'DevelopersRecipientProvider']]
                )
            }
            stage('deploy') {
                if (currentBuild.result == null || currentBuild.result == 'SUCCESS') {
                    sh """
                        cd /tmp/k8s_yaml/${project_group}/${project_name}
                        sed -i 's#fw-base-nop:.*-*[0-9]\$#fw-base-nop:${imageTag}-${BUILD_NUMBER}#' deployment.yaml
                        cp * ${WORKSPACE}/
                    """
                    kubernetesDeploy (
                        kubeconfigId: "k8s_auth",
                        configs: "*.yaml",
                    )
                }
            }
        }
    }
}
```
imagePullSecrets #
`imagePullSecrets` is used to authenticate against a private Docker registry when pulling container images for a created resource object.
In this plugin series, `imagePullSecrets` has only come up in the kubernetes-plugin's PodTemplate configuration. There, however, the secret is only used to pull the container image that runs the pipeline script; it cannot be used to authenticate against a private registry when deploying application containers into a Kubernetes cluster.
As mentioned earlier, there are two ways to authenticate when pulling application images. If the host has already authenticated with the private registry, the image can be deployed directly. If the host has not, the pod spec must define the `imagePullSecrets` parameter; the image is then pulled from the registry using the username and password in the referenced secret.
So how is this secret created? A single command suffices:

```shell
kubectl create secret docker-registry <name> \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
```
Where:
- `DOCKER_REGISTRY_SERVER` is the address of the private Docker registry.
- `DOCKER_USER` is the username used to authenticate against the private registry.
- `DOCKER_PASSWORD` is the password used to authenticate against the private registry.
- By default, without the `-n` parameter the secret is created in the `default` namespace. If you want to deploy the pod in a specific namespace, you also need to create the secret in that namespace.
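Under the hood, the command stores a `.dockerconfigjson` payload whose `auth` field is the base64 encoding of `user:password`. A sketch reproducing that payload with the placeholder values from the command above (structure simplified):

```shell
# base64 of "user:password" is what ends up in the secret's auth field.
auth=$(printf 'DOCKER_USER:DOCKER_PASSWORD' | base64)

# The resulting .dockerconfigjson structure (simplified).
cat <<EOF
{"auths": {"DOCKER_REGISTRY_SERVER": {"username": "DOCKER_USER", "auth": "$auth"}}}
EOF
```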
Once the secret is created, edit the resource object file (usually `deployment.yaml`) and add the following under `template.spec`:
```yaml
imagePullSecrets:
- name: <secret_name>
```
With this in place, deploying the pod authenticates against the private registry automatically, whether or not the host has logged in. However, it requires creating the image pull secret in every namespace where application containers are deployed. That works in practice, but it scales poorly with many projects and namespaces. Fortunately, the Kubernetes Continuous Deploy plugin offers a simpler approach through the `dockerCredentials` parameter of the `kubernetesDeploy` method, explained in more detail below.
The `dockerCredentials` parameter of the `kubernetesDeploy` method handles authentication against the private Docker registry. If the `secretName` parameter is not specified, a unique secret name is generated and exposed to the resource objects through the environment variable `KUBERNETES_SECRET_NAME`; if `secretName` is set, that secret is updated. With variable substitution enabled, the secret can then be referenced by pods created in the namespace given by `secretNamespace`.
The specific configuration is as follows:
```groovy
kubernetesDeploy (
    kubeconfigId: "k8s_auth",
    configs: "*.yaml",
    enableConfigSubstitution: true,
    secretNamespace: 'base',
    secretName: 'test-mysecret',
    dockerCredentials: [
        [credentialsId: 'auth_harbor', url: 'http://192.168.176.155'],
    ]
)
```
Where:
- `secretName` should preferably not collide with an existing secret name in the specified namespace.
- In `dockerCredentials`, `credentialsId` is the ID of the credential created for authenticating against the private registry, and `url` is the address of the private registry.
The `imagePullSecrets` configuration in the resource object file then becomes:
```yaml
imagePullSecrets:
- name: $KUBERNETES_SECRET_NAME
```
With this configured, when `kubernetesDeploy` parses the resource object file it automatically replaces the `KUBERNETES_SECRET_NAME` variable with the value of its `secretName` parameter and deploys the pod. There is no longer any need to configure registry authentication separately for each namespace; it is enough to set the `secretNamespace` parameter of the `kubernetesDeploy` method, without creating a secret in every namespace.
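The substitution itself can be mimicked locally with `sed` to see the effect (a sketch only; the plugin performs this replacement internally before applying the file):

```shell
KUBERNETES_SECRET_NAME=test-mysecret

# A resource fragment referencing the variable, as in the snippet above.
printf 'imagePullSecrets:\n- name: $KUBERNETES_SECRET_NAME\n' > fragment.yaml

# Replace the variable reference with its value, as the plugin would.
sed -i "s/\$KUBERNETES_SECRET_NAME/${KUBERNETES_SECRET_NAME}/" fragment.yaml

cat fragment.yaml
```

After the replacement, the fragment reads `- name: test-mysecret`.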
Here is a complete stage example:
```groovy
stage('deploy') {
    if (currentBuild.result == null || currentBuild.result == 'SUCCESS') {
        sh """
            cd /tmp/k8s_yaml/${project_group}/${project_name}
            sed -i 's#fw-base-nop:.*-*[0-9]\$#fw-base-nop:${imageTag}-${BUILD_NUMBER}#' deployment.yaml
            cp * ${WORKSPACE}/
        """
        kubernetesDeploy (
            kubeconfigId: "k8s_auth",
            configs: "*.yaml",
            enableConfigSubstitution: true,
            secretNamespace: 'base',
            secretName: 'test-mysecret',
            dockerCredentials: [
                [credentialsId: 'auth_harbor', url: 'http://192.168.176.155'],
            ]
        )
    }
}
```
During the execution, you can see in the Jenkins console that the value is being replaced, as shown below:
Using the Kubernetes CLI Plugin #
The usage of this plugin was introduced above. In this example, we directly define a new image configuration (modify it according to your actual situation). In the continuous deployment step, we use the `withKubeConfig` method inside the previously built `kubectl` container to update the service directly with the `apply` command.
Now let’s take a look at the pipeline script.
```groovy
def project_name = 'fw-base-nop'     // Project name
def registry_url = '192.168.176.155' // Image registry address
def project_group = 'base'

podTemplate(cloud: 'kubernetes', namespace: 'default', label: 'pre-CICD',
    serviceAccount: 'default', containers: [
    containerTemplate(
        name: 'jnlp',
        image: "192.168.176.155/library/jenkins-slave:sonar",
        args: '${computer.jnlpmac} ${computer.name}',
        ttyEnabled: true,
        privileged: true,
        alwaysPullImage: false,
    ),
    containerTemplate(
        name: 'kubectl',
        image: "192.168.176.155/library/alpine-kubectl:1.14",
        ttyEnabled: true,
        privileged: true,
        command: 'cat',
        alwaysPullImage: false,
    ),
    ],
    volumes: [
        nfsVolume(mountPath: '/tmp', serverAddress: '192.168.177.43', serverPath: '/data/nfs', readOnly: false),
        hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
        nfsVolume(mountPath: '/root/.m2', serverAddress: '192.168.177.43', serverPath: '/data/nfs/.m2', readOnly: false),
    ],
) {
    node('pre-CICD') {
        stage('build') {
            try {
                container('jnlp') {
                    stage('clone code') {
                        checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, userRemoteConfigs: [[credentialsId: 'c33d60bd-67c6-4182-b52c-d7aeebfab772', url: 'http://192.168.176.154/root/base-nop.git']]])
                        script {
                            imageTag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                        }
                        echo "${imageTag}"
                    }
                    stage('Build a Maven project') {
                        sh "cd ${project_name} && mvn clean install -DskipTests -Pproduct -U"
                        currentBuild.result = 'SUCCESS'
                    }
                }
            } catch (e) {
                currentBuild.result = 'FAILURE'
            }
            if (currentBuild.result == null || currentBuild.result == 'SUCCESS') {
                container('jnlp') {
                    stage('sonar test') {
                        withSonarQubeEnv(credentialsId: 'sonarqube') {
                            sh "sonar-scanner -X " +
                               "-Dsonar.login=admin " +
                               "-Dsonar.language=java " +
                               "-Dsonar.projectKey=${JOB_NAME} " +
                               "-Dsonar.projectName=${JOB_NAME} " +
                               "-Dsonar.projectVersion=${BUILD_NUMBER} " +
                               "-Dsonar.sources=${WORKSPACE}/fw-base-nop " +
                               "-Dsonar.sourceEncoding=UTF-8 " +
                               "-Dsonar.java.binaries=${WORKSPACE}/fw-base-nop/target/classes " +
                               "-Dsonar.password=admin "
                        }
                    }
                }
                withDockerRegistry(credentialsId: 'auth_harbor', url: 'http://192.168.176.155') {
                    stage('build and push docker image') {
                        sh "cp /tmp/Dockerfile ${project_name}/target/"
                        def customImage = docker.build("${registry_url}/library/${project_name}:${imageTag}-${BUILD_NUMBER}", "--build-arg jar_name=${project_name}.jar ${project_name}/target/")
                        echo "Pushing image"
                        customImage.push()
                    }
                    stage('delete image') {
                        echo "Deleting local image"
                        sh "docker rmi -f ${registry_url}/library/${project_name}:${imageTag}-${BUILD_NUMBER}"
                    }
                }
            } else {
                echo "---currentBuild.result is:${currentBuild.result}"
                emailext (
                    subject: "'${env.JOB_NAME} [${env.BUILD_NUMBER}]' Build Exception",
                    body: """
                        Details:<br>
                        Failure: Job ${env.JOB_NAME} [${env.BUILD_NUMBER}] <br>
                        Status: ${env.JOB_NAME} Jenkins build exception <br>
                        URL : ${env.BUILD_URL} <br>
                        Project name : ${env.JOB_NAME} <br>
                        Project build id : ${env.BUILD_NUMBER} <br>
                        Information: Code compilation failed
                    """,
                    to: "[email protected]",
                    recipientProviders: [[$class: 'DevelopersRecipientProvider']]
                )
            }
            stage('deploy') {
                if (currentBuild.result == null || currentBuild.result == 'SUCCESS') {
                    container('kubectl') {
                        withKubeConfig(credentialsId: 'afffefbf-9216-41dc-b803-7791cdb87134', serverUrl: 'https://192.168.176.156:6443') {
                            sh """
                                cd /tmp/k8s_yaml/${project_group}/${project_name}
                                sed -i 's#fw-base-nop:.*-*[0-9]\$#${project_name}:${imageTag}-${BUILD_NUMBER}#' deployment.yaml
                                cp * ${WORKSPACE}/
                                kubectl apply -f .
                            """
                        }
                    }
                }
            }
        }
    }
}
```
The above script can also be stored in the application's source code repository and executed as a Jenkinsfile via the `Pipeline Script from SCM` configuration, which pulls it from the code repository. This was explained in previous sections, so I won't repeat it here; choose the deployment method that fits your actual situation.
At the end of the article, here’s a reference definition of a deployment resource object:
```yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fw-base-nop
  namespace: base
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0%
  template:
    metadata:
      labels:
        name: fw-base-nop
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: fw-base-nop
        image: 192.168.176.155/base/fw-base-nop:bc29b5e-37
        livenessProbe:
          httpGet:
            path: /monitor/info
            port: 18082
            scheme: HTTP
          initialDelaySeconds: 300
          timeoutSeconds: 10
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /monitor/info
            port: 18082
            scheme: HTTP
          initialDelaySeconds: 300
          timeoutSeconds: 10
          successThreshold: 1
          failureThreshold: 5
        env:
        - name: TZ
          value: "Asia/Shanghai"
        - name: JAVA_OPTS
          value: "-Xms1228m -Xmx1228m"
        - name: _JAVA_OPTIONS
          value: "-Xms1228m -Xmx1228m"
        ports:
        - containerPort: 8082
        resources:
          limits:
            cpu: 1024m
            memory: 2048Mi
          requests:
            cpu: 1024m
            memory: 2048Mi
        volumeMounts:
        - name: fw-base-nop-logs
          mountPath: /data/logs/fw-base-nop
        - name: host-resolv
          mountPath: /etc/resolv.conf
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: host-resolv
        hostPath:
          path: /etc/resolv.conf
      - name: tz-config
        hostPath:
          path: /etc/localtime
      - name: fw-base-nop-logs
        emptyDir: {}
```
This concludes the section on continuous deployment using the Kubernetes plugins. The content here describes one possible solution; for real projects, investigate further to determine the approach to continuous deployment that best fits your situation.