14 Setting Up Jenkins Connections to Kubernetes Clusters in Different Environments


In previous chapters, we briefly introduced some basic syntax for integrating Jenkins with Docker and configuring continuous delivery and deployment. In addition to integrating with Docker, Jenkins can also be integrated with the container orchestration tool Kubernetes.

The Kubernetes plugin in Jenkins is mainly used to accomplish two tasks: first, it dynamically generates a pod as a Jenkins slave node within the Kubernetes cluster, providing a working environment for pipeline execution; second, it continuously deploys application code to the Kubernetes cluster.

Based on the two use cases mentioned above, the remaining chapters will provide a detailed introduction to the configuration and usage of Jenkins and Kubernetes integration.

Similar to the Docker pipeline, integrating Kubernetes with Jenkins relies on plugins. Therefore, before discussing how to configure and use Jenkins to integrate with Kubernetes, we need to install the following plugins:

  • Kubernetes plugin
  • Kubernetes CLI
  • Kubernetes Continuous Deploy

The first plugin listed above is used to generate a Jenkins slave node dynamically within the Kubernetes cluster, while the latter two plugins are used for continuous deployment of code to the Kubernetes cluster using different methods. Regardless of which plugin is used, it is important to ensure that Jenkins can connect to the Kubernetes cluster. Therefore, this chapter starts with the configuration of Jenkins connecting to Kubernetes in different environments.

Configuring Jenkins to Connect to Kubernetes #

When using the Docker pipeline for continuous delivery, by default, the Docker process on the host machine is used. In contrast, the integration of Jenkins and Kubernetes mainly works by calling the Kubernetes API to interact with the Kubernetes cluster. Most companies use certificates when installing and configuring the Kubernetes API server, so when configuring Jenkins to connect to the Kubernetes cluster, a series of certificates and keys need to be generated based on the Kubernetes configuration file and uploaded to Jenkins for authentication with the API server.

Next, we will provide a detailed introduction to the configuration of connecting Jenkins to a Kubernetes cluster in two different scenarios: deploying Jenkins in a Kubernetes cluster environment and connecting Jenkins to a Kubernetes cluster in a non-Kubernetes cluster environment.

Configuring a Jenkins Instance Deployed Outside Kubernetes to Connect to a Kubernetes Cluster #

After installing the plugin, go to the Jenkins homepage and click on the menu “Manage Jenkins” -> “Configure System”. On the resulting page, scroll down to the bottom and click on “Add a new cloud” -> “Kubernetes”.

The following screenshot shows the configuration options:

Details:

Name: Enter the name for the cloud. The default is “kubernetes”, but you can customize it. This name will be used when writing pipelines.

Kubernetes URL: Enter the address of the Kubernetes cluster. If you have a multi-master cluster with high availability, enter the VIP address and port. If you have a single-master environment, enter the master address and port.

Kubernetes Server Certificate Key: Enter the CA certificate content used to verify the Kubernetes API server.

Kubernetes Namespace: Enter the namespace in which the dynamically generated agent pods will run.

Credentials: Enter the credentials used to connect to Kubernetes.

Jenkins URL: Enter the address of Jenkins.

Now that you understand the basic configuration parameters, let’s focus on the “Kubernetes Server Certificate Key” and “Credentials” settings.

When a Kubernetes cluster is installed, a series of certificates and keys is generated for authenticating against the cluster. Configuring the kubectl client also produces a kubeconfig file that is used to communicate with the cluster. By default this file is located at /root/.kube/config and, if its user is bound to cluster-admin, it carries the highest level of privileges.

Jenkins communicates with the cluster using the certificates embedded in this kubeconfig file. Therefore, when configuring Jenkins to connect to a Kubernetes cluster in production, pay attention to the permissions of the user the kubeconfig is bound to. It is recommended to generate a new, lower-privileged kubeconfig for Jenkins rather than reusing the file that the kubectl command uses.

For testing purposes, I will use the kubeconfig file used by the kubectl command. As for how to generate a kubeconfig file with lower privileges, I will not cover it here. Feel free to message me if you are interested.

Configure Certificate Key

First, let’s take a look at the config file.

[root@k8s_master1 ~]# cat /root/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVT0g4cDd6QXZaR3p4cGxUVy9xe......
WRnVRNm9IcjZ0Yk0wa1NJVkhvN2JNQjRWOGZoUWjlLS243ZTFsQWdaVWhyWGMzTzRqVS9xVHRWRQpSRXc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.176.156:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQzVENDQXNXZ0F3SUJBZ0lVQmI1ZTJFaUk1WndSY1JVeFVyZ...
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBeDdjNkpRTFdQaC90REtjUDQrcDV5a...

This kubeconfig file, generated during the cluster installation, contains the cluster address, certificates, and user information.

Take the content of the certificate-authority-data field from the file and base64-decode it into a certificate file.

echo LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVT0g4cDd6QXZaR3p4cGxUVy9xejRWRmxLQl...
WRnVRNm9IcjZ0Yk0wa1NJVkhvN2JNQjRWOGZoUWk4WjlLS243ZTFsQWdaVWhyWGMzTzRqVS9xVHRWRQpSRXc9Ci0tLS0tRU5EIENF...
Cg== | base64 -d > ca.crt
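Copying the long base64 string by hand is error-prone. The following self-contained sketch builds a toy kubeconfig-style line from a freshly generated certificate and then runs the same grep | awk | base64 -d pipeline you would run against /root/.kube/config on the master (the demo-ca name and file paths are illustrative):

```shell
# Generate a throwaway certificate to stand in for the cluster CA
# (on a real master you would instead read the existing /root/.kube/config).
openssl req -x509 -newkey rsa:2048 -keyout demo.key -out demo.crt \
        -days 1 -nodes -subj "/CN=demo-ca" 2>/dev/null

# A kubeconfig stores the certificate base64-encoded on a single line:
printf '    certificate-authority-data: %s\n' "$(base64 -w0 demo.crt)" > demo-config

# The extraction pipeline; run the same command against /root/.kube/config
# on the master to produce the real ca.crt:
grep 'certificate-authority-data' demo-config | awk '{print $2}' | base64 -d > ca.crt

# Sanity-check the decoded certificate before pasting it into Jenkins.
openssl x509 -in ca.crt -noout -subject
```

The final openssl command should print the certificate subject; if it reports an error instead, the field was not decoded cleanly and should not be uploaded to Jenkins.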

Paste the content of ca.crt into the Kubernetes Server Certificate Key field as shown in the following image:

Image

Configure the credentials

Take the content of the client-certificate-data and client-key-data fields from the file /root/.kube/config and base64-decode them into files.

echo LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQzVENDQXNXZ0F3SUJBZ0lVQmI1ZTJFaUk1WndSY1JVeFVyZ... Qk9DTzRBcEVzWXNOa084UVF2RTlwVEhpNlE0LzhLV0NtU0wyNgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== | base64 -d > client.crt

# Generate the key
echo LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBeDdjNkpRTFdQaC90REtjUDQrc... K3dGd...Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== | base64 -d > client.key

Generate the PKCS#12 client authentication file cert.pfx and download it to your local machine:

openssl pkcs12 -export -out cert.pfx -inkey client.key -in client.crt -certfile ca.crt
Enter Export Password:                      # choose a password; you will need it when importing the certificate into Jenkins
Verifying - Enter Export Password:
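Before importing the .pfx file into Jenkins, it is worth verifying that the bundle opens with the password you chose. A self-contained sketch, in which a throwaway key/certificate pair stands in for the real client.key/client.crt and "changeit" is an example password:

```shell
# Throwaway client key/certificate standing in for client.key / client.crt
openssl req -x509 -newkey rsa:2048 -keyout client.key -out client.crt \
        -days 1 -nodes -subj "/CN=demo-client" 2>/dev/null

# Bundle them into the PKCS#12 file, as above; 'changeit' is an example
# export password -- use your own and remember it for the Jenkins import.
openssl pkcs12 -export -out cert.pfx -inkey client.key -in client.crt \
        -passout pass:changeit

# Verify the bundle opens with the password you will give Jenkins.
openssl pkcs12 -in cert.pfx -noout -passin pass:changeit && echo "cert.pfx OK"
```

If the last command fails, the Jenkins certificate credential created from this file would fail as well, so fix the export step first.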

Add credentials in the Jenkins credentials menu

Image

As shown in the image, the credential type is “Certificate” and the password is the one set during the certificate generation process.

After adding the credential, it should look like the following image. Click on Test Connection to verify the connection:

Image

With this, the configuration for connecting a Jenkins instance running outside the cluster to Kubernetes is complete.

Configuring a Jenkins Instance Deployed Inside a Kubernetes Cluster to Connect to Kubernetes #

In the previous section on Jenkins installation and configuration, we did not cover how to deploy Jenkins in a Kubernetes cluster. So in this section, we will explain how to deploy Jenkins in a Kubernetes cluster.

In the Kubernetes system, resource objects are used to describe the desired state of a system and the basic information of objects. The most commonly used resource object for deploying applications or services is the deployment. Therefore, we will use a deployment to describe the configuration of Jenkins and deploy it.

Deploying Jenkins in a Kubernetes Cluster

First, we need to create a service account and bind to it the permissions to operate on a set of Kubernetes resource objects in a specific namespace.

Create a file named jenkins-rbac.yaml with the following content:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: kube-system

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: kube-system
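Note that the Role above only grants permissions inside kube-system. If you later want agent pods scheduled in a namespace other than the one Jenkins runs in, the same rules can be granted cluster-wide instead. A sketch using a ClusterRole (the name jenkins-agents is illustrative; prefer the namespaced Role when a single namespace is enough):

```yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins-agents
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins-agents
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-agents
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: kube-system
```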

Then, create a deployment resource object file (jenkins-deployment.yaml) to describe some basic configuration information for Jenkins, such as the exposed ports, startup parameters, and the image used.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
  labels:
    app-name: jenkins
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app-name: jenkins
    spec:
      serviceAccount: "jenkins"
      containers:
      - name: jenkins
        image: docker.io/jenkins:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        - containerPort: 50000
          name: agent
          protocol: TCP
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        env:
        - name: JAVA_OPTS
          value: "-Duser.timezone=Asia/Shanghai -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85"
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-public-pvc

In the definition of the resource object, we use PersistentVolume (PV) and PersistentVolumeClaim (PVC) for persistent volume storage, so we also need to create files defining these two resource objects.

Create a file named jenkins-pv.yaml with the following content:

kind: PersistentVolume
apiVersion: v1
metadata:
  labels:
    name: jenkins-public-pv
  name: jenkins-public-pv
  namespace: kube-system
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/nfs/jenkins-data
    server: 192.168.177.43

Create a file named jenkins-pvc.yaml with the following content:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-public-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      name: jenkins-public-pv

Here, NFS is used as shared storage, and the PVC binds to the statically created PV. In practice, it is generally recommended to use a StorageClass for dynamic provisioning instead of static PVs. If simplicity is preferred, you can also mount the NFS directory directly, or schedule the pod to a fixed node and mount a host directory with hostPath. Multiple methods are available; choose the storage method that fits your specific needs.
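For reference, the same claim backed by a StorageClass rather than a hand-made PV might look like the following sketch (the storageClassName value nfs-client is illustrative; it must match a provisioner actually installed in your cluster):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-public-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  # Illustrative name; must match an existing StorageClass in your cluster
  storageClassName: nfs-client
  resources:
    requests:
      storage: 100Gi
```

With a StorageClass, the PV is provisioned automatically when the claim is created, so the jenkins-pv.yaml file and the label selector are no longer needed.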

Deploy the Service resource object, which acts as a proxy for the pod running the Jenkins service.

Here is the jenkins-service.yaml file:

kind: Service
apiVersion: v1
metadata:
  labels:
    app-name: jenkins
  name: jenkins
  namespace: kube-system
spec:
  ports:
  - port: 8080
    targetPort: 8080
    name: web
  - port: 50000
    targetPort: 50000
    name: agent
  selector:
    app-name: jenkins

Next, deploy an Ingress, which routes external HTTP traffic through the ingress controller to the Jenkins Service, and from there to the pod.

Here is the jenkins-ingress.yaml file:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  namespace: kube-system
spec:
  rules:
  - host: k8s-jenkins.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins
          servicePort: 8080

After creating the Ingress, add a hosts-file entry mapping the host specified in the Ingress to the IP address of the node where the ingress controller runs. If no Ingress controller is deployed, you can instead change the type of the previously created Service to NodePort, so that Jenkins can be accessed through nodeIP:nodePort.
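If you take the NodePort route, the earlier Service definition only needs a type field and, optionally, fixed node ports. A sketch (the nodePort value 30080 is an example; it must fall within the cluster's NodePort range, 30000-32767 by default):

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    app-name: jenkins
  name: jenkins
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    name: web
    nodePort: 30080   # example value; omit it to let Kubernetes pick one
  - port: 50000
    targetPort: 50000
    name: agent
  selector:
    app-name: jenkins
```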

After deployment, the following should be shown:

$ kubectl get pods,ingress,svc,deploy -n kube-system |grep jenkins

pod/jenkins-78f658b5d6-25qgl                      1/1     Running   0          163d
ingress.extensions/jenkins              k8s-jenkins.com                 80      10m

service/jenkins                ClusterIP   10.254.179.172   <none>        8080/TCP,50000/TCP       198d

deployment.extensions/jenkins                      1/1     1            1           198d

Image

After the deployment, the rest of the configuration is the same as the installation and configuration of Jenkins in the previous chapters. This concludes the introduction of how to install the Jenkins service in a Kubernetes cluster.

Configure Jenkins to connect to Kubernetes

Configuring Jenkins to connect to Kubernetes from inside the cluster is relatively simple. The configuration is done on the same system configuration page and does not require generating the certificates described above; simply enter the cluster's internal service FQDN as the Kubernetes address.

Here is an example:

Instructions

The default value for Name is “kubernetes”, but it can be changed to another name, as described earlier.

In the Kubernetes URL field, you can enter https://kubernetes.default, the DNS name of the Kubernetes Service; this name resolves to the Service's Cluster IP. Note: you can also enter the fully qualified name https://kubernetes.default.svc.cluster.local, which follows the service_name.namespace_name.svc.cluster_domain format. Alternatively, you can enter the external API server address https://ClusterIP:Port directly (not recommended).

In the Jenkins URL field, enter the service address and port of Jenkins. Mine is http://jenkins.kube-system.svc.cluster.local:8080, which can also be shortened to http://jenkins.kube-system:8080, that is, the service name plus the namespace Jenkins runs in. As with the Kubernetes URL, use the DNS name and port of the Jenkins Service. If the service is exposed through a NodePort, you can also use http://NodeIP:NodePort (not configured in this example); adjust according to your actual situation.

Credentials: When Jenkins runs in the same cluster, it authenticates through the service account bound earlier, so no credential needs to be entered here.

Although Jenkins is now able to connect to Kubernetes, it cannot yet generate dynamic slave agents through Kubernetes. This is because the slave agent (jnlp-agent) communicates with the Jenkins master through port 50000 (by default) when it starts, and Jenkins keeps this port closed by default, so you need to open it.

Manage Jenkins -> Configure Global Security -> Agents

Instructions:

The port specified here is the port used by the jnlp-agent to connect to the Jenkins master.

If the Jenkins master runs only in a plain Docker container (without a container orchestration system), be sure to publish this port (for example with -p 50000:50000); otherwise the Jenkins master will never see the slave come up and will keep creating pods until the maximum number of retries is exceeded.

The default value of this port is 50000. If you change it, you must also set the Jenkins Tunnel parameter in the corresponding cloud configuration when creating the Jenkins “cloud”. With port 50000 the tunnel can be left empty; with any other port it must be set explicitly, in jenkins_host:port format.

If you do not open the agent port, then when Jenkins tries to generate slave nodes dynamically through Kubernetes, it will log connection errors in the background, and the agent pods will be created and deleted over and over.

Once the port is open, the integration of Jenkins and Kubernetes is complete.

By default, the timeout for an agent connecting to Jenkins is 100 seconds. If you need a different value, set the system property org.csanchez.jenkins.plugins.kubernetes.PodTemplate.connectionTimeout. In practice, 100 seconds is enough: if the agent pod has not come up after 100 seconds, troubleshoot using the Jenkins logs. When the Jenkins-Kubernetes connection itself is configured correctly, the error is usually a problem in the podTemplate configuration, which will be discussed in subsequent chapters.

Example #

With the connection to Kubernetes configured, the following example tests whether the configuration works.

Put the following example into the pipeline script.

def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, cloud: 'kubernetes') {
    node(label) {
        stage('Run shell') {
            sh 'hostname'
            sh 'echo hello world.'
        }
    }
}

Instructions:

The label parameter in the PodTemplate is used to give the Pod a unique name.

The value of the cloud parameter in the PodTemplate must be the name of the cloud added in the system configuration.

In the node section, the label variable defined above is referenced directly, meaning the stage's commands execute in the pod carrying that label. If no label is specified, the pipeline may instead run on the host where the Jenkins service itself is located.
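When the default jnlp image is not enough for your build, the podTemplate can declare additional containers via the plugin's containerTemplate step, and the container step selects which one runs your commands. A minimal sketch (the maven image and the container name are illustrative):

```groovy
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, cloud: 'kubernetes', containers: [
    // Extra build container that runs alongside the default jnlp agent container
    containerTemplate(name: 'maven', image: 'maven:3-alpine', ttyEnabled: true, command: 'cat')
]) {
    node(label) {
        stage('Build') {
            // Steps inside this block execute in the 'maven' container of the pod
            container('maven') {
                sh 'mvn -version'
            }
        }
    }
}
```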

The execution result is as follows:

From the execution output, you can see the YAML definition of the pod resource that Jenkins created through Kubernetes, including the default agent image jenkins/jnlp-slave:3.35-5-alpine and some of the pod's environment variables.

In addition to pipeline jobs, the Kubernetes plugin can also be used in other job types, but in that case the PodTemplate must be configured in Jenkins' system configuration. I will not introduce that here; it will be explained in the practical section later.

Regardless of the type of job, during the job execution, you can use the kubectl command to view some detailed information about the running pod in the Kubernetes cluster. How is the resource definition of the pod configured? Can it be customized? This question will be answered in the next section, which will explain the basic syntax of Kubernetes.