08 Security Focus - Authentication and Authorization #
In this section, we will begin to learn about authentication and authorization, which are crucial when running Kubernetes in a production environment. This is not unique to Kubernetes: for any system used in production, proper management of permissions and access control is highly important.
Overview #
Through previous learning, we already know that almost all operations in K8S go through the `kube-apiserver` for processing. Therefore, for security, K8S provides three measures for controlling access: authentication for identifying the user, authorization for controlling the user's access to resources, and admission control for resource management.
The following diagram shows the basic process. Requests from clients need to go through authentication, authorization, and admission control before they can be executed.
Of course, when we access the `kube-apiserver` directly over HTTP through `kubectl proxy`, the proxy authenticates on our behalf, so the local request itself skips the authentication step.
In addition, you can bypass both authentication and authorization by directly accessing the port configured by the `--insecure-port` parameter on the machine where the `kube-apiserver` runs. The default port is 8080. To avoid security issues, you can set this parameter to 0 to disable the insecure port entirely. Note: this parameter and `--insecure-bind-address` have been deprecated and will be removed in future versions.
+-----------------------------------------------------------------------------------------------------------+
| |
| +---------------------------------------------------------------------------+ +--------+ |
| | | | | |
| +--------+ | +------------------+ +----------------+ +--------------+ +------+ | | | |
| | | | | | | | | Admission | | | | | | |
| | Client +------> | Authentication +-> | Authorization +-> | Control +-> |Logic | +--> | Others | |
| | | | | | | | | | | | | | | |
| +--------+ | +------------------+ +----------------+ +--------------+ +------+ | | | |
| | | | | |
| | | | | |
| | Kube-apiserver | | | |
| +---------------------------------------------------------------------------+ +--------+ |
| |
+-----------------------------------------------------------------------------------------------------------+
Authentication #
Authentication is the process of determining whether the identity of the user initiating a request is valid. For example, when logging into a server, we usually need to enter a username and password, or use an SSH key.
Before discussing authentication, let’s understand the types of users in K8S.
In K8S, there are two types of users: regular users and Service Accounts.
- Regular users: managed only through external services, with private keys distributed by administrators. There is no object representing regular users in K8S, so regular users cannot be added to the cluster directly through the API.
- Service Account: a user managed by the K8S API, bound to a specific namespace. It is created automatically by the API server or manually through the API. Its credentials are automatically mounted into the `/var/run/secrets/kubernetes.io/serviceaccount/` directory of containers in Pods, which contains information such as the namespace and token, allowing processes in the cluster to interact with the API server.
API operations on the cluster are associated with a user, or else treated as anonymous requests. Anonymous requests are controlled by the `--anonymous-auth` parameter of `kube-apiserver`, which is enabled by default. Anonymous requests use the username `system:anonymous` and the group `system:unauthenticated`.
After understanding the users in K8S, let’s take a look at the authentication mechanisms in K8S.
K8S supports the following authentication mechanisms:
- X509 client certificate: This mechanism is familiar to us. When we set up the cluster earlier, although we didn't specify it explicitly, `kubeadm` added the default parameter `--client-ca-file=/etc/kubernetes/pki/ca.crt`. During authentication, the `CN` (Common Name) field of the client certificate's subject is used as the username, and the `O` (Organization) field is used as the group name.
- Bootstrap Token: This mechanism is also familiar. After the cluster is initialized with `kubeadm init`, a prompt is printed that carries the bootstrap token. If you don't use `kubeadm`, you need to set `--enable-bootstrap-token-auth=true`.
- Static Token File: Start `kube-apiserver` with `--token-auth-file=SOMEFILE` and include the `Authorization: Bearer TOKEN` header in requests.
- Static Password File: Similar to the static token file. Set `--basic-auth-file=SOMEFILE` when starting `kube-apiserver`, and include the `Authorization: Basic BASE64ENCODED(USER:PASSWORD)` header in requests. (Basic auth has since been deprecated and removed in newer Kubernetes versions.)
- Service Account Token: Enabled by default; Service Accounts were introduced above.
- OpenID: Provides OAuth2-based authentication. Cloud providers such as Azure or Google offer related support.
- Authentication Proxy: Used in conjunction with an authenticating proxy, for example to provide a common authorization gateway for users.
- Webhook: Delegates authentication to a remote server via a webhook.
Multiple authentication mechanisms can be enabled at the same time. For example, when creating a cluster with `kubeadm`, the X509 client certificate and bootstrap token mechanisms are enabled by default.
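As a concrete illustration of the static token file mechanism mentioned above, here is a hypothetical token file in the `token,user,uid,"group1,group2"` CSV format that `--token-auth-file` expects (the token value, user, uid, and groups are made up for the example):

```shell
# Hypothetical static token file in the format expected by --token-auth-file:
# token,user,uid,"group1,group2"  (all values here are made up)
cat <<'EOF' > tokens.csv
31ada4fd-adec-460c-809a-9e56ceb75269,backend,1001,"dev,qa"
EOF
cat tokens.csv
```

A client would then authenticate by sending the header `Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269` with each request.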
Authorization #
Authorization is the process of verifying whether the user initiating the request has the necessary permissions. A familiar analogy is file and directory permissions on Linux systems.
Authorization is based on the result of authentication. The authorization mechanism examines the attributes included in the request after the user has been authenticated to make a judgment.
Kubernetes (K8S) supports various authorization mechanisms. For users to operate on resources, they must first obtain authorization; by default, Kubernetes denies all permissions. Once a specific authorization mechanism either allows or denies a request, a response is returned immediately without consulting the remaining mechanisms. If no mechanism makes a decision, a 403 error is returned.
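The decision flow above can be sketched in shell (this illustrates the semantics only; it is not Kubernetes code): each configured authorizer answers allow, deny, or no opinion; the first definite answer wins, and if nobody decides, the request is rejected.

```shell
# Sketch of the authorizer-chain semantics (illustrative, not real K8S code).
# Each argument stands for one authorizer's decision: allow, deny, or no-opinion.
authorize() {
  for decision in "$@"; do
    case "$decision" in
      allow) echo "200 allowed"; return ;;   # first allow wins
      deny)  echo "403 denied";  return ;;   # first deny wins
      # no-opinion: fall through to the next authorizer
    esac
  done
  echo "403 denied (no authorizer matched)"  # default deny
}

authorize no-opinion allow deny   # an allow before any deny: request passes
authorize no-opinion no-opinion   # nobody decides: rejected with 403
```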
Kubernetes supports the following authorization mechanisms:
- ABAC (Attribute-Based Access Control): Attribute-based; requires configuring `--authorization-mode=ABAC` and `--authorization-policy-file=SOME_FILENAME`. ABAC itself is well designed, but it is somewhat cumbersome to use in Kubernetes, so I will not go into further detail here.
- RBAC (Role-Based Access Control): Role-based; introduced as beta in Kubernetes 1.6 and stable since 1.8, it is widely used. When installing a cluster with `kubeadm`, the `--authorization-mode=Node,RBAC` parameter is added by default, enabling both the `Node` and `RBAC` mechanisms. If you are familiar with MongoDB, this part will be easier to follow, since MongoDB also uses RBAC for its permission control.
- Node: A special-purpose mechanism that authorizes API requests made by `kubelet`.
- Webhook: Uses an external server to perform authorization checks via its API. Add `--authorization-webhook-config-file=SOME_FILENAME` and `--authorization-mode=Webhook` when starting `kube-apiserver`.
- AlwaysAllow: The default configuration; allows all requests.
- AlwaysDeny: Typically used for testing; denies all requests.
Roles #
In the previous section, `RBAC` was mentioned. To understand it better, we first need to look at roles in K8S. There are two main kinds of roles: `Role` and `ClusterRole`.
- `Role`: can be thought of as a collection of permissions, but limited to a specific `Namespace`.
- `ClusterRole`: a cluster-scoped collection of permissions. It is not limited to a `Namespace`, so it can cover cluster-level resources as well as non-resource endpoints, which is why it exists alongside `Role`.
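To make the difference concrete, here is a minimal sketch of a `ClusterRole` (the name `healthz-reader` is illustrative) that grants read access to the non-resource `/healthz` endpoint; a namespaced `Role` cannot express this.

```shell
# Minimal ClusterRole granting read access to a non-resource URL.
# The name "healthz-reader" is illustrative.
cat <<'EOF' > healthz-reader.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: healthz-reader
rules:
- nonResourceURLs: ["/healthz", "/healthz/*"]   # non-resource endpoints are cluster-scoped
  verbs: ["get"]
EOF
cat healthz-reader.yaml
```

It would be applied with `kubectl create -f healthz-reader.yaml`; a `Role` manifest containing `nonResourceURLs` would be rejected by the API server.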
Once we understand roles, the remaining step is to bind roles to users. In K8S, this process is called binding, namely `rolebinding` and `clusterrolebinding`. Let’s look at the state of a freshly initialized cluster:
➜ ~ kubectl get roles --all-namespaces=true
NAMESPACE NAME AGE
kube-public kubeadm:bootstrap-signer-clusterinfo 1h
kube-public system:controller:bootstrap-signer 1h
kube-system extension-apiserver-authentication-reader 1h
kube-system kube-proxy 1h
kube-system kubeadm:kubelet-config-1.12 1h
kube-system kubeadm:nodes-kubeadm-config 1h
kube-system system::leader-locking-kube-controller-manager 1h
kube-system system::leader-locking-kube-scheduler 1h
kube-system system:controller:bootstrap-signer 1h
kube-system system:controller:cloud-provider 1h
kube-system system:controller:token-cleaner 1h
➜ ~ kubectl get rolebindings --all-namespaces=true
NAMESPACE NAME AGE
kube-public kubeadm:bootstrap-signer-clusterinfo 1h
kube-public system:controller:bootstrap-signer 1h
kube-system kube-proxy 1h
kube-system kubeadm:kubelet-config-1.12 1h
kube-system kubeadm:nodes-kubeadm-config 1h
kube-system system::leader-locking-kube-controller-manager 1h
kube-system system::leader-locking-kube-scheduler 1h
kube-system system:controller:bootstrap-signer 1h
kube-system system:controller:cloud-provider 1h
kube-system system:controller:token-cleaner 1h
We can see that there are already some default `roles` and `rolebindings`. We won’t go into much detail about them for now. Next, let’s look at `ClusterRole`, which takes effect cluster-wide:
➜ ~ kubectl get clusterroles
NAME AGE
admin 1h
cluster-admin 1h
edit 1h
flannel 1h
system:aggregate-to-admin 1h
system:aggregate-to-edit 1h
system:aggregate-to-view 1h
system:auth-delegator 1h
system:aws-cloud-provider 1h
system:basic-user 1h
system:certificates.k8s.io:certificatesigningrequests:nodeclient 1h
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient 1h
system:controller:attachdetach-controller 1h
system:controller:certificate-controller 1h
system:controller:clusterrole-aggregation-controller 1h
system:controller:cronjob-controller 1h
system:controller:daemon-set-controller 1h
system:controller:deployment-controller 1h
system:controller:disruption-controller 1h
system:controller:endpoint-controller 1h
system:controller:expand-controller 1h
system:controller:generic-garbage-collector 1h
system:controller:horizontal-pod-autoscaler 1h
system:controller:job-controller 1h
system:controller:namespace-controller 1h
system:controller:node-controller 1h
system:controller:persistent-volume-binder 1h
system:controller:pod-garbage-collector 1h
system:controller:pv-protection-controller 1h
system:controller:pvc-protection-controller 1h
system:controller:replicaset-controller 1h
system:controller:replication-controller 1h
system:controller:resourcequota-controller 1h
system:controller:route-controller 1h
system:controller:service-account-controller 1h
system:controller:service-controller 1h
system:controller:statefulset-controller 1h
system:controller:ttl-controller 1h
system:coredns 1h
system:csi-external-attacher 1h
system:csi-external-provisioner 1h
system:discovery 1h
system:heapster 1h
system:kube-aggregator 1h
system:kube-controller-manager 1h
system:kube-dns 1h
system:kube-scheduler 1h
system:kubelet-api-admin 1h
system:node 1h
system:node-bootstrapper 1h
system:node-problem-detector 1h
system:node-proxier 1h
system:persistent-volume-provisioner 1h
system:volume-scheduler 1h
view 1h
➜ ~ kubectl get clusterrolebindings
NAME AGE
cluster-admin 1h
flannel 1h
kubeadm:kubelet-bootstrap 1h
kubeadm:node-autoapprove-bootstrap 1h
kubeadm:node-autoapprove-certificate-rotation 1h
kubeadm:node-proxier 1h
system:aws-cloud-provider 1h
system:basic-user 1h
system:controller:attachdetach-controller 1h
system:controller:certificate-controller 1h
system:controller:clusterrole-aggregation-controller 1h
system:controller:cronjob-controller 1h
system:controller:daemon-set-controller 1h
system:controller:deployment-controller 1h
system:controller:disruption-controller 1h
system:controller:endpoint-controller 1h
system:controller:expand-controller 1h
system:controller:generic-garbage-collector 1h
system:controller:horizontal-pod-autoscaler 1h
system:controller:job-controller 1h
system:controller:namespace-controller 1h
system:controller:node-controller 1h
system:controller:persistent-volume-binder 1h
system:controller:pod-garbage-collector 1h
system:controller:pv-protection-controller 1h
system:controller:pvc-protection-controller 1h
system:controller:replicaset-controller 1h
system:controller:replication-controller 1h
system:controller:resourcequota-controller 1h
system:controller:route-controller 1h
system:controller:service-account-controller 1h
system:controller:service-controller 1h
system:controller:statefulset-controller 1h
system:controller:ttl-controller 1h
system:coredns 1h
system:discovery 1h
system:kube-controller-manager 1h
system:kube-dns 1h
system:kube-scheduler 1h
system:node 1h
system:node-proxier 1h
system:volume-scheduler 1h
We can see that K8S already ships with many `ClusterRoles` and `clusterrolebindings` by default. Let’s take a closer look at one of them.
Viewing User Permissions #
We have been using `kubectl` to operate on the cluster. What permissions does the current user have, and how are they expressed in terms of `RBAC`?
➜ ~ kubectl config current-context # Get the current context
kubernetes-admin@kubernetes # The user named kubernetes-admin is on the kubernetes cluster
➜ ~ kubectl config view users -o yaml # View user configuration, some contents are omitted below
apiVersion: v1
clusters:
- cluster:
...
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
The `client-certificate-data` field is not displayed by default; its content is actually the certificate encoded in base64. We can view it as follows:
➜ ~ kubectl config view users --raw -o jsonpath='{ .users[?(@.name == "kubernetes-admin")].user.client-certificate-data}' |base64 -d
-----BEGIN CERTIFICATE-----
MIIC8jCCAdqgAwIBAgIIGuC27C9B8LIwDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
...
kae1A/d4D5Cm5Qt7M5gr3SxqE5t+O7DP0YhuEPlfY7RzYDksYa8=
-----END CERTIFICATE-----
➜ ~ kubectl config view users --raw -o jsonpath='{ .users[?(@.name == "kubernetes-admin")].user.client-certificate-data}' |base64 -d |openssl x509 -text -noout # Some output is omitted for brevity
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1936748965290700978 (0x1ae0b6ec2f41f0b2)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Subject: O=system:masters, CN=kubernetes-admin
...
Based on the authentication part earlier, we know that the current user is `kubernetes-admin` (the `CN` field) and belongs to the `system:masters` group (the `O` field).
Let’s take a look at the `clusterrolebinding` named `cluster-admin`.
➜ ~ kubectl get clusterrolebindings cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
...
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: cluster-admin
resourceVersion: "116"
selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
uid: 71c550f1-e0e4-11e8-866a-fa163e938a99
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:masters
The important content is in `roleRef` and `subjects`: the `ClusterRole` named `cluster-admin` is bound to the `Group` named `system:masters`. Let’s explore further what each of them represents.
First, let’s look at the actual content of this `ClusterRole`:
➜ ~ kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
...
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: cluster-admin
resourceVersion: "58"
selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin
uid: 71307108-e0e4-11e8-866a-fa163e938a99
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'
`rules` defines the resources that can be operated on and the corresponding verbs, where `*` is a wildcard matching everything.
Based on this, we can conclude that the current user, `kubernetes-admin`, belongs to the `system:masters` group, which is bound to the `ClusterRole` `cluster-admin`. The user therefore inherits its permissions and can perform any operation on any resource and API.
Practice: Creating a User with Controllable Permissions #
In the previous section, we worked out how the current user gets its permissions. Now, let’s put this into practice and create a user with exactly the permissions we intend to grant.
The username we want to create is `backend`, and it belongs to the group `dev`.
Creating a Namespace #
To demonstrate, let’s create a new `Namespace` named `work`.
➜ ~ kubectl create namespace work
namespace/work created
➜ ~ kubectl get ns work
NAME STATUS AGE
work Active 14s
Creating the User #
Generating a Private Key #
➜ ~ mkdir work
➜ ~ cd work
➜ work openssl genrsa -out backend.key 2048
Generating RSA private key, 2048 bit long modulus
..........................................+++
........................+++
e is 65537 (0x10001)
➜ work ls
backend.key
➜ work cat backend.key
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAzk7blZthwSzachPxrk6pHsuaImTVh6Iw8mNDmtn6sqOqBfZS
...
bNKDWDk8HZREugaOAwjt7xaWOlr9SPCCoXrWoaA1z2215IC4qSA2Nw==
-----END RSA PRIVATE KEY-----
Generating a Certificate Signing Request (CSR) Using the Private Key #
As we covered in the authentication part, the subject must carry the username in the `CN` field and the group name in the `O` field.
➜ work openssl req -new -key backend.key -out backend.csr -subj "/CN=backend/O=dev"
➜ work ls
backend.csr backend.key
➜ work cat backend.csr
-----BEGIN CERTIFICATE REQUEST-----
MIICZTCCAU0CAQAwIDEQMA4GA1UEAwwHYmFja2VuZDEMMAoGA1UECgwDZGV2MIIB
...
lpoSVlNA0trJoiEiZjUqMfXX6ogBhQC4aeRfmbXkW2ZCNxsIm3PDk1Y=
-----END CERTIFICATE REQUEST-----
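Before handing a CSR to the CA, it is worth verifying its subject. The sketch below generates a throwaway key and CSR (the file names are illustrative) and inspects the subject with `openssl req`:

```shell
# Generate a throwaway key and CSR, then verify the subject before signing.
openssl genrsa -out demo.key 2048 2>/dev/null
openssl req -new -key demo.key -out demo.csr -subj "/CN=backend/O=dev"
# Print the subject; it should contain CN=backend and O=dev.
openssl req -in demo.csr -noout -subject
```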
Signing the CSR Using the CA #
The default certificate directory for K8S is `/etc/kubernetes/pki`.
➜ work openssl x509 -req -in backend.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out backend.crt -days 365
Signature ok
subject=/CN=backend/O=dev
Getting CA Private Key
➜ work ls
backend.crt backend.csr backend.key
Viewing the generated certificate file.
➜ work openssl x509 -in backend.crt -text -noout
Certificate:
Data:
Version: 1 (0x0)
Serial Number:
d9:7f:62:f7:38:66:2a:7b
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Subject: CN=backend, O=dev
...
As you can see, the `CN` (Common Name) and `O` (Organization) fields have been set correctly.
Adding the Context #
➜ work kubectl config set-credentials backend --client-certificate=/root/work/backend.crt --client-key=/root/work/backend.key
User "backend" set.
➜ work kubectl config set-context backend-context --cluster=kubernetes --namespace=work --user=backend
Context "backend-context" created.
Testing Access with the New User #
➜ work kubectl --context=backend-context get pods
Error from server (Forbidden): pods is forbidden: User "backend" cannot list resource "pods" in API group "" in the namespace "work"
# If it's not clear enough, we can add the `-v` parameter to display details
➜ work kubectl --context=backend-context get pods -n work -v 5
I1109 05:35:11.870639 18626 helpers.go:201] server response object: [{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"backend\" cannot list resource \"pods\" in API group \"\" in the namespace \"work\"",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}]
F1109 05:35:11.870688 18626 helpers.go:119] Error from server (Forbidden): pods is forbidden: User "backend" cannot list resource "pods" in API group "" in the namespace "work"
As you can see, the request was sent as the new `backend` user, with the context’s default `Namespace` set to `work`. Authentication succeeded, but the request was rejected during authorization, because we have not granted the user any permissions yet.
Creating a Role #
We want this user to have permission only to view `Pods`. Let’s start by creating a configuration file.
➜ work cat <<EOF > backend-role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: work
name: backend-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
EOF
Create it and view the generated `Role`.
➜ work kubectl create -f backend-role.yaml
role.rbac.authorization.k8s.io/backend-role created
➜ work kubectl get roles -n work -o yaml
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
...
name: backend-role
namespace: work
selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/work/roles/backend-role
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
kind: List
metadata:
resourceVersion: ""
selfLink: ""
Creating a RoleBinding #
First create a configuration file.
➜ work cat <<EOF > backend-rolebind.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: backend-rolebinding
namespace: work
subjects:
- kind: User
name: backend
apiGroup: ""
roleRef:
kind: Role
name: backend-role
apiGroup: ""
EOF
Create it and view the generated `RoleBinding`.
➜ work kubectl create -f backend-rolebind.yaml
rolebinding.rbac.authorization.k8s.io/backend-rolebinding created
➜ work kubectl get rolebinding -o yaml -n work
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: backend-rolebinding
namespace: work
...
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: backend-role
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: backend
kind: List
metadata:
resourceVersion: ""
selfLink: ""
Testing User Permissions #
➜ work kubectl --context=backend-context get pods -n work
No resources found.
➜ work kubectl --context=backend-context get ns
Error from server (Forbidden): namespaces is forbidden: User "backend" cannot list resource "namespaces" in API group "" at the cluster scope
➜ work kubectl --context=backend-context get deploy -n work
Error from server (Forbidden): deployments.extensions is forbidden: User "backend" cannot list resource "deployments" in API group "extensions" in the namespace "work"
As you can see, the user can view `Pods`, but cannot view other resources such as `Namespaces` or `Deployments`. K8S also provides a convenient debugging mechanism:
➜ work kubectl auth can-i list pods -n work --as="backend"
yes
➜ work kubectl auth can-i list deploy -n work --as="backend"
no - no RBAC policy matched
The `--as` flag builds on the impersonation mechanism of K8S. It allows system administrators to verify authorization rules or debug permission issues.
You can also write the generated certificate, private key, and related settings into a configuration file following the format of `~/.kube/config`, then point `kubectl` at it via either the `KUBECONFIG` environment variable or the `--kubeconfig` flag.
Summary #
In this section, we learned about the authentication and authorization logic of K8S. K8S supports multiple authentication and authorization modes, which can be combined as needed. X509 client certificate authentication is convenient and recommended: the `CN` field of the client certificate specifies the username, and the `O` field specifies the group name.
RBAC is the most commonly used authorization mode; it binds a `Role` (or `ClusterRole`) to `subjects` (users or groups) to grant permissions.
Finally, we created a new user and granted it the expected permissions. This process also exercised common `openssl` operations, which will come up frequently later.
In the next section, we will start deploying actual projects into K8S and gradually master the practical use of K8S in production environments.
PS: You may find operations like switching the `Namespace` cumbersome. There is a project called kubectx that can simplify these steps. It is recommended to give it a try.