
29 Cluster Management: How to Separate System Resources with Namespaces #

Hello, I’m Chrono.

In the previous lesson, we learned about container resource limits and health probes, which help ensure that Pods, the micro-level units, run well. It is natural to wonder whether Kubernetes has similar mechanisms at the macro level of the cluster to provide operational guarantees.

There certainly are. Kubernetes is thorough in every respect and offers many means to manage and control cluster resources.

Today, let’s take a look at some advanced uses of namespaces.

Why Do We Need Namespaces? #

In fact, we encountered Kubernetes namespaces quite early on: in [Lesson 10], we used the kube-system namespace to view the apiserver and other system components, and in [Lesson 20], we saw that the fully qualified DNS domain name of a Service object also includes its namespace.

However, our focus back then was on the Kubernetes architecture and API objects, so we did not pay much attention to namespaces, and it has been a while since then. Let's now take a fresh look at them.

First of all, we need to understand that a Kubernetes namespace is not a physical entity but a logical concept. It divides the cluster into independent areas into which we can place objects, achieving an isolation effect similar to namespaces in container technology: applications can only allocate resources and run within their own namespace, and they will not interfere with applications in other namespaces.

You may ask: Since the Kubernetes Master/Node architecture is already capable of managing clusters well, why introduce namespaces? What is its actual significance?

I think this is precisely a practical consideration for Kubernetes when dealing with large-scale clusters and a massive number of nodes. Because the cluster is large and there are abundant computing resources, there will be a large number of users creating various types of applications in Kubernetes, possibly reaching millions of Pods. This greatly increases the probability of resource contention and naming conflicts, similar to the situation in a single Linux system.

For example, suppose a Kubernetes cluster is shared by a frontend team, a backend team, and a testing team. Naming conflicts then arise easily: if the backend team creates a Pod named "web" first, that name is "occupied", and the frontend and testing teams must rack their brains to come up with names that do not conflict. Resource contention is just as easy to trigger: one day the testing team accidentally deploys a buggy application that consumes all the resources on a node, making it impossible for colleagues from the other teams to work.

Therefore, when multiple teams and projects share Kubernetes, to avoid these issues we need to partition the cluster appropriately, creating a "workspace" that belongs to each group of users.

If Kubernetes is compared to a large ranch, the API objects are the chickens, ducks, cows, and sheep inside, and the namespaces are the fences that enclose and rear them. With their own suitable activity areas, Kubernetes can be used more efficiently and safely.

How to Use Namespaces #

A namespace is also an API object; running kubectl api-resources shows that its abbreviation is "ns". The kubectl create command needs no extra parameters to create one. For example:

kubectl create ns test-ns
kubectl get ns

When a Kubernetes cluster is initialized, four namespaces are predefined: default, kube-system, kube-public, and kube-node-lease. The first two are the most commonly used: default is the namespace that user objects go into when none is specified, and kube-system is the namespace where system components live, which you should be familiar with by now.

To place an object in a specific namespace, you need to add a namespace field to its metadata. For example, if we want to create a simple Nginx Pod in the “test-ns” namespace, we would write:

apiVersion: v1
kind: Pod
metadata:
  name: ngx
  namespace: test-ns

spec:
  containers:
  - image: nginx:alpine
    name: ngx

After creating this object with kubectl apply, we cannot see it with a plain kubectl get, because kubectl looks at the "default" namespace unless told otherwise. To operate on objects in other namespaces, we must explicitly add the -n parameter. For example:

kubectl get pod -n test-ns
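
If you will be working in one namespace for a while, typing -n on every command gets tedious. As a convenience (a standard kubectl feature, though not used in this lesson's examples), you can change the default namespace of your current context:

kubectl config set-context --current --namespace=test-ns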


Since objects within a namespace are subordinate to that namespace, you must be cautious when deleting a namespace. Once a namespace is deleted, all objects within it will also disappear.
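
Because of this, it is a good habit to check what a namespace contains before deleting it. A quick sketch using the standard kubectl get all shortcut:

kubectl get all -n test-ns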

You can try executing kubectl delete to delete the namespace “test-ns” that was created earlier:

kubectl delete ns test-ns


You will notice that after deleting the namespace, the Pods inside it will also vanish.

What Is a Resource Quota? #

With namespaces, we can allocate resources for teams or projects by setting quotas, dividing the computation resources of the entire cluster into different sizes, and allocating them as needed.

However, a cluster is different from a single machine. Besides basic CPU and memory resources, the quantities of various API objects must also be limited; otherwise an unbounded number of objects would likewise lead to resource contention.

Resource quotas for namespaces require the use of a dedicated API object called ResourceQuota, abbreviated as quota. We can use the kubectl create command to create a template file for it:

export out="--dry-run=client -o yaml"
kubectl create quota dev-qt $out

Since the resource quota object must be attached to a namespace, the metadata field must explicitly specify the namespace (otherwise, it will be applied to the default namespace).

Let’s first create a namespace called “dev-ns” and then create a resource quota object called “dev-qt”:

apiVersion: v1
kind: Namespace
metadata:
  name: dev-ns
---

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-qt
  namespace: dev-ns

spec:
  ... ...

The usage of the ResourceQuota object is flexible: it can limit the quota of an entire namespace, or apply only to certain kinds of objects (using scopeSelector). Today we will focus on the first kind, which uses the hard field in the spec, meaning "hard global limits".
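
For reference, here is a minimal sketch of the second, scoped kind, which we will not pursue further today. It caps only BestEffort Pods (Pods that declare no resource requests or limits); the object name besteffort-qt is my own:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort-qt
  namespace: dev-ns

spec:
  hard:
    pods: 10
  scopeSelector:
    matchExpressions:
    - operator: Exists
      scopeName: BestEffort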

In ResourceQuota, various resource quotas can be set. There are many fields, so I have summarized them briefly. You can check the official documentation for detailed information:

  • CPU and memory quotas, using requests.* and limits.*, with the same meaning as container resource limits.
  • Storage capacity quotas, using requests.storage to limit the total storage requested by PVCs; persistentvolumeclaims can also be used to limit the number of PVC objects.
  • Core object quotas, using the plural names of the objects, such as pods, configmaps, secrets, services.
  • Other API object quotas, using the format count/name.group, such as count/jobs.batch, count/deployments.apps.

The following YAML is a relatively complete resource quota object:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-qt
  namespace: dev-ns

spec:
  hard:
    requests.cpu: 10
    requests.memory: 10Gi
    limits.cpu: 10
    limits.memory: 20Gi

    requests.storage: 100Gi
    persistentvolumeclaims: 100

    pods: 100
    configmaps: 100
    secrets: 100
    services: 10

    count/jobs.batch: 1
    count/cronjobs.batch: 1
    count/deployments.apps: 1

Let me explain the global resource quotas added to the namespace:

  • The total requests of all Pods are capped at 10 CPUs and 10GB of memory, and their total limits at 10 CPUs and 20GB of memory.
  • Only 100 PVC objects can be created, using 100GB of persistent storage space.
  • Only 100 Pods, 100 ConfigMaps, 100 Secrets, and 10 Services can be created.
  • Only 1 Job, 1 CronJob, and 1 Deployment can be created.

This YAML file is fairly long and has many fields. If you find it hard to read, you can split it into several smaller YAML files, limiting resources category by category, which may be more flexible. For example:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: cpu-mem-qt
  namespace: dev-ns

spec:
  hard:
    requests.cpu: 10
    requests.memory: 10Gi
    limits.cpu: 10
    limits.memory: 20Gi

apiVersion: v1
kind: ResourceQuota
metadata:
  name: core-obj-qt
  namespace: dev-ns

spec:
  hard:
    pods: 100
    configmaps: 100
    secrets: 100
    services: 10
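
Following the same pattern, the count/ quotas for Jobs, CronJobs, and Deployments could go into a third small file. A sketch (the name other-obj-qt is my own):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: other-obj-qt
  namespace: dev-ns

spec:
  hard:
    count/jobs.batch: 1
    count/cronjobs.batch: 1
    count/deployments.apps: 1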

How to Use Resource Quotas #

Now let’s use kubectl apply to create this resource quota object, and then use kubectl get to view it. Remember to use -n to specify the namespace:

kubectl apply -f quota-ns.yml
kubectl get quota -n dev-ns


You can see that all the information of the ResourceQuota is output, but it is squeezed together and hard to read. In this case, you can use the kubectl describe command instead, which presents the object as a clear table:

kubectl describe quota -n dev-ns


Now let’s try running two busybox Jobs in this namespace. We still need to add the -n parameter:

kubectl create job echo1 -n dev-ns --image=busybox -- echo hello
kubectl create job echo2 -n dev-ns --image=busybox -- echo hello


Because the ResourceQuota limits the namespace to a single Job, creating the second Job object fails with an error stating that the resource quota has been exceeded.

If you run the kubectl describe command again, you will also see that the Job count has reached its limit.


However, as long as we delete the previous Job, we can run a new offline task again.
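
A sketch of that delete-and-retry, assuming the first Job was named echo1 as above:

kubectl delete job echo1 -n dev-ns
kubectl create job echo3 -n dev-ns --image=busybox -- echo hello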


Similarly, in this “dev-ns” namespace, you can only create one CronJob and one Deployment. You can try it yourself as an exercise.
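
If you want a head start on that exercise, commands like these (the object names are my own) create the first of each; repeating them with different names should then be rejected by the quota:

kubectl create deployment ngx-dep -n dev-ns --image=nginx:alpine
kubectl create cronjob echo-cj -n dev-ns --image=busybox --schedule='* * * * *' -- echo hello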

Default Resource Quotas #

By now, you may have noticed that adding resource quotas to a namespace brings a reasonable but somewhat "annoying" constraint: every Pod running in the namespace must declare its resource requirements with the resources field, otherwise it cannot be created.

For example, let’s say we want to create a pod using the kubectl run command:

kubectl run ngx --image=nginx:alpine -n dev-ns


We receive a “Forbidden” error indicating that the quota requirement is not met.

The reason Kubernetes does this is quite understandable. As mentioned in the previous lesson, a Pod without the resources field can use CPU and memory without any limit, which clearly conflicts with the namespace's resource quota. To keep the total resources in the namespace manageable and controllable, Kubernetes has no choice but to reject the creation of such Pods.
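
To verify this, here is a minimal sketch of a Pod that satisfies the quota by declaring resources explicitly (the name ngx-ok and the numbers are small values I picked arbitrarily, well within the dev-ns quota):

apiVersion: v1
kind: Pod
metadata:
  name: ngx-ok
  namespace: dev-ns

spec:
  containers:
  - image: nginx:alpine
    name: ngx-ok
    resources:
      requests:
        cpu: 100m
        memory: 50Mi
      limits:
        cpu: 200m
        memory: 100Mi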

While this constraint is good for cluster management, it is a bit troublesome for ordinary users. The YAML files are already long and complex, and now a few more fields must be added and the resource quotas carefully estimated. If there are many small applications or temporary Pods to run, the effort adds up quickly, which is not very cost-effective.

So, is it possible to automatically add resource limits to pods in Kubernetes? In other words, can we have default values so that we can avoid the hassle of setting quotas repeatedly?

This is where a small but very useful auxiliary object comes into play: LimitRange, abbreviated as limits, which can add default resource quotas to API objects.

You can use the kubectl explain limits command to view detailed YAML field descriptions. Here are a few key points:

  • spec.limits is its core property, describing the default resource limits.
  • type is the type of object to be limited, which can be Container, Pod, or PersistentVolumeClaim.
  • default is the default resource limit, corresponding to resources.limits inside the container, only applicable to Container.
  • defaultRequest is the default resource request, corresponding to resources.requests inside the container, also applicable only to Container.
  • max and min are the maximum and minimum values of resources that objects can use.

The following YAML demonstrates a LimitRange object:

apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev-ns

spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 200m
      memory: 50Mi
    default:
      cpu: 500m
      memory: 100Mi
  - type: Pod
    max:
      cpu: 800m
      memory: 200Mi

It sets each container's default CPU request to 0.2 (200m) and memory request to 50MB, each container's default limit to 0.5 CPU (500m) and 100MB of memory, and each Pod's maximum usage to 0.8 CPU (800m) and 200MB of memory.

After applying the LimitRange using kubectl apply, you can use kubectl describe to see its status:

kubectl describe limitranges -n dev-ns


Now, we can create pods without writing the resources field directly. Let’s run the previous kubectl run command:

kubectl run ngx --image=nginx:alpine -n dev-ns

With this default resource quota as a "safeguard," no error occurs this time, and the Pod is created successfully. Using kubectl describe to check the status of the Pod, we can also see the resource quotas automatically added by the LimitRange.
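
If you prefer a compact view, this one-liner (a sketch using kubectl's JSONPath output) prints just the resources field that was injected into the container:

kubectl get pod ngx -n dev-ns -o jsonpath='{.spec.containers[0].resources}'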


Summary #

Today we learned how to use namespaces to manage Kubernetes cluster resources.

In our lab environment, since there is only one user (you), it is not meaningful to use namespaces as you have exclusive access to all resources.

However, in production environments where multiple users share Kubernetes, there will inevitably be competition for resources. In order to promote fairness and avoid certain users consuming excessive resources, it is necessary to use namespaces to properly plan resource management in the cluster.

To summarize today’s content:

  1. Namespace is a logical concept without a physical presence. Its purpose is to define a logical boundary for resources and objects, to avoid conflicts.
  2. ResourceQuota objects can be used to add resource quotas to namespaces, limiting the global CPU, memory, and API object counts.
  3. LimitRange objects can be used to add default resource quotas to containers or Pods, simplifying object creation.

Homework #

Finally, it’s time for homework. I have two questions for you to think about:

  1. If you were a Kubernetes system administrator, how would you use namespaces to manage a production cluster?
  2. What basic principles do you think should be followed when setting resource quotas?

On this journey of learning together, I look forward to seeing your thoughts in the comments section. If you found today’s content helpful, feel free to share it with friends around you for further discussion. See you in the next class.
