
48 IAM Containerized Deployment Practice #

Hello, I am Kong Lingfei.

In Lesson 45, I introduced a cloud-native architecture design based on Kubernetes. In a cloud-native architecture, we deploy cloud-native applications with Docker + Kubernetes. In this lesson, I will guide you step by step through deploying the IAM application in a Kubernetes cluster. There are quite a few steps involved, so I hope you follow along and complete each operation yourself. I believe this hands-on practice will teach you a great deal.

Preparations #

Before deploying the IAM application, we need to do the following preparations:

  1. Activate the Tencent Cloud Container Service Image Repository.
  2. Install and configure Docker.
  3. Prepare a Kubernetes cluster.

Activating the Tencent Cloud Container Service Image Repository #

To deploy the IAM application in a Kubernetes cluster, we need to pull the specified IAM images from an image repository, so we first need a repository to host them. We could host the IAM images on DockerHub, which is the default registry that Docker pulls images from.

However, because the DockerHub service is deployed overseas, access from China is very slow. Therefore, I suggest hosting the IAM images in a domestic image repository. Here, we choose the image repository service provided by Tencent Cloud; the access entry is Container Registry (Personal Edition).

If you already have a Tencent Cloud image repository, you can skip the steps to activate the Tencent Cloud image repository.

The specific steps to activate the Tencent Cloud image repository are as follows:

Step 1: Activate the Personal Edition of the Image Repository.

  1. Log in to the Container Service Console and select Image Repository > Personal Edition in the left navigation pane.

  2. Fill in the relevant information based on the prompts below and click Activate to initialize. As shown in the following figure:

Image

  • Username: By default, it is the account ID of the current user, which is your identity when logging in to the Tencent Cloud Docker image repository and can be obtained on the Account Information page.
  • Password: This is the credential for logging in to the Tencent Cloud Docker image repository.

Here, you need to record the username and password, which will be used for pushing and pulling images. Assume that the username of the image repository we activated is 10000099xxxx and the password is iam59!z$.

Note that 10000099xxxx should be replaced with the username of your image repository.

Step 2: Log in to the Tencent Cloud Registry (image repository).

After activating the Registry, we can log in to the Registry. You can log in to the Tencent Cloud Registry using the following command:

$ docker login --username=[username] ccr.ccs.tencentyun.com

Here, username is the Tencent Cloud account ID that was registered during activation and can be obtained on the Account Information page. The docker command will be installed later.
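
If you prefer a non-interactive login, for example in a script, Docker also supports reading the password from standard input via the --password-stdin flag. A minimal sketch, assuming the username and password from Step 1:

$ echo 'iam59!z$' | docker login --username=10000099xxxx --password-stdin ccr.ccs.tencentyun.com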

Step 3: Create a namespace in the image repository.

To use the image repository, you first need to create a namespace in which your images will be stored. After activating the image repository, you can create a namespace on the “Namespaces” tab, as shown in the following figure:

Image

In the figure above, we created a namespace called marmotedu.

Here, you might be confused by the concepts of image repository service, namespace, image repository, and tag. Next, I will explain the relationship between them in detail.

First, let’s look at the format of using the image repository: <Image Repository Service Address>/<Namespace>/<Image Repository>:<Tag>, for example, ccr.ccs.tencentyun.com/marmotedu/iam-apiserver-amd64:v1.1.0.

If you want to use a Docker image, you first need to activate an image repository service (Registry). The image repository service will provide a fixed address for you to access externally. In the Registry, we (User) can create one or more namespaces, which can be simply understood as logical groups in the image repository.

Next, you can create one or more image repositories in the namespace, such as iam-apiserver-amd64, iam-authz-server-amd64, iam-pump-amd64, and so on. For each image repository, multiple tags can be created, such as v1.0.1, v1.0.2, etc.

The <Image Repository>:<Tag> is called an image. Images can be either private or public. Public images can be downloaded and used by all users who can access the image repository, while private images are only available to authorized users.
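
To make the naming format concrete, here is a minimal sketch of tagging a locally built image with its full repository path and pushing it (the local image name iam-apiserver-amd64:v1.1.0 is only an illustrative assumption; the make push target shown later does this for you):

$ docker tag iam-apiserver-amd64:v1.1.0 ccr.ccs.tencentyun.com/marmotedu/iam-apiserver-amd64:v1.1.0 # Tag the image as <Registry>/<Namespace>/<Repository>:<Tag>
$ docker push ccr.ccs.tencentyun.com/marmotedu/iam-apiserver-amd64:v1.1.0 # Push it to the marmotedu namespace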

Installing Docker #

After activating the image repository, we also need to install Docker to build and test Docker images. Now I will explain the specific installation steps.

Step 1: Check the prerequisites for installing Docker.

Make sure that the centos-extras yum source is enabled on the CentOS system. By default, it is enabled. You can check it as follows:

$ cat /etc/yum.repos.d/CentOS-Extras.repo
# Qcloud-Extras.repo

[extras]
name=Qcloud-$releasever - Extras
baseurl=http://mirrors.tencentyun.com/centos/$releasever/extras/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Qcloud-8

If the file /etc/yum.repos.d/CentOS-Extras.repo exists and the enabled option in its [extras] section is set to 1, the centos-extras yum source is enabled. If the file does not exist, or enabled is not 1, create /etc/yum.repos.d/CentOS-Extras.repo and copy the content above into it.
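
You can also quickly confirm whether the extras repository is enabled with yum itself, for example:

$ yum repolist enabled | grep -i extras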

Step 2: Install Docker.

The Docker official document Install Docker Engine on CentOS provides three installation methods:

  • Install through the yum source.
  • Install through the RPM package.
  • Installation via script.

Here, we will choose the simplest installation method: Installation via Yum repository. It consists of the following 3 steps:

  1. Install Docker.
$ sudo yum install -y yum-utils # 1. Install the `yum-utils` package, which provides the `yum-config-manager` tool
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo # 2. Install `docker-ce.repo` yum repository
$ sudo yum-config-manager --enable docker-ce-nightly docker-ce-test # 3. Enable `nightly` and `test` yum repositories
$ sudo yum install -y docker-ce docker-ce-cli containerd.io # 4. Install the latest version of Docker Engine and containerd
  2. Start Docker.

You can start Docker with the following command:

$ sudo systemctl start docker

The Docker configuration file is /etc/docker/daemon.json, which is not created by default and needs to be created manually:

$ sudo tee /etc/docker/daemon.json << EOF
{
  "bip": "172.16.0.1/24",
  "registry-mirrors": [],
  "graph": "/data/lib/docker"
}
EOF

Here, let me explain the commonly used configuration parameters.

  • registry-mirrors: Image registry mirror addresses, which you can set as needed (for example, to a domestic mirror) to speed up image pulls.
  • graph: The directory where images and containers are stored, /var/lib/docker by default. If your / partition does not have enough space, set graph to a larger directory. (Newer Docker versions have replaced this option with data-root.)
  • bip: The IP subnet assigned to containers.

After the configuration is complete, Docker needs to be restarted:

$ sudo systemctl restart docker
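
After the restart, you can verify that the new settings took effect by inspecting the Docker daemon information, for example:

$ sudo docker info | grep 'Docker Root Dir' # Should now show /data/lib/docker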
  3. Test if Docker is installed successfully.
$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:0fe98d7debd9049c50b597ef1f85b7c1e8cc81f59c8d623fcb2250e8bec85b38
Status: Downloaded newer image for hello-world:latest
...
Hello from Docker!
This message shows that your installation appears to be working correctly.
...

The docker run hello-world command will download the hello-world image and start the container. After printing the installation success message, it will exit.

Note: If you fail to install via the Yum repository, you can try other installation methods provided by Docker’s official documentation Install Docker Engine on CentOS.

Step 3: Configuration after installation.

After a successful installation, we need to do some additional configuration. There are two main items: allowing Docker to be used by a non-root user, and configuring Docker to start on boot.

  1. Use Docker as a non-root user

When operating on a Linux system, for security reasons we should log in and work as a regular (non-root) user. Therefore, we need to configure Docker so that it can be used by a non-root user. The specific configuration is as follows:

$ sudo groupadd docker # 1. Create the `docker` user group
$ sudo usermod -aG docker $USER # 2. Add the current user to the `docker` user group
$ newgrp docker # 3. Reload the group membership
$ docker run hello-world # 4. Confirm that Docker can be used by a regular user

If you encounter the error groupadd: group 'docker' already exists when executing sudo groupadd docker, it means that the docker group already exists and you can ignore this error.

If you have run the docker command using sudo before adding the user to the docker group, you may see the following error:

WARNING: Error loading config file: /home/user/.docker/config.json -
stat /home/user/.docker/config.json: permission denied

To solve this error, we can delete the ~/.docker/ directory or change the owner and permissions of the ~/.docker/ directory using the following commands:

$ sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
$ sudo chmod g+rwx "$HOME/.docker" -R
  2. Configure Docker to start at boot

Most Linux distributions (RHEL, CentOS, Fedora, Debian, Ubuntu 16.04 and higher) use systemd to manage services, including specifying services to start at boot. Docker is configured to start at boot by default on Debian and Ubuntu.

On other systems, we need to manually configure Docker to start at boot using the following commands (both the docker and containerd services need to be enabled):

$ sudo systemctl enable docker.service # Enable Docker to start at boot
$ sudo systemctl enable containerd.service # Enable Containerd to start at boot

To disable Docker and Containerd from starting at boot, you can use the following commands:

$ sudo systemctl disable docker.service # Disable Docker from starting at boot
$ sudo systemctl disable containerd.service # Disable Containerd from starting at boot
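
To confirm the current setting, you can check whether the services are enabled, for example:

$ systemctl is-enabled docker.service containerd.service # Prints "enabled" or "disabled" for each service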

Preparing a Kubernetes Cluster #

After installing Docker, you also need a Kubernetes cluster to schedule Docker containers. Installing a Kubernetes cluster can be complex, so here is the simplest way to prepare a Kubernetes cluster: purchase a Tencent Cloud Serverless cluster.

Tencent Cloud Serverless cluster is a cluster type introduced by Tencent Cloud Container Service that allows you to deploy workloads without purchasing nodes. You can think of it as a standard Kubernetes cluster, with the difference being that the Serverless cluster is created and maintained by the Tencent Cloud Container Service team. You only need to access the cluster, deploy your resources, and pay for the actual resource usage of the containers. You can purchase a Serverless cluster by logging in to the Tencent Cloud Container Service console (https://console.cloud.tencent.com/tke2).

If you want to build your own Kubernetes cluster, I recommend purchasing 3 Tencent Cloud CVM instances and following the instructions in the follow-me-install-kubernetes-cluster tutorial to build the Kubernetes cluster step by step. The recommended minimum configuration for the CVM instances is as follows:

CVM configuration

Introduction to EKS #

Let me briefly introduce what EKS is. EKS (Elastic Kubernetes Service) is the Tencent Cloud Elastic Container Service, which is a service mode introduced by Tencent Cloud Container Service that allows you to deploy workloads without purchasing nodes. It is fully compatible with native Kubernetes and supports creating and managing resources in a native way. The billing is based on the actual resource usage of the containers. EKS also extends support for Tencent Cloud’s storage and networking products, while ensuring the security isolation of user containers and providing out-of-the-box functionality.

EKS Pricing #

So, how is it priced? EKS is a fully managed Serverless Kubernetes service that does not charge for the managed master, etcd, and other resources. The workloads running in the Elastic cluster are billed in a post-paid pay-as-you-go model, with the fees calculated based on the actual configured resource quantity and usage time. In other words, the Kubernetes cluster itself is free, and only the usage of the node resources when running workloads is charged.

There are 3 billing modes for EKS: reserved coupons, pay-as-you-go, and spot instances. I recommend choosing pay-as-you-go: you are billed per second and settled per hour, and you can purchase and release resources at any time, which makes it the lowest-cost option for following this column. EKS calculates fees based on the CPU and memory requested by the workloads and their running time. For specific prices, refer to the Pricing | Elastic Container Service page. Here, I will use an example to explain the cost. The IAM application deploys 4 Deployments, each with 1 replica:

  • iam-apiserver: IAM REST API service, providing CRUD functionality for user, key, and policy resources.
  • iam-authz-server: IAM resource authorization service, providing resource authorization interfaces.
  • iam-pump: IAM data cleaning service, retrieves authorization logs from Redis, processes them, and saves them in MongoDB.
  • iamctl: IAM application test service. You can log in to the iamctl Pod to run iamctl commands and smoke-test scripts for operating and testing the IAM application.

The Pod configurations for the 4 Deployments are all 0.25 cores and 512Mi memory.

Here, we calculate the cost of deploying IAM for one day based on the EKS cost calculation formula Cost = Relevant billing item configuration x Resource unit price x Running time:

Total cost = (4 x 1) x (0.25 x 0.12 + 0.5 x 0.05) x 24 = 5.28 yuan

This means that, with the minimum configuration, running the IAM application for one day costs about 5.3 yuan (roughly the price of a bottle of water, in exchange for learning how to deploy the IAM application on the Kubernetes platform, which is well worth it!). You may wonder what each value in this formula represents. Let me explain:

  • (4 x 1): Total number of Kubernetes Pods (4 Deployments in total, each with 1 replica).
  • 0.25 x 0.12: Cost of CPU configuration for continuously running for 1 hour.
  • 0.5 x 0.05: Cost of memory configuration for continuously running for 1 hour.
  • 24: 24 hours, i.e., one day.

Please note that, to help you save costs, the configurations above are minimum configurations. For an actual production environment, the suggested configuration adjustments include the following:

  • Because the iam-pump component is stateful and does not currently implement a preemption mechanism, its number of replicas needs to be set to 1.

In addition, the configuration costs for the Intel CPU pay-as-you-go billing are shown in the following figure:

Image

Here’s an important reminder: after completing this section, destroy these Deployments to avoid continuous charges. It is also recommended to keep no more than 50 yuan in your Tencent Cloud account balance.

Applying for an EKS Cluster #

After understanding EKS and related cost issues, let’s take a look at how to apply for an EKS cluster. You can follow these 5 steps to apply for an EKS cluster. Before applying, please make sure that your Tencent Cloud account has a balance greater than 10 yuan, otherwise, errors may occur during the creation and use of the EKS cluster due to insufficient funds.

  1. Creating a Tencent Cloud Elastic Cluster

The specific steps are as follows:

First, log in to the Container Service console, select Elastic Cluster from the left navigation pane. Then, click Create in the upper right corner to choose the region where the elastic cluster will be created. On the “Create an Elastic Cluster” page, set the cluster information according to the following prompts. As shown in the figure below:

Image

I will explain the meaning of each selection option on the page for you:

  • Cluster Name: The name of the created Elastic Cluster, up to 60 characters.
  • Kubernetes Version: Elastic Cluster supports multiple versions of Kubernetes 1.12 and above. It is recommended to choose the latest version.
  • Region: It is recommended to choose a region close to your geographic location to reduce access latency and improve download speed.
  • Cluster Network: The VPC network of the created Elastic Cluster. You can choose a subnet in the private network for the container network of the Elastic Cluster. For details, please refer to Virtual Private Cloud (VPC).
  • Container Network: The address range from which containers in the cluster are assigned IP addresses. Pods in the Elastic Cluster directly occupy subnet IPs of the VPC, so please choose a subnet with sufficient IP addresses that does not conflict with other products.
  • Service CIDR: ClusterIP Services in the cluster are assigned addresses from the selected VPC subnet. Again, please choose a subnet with sufficient IP addresses that does not conflict with other products.
  • Cluster Description: Relevant information for creating the cluster, which will be displayed on the “Cluster Information” page.

Once the settings are complete, click Complete to start the creation process. You can view the progress of cluster creation on the “Elastic Cluster” list page.

When the Elastic Cluster creation is complete, the Elastic Cluster page will look like the following image:

Image

The ID of the Elastic Cluster we created is cls-dc6sdos4.

  2. Enable External Access

If you want to access the EKS cluster, you need to enable external access for EKS. Here is how you can enable it:

Log in to the Container Service Console -> Select Elastic Cluster from the left sidebar -> Go to the details page of the cls-dc6sdos4 cluster -> Select Basic Information -> Click the External Access button. It will look like the following image:

Image

Note that when enabling external access, for security reasons, you need to set the IP range that is allowed to access kube-apiserver. To avoid unnecessary errors, we set the external access address to 0.0.0.0/0. It will look like the following image:

Image

Please note that setting it to 0.0.0.0/0 is only for testing purposes. If it is a production environment, it is recommended to strictly limit the source IP that can access kube-apiserver.

  3. Install kubectl Command-line Tool

To access EKS (which is a standard Kubernetes cluster) efficiently, it is recommended to use the kubectl command-line tool provided by Kubernetes, so you need to install it first.

Here is how you can install it:

$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ mkdir -p $HOME/bin
$ mv kubectl $HOME/bin
$ chmod +x $HOME/bin/kubectl

For more details, please refer to Install and Set Up kubectl. You can configure bash auto-completion for kubectl with the following commands:

$ kubectl completion bash > $HOME/.kube-completion.bash
$ echo 'source $HOME/.kube-completion.bash' >> ~/.bashrc
$ bash
  4. Download and Install kubeconfig

After installing the kubectl tool, you need to configure the configuration file that kubectl reads.

Note that in the previous step we enabled external access. Once external access is enabled, EKS generates a kubeconfig configuration file, which we can copy from the console page and install locally.

On the Basic Information page of the Elastic Cluster, click the Copy button to copy the content of the kubeconfig file. It will look like the following image:

Image

After copying, save the content from the clipboard to the $HOME/.kube/config file. First, execute mkdir -p $HOME/.kube to create the .kube directory, and then write the content from the clipboard to the config file.
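
A minimal sketch of this step (assuming the kubeconfig content is still in your clipboard and you paste it into the editor):

$ mkdir -p $HOME/.kube # Create the .kube directory if it does not exist
$ vi $HOME/.kube/config # Paste the copied kubeconfig content, then save and exit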

You can test whether the kubectl tool is successfully installed and configured by using the following command:

$ kubectl get nodes
NAME                    STATUS   ROLES    AGE    VERSION
eklet-subnet-lowt256k   Ready    <none>   2d1h   v2.5.21

If it outputs the eklet nodes of Kubernetes and their status is Ready, it means the Kubernetes cluster is running normally and the kubectl installation and configuration are correct.

  5. Enable Cluster Internal Service to Access the External Network

Because the databases used by the IAM application, such as MariaDB, Redis, and MongoDB, may need to be reached over the external network, you also need to enable the ability for Pods in EKS to access the external network.

EKS supports using NAT gateways and route tables to allow services within the cluster to access the external network. For specific steps, please refer to the Tencent Cloud official documentation: Access the External Network via NAT Gateways.

During the setup process, please pay attention to the following two points:

  • In the step of Creating a Route Table That Points to the NAT Gateway, select the destination as: 0.0.0.0/0.
  • In the step of Associating Subnets with the Route Table, only associate the subnets selected when creating the EKS cluster.

If your databases need to be accessed via the external network, make sure that the cluster has successfully enabled the capability for internal services in the cluster to access the external network. Otherwise, the deployment of IAM applications may fail due to inaccessible databases.

Installing IAM Application #

Earlier, we enabled the image repository, installed the Docker engine, and installed and configured the Kubernetes cluster. Now, let’s see how to deploy the IAM application to the Kubernetes cluster.

Assuming the IAM project repository’s root directory is $IAM_ROOT, the specific installation steps are as follows:

  1. Configure scripts/install/environment.sh

The scripts/install/environment.sh file contains various custom configurations. You may need to configure the database-related configurations (or you can use the default values):

  • MariaDB configuration: Variables starting with MARIADB_ in the environment.sh file.
  • Redis configuration: Variables starting with REDIS_ in the environment.sh file.
  • MongoDB configuration: Variables starting with MONGO_ in the environment.sh file.

For other configurations, use the default values.
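
If you want to quickly see which of these variables exist and what their default values are, a simple grep over environment.sh works; the prefixes are the ones listed above, and the exact variable names depend on your copy of the file:

$ grep -E '(MARIADB|REDIS|MONGO)_' ${IAM_ROOT}/scripts/install/environment.sh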

  2. Create the IAM application’s configuration file

    $ cd ${IAM_ROOT}
    $ make gen.defaultconfigs # Generate the default configuration files for the iam-apiserver, iam-authz-server, iam-pump, and iamctl components
    $ make gen.ca # Generate the CA certificate

The above commands will store the IAM configuration files in the ${IAM_ROOT}/_output/configs/ directory.

  3. Create the IAM namespace

We will create all the resources related to the IAM application in the iam namespace. Creating the IAM resources in a separate namespace not only makes maintenance easier but also prevents interference with other Kubernetes resources.

$ kubectl create namespace iam
  4. Save the configuration files of the IAM services as ConfigMap resources in the Kubernetes cluster

    $ kubectl -n iam create configmap iam --from-file=${IAM_ROOT}/_output/configs/
    $ kubectl -n iam get configmap iam
    NAME   DATA   AGE
    iam    4      13s

Executing the kubectl -n iam get configmap iam command should successfully retrieve the created iam configmap.

If you find it cumbersome to specify the -n iam option every time you execute a kubectl command, you can use the following command to set the namespace in the kubectl context. After setting it, executing kubectl commands will default to the iam namespace:

$ kubectl config set-context `kubectl config current-context` --namespace=iam
  5. Create the certificate files used by the IAM services as ConfigMap resources in the Kubernetes cluster

    $ kubectl -n iam create configmap iam-cert --from-file=${IAM_ROOT}/_output/cert
    $ kubectl -n iam get configmap iam-cert
    NAME       DATA   AGE
    iam-cert   14     12s

Executing the kubectl -n iam get configmap iam-cert command should successfully retrieve the created iam-cert configmap.

  6. Create the access key for the image repository

During the preparation phase, we activated the Tencent Cloud image repository service (address: ccr.ccs.tencentyun.com), with the username 10000099xxxx and the password iam59!z$. Kubernetes needs a docker-registry secret to authenticate when pulling these images, so next we create one. The command is as follows:

$ kubectl -n iam create secret docker-registry ccr-registry --docker-server=ccr.ccs.tencentyun.com --docker-username=10000099xxxx --docker-password='iam59!z$'
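
You can confirm that the secret was created successfully, for example:

$ kubectl -n iam get secret ccr-registry # The secret type should be kubernetes.io/dockerconfigjson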
  7. Create Docker images and push them to the image registry

To push the images to the CCR image registry, make sure you are logged in to the Tencent Cloud CCR image registry. If you’re not logged in, you can execute the following command to log in:

$ docker login --username=[username] ccr.ccs.tencentyun.com

Execute the make push command to build the images and push them to the CCR image registry:

$ make push REGISTRY_PREFIX=ccr.ccs.tencentyun.com/marmotedu VERSION=v1.1.0

The above command will build four images: iam-apiserver-amd64, iam-authz-server-amd64, iam-pump-amd64, and iamctl-amd64, and push them to the marmotedu namespace in the Tencent Cloud image registry.

The built images will be as follows:

$ docker images|grep marmotedu
ccr.ccs.tencentyun.com/marmotedu/iam-pump-amd64           v1.1.0   e078d340e3fb        10 seconds ago      244MB
ccr.ccs.tencentyun.com/marmotedu/iam-apiserver-amd64      v1.1.0   5e90b67cc949        2 minutes ago       239MB
ccr.ccs.tencentyun.com/marmotedu/iam-authz-server-amd64   v1.1.0   6796b02be68c        2 minutes ago       238MB
ccr.ccs.tencentyun.com/marmotedu/iamctl-amd64             v1.1.0   320a77d525e3        2 minutes ago       235MB
  8. Modify the configuration in ${IAM_ROOT}/deployments/iam.yaml

Please note that if in the previous step, you built the images with a tag other than v1.1.0, you need to modify the ${IAM_ROOT}/deployments/iam.yaml file and change the tags of the iam-apiserver-amd64, iam-authz-server-amd64, iam-pump-amd64, and iamctl-amd64 images to the tag you specified when building the images.
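
For example, if you built the images with the hypothetical tag v1.2.0 instead of v1.1.0, one quick way to update all four image tags at once is a sed replacement (v1.2.0 here is only an illustrative assumption; substitute your own tag):

$ sed -i 's/v1.1.0/v1.2.0/g' ${IAM_ROOT}/deployments/iam.yaml # Replace the image tags in iam.yaml with the tag you built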

  9. Deploy the IAM application
$ kubectl -n iam apply -f ${IAM_ROOT}/deployments/iam.yaml

By executing the above command, a series of Kubernetes resources will be created in the iam namespace. You can use the following command to check the status of these resources:

$ kubectl -n iam get all

We can see that the pod/iam-apiserver-d8dc48596-wkhpl, pod/iam-authz-server-6bc899c747-fbpbk, pod/iam-pump-7dcbfd4f59-2w9vk, and pod/iamctl-6fc46b8ccb-gs62l pods are all in the Running state, indicating that the services have started successfully.
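
If you want to wait until all four Deployments have finished rolling out before testing, you can optionally watch their rollout status; a small sketch, assuming the Deployment names match the component names listed earlier:

$ for d in iam-apiserver iam-authz-server iam-pump iamctl; do kubectl -n iam rollout status deployment/$d; done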

Testing IAM Application #

We have created a test Deployment iamctl under the iam namespace. You can log in to the Pod created by the iamctl Deployment to perform some operational tasks and smoke tests. The login command is as follows:

$ kubectl -n iam exec -it `kubectl -n iam get pods -l app=iamctl | awk '/iamctl/{print $1}'` -- bash

Once logged into the iamctl-xxxxxxxxxx-xxxxx Pod, you can perform operational tasks and smoke tests.

  1. Operational tasks

In the iamctl container, you can use the various functionalities provided by the iamctl tool. iamctl provides functionalities as subcommands. The command execution result is shown in the following image:

Image

  2. Smoke tests
# cd /opt/iam/scripts/install
# ./test.sh iam::test::smoke

If the last line of the output from the ./test.sh iam::test::smoke command is the string congratulations, smoke test passed!, it indicates that the IAM application has been installed successfully. As shown in the following image:

Image

Destroying the Serverless Cluster and Its Resources #

Alright, at this point you have successfully deployed the IAM application in the Serverless cluster, and the cluster has served its purpose. Next, to avoid continued billing, you need to delete the resources in the cluster and then the cluster itself.

  1. Delete the IAM resources created in Serverless
$ kubectl delete namespace iam

Since deleting a namespace will remove all resources under the namespace, the above command may take some time to execute.
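
You can check the progress of the deletion, for example:

$ kubectl get namespace iam # The namespace shows a Terminating status until all of its resources are removed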

  2. Delete the Serverless cluster

Log in to the Tencent Cloud Container Service Console, select the Serverless cluster you created, and delete it.

Summary #

In a cloud-native architecture design, the IAM application needs to be deployed to a Kubernetes cluster, so the first step is to prepare one. You can purchase Tencent Cloud CVM machines and build the Kubernetes cluster yourself, but this is costly and complex to operate, so I suggest applying for a Serverless cluster to deploy the IAM application instead.

The Serverless cluster is a standard Kubernetes cluster that can be quickly applied for and requires no maintenance. The Serverless cluster only charges for the actual resource usage. The resource usage fee generated during the deployment of the IAM application in the learning process is actually very low, so it is recommended to use this method to deploy the IAM application.

With the Kubernetes cluster, you can directly deploy the entire IAM application using the following command:

$ kubectl -n iam apply -f ${IAM_ROOT}/deployments/iam.yaml

After the application is deployed, we can log in to the iamctl-xxxxxxxxxx-xxxxx Pod and execute the following command to test if the entire IAM application is successfully deployed:

# cd /opt/iam/scripts/install
# ./test.sh iam::test::smoke

Exercises #

  1. Give some thought to how MariaDB, MongoDB, and Redis can be containerized.
  2. Consider how to increase trust in the iam-apiserver service within the IAM application, and try updating this service.

Feel free to leave comments in the discussion area. See you in the next lesson.