40 | Software Deployment Practice: Introduction to Deployment Solutions and High Availability Components #

Hello, I am Kong Lingfei.

Next, we will enter the last module of this course, the service deployment section. In this module, I will guide you step by step to deploy a production-ready IAM application.

In lesson 03, we quickly deployed the IAM system on a single machine. However, such a system lacks high availability, elastic scaling, and other capabilities, making it fragile and prone to problems when facing traffic peaks or releasing changes. Before the system goes live, we need to readjust the deployment architecture to ensure that our system has core operations and maintenance capabilities such as load balancing, high availability, and elastic scaling.

Considering the limited system resources at your disposal, this module demonstrates how to deploy a reasonably highly available IAM system as simply as possible. By following the deployment methods I will discuss, you will be able to bring a small to medium-sized system online.

In this module, I will introduce two deployment methods.

The first is the traditional deployment method, based on physical machines/virtual machines, where disaster recovery and elastic scaling capabilities need to be implemented by the deployment personnel themselves. The second is containerized deployment, based on Docker and Kubernetes, where disaster recovery and elastic scaling capabilities can be implemented using the built-in capabilities of Kubernetes.

In the next three lessons, let’s first take a look at the traditional deployment method, that is, how to deploy the IAM application based on virtual machines. Today, I will mainly discuss two components related to IAM deployment, Nginx and Keepalived.

Deployment Plan #

First, let’s take a look at our overall deployment plan.

Here, I will use Nginx + Keepalived to deploy a high-availability architecture, and all components will be deployed in the internal network to ensure the security and performance of the service.

The deployment requires two physical/virtual machines, and the components communicate with each other through the internal network. The required servers are shown in the following table:

(Table: the two Tencent Cloud CVMs used in this deployment, 10.0.4.20 as the master node and 10.0.4.21 as the backup node)

Both servers are Tencent Cloud CVMs, and the VIP (Virtual IP) is 10.0.4.99. The deployment architecture is shown in the following diagram:

(Diagram: deployment architecture, with two CVM servers running Keepalived + Nginx and sharing VIP 10.0.4.99)

Let me explain the deployment architecture in the diagram. This deployment uses two CVM servers, one as the master and the other as the backup, sharing the same VIP. At any given time, the VIP is active only on the current master; when the master fails, the backup automatically takes over the VIP and continues to provide service.

The master server runs iam-apiserver, iam-authz-server, iam-pump, and the databases MongoDB, Redis, and MySQL. The backup server runs iam-apiserver, iam-authz-server, and iam-pump; its components access the database components on the master server via the master's internal IP, 10.0.4.20.

Both the master and backup servers have Keepalived and Nginx installed. The high availability of the backend services iam-apiserver and iam-authz-server is achieved through Nginx’s reverse proxy and load balancing features, and the high availability of Nginx is achieved through Keepalived.

We bind the VIP to a Tencent Cloud Elastic Public IP, which allows clients to reach the internal Nginx server (port 443) via the public IP. If you prefer to access the internal Nginx server through a domain name, you can also register a domain and point it at the Elastic Public IP.

With the above deployment plan, we can achieve a high availability IAM system that possesses the following capabilities:

  • High performance: Nginx’s load balancing distributes requests across multiple IAM service instances, allowing the system to sustain high throughput.
  • Disaster recovery capability: The high availability of the IAM services is achieved through Nginx, and the high availability of Nginx itself is achieved through Keepalived, so every core component is highly available.
  • Horizontal scalability: The IAM services can be scaled out simply by adding new instances behind Nginx’s load balancer.
  • High security: All components are deployed on the internal network, and clients can reach the Nginx service only via the VIP on port 443. With TLS and JWT authentication enabled, the service achieves a higher level of security. Since the servers are Tencent Cloud CVMs, security can be hardened further with Tencent Cloud capabilities such as security groups, DDoS protection, host security protection, cloud monitoring, and cloud firewalls.

Note that, to simplify the installation and configuration of the IAM application and make it easier for you to practice, some capabilities, such as database high availability, process monitoring and alerting, and automatic scaling, are not covered by this deployment plan. You can learn and master them in your future work.

Next, let’s take a look at the two core components used in this deployment plan, Nginx and Keepalived. I will introduce their installation and configuration methods to prepare you for the next lesson.

Nginx Installation and Configuration #

Introduction to Nginx Features #

Let’s start with a brief introduction to Nginx. Nginx is a lightweight, high-performance, open-source HTTP server and reverse proxy server. The IAM system uses Nginx for its reverse proxy and load balancing capabilities, which I will explain separately below.

Why do we need a reverse proxy? In a production environment, the network where services are deployed (intranet) is usually isolated from the external network (internet). This requires a server that can access both the intranet and the internet to act as an intermediary, and that server is known as a reverse proxy server. Nginx serves as a reverse proxy server, and a simple configuration is as follows:

server {
    listen      80;                # Port on which Nginx listens
    server_name iam.marmotedu.com; # Virtual host name, matched against the request's Host header
    client_max_body_size 1024M;   # Maximum allowed request body size for this server

    location / {
        # Pass the original host and client information through to the backend
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass  http://127.0.0.1:8080/; # Backend address to which requests are forwarded
        client_max_body_size 100m;          # Overrides the server-level body size limit for this location
    }
}

Nginx’s reverse proxy functionality allows it to forward requests to different backend servers based on different configuration rules. For example, if we start Nginx with the above configuration on a server with the IP address x.x.x.x, when we access http://x.x.x.x:80/, the request will be forwarded to http://127.0.0.1:8080/. Here, listen 80 specifies the listening port of the Nginx server, and proxy_pass http://127.0.0.1:8080/ specifies the forwarding path.
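To verify the proxying quickly (a sketch, assuming the configuration above is loaded on a server reachable at x.x.x.x), you can send a request with curl; setting the Host header to match server_name ensures this virtual host is selected:

$ curl -H 'Host: iam.marmotedu.com' http://x.x.x.x/

The response you see is whatever the backend service at 127.0.0.1:8080 returns.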

Another commonly used Nginx feature is layer 7 load balancing: distributing incoming HTTP requests across different backend servers according to a load balancing strategy. For example, when iam-apiserver is deployed on two servers, A and B, Nginx distributes requests between A and B according to the configured strategy (round-robin by default).

It is important to note that for load balancing to work this way, iam-apiserver must be stateless. Nginx provides several load balancing strategies, such as round-robin, weighted round-robin, least connections, and IP hash, to meet the needs of different scenarios; a minimal configuration sketch follows.
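The sketch below assumes iam-apiserver instances listening on port 8080 on 10.0.4.20 and 10.0.4.21; the port, strategy, and failure parameters are illustrative, not the configuration used in the next lesson:

# Backend server group; requests are distributed round-robin by default
upstream iam-apiserver {
    # least_conn;  # Uncomment to route each request to the server with the fewest active connections
    server 10.0.4.20:8080 max_fails=3 fail_timeout=30s; # Removed from rotation for 30s after 3 failed attempts
    server 10.0.4.21:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen      80;
    server_name iam.marmotedu.com;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass  http://iam-apiserver/; # Forward to the upstream group defined above
    }
}

With max_fails and fail_timeout set, Nginx stops sending traffic to a backend after repeated failures and retries it later, which is what allows it to route around a failed iam-apiserver instance.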

Nginx Installation Steps #

Next, I will explain how to install and configure Nginx.

We will perform the following steps on servers 10.0.4.20 and 10.0.4.21, respectively, to install Nginx.

On the CentOS 8.x system, we can use the yum command to install Nginx. The installation process can be divided into the following four steps.

Step 1: Install Nginx:

$ sudo yum -y install nginx

Step 2: Confirm the successful installation of Nginx:

$ nginx -v
nginx version: nginx/1.14.1

Step 3: Start Nginx and set it to start on boot:

$ sudo systemctl start nginx
$ sudo systemctl enable nginx

By default, Nginx listens on port 80. Before starting Nginx, make sure that port 80 is not occupied. Of course, you can also modify the Nginx configuration file /etc/nginx/nginx.conf to change the listening port of Nginx.
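For example, assuming the ss utility (from iproute2) is available, you can check whether anything is already listening on port 80:

$ sudo ss -tlnp | grep ':80 '

If the command prints a LISTEN line, something is already bound to port 80.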

Step 4: Check the status of Nginx startup:

$ systemctl status nginx

If the output contains the string active (running), it means that Nginx has started successfully. If Nginx fails to start, you can check the /var/log/nginx/error.log log file to pinpoint the cause of the error.
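For instance, to view the most recent error entries:

$ sudo tail -n 20 /var/log/nginx/error.log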

Install and Configure Keepalived #

Nginx has built-in load balancing functionality. When a backend server in Nginx fails, Nginx will automatically remove that server and forward requests to available servers, ensuring high availability of backend API services. However, Nginx itself is a single point of failure. If Nginx goes down, all the backend servers will become inaccessible. Therefore, in production environments, it is necessary to make Nginx highly available as well.

The most commonly used method in the industry to achieve high availability for Nginx is to use Keepalived. The Keepalived + Nginx high availability solution offers powerful service capabilities and simple maintenance.

Now let’s see how to install and configure Keepalived.

Installation Steps for Keepalived #

We will perform the following five steps on servers 10.0.4.20 and 10.0.4.21 to install Keepalived.

Step 1: Download the Keepalived source code (this course uses version 2.1.5):

$ wget https://www.keepalived.org/software/keepalived-2.1.5.tar.gz

Step 2: Install Keepalived:

$ sudo yum -y install openssl-devel # Keepalived depends on OpenSSL, so install the dependencies first
$ tar -xvzf keepalived-2.1.5.tar.gz
$ cd keepalived-2.1.5
$ ./configure --prefix=/usr/local/keepalived
$ make
$ sudo make install
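With the --prefix used above, the keepalived binary lands in /usr/local/keepalived/sbin. You can confirm the installation by printing the version:

$ /usr/local/keepalived/sbin/keepalived -v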

Step 3: Configure Keepalived:

$ sudo mkdir /etc/keepalived # The /etc/keepalived directory is not created by default during installation
$ sudo cp /usr/local/keepalived/etc/keepalived/keepalived.conf  /etc/keepalived/keepalived.conf
$ sudo cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/keepalived

The systemd unit configuration for Keepalived uses /usr/local/keepalived/etc/sysconfig/keepalived as its EnvironmentFile. We need to change it to /etc/sysconfig/keepalived. Edit the file /lib/systemd/system/keepalived.service and set the EnvironmentFile as follows:

EnvironmentFile=-/etc/sysconfig/keepalived
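After editing the unit file, reload systemd so the change takes effect:

$ sudo systemctl daemon-reload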

Step 4: Start Keepalived and set it to start on boot:

$ sudo systemctl start keepalived
$ sudo systemctl enable keepalived

Note that Keepalived does not validate the configuration file on startup, so be careful when modifying the configuration to avoid unexpected issues.
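That said, you can check the configuration syntax manually before starting; recent Keepalived releases (including 2.1.5) support a config-test mode that exits non-zero on errors:

$ sudo /usr/local/keepalived/sbin/keepalived -t -f /etc/keepalived/keepalived.conf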

Step 5: Check the status of Keepalived:

$ systemctl status keepalived

If the output contains the string active (running), it means Keepalived has been successfully started. The Keepalived logs are stored in /var/log/messages and can be viewed if needed.
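For example, to follow the Keepalived entries in real time:

$ sudo tail -f /var/log/messages | grep -i keepalived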

Keepalived Configuration File Analysis #

The default configuration file for Keepalived is /etc/keepalived/keepalived.conf. Here is an example Keepalived configuration:

# Global definition, defining global configuration options
global_defs {
    # Specify the email addresses to which keepalived should send emails when a switching operation occurs
    # It is recommended to send emails in keepalived_notify.sh
    notification_email {
        admin@example.com
    }
    notification_email_from admin@example.com # Email source address when sending emails
    smtp_server 192.168.200.1 # SMTP server address when sending emails
    smtp_connect_timeout 30 # Timeout for connecting to SMTP, in seconds
    router_id VM-4-21-centos # Machine identifier, usually set to hostname
    vrrp_skip_check_adv_addr # Skip the address check on a received advertisement if it comes from the same master as the previous packet
    vrrp_garp_interval 0 # Time delay, in seconds, between groups of gratuitous ARP messages on a network interface, default is 0
    vrrp_gna_interval 0 # Time delay, in seconds, between groups of NA messages on a network interface, default is 0
}
# Configuration for check script
vrrp_script checkhaproxy {
    script "/etc/keepalived/check_nginx.sh" # Path to the detection script
    interval 5 # Detection interval (seconds)
    weight 0 # Weight by which the node priority is adjusted based on the script result; 0 means the priority is left unchanged
}
# VRRP Instance Configuration
vrrp_instance VI_1 {
    state BACKUP # Set the initial state to 'backup'
    interface eth0 # Set the network card binding the VIP, such as eth0
    virtual_router_id 51 # Configure the cluster VRID, the VRID of the primary and backup nodes needs to be the same
    nopreempt # Set to non-preempt mode, can only be set on nodes with state 'backup'
    priority 50 # Set the priority, in the range 0 to 254; the higher the value, the higher the priority, and the highest-priority node becomes the master
    advert_int 1 # Multicast message sending interval, two nodes must set the same value, default is 1 second
    # Verification information, two nodes must be consistent
    authentication {
        auth_type PASS # Authentication method, can be either PASS or AH
        auth_pass 1111 # Authentication password
    }
    unicast_src_ip 10.0.4.21 # Set the local intranet IP address
    unicast_peer {
        10.0.4.20 # IP address of the peer device
    }
    # VIP configuration: the VIP is added when the node is in the master state and removed when it is in the backup state
    virtual_ipaddress {
        10.0.4.99 # The highly available virtual IP; on Tencent Cloud CVMs, use the HAVIP address applied for in the console
    }
    notify_master "/etc/keepalived/keepalived_notify.sh MASTER" # Execute the script when switching to the master state
    notify_backup "/etc/keepalived/keepalived_notify.sh BACKUP" # Execute the script when switching to the backup state
    notify_fault "/etc/keepalived/keepalived_notify.sh FAULT" # Execute the script when switching to the fault state
    notify_stop "/etc/keepalived/keepalived_notify.sh STOP" # Execute the script when switching to the stop state
    garp_master_delay 1 # Delay, in seconds, before gratuitous ARP messages are sent to refresh ARP caches after transitioning to master
    garp_master_refresh 5 # Interval, in seconds, at which the master keeps sending gratuitous ARP messages
    # Tracked interfaces: if any of these NICs fails, the node enters the FAULT state
    track_interface {
        eth0
    }
    # Check script to be executed
    track_script {
        checkhaproxy
    }
}
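The configuration above references two scripts that you need to supply yourself: /etc/keepalived/check_nginx.sh (the health check run by vrrp_script) and /etc/keepalived/keepalived_notify.sh (the state change handler). I will cover their contents in the next lesson. For reference, here is a minimal health check sketch following the common "restart once, then fail over" pattern; it is illustrative rather than the exact script used in this course:

#!/bin/bash
# /etc/keepalived/check_nginx.sh (illustrative sketch)

# Check whether any nginx process is running
if ! pgrep -x nginx > /dev/null; then
    # Nginx is down: try to restart it once
    systemctl start nginx
    sleep 2
    if ! pgrep -x nginx > /dev/null; then
        # The restart failed: stop Keepalived so the VIP fails over to the peer node
        systemctl stop keepalived
    fi
fi

Remember to make both scripts executable (chmod +x), otherwise Keepalived cannot run them.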

Summary #

Today, I mainly talked about the two components used in the IAM deployment, Nginx and Keepalived, and their functionalities.

We can deploy the IAM application based on physical machines or virtual machines. When deploying the IAM application, it is necessary to ensure that the entire application has high availability and elastic scaling capabilities. You can achieve high availability for the backend services iam-apiserver and iam-authz-server through Nginx’s reverse proxy and load balancing functionalities. Keepalived can be used to achieve high availability for Nginx. By combining Nginx and Keepalived, we can achieve high availability and elastic scaling capabilities for the IAM application.

Exercise #

  1. The primary and backup servers of Keepalived should be connected to the same switch. Think about how to achieve high availability of the entire system if the switch fails.
  2. iam-pump is a stateful service. Think about how to achieve high availability of iam-pump.

Feel free to leave a comment in the message area to discuss with me. See you in the next lesson.