41 Software Deployment Practice: Deploying IAM in the Production Environment #
Hello, I’m Kong Lingfei.
In the previous lecture, I introduced the two core components used in IAM deployment: Nginx and Keepalived. In this lecture, let's look at how to use Nginx and Keepalived to deploy a highly available IAM application. In the next lecture, I will cover how to build security and elastic scaling into the IAM application.
In this lecture, we will deploy the IAM application through the following four steps:
- Deploy the services within the IAM application on the server.
- Configure Nginx to implement reverse proxy functionality. By using reverse proxy, we can access the IAM services deployed in the intranet through Nginx.
- Configure Nginx to implement load balancing functionality. With load balancing, we can achieve horizontal scaling of services and make the IAM application highly available.
- Configure Keepalived to achieve high availability of Nginx. The combination of Nginx and Keepalived can achieve high availability for the entire application architecture.
Deploying IAM Application #
To deploy a highly available IAM application, at least two nodes are required. Therefore, we will deploy the IAM application on the 10.0.4.20 and 10.0.4.21 servers in turn.
Deploying the IAM Application on the 10.0.4.20 Server #
First, let me explain how to deploy the IAM application on the 10.0.4.20 server.
We need to deploy the following components on this server:
- iam-apiserver
- iam-authz-server
- iam-pump
- MariaDB
- Redis
- MongoDB
The deployment methods for these components were discussed in Lesson 03, so I won’t go into detail here.
In addition, we also need to configure MariaDB to grant the 10.0.4.21 server access to the database. The authorization commands are as follows:
$ mysql -hlocalhost -P3306 -uroot -proot # Log in to the database as the root user first
MariaDB [(none)]> grant all on iam.* TO 'iam'@'10.0.4.21' identified by 'iam1234';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.000 sec)
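To confirm the grant took effect, you can inspect the output of `SHOW GRANTS`. The sketch below checks canned output for the expected privilege line; on a real server you would run the `mysql` command shown in the comment (hypothetical invocation, using the credentials from the grant above).

```shell
# On the 10.0.4.21 server you could verify the grant with (requires a
# reachable MariaDB on 10.0.4.20):
#   mysql -h10.0.4.20 -P3306 -uiam -piam1234 -e "SHOW GRANTS"
# Here we check canned output of that command for the expected privilege line.
grants='GRANT ALL PRIVILEGES ON `iam`.* TO `iam`@`10.0.4.21`'
if echo "$grants" | grep -q 'TO `iam`@`10.0.4.21`'; then
  echo "iam user authorized for 10.0.4.21"
fi
```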
Deploying the IAM Application on the 10.0.4.21 Server #
Next, install iam-apiserver, iam-authz-server, and iam-pump on the 10.0.4.21 server. These components connect to MariaDB, Redis, and MongoDB on the 10.0.4.20 server via the 10.0.4.20 intranet IP address.
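Before starting the services on 10.0.4.21, it can help to confirm that the middleware ports on 10.0.4.20 (3306, 6379, 27017) are reachable. One quick way is a TCP probe via bash's `/dev/tcp`. The demo below probes 127.0.0.1:1, which is almost certainly closed, just to show the output format; on a real server, replace the host and port with 10.0.4.20:3306 and so on.

```shell
# Probe a TCP endpoint via bash's /dev/tcp; prints "open" or "closed".
probe() {
  local host=$1 port=$2
  if timeout 1 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed"
  fi
}
result=$(probe 127.0.0.1 1)  # demo target; use 10.0.4.20 3306 etc. in practice
echo "$result"
```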
Configure Nginx as reverse proxy #
Assuming the domain names for the API server and the IAM authorization server are iam.api.marmotedu.com and iam.authz.marmotedu.com respectively, we need to configure Nginx reverse proxies for iam-apiserver and iam-authz-server.
The entire configuration process can be divided into five steps (all performed on the 10.0.4.20 server).
Step 1: Configure iam-apiserver.
Create a new Nginx configuration file /etc/nginx/conf.d/iam-apiserver.conf with the following content:
server {
    listen      80;
    server_name iam.api.marmotedu.com;
    root        /usr/share/nginx/html;

    location / {
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8080/;
        client_max_body_size 5m;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
Here are a few things to note about this configuration:
- server_name should be iam.api.marmotedu.com, since we will access iam-apiserver through iam.api.marmotedu.com.
- The default port for iam-apiserver is 8080.
- client_max_body_size 5m limits the maximum size of a single client request body to 5 MB. In real production environments, this value may need to be raised to accommodate larger uploads such as images, for example 50m.
- server_name indicates the domain name used to access this Nginx server, for example curl -H 'Host: iam.api.marmotedu.com' http://x.x.x.x:80/healthz, where x.x.x.x is the IP address of the Nginx server.
- proxy_pass specifies the reverse proxy path. Here the iam-apiserver service runs on the local machine, so the IP is 127.0.0.1, and the port must match the API service port, which is 8080.
Finally, please note that since the Nginx configuration options are extensive and depend on specific requirements and environment, the provided configuration is basic and should be further adjusted for actual production usage.
Step 2: Configure iam-authz-server.
Create a new Nginx configuration file /etc/nginx/conf.d/iam-authz-server.conf with the following content:
server {
    listen      80;
    server_name iam.authz.marmotedu.com;
    root        /usr/share/nginx/html;

    location / {
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:9090/;
        client_max_body_size 5m;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
Here are some additional notes on this configuration:
- server_name should be iam.authz.marmotedu.com, since we will access iam-authz-server through iam.authz.marmotedu.com.
- The default port for iam-authz-server is 9090.
- The other settings are the same as in /etc/nginx/conf.d/iam-apiserver.conf.
Step 3: Restart Nginx after the configuration is complete:
$ sudo systemctl restart nginx
Step 4: Append the following two lines to /etc/hosts:
127.0.0.1 iam.api.marmotedu.com
127.0.0.1 iam.authz.marmotedu.com
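An idempotent way to add these entries is to append each line only if it is not already present. The sketch below operates on a temporary copy so it is safe to run anywhere; on the server, point HOSTS at /etc/hosts and run with sudo.

```shell
# Idempotently add the test domains to a hosts file (temp copy for the demo).
HOSTS=$(mktemp)
for d in iam.api.marmotedu.com iam.authz.marmotedu.com; do
  grep -q "$d" "$HOSTS" || echo "127.0.0.1 $d" >> "$HOSTS"
done
cat "$HOSTS"
```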
Step 5: Send HTTP requests:
$ curl http://iam.api.marmotedu.com/healthz
{"status":"ok"}
$ curl http://iam.authz.marmotedu.com/healthz
{"status":"ok"}
We send a health check request to iam-apiserver and iam-authz-server respectively, and both return {"status":"ok"}, which indicates that we have successfully reached the backend API services through the reverse proxy.
After curl requests http://iam.api.marmotedu.com/healthz, the actual request flow to the backend is as follows:
- Because 127.0.0.1 iam.api.marmotedu.com is configured in /etc/hosts, the request to http://iam.api.marmotedu.com/healthz is actually sent to the Nginx port on the local machine (127.0.0.1:80).
- Once Nginx receives the request, it parses it and finds that the domain name is iam.api.marmotedu.com. It then matches this domain name against its server configurations, selecting the one with server_name iam.api.marmotedu.com;.
- Once the server block is matched, the request is forwarded to that server's proxy_pass address.
- Nginx waits for the API server to return the result and forwards it back to the client.
Configuring Nginx as a Load Balancer #
This course uses Nginx's round-robin load balancing strategy to forward requests. Load balancing requires at least two servers, so we will perform the same operations on both the 10.0.4.20 and 10.0.4.21 servers. Below, I will explain how to configure each of the two servers and verify the configuration.
Configuration on the 10.0.4.20 Server #
Log in to the 10.0.4.20 server and add the upstream configuration in /etc/nginx/nginx.conf. The configuration process can be divided into three steps.
Step 1: Add upstreams in /etc/nginx/nginx.conf:
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    upstream iam.api.marmotedu.com {
        server 127.0.0.1:8080;
        server 10.0.4.21:8080;
    }

    upstream iam.authz.marmotedu.com {
        server 127.0.0.1:9090;
        server 10.0.4.21:9090;
    }
}
Configuration Explanation:
- The upstream configuration is added inside the http { ... } section of the /etc/nginx/nginx.conf file.
- We create two upstreams, iam.api.marmotedu.com and iam.authz.marmotedu.com, because load balancing must be configured separately for iam-apiserver and iam-authz-server. It is recommended to keep the upstream names consistent with the domain names for easy identification.
- In each upstream, we add all the backends (ip:port) for iam-apiserver and iam-authz-server. For the backend on the local machine, use 127.0.0.1:<port> for faster access; for other machines, use <internal_ip>:<port>, for example 10.0.4.21:8080 and 10.0.4.21:9090.
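Nginx's default strategy is round-robin: successive requests simply alternate over the backends listed in the upstream. The sketch below illustrates this selection order for two backends (illustration only, not nginx code).

```shell
# Simulate round-robin selection over the two configured backends.
picks=""
i=0
while [ $i -lt 4 ]; do
  if [ $((i % 2)) -eq 0 ]; then backend="127.0.0.1:8080"; else backend="10.0.4.21:8080"; fi
  echo "request $i -> $backend"
  picks="$picks$backend "
  i=$((i + 1))
done
```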
Step 2: Modify proxy_pass.
Modify the proxy_pass in the /etc/nginx/conf.d/iam-apiserver.conf file to:
proxy_pass http://iam.api.marmotedu.com/;
Modify the proxy_pass in the /etc/nginx/conf.d/iam-authz-server.conf file to:
proxy_pass http://iam.authz.marmotedu.com/;
When Nginx forwards a request for http://iam.api.marmotedu.com/, it selects a backend from the list configured in the iam.api.marmotedu.com upstream according to the load balancing strategy and forwards the request to it. The same logic applies to requests for http://iam.authz.marmotedu.com/.
Step 3: After configuring Nginx, restart it:
$ sudo systemctl restart nginx
The final configurations can be found in the following files (saved in the configs/ha/10.0.4.20 directory):
- nginx.conf: configs/ha/10.0.4.20/nginx.conf
- iam-apiserver.conf: configs/ha/10.0.4.20/iam-apiserver.conf
- iam-authz-server.conf: configs/ha/10.0.4.20/iam-authz-server.conf
Configuration on the 10.0.4.21 Server #
Log in to the 10.0.4.21 server and add the upstream configuration in /etc/nginx/nginx.conf. The configuration process can be divided into four steps.
Step 1: Add upstreams in /etc/nginx/nginx.conf:
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    upstream iam.api.marmotedu.com {
        server 127.0.0.1:8080;
        server 10.0.4.20:8080;
    }

    upstream iam.authz.marmotedu.com {
        server 127.0.0.1:9090;
        server 10.0.4.20:9090;
    }
}
Configuration Explanation:
- The upstream configuration is added inside the http { ... } section of the /etc/nginx/nginx.conf file.
- We create two upstreams, iam.api.marmotedu.com and iam.authz.marmotedu.com, because load balancing must be configured separately for iam-apiserver and iam-authz-server. It is recommended to keep the upstream names consistent with the domain names for easy identification.
- For the backend on the local machine, use 127.0.0.1:<port> for faster access.
In the upstream section, you also need to configure the backends for iam-apiserver and iam-authz-server on the 10.0.4.20 server, namely 10.0.4.20:8080 and 10.0.4.20:9090.
Step 2: Create the /etc/nginx/conf.d/iam-apiserver.conf file (the reverse proxy + load balancing configuration for iam-apiserver) with the following content:
server {
    listen      80;
    server_name iam.api.marmotedu.com;
    root        /usr/share/nginx/html;

    location / {
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://iam.api.marmotedu.com/;
        client_max_body_size 5m;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
Step 3: Create the /etc/nginx/conf.d/iam-authz-server.conf file (the reverse proxy + load balancing configuration for iam-authz-server) with the following content:
server {
    listen      80;
    server_name iam.authz.marmotedu.com;
    root        /usr/share/nginx/html;

    location / {
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://iam.authz.marmotedu.com/;
        client_max_body_size 5m;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
Step 4: After configuring Nginx, restart it:
$ sudo systemctl restart nginx
The final configurations can be found in the following files (saved in the configs/ha/10.0.4.21 directory):
- nginx.conf: configs/ha/10.0.4.21/nginx.conf
- iam-apiserver.conf: configs/ha/10.0.4.21/iam-apiserver.conf
- iam-authz-server.conf: configs/ha/10.0.4.21/iam-authz-server.conf
Test Load Balancing #
We have configured the Nginx load balancer above, and now we need to test if it is configured successfully.
Step 1: Run the test script (test/nginx/loadbalance.sh), shown below:
#!/usr/bin/env bash

for domain in iam.api.marmotedu.com iam.authz.marmotedu.com
do
    for n in $(seq 1 1 10)
    do
        echo $domain
        nohup curl http://${domain}/healthz &>/dev/null &
    done
done
Step 2: Check the logs of iam-apiserver and iam-authz-server respectively.
Here I will show you the logs for iam-apiserver (you can check the logs for iam-authz-server on your own).
The log for iam-apiserver on the server 10.0.4.20 is shown in the following image:
The log for iam-apiserver on the server 10.0.4.21 is shown in the following image:
From the two images above, you can see that both 10.0.4.20 and 10.0.4.21 received 5 /healthz requests, indicating that the load balancing configuration is successful.
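Instead of eyeballing the logs, you can count the /healthz hits per server. The sketch below runs the counting logic against a canned sample (the IP-per-line format is an assumption for the demo); on a real deployment, you would grep the iam-apiserver logs on each node instead.

```shell
# Count /healthz hits per server from (canned) aggregated log lines.
log=$(mktemp)
cat > "$log" <<'EOF'
10.0.4.20 GET /healthz
10.0.4.21 GET /healthz
10.0.4.20 GET /healthz
10.0.4.21 GET /healthz
EOF
counts=$(awk '/\/healthz/ {c[$1]++} END {for (s in c) print s, c[s]}' "$log" | sort)
echo "$counts"
```

Roughly equal counts on both servers indicate the round-robin distribution is working.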
Configure Keepalived #
In Lecture 40, we installed Keepalived on the 10.0.4.20 and 10.0.4.21 servers. Now, I'll introduce how to configure Keepalived to make Nginx highly available. To avoid the service jitter caused by the VIP switching back during failure recovery, this lecture uses Keepalived's non-preemptive mode.
The configuration process of Keepalived is relatively complex and can be divided into six major steps: creating Tencent Cloud HAVIP, configuring the master server, configuring the backup server, testing Keepalived, binding the VIP with a public IP, and testing public access. Each step contains many small steps. Let’s go through them one by one below.
Step 1: Create Tencent Cloud HAVIP #
Due to security considerations (such as preventing ARP spoofing), public cloud vendors' ordinary intranet IPs do not allow hosts to announce IP addresses via ARP. If you directly specify an ordinary intranet IP as the virtual IP in the keepalived.conf file, then when Keepalived switches the virtual IP from the MASTER machine to the BACKUP machine, the IP-to-MAC mapping cannot be updated, and an API call would be required to switch the IP address. Therefore, the VIP here needs to be applied for through Tencent Cloud's HAVIP.
The application process can be divided into the following 4 steps:
- Log in to the VPC console.
- In the left navigation pane, choose IP and NIC > High Availability Virtual IP.
- On the HAVIP management page, select the region and click Apply.
- In the Apply for a High Availability Virtual IP dialog box that appears, enter the name, select the private network and subnet where the HAVIP is located, and click OK.
The private network and subnet selected here must be the same as those of 10.0.4.20 and 10.0.4.21. The HAVIP address can be assigned automatically or entered manually; here we manually set it to 10.0.4.99. The application page is shown in the following figure:
Step 2: Configuration of the master server #
Configuring the master server can be divided into two steps.
First, modify the Keepalived configuration file.
Log in to the 10.0.4.20 server and edit /etc/keepalived/keepalived.conf. The modified configuration is as follows (reference: configs/ha/10.0.4.20/keepalived.conf):
# Global definitions
global_defs {
    # Specify the email addresses Keepalived notifies when a switchover occurs
    # (it is recommended to send the email in keepalived_notify.sh instead)
    notification_email {
        [email protected]
    }
    notification_email_from [email protected] # Source address for outgoing mail
    smtp_server 192.168.200.1 # SMTP server address
    smtp_connect_timeout 30 # Timeout for connecting to the SMTP server
    router_id VM-4-20-centos # Machine identifier, usually the hostname
    vrrp_skip_check_adv_addr # Skip the check if a packet comes from the same router as the previous one (default: skip)
    vrrp_garp_interval 0 # Delay between gratuitous ARP message groups on an interface, default 0 seconds
    vrrp_gna_interval 0 # Delay between NA message groups on an interface, default 0 seconds
}

# Health check script configuration
vrrp_script checkhaproxy {
    script "/etc/keepalived/check_nginx.sh" # Path to the check script
    interval 5 # Check interval (seconds)
    weight 0 # Adjust priority by this weight; 0 means the instance priority is never changed
}

# VRRP instance configuration
vrrp_instance VI_1 {
    state BACKUP # Set the initial state to BACKUP
    interface eth0 # Network interface to bind the VIP to, e.g. eth0
    virtual_router_id 51 # VRID of the cluster; must be the same on master and backup
    nopreempt # Non-preemptive mode; can only be set on nodes whose state is BACKUP
    priority 100 # Priority, 0-254; the higher the value, the higher the priority; the node with the highest priority becomes master
    advert_int 1 # Advertisement interval; must be the same on both nodes, default 1 second

    # Authentication information; must be identical on both nodes
    authentication {
        auth_type PASS # Authentication method, PASS or AH
        auth_pass 1111 # Authentication password
    }

    unicast_src_ip 10.0.4.20 # Intranet IP address of this machine
    unicast_peer {
        10.0.4.21 # IP address of the peer
    }

    # VIP: added when the state is master, removed when the state is backup
    virtual_ipaddress {
        10.0.4.99 # Highly available virtual IP; on a Tencent Cloud CVM, use the HAVIP address obtained from the console
    }

    notify_master "/etc/keepalived/keepalived_notify.sh MASTER" # Script executed when switching to the master state
    notify_backup "/etc/keepalived/keepalived_notify.sh BACKUP" # Script executed when switching to the backup state
    notify_fault "/etc/keepalived/keepalived_notify.sh FAULT" # Script executed when switching to the fault state
    notify_stop "/etc/keepalived/keepalived_notify.sh STOP" # Script executed when Keepalived stops

    garp_master_delay 1 # How long after becoming master to refresh the ARP cache
    garp_master_refresh 5 # Interval at which the master sends gratuitous ARP messages

    # Monitored interfaces; if any of them fails, enter the FAULT state
    track_interface {
        eth0
    }

    # Check scripts to run
    track_script {
        checkhaproxy
    }
}
A few things to note here:
- Make sure the garp-related parameters are configured. Keepalived relies on gratuitous ARP packets to update IP information; if these parameters are missing, the master may not send ARP, leading to communication failures. The garp-related parameters are:
garp_master_delay 1
garp_master_refresh 5
- Make sure strict mode is not enabled, i.e. the vrrp_strict configuration should be removed.
- The /etc/keepalived/check_nginx.sh and /etc/keepalived/keepalived_notify.sh scripts referenced in the configuration can be copied from scripts/check_nginx.sh and scripts/keepalived_notify.sh respectively.
Then, restart Keepalived:
$ sudo systemctl restart keepalived
Step 3: Backup Server Configuration #
The backup server configuration is also divided into two steps.
First, modify the Keepalived configuration file.
Log in to the 10.0.4.21 server and edit /etc/keepalived/keepalived.conf. The modified configuration is as follows (reference: configs/ha/10.0.4.21/keepalived.conf):
# Global definitions
global_defs {
    # Specify the email addresses Keepalived notifies when a switchover occurs
    # (it is recommended to send the email in keepalived_notify.sh instead)
    notification_email {
        [email protected]
    }
    notification_email_from [email protected] # Source address for outgoing mail
    smtp_server 192.168.200.1 # SMTP server address
    smtp_connect_timeout 30 # Timeout for connecting to the SMTP server
    router_id VM-4-21-centos # Machine identifier, usually the hostname
    vrrp_skip_check_adv_addr # Skip the check if a packet comes from the same router as the previous one (default: skip)
    vrrp_garp_interval 0 # Delay between gratuitous ARP message groups on an interface, default 0 seconds
    vrrp_gna_interval 0 # Delay between NA message groups on an interface, default 0 seconds
}

# Health check script configuration
vrrp_script checkhaproxy {
    script "/etc/keepalived/check_nginx.sh" # Path to the check script
    interval 5 # Check interval (seconds)
    weight 0 # Adjust priority by this weight; 0 means the instance priority is never changed
}

# VRRP instance configuration
vrrp_instance VI_1 {
    state BACKUP # Set the initial state to BACKUP
    interface eth0 # Network interface to bind the VIP to, e.g. eth0
    virtual_router_id 51 # VRID of the cluster; must be the same on master and backup
    nopreempt # Non-preemptive mode; can only be set on nodes whose state is BACKUP
    priority 50 # Priority, 0-254; the higher the value, the higher the priority; the node with the highest priority becomes master
    advert_int 1 # Advertisement interval; must be the same on both nodes, default 1 second

    # Authentication information; must be identical on both nodes
    authentication {
        auth_type PASS # Authentication method, PASS or AH
        auth_pass 1111 # Authentication password
    }

    unicast_src_ip 10.0.4.21 # Intranet IP address of this machine
    unicast_peer {
        10.0.4.20 # IP address of the peer
    }

    # VIP: added when the state is master, removed when the state is backup
    virtual_ipaddress {
        10.0.4.99 # Highly available virtual IP; on a Tencent Cloud CVM, use the HAVIP address obtained from the console
    }

    notify_master "/etc/keepalived/keepalived_notify.sh MASTER" # Script executed when switching to the master state
    notify_backup "/etc/keepalived/keepalived_notify.sh BACKUP" # Script executed when switching to the backup state
    notify_fault "/etc/keepalived/keepalived_notify.sh FAULT" # Script executed when switching to the fault state
    notify_stop "/etc/keepalived/keepalived_notify.sh STOP" # Script executed when Keepalived stops

    garp_master_delay 1 # How long after becoming master to refresh the ARP cache
    garp_master_refresh 5 # Interval at which the master sends gratuitous ARP messages

    # Monitored interfaces; if any of them fails, enter the FAULT state
    track_interface {
        eth0
    }

    # Check scripts to run
    track_script {
        checkhaproxy
    }
}
Then, restart Keepalived:
$ sudo systemctl restart keepalived
Step 4: Test Keepalived #
In the configuration above, 10.0.4.20 has the higher priority, so under normal circumstances 10.0.4.20 will be elected as the master node, as shown in the following figure:
Next, let’s simulate some failure scenarios and see if the configuration takes effect.
Scenario 1: Keepalived failure
Execute sudo systemctl stop keepalived on the 10.0.4.20 server to simulate a Keepalived failure, then check the VIP, as shown in the following figure:
As you can see, the VIP has migrated from the 10.0.4.20 server to the 10.0.4.21 server. Checking /var/log/keepalived.log, you can see the following line added on the 10.0.4.20 server:
[2020-10-14 14:01:51] notify_stop
And the following log line added on the 10.0.4.21 server:
[2020-10-14 14:01:52] notify_master
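These log lines are written by keepalived_notify.sh, which Keepalived invokes with the new state as an argument (MASTER, BACKUP, FAULT, or STOP). Here is a minimal sketch of that script's logging; the log path is a temp file for the demo, whereas the real script appends to /var/log/keepalived.log and may also manage Nginx.

```shell
# Append a state transition to the log in the same format as above.
LOG=$(mktemp)  # the real script uses /var/log/keepalived.log
notify() {
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] notify_$(echo "$1" | tr '[:upper:]' '[:lower:]')" >> "$LOG"
}
notify MASTER
notify BACKUP
cat "$LOG"
```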
Scenario 2: Nginx Failure
On the 10.0.4.20 and 10.0.4.21 servers, execute sudo systemctl restart keepalived so that the VIP floats back to the 10.0.4.20 server.
Then, on the 10.0.4.20 server, execute sudo systemctl stop nginx to simulate an Nginx failure, and check the VIP, as shown in the image below:
As you can see, the VIP has floated from the 10.0.4.20 server to the 10.0.4.21 server. Checking /var/log/keepalived.log, you can see that the 10.0.4.20 server has added the following log entry:
[2020-10-14 14:02:34] notify_fault
And the 10.0.4.21 server has added the following log entry:
[2020-10-14 14:02:35] notify_master
Scenario 3: Nginx Recovery
Building on Scenario 2, execute sudo systemctl start nginx on the 10.0.4.20 server to recover Nginx, then check the VIP, as shown in the image below:
As you can see, the VIP stays on the 10.0.4.21 server and is not preempted by 10.0.4.20. Checking /var/log/keepalived.log, you can see that the 10.0.4.20 server has added the following log entry:
[2020-10-14 14:03:44] notify_backup
The 10.0.4.21 server has not added any new log entries.
Step 5: Binding the VIP with a Public IP #
By this point, we have successfully configured a high availability solution using Keepalived + Nginx. However, our VIP is internal and cannot be accessed from the external network. In this case, we need to bind the VIP with a public IP to enable external access. In Tencent Cloud, you can achieve this by binding an elastic public IP. First, apply for a public IP and then bind the VIP with the elastic public IP. Let me explain the specific steps.
Applying for a public IP:
- Log in to the Private Network Console.
- In the left navigation pane, choose IP & Network > Elastic IP.
- On the Elastic IP Management page, choose the region and click Apply.
Binding the VIP with an elastic public IP:
- Log in to the Private Network Console.
- In the left navigation pane, choose IP & Network > High Availability Virtual IP.
- Click Bind for the HAVIP that you want to bind.
- In the pop-up window, select the public IP that you want to bind, as shown in the following image:
The bound elastic public IP is 106.52.252.139.
Here’s a reminder: In the Tencent Cloud platform, if the HAVIP is not bound to an instance, the bound EIP will be in idle status and will be charged idle fees at a rate of ¥0.2/hour. Therefore, you need to correctly configure the high availability application to ensure successful binding.
Step 6: Testing Public Network Access #
Finally, you can test by executing the following command:
$ curl -H "Host: iam.api.marmotedu.com" http://106.52.252.139/healthz
{"status":"ok"}
As you can see, we can successfully access the high availability service at the backend via the public network. With this, we have successfully deployed a highly available IAM application.
Summary #
Today, I mainly talked about how to use Nginx and Keepalived to deploy a high availability IAM application.
To deploy a highly available IAM application, we need at least two servers, each running the same services: iam-apiserver, iam-authz-server, and iam-pump. In addition, one of the servers is chosen to host the database services: MariaDB, Redis, and MongoDB.
For security and performance, iam-apiserver, iam-authz-server, and iam-pump access the database services over the internal network. In this lecture, I also introduced how to configure Nginx for load balancing and how to configure Keepalived to make Nginx highly available.
Homework #
- Take some time to think about how to scale the iam-apiserver when needed, considering the current deployment architecture.
- Think about how to implement an alert function to notify system administrators when there is a VIP switch.
Feel free to leave a message in the comment section to discuss and exchange ideas. See you in the next class.