
High Availability using External Load Balancer

You may use any external load balancer for high availability of your Kubernetes cluster. This document covers the configuration of NGINX as an external load balancer for an RKE2 Kubernetes cluster.

Deployment Prerequisites

| Component | Nodes Required | vCPU | vRAM (GiB) | vDisk (GiB) | Comments |
| --- | --- | --- | --- | --- | --- |
| RKE2 | 3 control plane nodes | 2 | 4 | 50 | See RKE2 installation requirements for hardware sizing, the underlying operating system, and the networking requirements. |
| CX-Core | 2 worker nodes | 2 | 4 | 250 | If Cloud Native Storage is not available, then 2 worker nodes are required on both site-A and site-B. However, if Cloud Native Storage is accessible from both sites, 1 worker node can sustain the workload on each site. |
| Superset | 1 worker node | 2 | 8 | 250 | For reporting. |

Load Balancer Hardware Requirements

| Type | RAM (GiB) | vCPU | Disk (GiB) | Scalability | Network Ports | Minimum Nodes |
| --- | --- | --- | --- | --- | --- | --- |
| Load Balancer | 2 | 1 | 100 | Single-node | 6443, 9345, 80, 443 open to all control plane and worker nodes | 1 |

A load balancer without HA is a single point of failure in the cluster setup; customers are required to run the load balancer itself in a failover cluster.
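One common failover pattern is an active/passive pair of NGINX nodes sharing a virtual IP through keepalived. The following is a minimal sketch only; the interface name eth0, the virtual_router_id 51, and the VIP 1.1.1.1 are illustrative placeholders, not values mandated by this guide:

CODE
# /etc/keepalived/keepalived.conf on the primary load balancer node
vrrp_instance LB_VIP {
    state MASTER          # use "state BACKUP" and a lower priority on the standby node
    interface eth0        # network interface that carries the VIP
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        1.1.1.1/24        # the load balancer IP that your DNS record points to
    }
}

If the primary node fails, keepalived moves the VIP to the standby node, so the DNS record and the cluster configuration remain unchanged.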

Configuration Steps

1. DNS configuration

In your DNS, map your FQDN to an A record or a CNAME pointing to the load balancer IP or hostname.
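For example, with the FQDN and load balancer IP used throughout this guide (cx.customer-x.com and 1.1.1.1), a BIND-style zone entry would look like one of the following; adapt the syntax to your DNS provider (lb.customer-x.com is a hypothetical load balancer hostname):

CODE
; A record pointing directly at the load balancer IP
cx.customer-x.com.    IN    A        1.1.1.1

; or a CNAME pointing at the load balancer hostname
cx.customer-x.com.    IN    CNAME    lb.customer-x.com.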

2. Deploy an ELB
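Given below is a sample NGINX configuration as an ELB, doing plain TCP (layer 4) passthrough for the Kubernetes API (6443), the RKE2 supervisor (9345), and ingress traffic (80/443). This is a sketch under assumptions: the control plane IPs 10.0.0.1-10.0.0.3 and worker IPs 10.0.0.4-10.0.0.5 are placeholders, and the stream module must be available in your NGINX build (on some distributions it is packaged separately, e.g. nginx-mod-stream):

NGINX
# /etc/nginx/nginx.conf -- TCP passthrough for an RKE2 cluster
worker_processes 4;

events {
    worker_connections 8192;
}

stream {
    upstream rke2_api {
        # control plane nodes (placeholder IPs -- replace with your own)
        server 10.0.0.1:6443 max_fails=3 fail_timeout=5s;
        server 10.0.0.2:6443 max_fails=3 fail_timeout=5s;
        server 10.0.0.3:6443 max_fails=3 fail_timeout=5s;
    }
    upstream rke2_supervisor {
        server 10.0.0.1:9345;
        server 10.0.0.2:9345;
        server 10.0.0.3:9345;
    }
    upstream ingress_http {
        # nodes running the ingress controller (placeholder IPs)
        server 10.0.0.4:80;
        server 10.0.0.5:80;
    }
    upstream ingress_https {
        server 10.0.0.4:443;
        server 10.0.0.5:443;
    }

    server { listen 6443; proxy_pass rke2_api; }
    server { listen 9345; proxy_pass rke2_supervisor; }
    server { listen 80;   proxy_pass ingress_http; }
    server { listen 443;  proxy_pass ingress_https; }
}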


3. Deploy the cluster in HA using the load balancer

3.1. Deploy RKE2 on the first control plane node

  1. Create a deployment manifest called config.yaml for the RKE2 cluster, replacing the IP address and FQDN with your own values.

This assumes that the load balancer is running at 1.1.1.1 with the FQDN cx.customer-x.com.

YAML
# create the config directory if it does not already exist
mkdir -p /etc/rancher/rke2
cat <<EOF | tee /etc/rancher/rke2/config.yaml
tls-san:
  - cx.customer-x.com
  - 1.1.1.1
write-kubeconfig-mode: "0600"
etcd-expose-metrics: true
cni:
  - canal
EOF
  2. For the first control plane node, install RKE2 following RKE2 Control Plane Deployment. (A sketch of the standard install commands is given after this list.)

  3. Retrieve the join token from this control plane node; you will need it to install the remaining control plane nodes.

    BASH
    cat /var/lib/rancher/rke2/server/node-token
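The linked deployment guide is authoritative; as a quick sketch, the standard RKE2 server installation on the first node looks like this, assuming internet access to get.rke2.io:

BASH
# install the rke2-server service using the standard installer
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="server" sh -
# start RKE2; it picks up /etc/rancher/rke2/config.yaml created above
systemctl enable --now rke2-server.service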

3.2. Deploy RKE2 on the remaining control plane nodes

  1. Create a deployment manifest called config.yaml on each remaining control plane node.

This assumes that the load balancer is running at 1.1.1.1 with the FQDN cx.customer-x.com.

YAML
mkdir -p /etc/rancher/rke2
cat <<EOF | tee /etc/rancher/rke2/config.yaml
server: https://1.1.1.1:9345
# specify the token as retrieved in the first control plane deployment
token: [token-string]
tls-san:
  - cx.customer-x.com
write-kubeconfig-mode: "0644"
etcd-expose-metrics: true
cni:
  - canal
EOF
  2. Install RKE2 following RKE2 Control Plane Deployment on all the remaining control plane nodes, using the same install commands as sketched in step 3.1.

4. Deploy Worker Nodes
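Join the worker nodes to the cluster through the load balancer as well. A minimal sketch follows, assuming the same FQDN and the token retrieved in step 3.1 (the [token-string] placeholder is kept from above):

BASH
# on each worker node: point the agent at the load balancer
mkdir -p /etc/rancher/rke2
cat <<EOF | tee /etc/rancher/rke2/config.yaml
server: https://cx.customer-x.com:9345
token: [token-string]
EOF

# install and start the rke2-agent service
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
systemctl enable --now rke2-agent.service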

At this point, you have an RKE2 Kubernetes cluster running in HA behind the load balancer.

5. Verify the cluster setup

On a control plane node, run the following command to verify that the workers have been added:

CODE
kubectl get nodes -o wide
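All control plane and worker nodes should report a Ready status. The output below is illustrative only; node names, ages, and versions are hypothetical, and the extra columns printed by -o wide (internal IPs, OS image, and so on) are omitted for brevity:

CODE
NAME       STATUS   ROLES                       AGE   VERSION
cp-1       Ready    control-plane,etcd,master   25m   v1.28.9+rke2r1
cp-2       Ready    control-plane,etcd,master   20m   v1.28.9+rke2r1
cp-3       Ready    control-plane,etcd,master   18m   v1.28.9+rke2r1
worker-1   Ready    <none>                      8m    v1.28.9+rke2r1
worker-2   Ready    <none>                      7m    v1.28.9+rke2r1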

Next Steps

  1. Choose and install a cloud native storage solution. See Storage Solution - Getting Started for choosing the right storage solution for your deployment.

  2. Deploy Expertflow CX following Expertflow CX Deployment on Kubernetes.




