
High Availability using External Load Balancer

You may use any external load balancer to provide high availability for your Kubernetes cluster. This document covers the configuration of NGINX as an external load balancer for an RKE2 Kubernetes cluster.

Prerequisites

Type          | RAM (GiB) | vCPU | Disk (GiB) | Scalability | Network Ports                                       | Minimum Nodes
Load Balancer | 2         | 1    | 100        | Single-Node | 6443, 9345, 80, 443 open to all CP and worker nodes | 1

A load balancer deployed without HA is itself a single point of failure in the cluster setup, so customers are required to run the load balancer in a failover cluster for production deployments.
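
One common way to remove this single point of failure (an illustrative sketch, not a configuration mandated by this document) is to run two load balancer hosts that share a virtual IP via VRRP using keepalived. The interface name, router ID, and addresses below are assumptions:

CODE
# /etc/keepalived/keepalived.conf on the primary load balancer (illustrative)
vrrp_instance LB_VIP {
    state MASTER              # use BACKUP with a lower priority on the standby host
    interface eth0            # assumption: the NIC carrying cluster traffic
    virtual_router_id 51      # must match on both load balancer hosts
    priority 100
    advert_int 1
    virtual_ipaddress {
        1.1.1.1/24            # the virtual IP that the FQDN resolves to
    }
}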

Configuration Steps

1. DNS configuration

In your DNS, map your FQDN to an A record or a CNAME record pointing to the load balancer's IP address or hostname. A sample record is shown below; a sample configuration for NGINX as an ELB is given in the next step.
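
For illustration, a BIND-style A record using the example values from later in this document (substitute your own zone and IP) would be:

CODE
cx.customer-x.com.    IN    A    1.1.1.1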

2. Deploy an ELB
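
Below is a minimal sketch of NGINX acting as a TCP (layer 4) load balancer in front of the control plane. The upstream IP addresses are placeholders for your control plane nodes, and NGINX is assumed to be built with the stream module; similar server blocks can be added for ports 80 and 443 to forward ingress traffic to the worker nodes.

CODE
# /etc/nginx/nginx.conf (stream section) -- placeholder upstream IPs
stream {
    upstream rke2_api {
        # Kubernetes API server on every control plane node
        server 10.0.0.11:6443;
        server 10.0.0.12:6443;
        server 10.0.0.13:6443;
    }
    upstream rke2_supervisor {
        # RKE2 supervisor/registration port on every control plane node
        server 10.0.0.11:9345;
        server 10.0.0.12:9345;
        server 10.0.0.13:9345;
    }
    server {
        listen 6443;
        proxy_pass rke2_api;
    }
    server {
        listen 9345;
        proxy_pass rke2_supervisor;
    }
}

After updating the configuration, reload NGINX and verify that ports 6443 and 9345 are reachable from the node network.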


3. Deploy the cluster in HA using the load balancer

3.1. Deploy RKE2 on the first control plane node

  1. Create a configuration file called config.yaml for the RKE2 cluster and replace the IP address and FQDN with your own values.

This example assumes that the load balancer is running on 1.1.1.1 with the FQDN cx.customer-x.com.

YAML
# Create the configuration directory first, since RKE2 is not installed yet
mkdir -p /etc/rancher/rke2
cat <<EOF | tee /etc/rancher/rke2/config.yaml
tls-san:
  - cx.customer-x.com
  - 1.1.1.1
write-kubeconfig-mode: "0600"
etcd-expose-metrics: true
cni:
  - canal
EOF
  2. For the first control plane setup, install RKE2 following RKE2 Control-plane Deployment (a minimal sketch of the server installation commands is given after this list).

  3. Retrieve the joining token from the first control plane node; you will need it to install the remaining control plane nodes.

    BASH
    cat /var/lib/rancher/rke2/server/node-token
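
For reference, a minimal sketch of the server installation on the first node (assuming the defaults of the linked RKE2 Control-plane Deployment guide, which remains the authoritative reference):

BASH
# Install RKE2 in server (control plane) mode
curl -sfL https://get.rke2.io | sh -
# Enable and start the rke2-server service
systemctl enable rke2-server.service
systemctl start rke2-server.service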

3.2. Deploy RKE2 on the remaining control plane nodes

  1. Create a configuration file called config.yaml.

This example again assumes that the load balancer is running on 1.1.1.1 with the FQDN cx.customer-x.com.

YAML
mkdir -p /etc/rancher/rke2
cat <<EOF | tee /etc/rancher/rke2/config.yaml
server: https://1.1.1.1:9345
# specify the token retrieved in the first control plane deployment
token: [token-string]
tls-san:
  - cx.customer-x.com
write-kubeconfig-mode: "0644"
etcd-expose-metrics: true
cni:
  - canal
EOF
  2. Install RKE2 following RKE2 Control-plane Deployment on all the remaining control plane nodes; the installation commands are the same as for the first node.

4. Deploy Worker Nodes

On each worker node:

  1. Run the following command to install the RKE2 agent on the worker node.

    BASH
    curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
  2. Enable the rke2-agent service by using the following command.

    BASH
    systemctl enable rke2-agent.service
  3. Create the configuration directory by running the following command.

    BASH
    mkdir -p /etc/rancher/rke2/
  4. Add/edit /etc/rancher/rke2/config.yaml and update the following fields (a heredoc sketch for creating this file is shown after this list).

    1. <Control-Plane-IP>: the IP address of the first control plane node.

    2. <Control-Plane-TOKEN>: the token retrieved from the first control plane by running cat /var/lib/rancher/rke2/server/node-token.

      YAML
      server: https://<Control-Plane-IP>:9345
      token: <Control-Plane-TOKEN>
      tls-san:
        - <FQDN>
      write-kubeconfig-mode: "0644"
      etcd-expose-metrics: true
  5. Start the service by using the following command.

    BASH
    systemctl start rke2-agent.service
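
As with the control plane nodes, the file from step 4 can be written with a heredoc (a sketch; substitute the placeholder values with your own):

BASH
cat <<EOF | tee /etc/rancher/rke2/config.yaml
server: https://<Control-Plane-IP>:9345
token: <Control-Plane-TOKEN>
tls-san:
  - <FQDN>
write-kubeconfig-mode: "0644"
etcd-expose-metrics: true
EOF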

At this point, you have an RKE2 Kubernetes cluster running behind the load balancer.

5. Verify the cluster setup

On the control plane node, run the following command to verify that the worker node(s) have been added.

CODE
kubectl get nodes -o wide
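
To confirm that the API server is also reachable through the load balancer, you can point a copy of the kubeconfig at the FQDN (a sketch; by default /etc/rancher/rke2/rke2.yaml targets https://127.0.0.1:6443, and cx.customer-x.com is the example FQDN used above):

BASH
# Rewrite the kubeconfig server address to the load balancer FQDN
sed 's/127.0.0.1/cx.customer-x.com/' /etc/rancher/rke2/rke2.yaml > /tmp/lb-kubeconfig.yaml
kubectl --kubeconfig /tmp/lb-kubeconfig.yaml get nodes -o wide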

Next Steps

  1. Choose and install a cloud-native storage solution. See Storage Solution - Getting Started for guidance on selecting the right storage for your deployment.

  2. Deploy Expertflow CX by following Expertflow CX Deployment on Kubernetes.




