
High Availability with DNS

This document describes the steps to deploy the RKE2 Kubernetes distribution in high availability with DNS-based load balancing.

Prerequisites

Component | Nodes Required         | vCPU | vRAM (GiB) | vDisk (GiB) | Comments
RKE2      | 3 Control Plane nodes  | 2    | 4          | 50          | See RKE2 installation requirements for hardware sizing, the underlying operating system, and the networking requirements.
CX-Core   | 2 Worker nodes         | 2    | 4          | 250         | If cloud-native storage is not available, then 2 additional worker nodes are required: 1 on site-A and 1 on site-B.
Superset  | 1 Worker node          | 2    | 8          | 250         | For reporting

Preparing for Deployment

All control-plane nodes must be ready as per the environment preparation described in RKE2 Control plane Deployment | Environment-Preparation.

Installation and Configuration Steps

Step 1. Set Up DNS Configuration

For DNS-based load balancing, you need a virtual FQDN that resolves to all control-plane nodes. Contact your network administrator to set this up.

  • The DNS server should perform health checks on the control-plane nodes' availability on ports 6443, 9345, 80, and 443. Otherwise, routing to control-plane nodes will have to be managed manually.
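
Once the virtual FQDN is in place, verify that it resolves to every control-plane node. A quick check, assuming rke2.example.com as a placeholder for your virtual FQDN:

BASH
# Should return the IP addresses of all three control-plane nodes
dig +short rke2.example.com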

Step 2. Create the First Control Plane Node

Follow RKE2 Control plane Deployment to create the first control-plane node.

Get the server node token from the first control-plane node. This is required for adding the remaining control-plane and worker nodes.

BASH
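# Print the cluster join token; treat it like a password, since it allows nodes to join the cluster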
cat /var/lib/rancher/rke2/server/node-token

Step 3. Add the Remaining Control Plane Nodes

Before proceeding, make sure your control-plane environment is ready by following RKE2 Control plane Deployment | Environment-Preparation.

  1. Create the directories listed below on the control-plane nodes to be added.

    BASH
    mkdir -p /etc/rancher/rke2/
    mkdir -p /var/lib/rancher/rke2/server/manifests/
  2. Create a configuration file called config.yaml and replace <FQDN> with the FQDN/IP of the first control plane. The tls-san list can also include the FQDNs and IP addresses of the control-plane and worker nodes (see the expanded tls-san example after this list).

    BASH
    cat <<EOF | tee /etc/rancher/rke2/config.yaml
    server: https://<FQDN>:9345
    token: [token from /var/lib/rancher/rke2/server/node-token on server node 1]
    write-kubeconfig-mode: "0644"
    tls-san:
      - <FQDN>
    etcd-expose-metrics: true
    # Take an etcd snapshot every 6 hours
    etcd-snapshot-schedule-cron: "0 */6 * * *"
    # Keep 56 etcd snapshots (equals 2 weeks at 4 snapshots a day)
    etcd-snapshot-retention: 56
    cni:
      - canal
    
    EOF
  3. Ingress-NGINX config for RKE2: by default, the RKE2-based ingress controller does not allow additional snippet information in ingress manifests. Create this config before starting the deployment of RKE2.

    BASH
    cat <<EOF | tee /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
    ---
    apiVersion: helm.cattle.io/v1
    kind: HelmChartConfig
    metadata:
      name: rke2-ingress-nginx
      namespace: kube-system
    spec:
      valuesContent: |-
        controller:
          metrics:
            service:
              annotations:
                prometheus.io/scrape: "true"
                prometheus.io/port: "10254"
          config:
            use-forwarded-headers: "true"
          allowSnippetAnnotations: "true"
    EOF
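
If the cluster certificate should also cover individual nodes, the tls-san list in config.yaml can be extended. A sketch, where every hostname and IP address below is a placeholder for illustration:

CODE
# Example tls-san section in /etc/rancher/rke2/config.yaml
tls-san:
  - <FQDN>                # the virtual FQDN
  - cp-01.example.com     # control-plane hostnames (placeholders)
  - cp-02.example.com
  - cp-03.example.com
  - 10.0.0.11             # control-plane IPs (placeholders)
  - 10.0.0.12
  - 10.0.0.13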

Step 4. Install RKE2 HA with DNS

  1. Begin the RKE2 deployment. Starting the service takes approximately 10-15 minutes, depending on the network connection.

    BASH
    curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
  2. Start the RKE2 service

    BASH
    systemctl start rke2-server
  3. Enable the RKE2 service

    BASH
    systemctl enable rke2-server
  4. By default, RKE2 deploys all the binaries in the /var/lib/rancher/rke2/bin path. Add this path to the system's default PATH for the kubectl utility to work appropriately.

    BASH
    export PATH=$PATH:/var/lib/rancher/rke2/bin
    export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
  5. Append these lines to the current user's .bashrc file.

    BASH
    echo "export PATH=$PATH:/var/lib/rancher/rke2/bin" >> $HOME/.bashrc
    echo "export KUBECONFIG=/etc/rancher/rke2/rke2.yaml"  >> $HOME/.bashrc 

Step 5. Deploy Worker Nodes

Follow the Deployment Prerequisites from RKE2 Control plane Deployment for each worker node before deployment, i.e., disable the firewall on all worker nodes.

On each worker node, make sure a unique hostname is set beforehand.

To set a hostname, run the following command:

CODE
hostnamectl set-hostname <hostname>

To check the hostname, run the following command:

CODE
hostname

Once hostnames are set, run the following commands on each node to deploy the worker nodes.

  1. Run the following command to install the RKE2 agent on the worker.

    CODE
    curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
  2. Enable the rke2-agent service by using the following command.

    CODE
    systemctl enable rke2-agent.service
  3. Create the required directory by running the following command.

    CODE
    mkdir -p /etc/rancher/rke2/
  4. Add/edit /etc/rancher/rke2/config.yaml and update the following fields.

    1. <Control-Plane-IP>: the IP of the first control-plane node.

    2. <Control-Plane-TOKEN>: the token obtained in Step 2.

      CODE
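      # /etc/rancher/rke2/config.yaml on each worker node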
      server: https://<Control-Plane-IP>:9345
      token: <Control-Plane-TOKEN>
  5. Start the service by using the following command.

    CODE
    systemctl start rke2-agent.service
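
After the agent service starts, you can confirm from any control-plane node that the workers have joined. A sketch, where worker-01 is a placeholder node name:

BASH
# Worker nodes should appear and reach the Ready state
kubectl get nodes
# Optionally label a worker so it reports a worker role
kubectl label node worker-01 node-role.kubernetes.io/worker=true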

Next Steps

  1. Choose storage - See Storage Solution - Getting Started

  2. CX-Core deployment on Kubernetes



