RKE2 Control Plane Deployment

This guide covers prerequisites and steps to install RKE2 for a single node setup of Expertflow CX.

Checklist

The list of RKE2 supported operating systems is updated frequently. To verify whether your OS is supported, visit RKE2 supported OS.

  • Is Internet access available on the target node? If not, see RKE2 Air-Gapped install of RKE2 for offline deployment.
  • The node is running an RKE2 supported OS. See RKE2 installation requirements.
  • The FQDN must resolve to the underlying node IP.
  • NTP must be enabled on all nodes, and all nodes must be connected to the same NTP server.
  • FirewallD and nm-cloud-setup must be disabled.
  • The pod and service CIDR ranges must not overlap with any IP range already in use in your environment.
  • Virtual IP: an IP from the same subnet as the control plane nodes is needed for VIP failover (required for multi control plane deployments only).
  • If your environment restricts external connectivity through an HTTP proxy, follow Configure an HTTP proxy to configure your proxy as per RKE2 recommendations.
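
You can spot-check several of these prerequisites from the shell before proceeding. The following is a minimal verification sketch; it assumes a systemd-based host, and cx.example.com is a placeholder that you should replace with your actual FQDN.

BASH
# Confirm the FQDN resolves to this node's IP (replace the placeholder)
getent hosts cx.example.com

# Confirm NTP synchronization is active
timedatectl status

# Confirm FirewallD and nm-cloud-setup are not running
systemctl is-active firewalld nm-cloud-setup.service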

Environment Preparation

For Ubuntu-based deployments

  1. Disable the firewall

    BASH
    systemctl disable firewalld --now
    systemctl disable apparmor.service
    systemctl disable ufw --now
    reboot

For RHEL

  1. Disable the firewall

    BASH
    systemctl disable firewalld --now
    systemctl disable nm-cloud-setup.service nm-cloud-setup.timer
    systemctl disable apparmor.service
    reboot
  2. Lock the release to the supported version of RHEL

    BASH
    # The following command set assumes that the supported RHEL version is 8.5
    subscription-manager release --set=8.5; 
    yum clean all;
    subscription-manager release --show;
    rm -rf /var/cache/dnf
  2. Update the RHEL packages for the supported release

    BASH
    yum update -y

Installation Steps

This step is required for the Nginx Ingress controller to allow customized configurations.

Step 1. Create Manifests

  1. Create necessary directories for RKE2 deployment

BASH
mkdir -p /etc/rancher/rke2/
mkdir -p  /var/lib/rancher/rke2/server/manifests/
  2. Generate the ingress-nginx controller config file so that the RKE2 server bootstraps it accordingly.

BASH
cat <<EOF | tee /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      metrics:
        service:
          annotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "10254"
      config:
        use-forwarded-headers: "true"
      allowSnippetAnnotations: "true"
EOF
  3. Create a deployment manifest called config.yaml for the RKE2 cluster, and replace the IP addresses and corresponding FQDNs accordingly (add any other fields from the Extra Options sections to config.yaml at this point). If you are deploying worker HA, uncomment the lines that disable the rke2 ingress.

CODE
cat <<EOF | tee /etc/rancher/rke2/config.yaml
# Uncomment for control-plane HA: tls-san and its child entry <FQDN>
#tls-san:
#  - <FQDN>
write-kubeconfig-mode: "0644"
etcd-expose-metrics: true
etcd-snapshot-schedule-cron: "0 */6 * * *"
# Keep 56 etcd snapshots (2 weeks at 4 snapshots per day, one every 6 hours)
etcd-snapshot-retention: 56
cni:
  - canal
# Uncomment for worker HA deployment
#disable:
#  - rke2-ingress-nginx
EOF

In the above template manifest:

  • <FQDN> must point to the first control plane node.
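
For a multi control plane (HA) deployment, the tls-san block is uncommented and filled in. A minimal sketch, assuming a hypothetical FQDN rke2.example.com and a hypothetical virtual IP 192.168.1.100:

CODE
tls-san:
  - rke2.example.com
  - 192.168.1.100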

Step 2. Download the RKE2 binaries and start Installation

The following are some defaults that RKE2 uses during installation. You may change them as needed by specifying the switches listed below.

| Purpose | Switch | Default | Description |
| --- | --- | --- | --- |
| Change the default deployment directory of RKE2 | --data-dir value, -d value | /var/lib/rancher/rke2, or ${HOME}/.rancher/rke2 if not root | Changes where RKE2 stores its data. See the important note below. |
| Default pod IP assignment range | --cluster-cidr value | "10.42.0.0/16" | IPv4/IPv6 network CIDRs to use for pod IPs |
| Default service IP assignment range | --service-cidr value | "10.43.0.0/16" | IPv4/IPv6 network CIDRs to use for service IPs |

Important note: moving the default destination folder to another location is not recommended. However, if the containers need to be stored on a different partition, it is recommended to deploy containerd separately and change its destination to the partition with available space using the --root flag in the containerd service manifest, and subsequently add the container-runtime-endpoint: "/path/to/containerd.sock" switch to the RKE2 config.yaml file.

cluster-cidr and service-cidr are evaluated independently. Choose these ranges carefully before deploying the cluster; they are not configurable once the cluster is deployed and workloads are running.
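
If you need to override these defaults, the corresponding keys can be set in config.yaml before rke2-server is started for the first time. A minimal sketch; the CIDR values below are illustrative only and must be adjusted so they do not collide with ranges already in use on your network:

CODE
# Example overrides in /etc/rancher/rke2/config.yaml (illustrative values)
cluster-cidr: "10.52.0.0/16"
service-cidr: "10.53.0.0/16"
# Only if containerd is deployed separately, as described in the note above:
#container-runtime-endpoint: "/path/to/containerd.sock"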

  1. Run the following command to install RKE2.

BASH
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
  2. Enable the rke2-server service

BASH
systemctl enable rke2-server.service
  3. Start the service

BASH
systemctl start rke2-server.service

The RKE2 server requires at least 10-15 minutes to bootstrap completely. You can check the status of the RKE2 server using systemctl status rke2-server. Only proceed once everything is up and running; otherwise configuration issues may occur, requiring a redo of all the installation steps.
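
To confirm that bootstrapping has finished before moving on, the following checks can help; this is a minimal sketch, and the kubectl invocation assumes the default paths used elsewhere in this guide:

BASH
# Follow the service logs until the bootstrap settles
journalctl -u rke2-server -f

# Once the service is active, the node should report Ready
/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes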

Step 3. Kubectl Profile setup

By default, RKE2 deploys all the binaries in the /var/lib/rancher/rke2/bin path. Add this path to the system's default PATH for the kubectl utility to work properly.

BASH
echo "export PATH=$PATH:/var/lib/rancher/rke2/bin" >> $HOME/.bashrc
echo "export KUBECONFIG=/etc/rancher/rke2/rke2.yaml"  >> $HOME/.bashrc
source ~/.bashrc
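
After reloading the profile, kubectl should be able to reach the cluster; a quick sanity check:

BASH
kubectl get nodes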

Step 4. Bash Completion for kubectl

  1. Install the bash-completion package

BASH
# On Ubuntu
apt install bash-completion -y
# On RHEL
yum install bash-completion -y
  2. Set up bash autocompletion in the current shell, and add an alias for the short notation of kubectl

BASH
kubectl completion bash > /etc/bash_completion.d/kubectl
echo "alias k=kubectl"  >> ~/.bashrc 
echo "complete -o default -F __start_kubectl k"  >> ~/.bashrc 
source ~/.bashrc

Step 5. Install helm

  1. Helm is the tool used to deploy external components. To install Helm on the cluster node, execute the following command:

BASH
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
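
Once the script completes, you can confirm that Helm is installed and on the PATH:

BASH
helm version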

Step 6. Enable bash completion for helm

  1. Generate the script for helm bash completion

BASH
helm completion bash > /etc/bash_completion.d/helm

Create a symlink for crictl to work properly:

CODE
ln -s /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml
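
With the link in place, crictl should be able to talk to the RKE2 containerd socket. A quick check (assumes crictl is on your PATH, e.g. via the /var/lib/rancher/rke2/bin export from Step 3):

CODE
crictl ps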
