
RKE2 Control Plane Deployment

This guide covers prerequisites and steps to install RKE2 on the control plane of Expertflow CX.

Checklist

The list of operating systems supported by RKE2 is updated frequently. To verify whether your OS is supported, visit RKE2 supported OS

Environment Preparation

For Ubuntu based deployments

  1. Disable the firewall and AppArmor

    BASH
    systemctl disable firewalld --now
    systemctl disable apparmor.service
    systemctl disable ufw --now
    reboot
  2. Update the OS packages to the latest revision (Optional)

    BASH
    apt update
    apt upgrade -y

For RHEL

  1. Disable the firewall

    BASH
    systemctl disable firewalld --now
    systemctl disable nm-cloud-setup.service nm-cloud-setup.timer
    systemctl disable apparmor.service
    reboot
  2. Lock the release to the supported version of RHEL

    BASH
    # The following command set assumes that the supported RHEL version is 8.5
    subscription-manager release --set=8.5; 
    yum clean all;
    subscription-manager release --show;
    rm -rf /var/cache/dnf
  3. Update the RHEL packages for supported release (Optional)

    BASH
    yum update -y
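After the reboot, you can confirm that the firewall and AppArmor units are actually disabled before proceeding. A minimal sketch (the `is_disabled` helper is local to this snippet, not part of any tool; "not-found" is also acceptable, since a given unit may not exist on your OS):

```shell
#!/bin/sh
# Post-prep check: verify firewall/AppArmor units are no longer enabled.
is_disabled() {
  case "$1" in
    disabled|masked|not-found) return 0 ;;
    *) return 1 ;;
  esac
}

for unit in firewalld ufw apparmor; do
  # "systemctl is-enabled" prints the state (e.g. enabled, disabled, masked)
  state=$(systemctl is-enabled "$unit" 2>/dev/null) || true
  [ -n "$state" ] || state=not-found
  if is_disabled "$state"; then
    echo "OK: $unit ($state)"
  else
    echo "WARN: $unit is still $state"
  fi
done
```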

Set Hostname

The hostname must be set before moving forward. If you are deploying a multi-node cluster, each node must have a unique hostname.

To set a hostname, run the following command:

BASH
hostnamectl set-hostname <hostname>

To check the hostname, run the following command:

BASH
hostname
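For multi-node clusters, a simple naming scheme keeps hostnames unique and predictable. The helper below is purely illustrative (`make_hostname` is not part of any tool; it only composes a name from a role and an index):

```shell
#!/bin/sh
# Hypothetical helper: compose a unique, consistent hostname per node.
make_hostname() {
  # $1 = role (e.g. cp or worker), $2 = node index
  printf '%s-%02d' "$1" "$2"
}

# Example usage (requires root):
#   hostnamectl set-hostname "$(make_hostname cp 1)"      # cp-01
#   hostnamectl set-hostname "$(make_hostname worker 2)"  # worker-02
```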

Installation Steps

This step is required so that the Nginx Ingress controller allows customized configurations.

Step 1. Create Manifests

  1. Create necessary directories for RKE2 deployment

BASH
mkdir -p /etc/rancher/rke2/
mkdir -p  /var/lib/rancher/rke2/server/manifests/
  2. Generate the ingress-nginx controller config file so that the RKE2 server bootstraps it accordingly.

BASH
cat<<EOF| tee /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      extraInitContainers:
        - name: ef-set-sysctl
          image: busybox
          securityContext:
            privileged: true
          command:
          - sh
          - -c
          - |
            sysctl -w net.core.somaxconn=65535
            sysctl -w net.ipv4.ip_local_port_range="1024 65535"
      metrics:
        # Do not enable at cluster install; enable when monitoring is deployed.
        enabled: false
        service:
          annotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "10254"
        serviceMonitor:
           # Do not enable at cluster install; enable when monitoring is deployed.
           enabled: false
      config:
        use-forwarded-headers: "true"
        keep-alive-requests: "10000"
        upstream-keepalive-requests: "1000"
        worker-processes: "auto"
        max-worker-connections: "65535"
        use-gzip: "true"
      allowSnippetAnnotations: "true"
EOF
  3. Create a deployment manifest called config.yaml for the RKE2 cluster, and replace the IP addresses and corresponding FQDNs accordingly. (Add any other fields from the Extra Options sections to config.yaml at this point.) Each child entry of tls-san can contain the FQDNs and IP addresses of control-plane nodes as well as worker nodes. If you are deploying worker HA, uncomment the lines that disable the RKE2 ingress.

BASH
cat<<EOF|tee /etc/rancher/rke2/config.yaml
#Uncomment for Control-Plane HA: tls-san and its child entry <FQDN>
#tls-san:
#  - <FQDN>
write-kubeconfig-mode: "0644"
etcd-expose-metrics: true
etcd-snapshot-schedule-cron: "0 */6 * * *"
# Keep 56 etcd snapshots (two weeks at 4 snapshots per day)
etcd-snapshot-retention: 56
cni:
  - canal
#Uncomment for Worker HA Deployment
#disable: 
#  - rke2-ingress-nginx
#Uncomment the following to retain logs for any component without integrating with the ELK stack
#kubelet-arg:                               
#  - "container-log-max-files=5"            
#  - "container-log-max-size=10Mi"
  
  
EOF

In the template manifest above,

  • <FQDN> must point to the first control plane node
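Before starting the RKE2 service, it is worth confirming that both files created above exist and are non-empty. A quick pre-flight sketch (`check_manifest` is a local helper defined here, not an RKE2 command):

```shell
#!/bin/sh
# Pre-flight check: both manifests from the steps above should be non-empty.
check_manifest() {
  [ -s "$1" ]   # true if the file exists and has a size greater than zero
}

for f in /etc/rancher/rke2/config.yaml \
         /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml; do
  if check_manifest "$f"; then
    echo "OK: $f"
  else
    echo "MISSING or empty: $f"
  fi
done
```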

Step 2. Download the RKE2 binaries and start Installation

The following are some defaults that RKE2 uses during installation. You can change them by specifying the switches listed below.

  • RKE2 deployment directory
    Switch: --data-dir value, -d value
    Default: /var/lib/rancher/rke2, or ${HOME}/.rancher/rke2 if not running as root

    Important Note: Moving the default destination folder to another location is not recommended. However, if the containers must be stored on a different partition, it is recommended to deploy containerd separately, change its destination to the partition with available space using the --root flag in the containerd service manifest, and then add the container-runtime-endpoint: "/path/to/containerd.sock" switch in the RKE2 config.yaml file.

  • Default pod IP assignment range
    Switch: --cluster-cidr value
    Default: "10.42.0.0/16"
    Description: IPv4/IPv6 network CIDRs to use for pod IPs

  • Default service IP assignment range
    Switch: --service-cidr value
    Default: "10.43.0.0/16"
    Description: IPv4/IPv6 network CIDRs to use for service IPs

cluster-cidr and service-cidr are evaluated independently. Decide on them carefully before the cluster deployment; they are not configurable once the cluster is deployed and workloads are running.
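If the default ranges would collide with your existing network, they can be overridden in /etc/rancher/rke2/config.yaml before the first start of the rke2-server service. The ranges below are placeholder examples, not recommendations:

```yaml
# /etc/rancher/rke2/config.yaml -- must be set BEFORE the first start
cluster-cidr: "10.100.0.0/16"   # example alternative range for pod IPs
service-cidr: "10.101.0.0/16"   # example alternative range for service IPs
```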

  1. Run the following command to install RKE2.

BASH
curl -sfL https://get.rke2.io |INSTALL_RKE2_TYPE=server  sh - 
  2. Enable the rke2-server service

BASH
systemctl enable rke2-server.service
  3. Start the service

BASH
systemctl start rke2-server.service

The RKE2 server requires at least 10-15 minutes to bootstrap completely. You can check the status of the RKE2 server using systemctl status rke2-server. Proceed only once everything is up and running; otherwise, configuration issues might occur that require redoing all the installation steps.
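Rather than guessing when the bootstrap has finished, you can poll until the node reports Ready. A sketch under the assumption that kubectl and the kubeconfig live at the default RKE2 paths (adjust the timeout to taste):

```shell
#!/bin/sh
# Poll the cluster until this node reports Ready, up to a timeout.
wait_for_node() {
  timeout=${1:-900}   # seconds; default 15 minutes
  elapsed=0
  kubectl=/var/lib/rancher/rke2/bin/kubectl
  export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
  while [ "$elapsed" -lt "$timeout" ]; do
    if "$kubectl" get nodes 2>/dev/null | grep -q ' Ready'; then
      echo "node is Ready after ${elapsed}s"
      return 0
    fi
    sleep 15
    elapsed=$((elapsed + 15))
  done
  echo "timed out after ${timeout}s waiting for the node" >&2
  return 1
}

# wait_for_node 1200   # wait up to 20 minutes
```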

Step 3. Kubectl Profile setup

By default, RKE2 deploys all the binaries in the /var/lib/rancher/rke2/bin path. Add this path to the system's default PATH for the kubectl utility to work appropriately.

BASH
echo "export PATH=$PATH:/var/lib/rancher/rke2/bin" >> $HOME/.bashrc
echo "export KUBECONFIG=/etc/rancher/rke2/rke2.yaml"  >> $HOME/.bashrc
source ~/.bashrc
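You can verify the new shell profile in one step: kubectl must be resolvable on PATH and the kubeconfig must be readable. A small local helper (`check_kubectl_env` is defined here for convenience, not part of RKE2):

```shell
#!/bin/sh
# Sanity check for the profile set up above.
check_kubectl_env() {
  command -v kubectl >/dev/null 2>&1 || { echo "kubectl not found on PATH"; return 1; }
  [ -r "${KUBECONFIG:-}" ] || { echo "kubeconfig not readable: ${KUBECONFIG:-unset}"; return 1; }
  echo "kubectl and kubeconfig look good"
}

# check_kubectl_env && kubectl get nodes
```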

Step 4. Bash Completion for kubectl

  1. Install bash-completion package

For Ubuntu:
BASH
apt install bash-completion -y   
For RHEL:
BASH
yum install bash-completion -y
  2. Set up autocompletion in the current shell, and add an alias for the short notation of kubectl

BASH
kubectl completion bash > /etc/bash_completion.d/kubectl
echo "alias k=kubectl"  >> ~/.bashrc 
echo "complete -o default -F __start_kubectl k"  >> ~/.bashrc 
source ~/.bashrc

Step 5. Install helm

  1. Helm is the package manager for Kubernetes and is used to deploy external components. To install helm on the cluster, execute the following command:

BASH
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3|bash

If the above command does not work, use the following commands instead:

For Ubuntu:
BASH
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
apt-get update
apt-get install helm
For RHEL:
BASH
curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm
chmod +x /usr/local/bin/helm
helm version

Step 6. Enable bash completion for helm

  1. Generate the script for helm bash completion

BASH
helm completion bash > /etc/bash_completion.d/helm

Create a link for crictl to work properly.

BASH
ln -s /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml
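If this guide is re-run, the plain ln -s above fails when the link already exists. An idempotent variant, sketched as a small local helper (`link_config` is not part of RKE2):

```shell
#!/bin/sh
# Create the symlink only if the destination does not already exist.
link_config() {
  # $1 = source file, $2 = destination path
  [ -e "$2" ] || ln -s "$1" "$2"
}

# link_config /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml
```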