RKE2 Control-plane Deployment
This guide covers the prerequisites and steps to install RKE2 for a single-node setup of Expertflow CX.
Environment Preparation
For Ubuntu-based deployments
Disable the firewall:
systemctl disable firewalld --now
systemctl disable apparmor.service
sudo ufw disable
reboot
For RHEL
Disable the firewall:
systemctl disable firewalld --now
systemctl disable nm-cloud-setup.service nm-cloud-setup.timer
systemctl disable apparmor.service
reboot
Lock the release to the supported version of RHEL
# The following commands assume that the supported RHEL version is 8.5
subscription-manager release --set=8.5
yum clean all
subscription-manager release --show
rm -rf /var/cache/dnf
Update the RHEL packages for the supported release
yum update -y
Checklist
The RKE2 supported OS list is subject to frequent updates; to verify whether your OS is supported, visit this link: RKE2 supported OS
- Is Internet access available on the target node? If not available, check RKE2 Air-Gapped install of RKE2 for offline deployment.
- The node is running RKE2 supported OS. RKE2 installation requirements
- FQDN should be mapped to the underlying node IP.
- NTP should be enabled for all nodes and connected to the same NTP server.
- FirewallD and nm-cloud-setup must be disabled.
- The pod and service CIDR ranges must not overlap with any IP range already in use on your network.
- Virtual IP: an IP from the same subnet as the control-plane nodes is needed for VIP failover (required for multi control-plane deployments only).
- If your environment restricts external connectivity through an HTTP proxy, follow Configure an HTTP proxy to configure your proxy as per RKE2 recommendations.
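Several of these checklist items can be verified from a shell before proceeding. A minimal sketch, assuming a systemd-based host; the tools used (systemctl, timedatectl, getent) are assumptions about your environment and each check is skipped if the tool is missing:

```shell
#!/usr/bin/env bash
# Preflight sketch for the checklist above.

have()  { command -v "$1" >/dev/null 2>&1; }   # is a tool installed?

check() {  # check <description> <command...> -> prints PASS/FAIL
  if "${@:2}" >/dev/null 2>&1; then echo "PASS: $1"; else echo "FAIL: $1"; fi
}

if have systemctl; then
  check "firewalld disabled" bash -c '! systemctl is-enabled --quiet firewalld'
fi
if have timedatectl; then
  check "NTP synchronized" bash -c 'timedatectl show -p NTPSynchronized --value | grep -qx yes'
fi
if have getent; then
  check "FQDN resolves" getent hosts "$(hostname -f)"
fi
```

Any FAIL line should be resolved before moving on to the installation steps.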
Installation Steps
This step is required for the Nginx Ingress controller to allow customized configurations.
Step 1. Create Manifests
Create necessary directories for RKE2 deployment
mkdir -p /etc/rancher/rke2/
mkdir -p /var/lib/rancher/rke2/server/manifests/
Generate the ingress-nginx controller config file so that the RKE2 server bootstraps it accordingly.
cat <<EOF | tee /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      metrics:
        service:
          annotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "10254"
      config:
        use-forwarded-headers: "true"
      allowSnippetAnnotations: "true"
EOF
Create a deployment manifest called config.yaml for the RKE2 cluster and replace the IP addresses and corresponding FQDNs accordingly. (Add any other fields from the Extra Options sections to config.yaml at this point.)
cat <<EOF | tee /etc/rancher/rke2/config.yaml
tls-san:
- <FQDN>
write-kubeconfig-mode: "0600"
etcd-expose-metrics: true
cni:
- canal
EOF
In the template manifest above, <FQDN> must point to the first control plane.
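Before starting the server, it is worth confirming that the FQDN you placed in config.yaml actually resolves to this node, per the checklist. A small sketch; cx.example.com and 192.0.2.10 are placeholders, not values from this guide:

```shell
#!/usr/bin/env bash
# Returns success if <fqdn> resolves to <ip> (via DNS or /etc/hosts).
fqdn_resolves_to() {
  local fqdn=$1 ip=$2
  getent ahosts "$fqdn" | awk '{print $1}' | grep -qx "$ip"
}

# Placeholders - substitute your control-plane FQDN and node IP:
fqdn_resolves_to cx.example.com 192.0.2.10 && echo OK || echo "fix DNS or /etc/hosts first"
```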
Step 2. Download the RKE2 binaries and start Installation
The following are defaults that RKE2 uses during installation. You may change them as needed by specifying the switches mentioned below.
Switch | Default | Description
---|---|---
`data-dir` | `/var/lib/rancher/rke2` | Changes the default deployment directory of RKE2.
`cluster-cidr` | `10.42.0.0/16` | IPv4/IPv6 network CIDRs to use for pod IPs.
`service-cidr` | `10.43.0.0/16` | IPv4/IPv6 network CIDRs to use for service IPs.

Important Note: Moving the default destination folder to another location is not recommended. However, if the containers need to be stored on a different partition, it is recommended to deploy containerd separately and change its storage destination to the partition where space is available.
cluster-cidr and service-cidr are evaluated independently. Decide on them carefully well before cluster deployment; these options are not configurable once the cluster is deployed and workloads are running.
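The overlap constraint above can be checked mechanically before you commit to CIDR values. A minimal bash sketch that tests whether two IPv4 CIDRs overlap; the ranges at the bottom are the RKE2 defaults plus an illustrative host network:

```shell
#!/usr/bin/env bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Succeeds if the two CIDRs overlap: compare the network bits under the
# shorter (wider) prefix; if they match, one range contains the other.
cidrs_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local start1 start2 len mask
  start1=$(ip_to_int "$net1"); start2=$(ip_to_int "$net2")
  len=$(( len1 < len2 ? len1 : len2 ))
  mask=$(( 0xFFFFFFFF << (32 - len) & 0xFFFFFFFF ))
  [ $(( start1 & mask )) -eq $(( start2 & mask )) ]
}

# RKE2 default pod CIDR vs. an example host network:
cidrs_overlap 10.42.0.0/16 10.42.5.0/24   && echo "overlap"  || echo "disjoint"
cidrs_overlap 10.42.0.0/16 192.168.1.0/24 && echo "overlap"  || echo "disjoint"
```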
Run the following command to install RKE2.
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
Enable the rke2-server service
systemctl enable rke2-server.service
Start the service
systemctl start rke2-server.service
The RKE2 server requires at least 10-15 minutes to bootstrap completely. You can check its status using systemctl status rke2-server. Only proceed once everything is up and running; otherwise, configuration issues might occur, requiring a redo of all the installation steps.
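Rather than polling the status by hand, the wait can be scripted. A sketch with a generic retry helper; the 900-second timeout mirrors the 10-15 minute bootstrap window, and the commented usage lines assume the default RKE2 paths:

```shell
#!/usr/bin/env bash
# wait_for <timeout_seconds> <command...>: retry every 5s until the
# command succeeds, or fail once the timeout is exceeded.
wait_for() {
  local timeout=$1 elapsed=0
  shift
  until "$@"; do
    sleep 5
    elapsed=$((elapsed + 5))
    [ "$elapsed" -ge "$timeout" ] && return 1
  done
}

# Example usage on the node (uncomment to run):
# wait_for 900 systemctl is-active --quiet rke2-server
# wait_for 900 test -f /etc/rancher/rke2/rke2.yaml
```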
By default, RKE2 deploys all binaries in the /var/lib/rancher/rke2/bin path. Add this path to the system's default PATH for the kubectl utility to work appropriately.
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
Also, append these lines to the current user's .bashrc file
echo 'export PATH=$PATH:/var/lib/rancher/rke2/bin' >> $HOME/.bashrc
echo "export KUBECONFIG=/etc/rancher/rke2/rke2.yaml" >> $HOME/.bashrc
and source your
~/.bashrc
source ~/.bashrc
Step 3. Bash Completion for kubectl
Install bash-completion package
yum install bash-completion -y
Set up autocompletion in the current shell; the bash-completion package must be installed first.
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
Also, add an alias for the short notation of kubectl
echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
and source your
~/.bashrc
source ~/.bashrc
Step 4. Install helm
If you have not already done so in Step 2, add the KUBECONFIG export to your ~/.bashrc file.
echo "export KUBECONFIG=/etc/rancher/rke2/rke2.yaml" >> ~/.bashrc
Then run this in the command prompt.
source ~/.bashrc
Helm is the tool used to deploy external components. To install Helm, execute the following command:
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
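Once the script finishes, a quick sanity check confirms the binary landed on PATH. A small sketch; report_tool is a hypothetical helper written for this check, not part of the Helm installer:

```shell
#!/usr/bin/env bash
# Report the version of a tool if installed, or a hint otherwise.
report_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    "$1" version --short 2>/dev/null || "$1" --version
  else
    echo "$1 not found on PATH - re-run the install script above"
  fi
}

report_tool helm
```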
Step 5. Enable bash completion for helm
Generate the script for helm bash completion
helm completion bash > /etc/bash_completion.d/helm
Either re-login or run this command to enable the helm bash completion instantly.
source <(helm completion bash)
Next Steps
Choose storage - See Storage Solution - Getting Started