K3s Deployment in High Availability
Installation Steps
Environment Customization Steps
Step 1: Install Control Plane Nodes
1. Run the following command on the master node:
mkdir -p /var/lib/rancher/k3s/server/manifests/
2. Run the following command on the master node:
curl https://kube-vip.io/manifests/rbac.yaml > /var/lib/rancher/k3s/server/manifests/kube-vip-rbac.yaml
Step 2: Start K3s Deployment
Run the following command on the master node:
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.24.7+k3s1 sh -s - server --disable traefik,local-storage --tls-san devops242.ef.com --cluster-init
This command may take some time, depending on network speed and the selected deployment options.
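Before continuing, a quick sanity check helps confirm the installer finished cleanly. The following sketch assumes systemd and the default k3s paths; it only reports state and makes no changes:

```shell
# Quick post-install sanity check (assumes systemd and default k3s paths).
k3s_health() {
  if ! command -v k3s >/dev/null 2>&1; then
    echo "k3s not installed on this host"
    return 0
  fi
  if systemctl is-active --quiet k3s; then
    echo "k3s service: active"
  else
    echo "k3s service: not active"
  fi
  if [ -f /etc/rancher/k3s/k3s.yaml ]; then
    echo "kubeconfig: present"
  else
    echo "kubeconfig: missing"
  fi
}
k3s_health
```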
Step 3: Get the Details for Kube-VIP
1. On the master node, list the network interfaces and their addresses:
ip a s
Then set the VIP and the interface it will be associated with.
NOTES:
If the VIP's FQDN resolves differently internally and externally, use the internal IP here, since it must be able to migrate between all control-plane hosts.
export VIP=<IP of the FQDN> # see above information in NOTES
export INTERFACE=enp1s0
The interface name must be identical on all control-plane nodes; otherwise, failover will not work. If the interface is eth0, for example, it must be eth0 on every control-plane node.
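Because a mismatched interface name silently breaks failover, it is worth verifying the name on every control-plane node before generating the manifest. A minimal sketch (the check_interface helper is hypothetical, not part of kube-vip):

```shell
# Hypothetical helper: confirm the chosen interface exists on this host.
# Run on every control-plane node with the same $INTERFACE value.
check_interface() {
  if [ -d "/sys/class/net/$1" ]; then
    echo "interface $1 found"
  else
    echo "interface $1 NOT found" >&2
    return 1
  fi
}
check_interface "${INTERFACE:-lo}"   # lo used as a fallback for illustration
```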
2. Run the following command on the master node.
crictl pull ghcr.io/kube-vip/kube-vip:v0.4.4
3. Run the following command on the master node.
alias kube-vip="ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:v0.4.4 vip /kube-vip"
4. Run the following command on the master node.
kube-vip manifest daemonset \
--arp \
--interface $INTERFACE \
--address $VIP \
--controlplane \
--leaderElection \
--taint \
--services \
--inCluster | tee /var/lib/rancher/k3s/server/manifests/kube-vip.yaml
5. Run the following command on the master node.
kubectl get ds -n kube-system
6. At this point, setup on the first master node is complete. To attach the remaining master (or worker) nodes to it, copy the token of the k3s master node with this command:
sudo cat /var/lib/rancher/k3s/server/node-token
The token looks something like this:
K10e10d4ff7ce60c0d84531a65a24838ff8cc8da75af9a5aa3a8947aac28762772f::server:e96b8f4fd8b2817b80f93f1b92e4bc97
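The token is used as-is when joining nodes, but its structure can be inspected: the leading K10… segment encodes a hash of the server's CA certificate, and the trailing segment is the join secret. A sketch splitting the sample token above (purely illustrative, not required for joining):

```shell
# Splitting the sample token to show its structure (not required for joining).
TOKEN="K10e10d4ff7ce60c0d84531a65a24838ff8cc8da75af9a5aa3a8947aac28762772f::server:e96b8f4fd8b2817b80f93f1b92e4bc97"
CA_HASH="${TOKEN%%::*}"   # segment before '::' (CA-certificate hash)
SECRET="${TOKEN##*:}"     # segment after the last ':' (join secret)
echo "ca-hash: $CA_HASH"
echo "secret:  $SECRET"
```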
Step 4: Create Commands for joining other Master and Worker nodes
After getting the token, run the following commands on the first master node to generate the join commands. Make sure the following fields are updated:
<VIRTUAL-IP>
<TOKEN>
<FQDN OF THE MASTER>
Master Nodes:
For the remaining master nodes, run this command on the first master node to create the join command. Add any extra options before proceeding (see the Extra Option – Customize the K3s Deployment for your Environment section):
echo -e "\n\nRun the Below Command on All Other Control-Plane Nodes\n\ncurl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.24.7+k3s1 K3S_TOKEN=$(cat /var/lib/rancher/k3s/server/node-token) sh -s server --server https://${VIP}:6443 --disable traefik,local-storage --tls-san $VIP\n\n"
Then run the resulting join command on each of the remaining master nodes.
Worker node:
Run this command on the first master to create a worker joining command
echo -e "\n\nRun This command on all the WORKER Nodes\n\ncurl -sfL https://get.k3s.io | K3S_URL=https://${VIP}:6443 INSTALL_K3S_VERSION=v1.24.7+k3s1 K3S_TOKEN=$(cat /var/lib/rancher/k3s/server/node-token) sh -\n\n"
Then run the resulting join command on each worker node.
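The two echo commands above differ only in how the K3s installer is parameterized. As an illustration, the pattern can be captured in two hypothetical helper functions; the VIP and token passed below are placeholders:

```shell
# Hypothetical helpers mirroring the join commands above; arguments:
# $1 = VIP, $2 = node token. Values passed below are placeholders.
K3S_VERSION="v1.24.7+k3s1"

server_join_cmd() {
  printf 'curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=%s K3S_TOKEN=%s sh -s server --server https://%s:6443 --disable traefik,local-storage --tls-san %s\n' \
    "$K3S_VERSION" "$2" "$1" "$1"
}

agent_join_cmd() {
  printf 'curl -sfL https://get.k3s.io | K3S_URL=https://%s:6443 INSTALL_K3S_VERSION=%s K3S_TOKEN=%s sh -\n' \
    "$1" "$K3S_VERSION" "$2"
}

server_join_cmd 192.0.2.10 EXAMPLE_TOKEN
agent_join_cmd  192.0.2.10 EXAMPLE_TOKEN
```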
Step 5: Install Helm
Helm is the package manager used to deploy external components. It needs to be installed on only one of the master nodes. To install it, execute the following commands:
1. Add the KUBECONFIG export to your ~/.bashrc file:
echo "export KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> ~/.bashrc
2. Run this in the command prompt:
source ~/.bashrc
3. Install the Helm package manager by running the install script:
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
4. Add the Helm command to your PATH:
export PATH=/usr/local/bin:$PATH
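A quick way to confirm the installation succeeded is to check that helm resolves on the PATH and reports a version:

```shell
# Verify helm is reachable; prints a hint if the install did not complete.
if command -v helm >/dev/null 2>&1; then
  helm version --short
else
  echo "helm not found on PATH"
fi
```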
Step 6: Bash Completion for kubectl
1. Install bash-completion package
yum install bash-completion -y
2. Enable kubectl autocompletion in the current shell and persist it in ~/.bashrc (the bash-completion package must be installed first):
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
3. Add the alias k as a short notation for kubectl, with completion enabled for it:
echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
4. Source your ~/.bashrc:
source ~/.bashrc
Installing Longhorn for replicated storage
Longhorn deployment is described in the Longhorn Deployment Guide.
Step 7: Install Traefik
1. Clone the CIM Repo:
git clone -b <branch-name> https://deployment:tmJsQC-3CxVdUiKUVoA7@gitlab.expertflow.com/cim/cim-solution.git
2. Replace <branch-name> with the actual release branch. Since Traefik was disabled during K3s installation, it must be installed now. Change to the Helm charts directory:
cd cim-solution/kubernetes
3. Install the Traefik Helm Chart:
helm upgrade --install=true --debug --wait=true --timeout=15m0s --namespace=traefik-ingress --create-namespace --values=external/traefik/values.yaml traefik external/traefik
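After the chart installs, the Traefik pods should reach the Running state in the traefik-ingress namespace. A quick check, assuming kubectl from the earlier steps:

```shell
# List the Traefik pods; errors are printed rather than aborting the shell.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n traefik-ingress 2>&1 || true
else
  echo "kubectl not found on PATH"
fi
```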
Step 8: CIM Deployment on Kubernetes
Please follow the steps in the document, Expertflow CX Deployment on Kubernetes to deploy Expertflow CX Solution.