
K3s Deployment in High Availability

Purpose

This document describes the additional system requirements and steps needed to deploy the K3s Kubernetes distribution in High Availability (HA) mode.

Prerequisites

The prerequisites and pre-deployment phases are described in the K3s Pre-Deployment & Installation Guide. Please complete those steps before proceeding with the installation in High Availability mode.



    Installation Steps

    Environment Customization Steps



    The following options can be used to customize the environment setup:

    Option: Default Deployment Directory of K3s
    Switch: --data-dir value, -d value
    Default: /var/lib/rancher/k3s, or ${HOME}/.rancher/k3s if not running as root
    Description: Folder to hold state

    Option: Default POD IP Assignment Range
    Switch: --cluster-cidr value
    Default: "10.42.0.0/16"
    Description: IPv4/IPv6 network CIDRs to use for pod IPs

    Option: Default Service IP Assignment Range
    Switch: --service-cidr value
    Default: "10.43.0.0/16"
    Description: IPv4/IPv6 network CIDRs to use for service IPs

    Option: Default local storage path for the local-provisioner storage class (only if you are using local-provisioner)
    Switch: --default-local-storage-path value
    Default: /var/lib/rancher/k3s/data
    Description: Default local-provisioner data path; suitable only for local-provisioner volumes

    If any of the above options are required, add them to the installation command in the next step. A sample invocation is shown below.

    cluster-cidr and service-cidr are evaluated independently. Decide on them well before the cluster deployment; these options cannot be changed once the cluster is deployed and workloads are running.
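    As a minimal sketch, these switches are appended to the installer invocation used in Step 2. The values below are only illustrative placeholders (in particular, the --data-dir path /opt/k3s-data is hypothetical); adjust everything to your environment:

    Bash
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.24.7+k3s1 sh -s - server \
        --disable traefik,local-storage \
        --tls-san devops242.ef.com \
        --cluster-init \
        --data-dir /opt/k3s-data \
        --cluster-cidr "10.42.0.0/16" \
        --service-cidr "10.43.0.0/16"
    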



    Step 1: Install Control Plane Nodes

    1. Create the K3s manifests directory on the master node:

    Bash
    mkdir -p /var/lib/rancher/k3s/server/manifests/
    

    2. Download the kube-vip RBAC manifest into the manifests directory on the same master node:

    Bash
    curl https://kube-vip.io/manifests/rbac.yaml > /var/lib/rancher/k3s/server/manifests/kube-vip-rbac.yaml
    

    Step 2: Start K3s Deployment

    Run the following command on the master node:

    Bash
    curl -sfL https://get.k3s.io |  INSTALL_K3S_VERSION=v1.24.7+k3s1  sh -s - server --disable traefik,local-storage --tls-san devops242.ef.com --cluster-init
    

    This command may take some time, depending on network speed and the deployment options.
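    Once the installer finishes, a quick sanity check (run as root on the same master node) confirms that the first server node is up:

    Bash
    k3s kubectl get nodes
    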

    Step 3: Get the Details for Kube-VIP

    1. Identify the Virtual IP (VIP) and the network interface it will be associated with. To list the interfaces on the master node, run:

    ip a s

    Then set the VIP and the interface accordingly.

    NOTES:

    If the VIP FQDN resolves differently internally and externally, use the internal IP here, i.e. an address that can be migrated between all Control-Plane hosts.


    Bash
    export VIP=<IP of the FQDN>    # see above information in NOTES
    export INTERFACE=enp1s0
    


    All Control-Plane nodes must use the same interface name, otherwise failover will not work. For example, if the interface name is eth0 on one node, it must be eth0 on all Control-Plane nodes.
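    As an optional check, the following can be run on each Control-Plane node to confirm that the interface exists under the same name ($INTERFACE is the variable exported above):

    Bash
    ip -br link show $INTERFACE
    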

    2. Pull the kube-vip image on the master node:

    Bash
    crictl pull ghcr.io/kube-vip/kube-vip:v0.4.4
    

    3. Create a temporary alias for running kube-vip via ctr on the master node:

    Bash
    alias kube-vip="ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:v0.4.4 vip /kube-vip"
    

    4. Generate the kube-vip DaemonSet manifest on the master node:

    Bash
    kube-vip manifest daemonset \
        --arp \
        --interface $INTERFACE \
        --address $VIP \
        --controlplane \
        --leaderElection \
        --taint \
        --services \
        --inCluster | tee /var/lib/rancher/k3s/server/manifests/kube-vip.yaml
    
    
    

    5. Verify that the kube-vip DaemonSet has been created:

    Bash
    kubectl get ds -n kube-system
    

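    At this point the kube-vip DaemonSet (typically named kube-vip-ds) should be listed in the kube-system namespace and the VIP should become reachable. A minimal sanity check, assuming ICMP is not blocked in your network:

    Bash
    ping -c 3 $VIP
    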
    6. The work on the first master node is now complete. Next, the other master (or worker) nodes need to be attached to this master node.

    To do so, retrieve the token of the K3s master node with this command:

    Bash
    sudo cat /var/lib/rancher/k3s/server/node-token
    

    The token looks something like this:

    Bash
    K10e10d4ff7ce60c0d84531a65a24838ff8cc8da75af9a5aa3a8947aac28762772f::server:e96b8f4fd8b2817b80f93f1b92e4bc97
    


    Step 4: Create Commands for joining other Master and Worker nodes

    After getting the token, run the following commands on the first master node to generate the join commands for the remaining nodes. Before running the generated commands, make sure the following fields are updated:

    1. <VIRTUAL-IP>

    2. <TOKEN>

    3. <FQDN OF THE MASTER>

    Master Nodes:

    For the rest of the master nodes, run this command on the first master node to create the join command. Any extra options must be added before proceeding with it (from the Extra Option – Customize the K3s Deployment for your Environment section).

    Bash
    echo -e "\n\nRun the Below Command on All Other Control-Plane Nodes\n\ncurl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.24.7+k3s1    K3S_TOKEN=$(cat /var/lib/rancher/k3s/server/node-token) sh -s server --server https://${VIP}:6443 --disable traefik,local-storage --tls-san $VIP\n\n"
    

    Then run the resulting command on every other master node that is to join the cluster. If you have more than one additional master node, run the same generated command on each of them. The generated command will look similar to the example below.
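    For illustration only, the generated join command has the following shape, with <VIRTUAL-IP> and <TOKEN> standing in for the real values printed by the echo above:

    Bash
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.24.7+k3s1 K3S_TOKEN=<TOKEN> sh -s server --server https://<VIRTUAL-IP>:6443 --disable traefik,local-storage --tls-san <VIRTUAL-IP>
    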

    Worker node:

    Run this command on the first master node to create the worker join command:

    Bash
    echo -e "\n\nRun This command on all the WORKER Nodes\n\ncurl -sfL https://get.k3s.io | K3S_URL=https://${VIP}:6443 INSTALL_K3S_VERSION=v1.24.7+k3s1    K3S_TOKEN=$(cat /var/lib/rancher/k3s/server/node-token) sh -\n\n"
    

    Then run the generated command on all worker nodes. If you have more than one worker node, run the same command on each of them. Once all nodes have joined, the cluster can be verified as shown below.
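    To confirm that all master and worker nodes have joined, run the following on the first master node (node names, roles, and versions will reflect your environment):

    Bash
    kubectl get nodes -o wide
    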

    Step 5: Install Helm

    Helm is the package manager used to deploy external components. To install Helm on the cluster, execute the following commands.

    Helm only needs to be installed on one of the master nodes.

    1. Add the KUBECONFIG export to your ~/.bashrc file:

    Bash
    echo "export KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> ~/.bashrc
    

    2. Reload your shell configuration:

    Bash
    source ~/.bashrc
    


    3. Install the Helm package manager, which is needed for the deployment of external components, by running the following command:

    Bash
    curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
    

    4. Add the Helm command to your PATH:

    Bash
    export PATH=/usr/local/bin:$PATH
    

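    A quick check that Helm is installed and on the PATH (the reported version will vary):

    Bash
    helm version
    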

    Step 6: Bash Completion for kubectl

    1. Install the bash-completion package:

    Bash
    yum install bash-completion -y
    

    2. Set up kubectl autocompletion for the current shell and make it persistent (the bash-completion package must be installed first):

    Bash
    source <(kubectl completion bash) 
    echo "source <(kubectl completion bash)" >> ~/.bashrc 
    

    3. Also, add an alias k as a short notation for kubectl, and enable completion for it:

    Bash
    echo "alias k=kubectl"  >> ~/.bashrc 
    echo "complete -o default -F __start_kubectl k"  >> ~/.bashrc 
    

    4. Finally, source your ~/.bashrc:

    Bash
    source ~/.bashrc
    
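    The alias can then be used anywhere kubectl would be used, for example:

    Bash
    k get nodes
    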


    Installing Longhorn for replicated storage

    The Longhorn deployment steps are available in the Longhorn Deployment Guide.

    Step 7: Install Traefik

    1. Clone the CIM Repo:

    Bash
    git clone -b <branch-name> https://deployment:tmJsQC-3CxVdUiKUVoA7@gitlab.expertflow.com/cim/cim-solution.git
    

    2. Replace <branch-name> with the actual release branch. Since Traefik was disabled during the K3s installation, it needs to be installed now. Change to the Helm charts directory:

    Bash
    cd cim-solution/kubernetes
    


    3. Install the Traefik Helm Chart:

    Bash
    helm upgrade --install=true --debug --wait=true --timeout=15m0s --namespace=traefik-ingress --create-namespace --values=external/traefik/values.yaml traefik  external/traefik
    

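    To confirm that Traefik is up, list the pods in its namespace (pod names will differ):

    Bash
    kubectl get pods -n traefik-ingress
    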

    Step 8: CIM Deployment on Kubernetes

    Please follow the steps in the document Expertflow CX Deployment on Kubernetes to deploy the Expertflow CX Solution.