K3s Multi-Node Installation (Without HA)

Purpose

The purpose of this document is to describe the additional system requirements and steps to deploy a multi-node K3s Kubernetes cluster.

Pre-requisites for Multi-Node Installation

The prerequisites and pre-deployment phases are described in the K3s Pre-Deployment & Installation Guide. Please complete those steps before proceeding with the multi-node installation.


Quick Links



    Installation Steps

    Environment Customization Steps



    The options given below can also be used for a customized environment setup:

    Option                                              | Switch                              | Default                                                    | Description
    ----------------------------------------------------|-------------------------------------|------------------------------------------------------------|------------------------------------------------
    Default deployment directory of K3s                 | --data-dir value, -d value          | /var/lib/rancher/k3s, or ${HOME}/.rancher/k3s if not root  | Folder to hold state
    Default pod IP assignment range                     | --cluster-cidr value                | "10.42.0.0/16"                                             | IPv4/IPv6 network CIDRs to use for pod IPs
    Default service IP assignment range                 | --service-cidr value                | "10.43.0.0/16"                                             | IPv4/IPv6 network CIDRs to use for service IPs
    Default local storage path (local-provisioner only) | --default-local-storage-path value  | /var/lib/rancher/k3s/data                                  | Default local-provisioner data path; suitable only for local-provisioner volumes

    If any of the above options are required, add them to the installation command in the next step.

    cluster-cidr and service-cidr are evaluated independently. Choose these values carefully before deploying the cluster: neither option is configurable once the cluster is deployed and workloads are running.
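
    For example, custom pod and service CIDRs can be passed through INSTALL_K3S_EXEC together with the disable flags used in Step 1 (a sketch only; the CIDR values shown are placeholders, not recommendations):

    Bash
    # Example only: placeholder CIDRs, adjust before any real deployment
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.24.7+k3s1 INSTALL_K3S_EXEC="--disable=traefik,local-storage --cluster-cidr=10.100.0.0/16 --service-cidr=10.101.0.0/16" sh -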



    Step 1: For Master Node

    Run the following command on the master node.

    Bash
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.24.7+k3s1 INSTALL_K3S_EXEC="--disable=traefik,local-storage" sh - 
    

    K3s will be installed on the master node.
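
    To confirm the server is up before adding workers, check the service and node status (a quick check, assuming the default systemd-based install):

    Bash
    # Run on the master node
    systemctl status k3s.service
    kubectl get nodes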

    Step 2: For Worker Node

    1. To add a worker node, run the following command on the master node and copy the token. It will be needed when deploying k3s on the worker nodes.

    Bash
    cat /var/lib/rancher/k3s/server/node-token
    

    2. It will display a node token similar to the following:

    Bash
    K10dd8477f0efffb9c6fe785705c63f926abcf2fedace0d40633f320038a01f6236::server:50c67be1f1fb2a42c8098d750e9f603f
    

    3. After getting the node token, run the following command on each worker node where k3s is to be deployed. Make sure to update the following fields:

    1. <MASTER-IP>

    2. <MASTER-NODE-TOKEN>

    Run this command on all the worker nodes.

    Bash
    curl -sfL https://get.k3s.io | K3S_URL=https://<MASTER-IP>:6443 INSTALL_K3S_VERSION=v1.24.7+k3s1 K3S_TOKEN=<MASTER-NODE-TOKEN> sh -
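
    Once the agent starts on a worker, the new node should appear from the master (a quick check; node names will vary):

    Bash
    # Run on the master node: each joined worker should be listed
    kubectl get nodes -o wide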
    
    

    Step 3: Create the kubeconfig File on the Worker Node(s)

    Once the installation is finished, create a directory and a config file in that directory by running the following commands. (These commands are to be run on the worker node(s).)

    Bash
    mkdir -p /root/.kube
    cd /root/.kube
    vi config
    
    
    

    Step 4: Copy the Cluster kubeconfig to the Worker Node(s)

    1. Copy the content of the following file from the k3s master node and paste it into the config file created above.

    (The following command is to be run on the k3s master node.)

    Bash
    cat /etc/rancher/k3s/k3s.yaml
    


    This command will give output similar to the following:


    Bash
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTkRV
    d05EWXdNemN3SGhjTk1qSXdNakUyTWpFeE16VTNXaGNOTXpJd01qRTBNakV4TXpVMwpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTkRVd05EWXdNemN3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJC
    d05DQUFUckRPMnJYdjVjNWEwdmZ2RGJ4Y3prd0Y1dGFUWWplb1hSbmpycmFDL2IKZSs4KzdFLzVrL0dscHdYWUJOTEJHM0J4L2FGb1JqeDRFWlVQa3h0OHhhd1VvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVF
    IL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVUtyaEZZVGV1aG50eVFKeDBiTzQ4ClkyK20yOWN3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQVBKQWkzcC9ndWxOV1pNRHd3bTB3bVhJV2txZGZQa3oKR29HMjVLNWowdm8rQWlFQXhvYz
    A3SG5CS0ZkQnc5WWNSa1E5WUE1K2pweWExWVlZUWRpcXFDYkVYMjQ9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
        server: https://127.0.0.1:6443
      name: default
    contexts:
    - context:
        cluster: default
        user: default
      name: default
    current-context: default
    kind: Config
    preferences: {}
    users:
    - name: default
      user:
        client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRlZ0F3SUJBZ0lJRmU5U1IzRnN5NjB3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTF
    VRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOalExTURRMk1ETTNNQjRYRFRJeU1ESXhOakl4TVRNMU4xb1hEVEl6TURJeApOakl4TVRNMU4xb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVn
    RPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJCeDBMSU1uQzYrWjVmWGsKc3BsVTQrZytMS2Jv
    VUFIQ2xoWHZpZDN3WnpBS2hjb3FaaGxrQ2o1VlBrZHkxRFF4U1BiYUtkYVdISnltNE1WQgpKZEpyUjNHalNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQ
    mdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCVGlwVm9qU09WbEJpakVURHhnV3BHeTdqUUNkekFLQmdncWhrak9QUVFEQWdOSkFEQkcKQWlFQXh6UVJaNm5vVCtYVVZxL2JTdUR1Tktr
    TEpsUXZ1bzVOdll1QWRmSHRPRjRDSVFEakVhandjazhEcEhYTgoyUGJuNXdsb25mQjJGV0Y4enJFLytHbk45Sys1ckE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkV
    HSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZGpDQ0FSMmdBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwClpXNTBMV05oUURFMk5EVXdORF
    l3TXpjd0hoY05Nakl3TWpFMk1qRXhNelUzV2hjTk16SXdNakUwTWpFeE16VTMKV2pBak1TRXdId1lEVlFRRERCaHJNM010WTJ4cFpXNTBMV05oUURFMk5EVXdORFl3TXpjd1dUQVRC
    Z2NxaGtqTwpQUUlCQmdncWhrak9QUU1CQndOQ0FBVGlzSzk5L3ZiTUh2bnlsL0JlNEhyR1UxVW96Q2gzNlpmam1hWmR0TVY5CmdZZStMRllaMkRoN09qQ2doMHR4N0N5bXZHMWdJN2V
    yTUFzRzliMHIxZWoxbzBJd1FEQU9CZ05WSFE4QkFmOEUKQkFNQ0FxUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVU0cVZhSTBqbFpRWW94RXc4WUZxUgpzdTQwQW
    5jd0NnWUlLb1pJemowRUF3SURSd0F3UkFJZ2R1UGVZZCtrcENIWDhIZG9rVlcxemo3cnZ6VFQvblhuCkVlVkZ6NzZiUE5NQ0lFQnJjaHdkNDJGbmJ4UFd5WUVQN0dJOTM4U2ZVK3JLV
    nQzWXh5QTlTK0NVCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    
    
    
    

    2. Change the server line

    Bash
    server: https://127.0.0.1:6443
    
    to
    
    server: https://<ip-of-master-node>:6443
    

    3. Paste the modified content into the /root/.kube/config file on each worker node, then run the following commands on the worker node(s):

    Bash
    systemctl status k3s-agent.service
    systemctl restart k3s-agent.service
    kubectl get nodes
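
    As an alternative to editing the config by hand, the kubeconfig can be copied from the master and patched in one step (a sketch; <MASTER-IP> is a placeholder and root SSH access to the master node is assumed):

    Bash
    # Copy the kubeconfig from the master, then point it at the master's IP
    scp root@<MASTER-IP>:/etc/rancher/k3s/k3s.yaml /root/.kube/config
    sed -i 's/127.0.0.1/<MASTER-IP>/g' /root/.kube/config
    kubectl get nodes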
    

    Step 5: Bash Completion for kubectl

    1. Install the bash-completion package:

    Bash
    yum install bash-completion -y
    

    2. Set up autocompletion for the current shell (the bash-completion package must be installed first):

    Bash
    source <(kubectl completion bash) 
    echo "source <(kubectl completion bash)" >> ~/.bashrc 
    

    3. Also add an alias as a short notation for kubectl:

    Bash
    echo "alias k=kubectl"  >> ~/.bashrc 
    echo "complete -o default -F __start_kubectl k"  >> ~/.bashrc 
    

    4. Finally, source your ~/.bashrc:

    Bash
    source ~/.bashrc
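
    With completion and the alias in place, the short form should now work, for example:

    Bash
    k get nodes -o wide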
    

    Step 6: Install Helm

    Helm is the tool used here to deploy external components. To install Helm on the cluster, execute the following commands:

    1. Append the KUBECONFIG export to your ~/.bashrc file:

    Bash
    echo "export KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> ~/.bashrc
    

    2. Reload it in the current shell:

    Bash
    source ~/.bashrc
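
    To confirm the variable is set in the current shell:

    Bash
    echo $KUBECONFIG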
    


    3. Install the Helm package manager by running the following command:

    Bash
    curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
    

    4. Add the Helm command to your PATH:

    Bash
    export PATH=/usr/local/bin:$PATH
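
    To confirm Helm is available on the PATH, check the client version (output varies by release):

    Bash
    helm version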
    


    Installing Longhorn for replicated storage

    Longhorn deployment is available at Longhorn Deployment Guide.

    Step 7: Install Traefik

    1. Clone the CIM Repo:

    Bash
    git clone -b <branch-name> https://deployment:tmJsQC-3CxVdUiKUVoA7@gitlab.expertflow.com/cim/cim-solution.git
    

    2. Replace <branch-name> with the actual release branch. Since Traefik was disabled during installation, it must be installed now. Change to the Helm charts directory:

    Bash
    cd cim-solution/kubernetes
    


    3. Install the Traefik Helm Chart:

    Bash
    helm upgrade --install=true --debug --wait=true --timeout=15m0s --namespace=traefik-ingress --create-namespace --values=external/traefik/values.yaml traefik external/traefik
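
    To verify the release, check the Traefik pods and service in the traefik-ingress namespace (pod names will vary):

    Bash
    kubectl get pods -n traefik-ingress
    kubectl get svc -n traefik-ingress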
    

    Step 8: CIM Deployment on Kubernetes

    Please follow the steps in the document Expertflow CX Deployment on Kubernetes to deploy the Expertflow CX solution.