
Developer Instructions for maintaining the Kubernetes Manifests



This document describes how to create and maintain the Kubernetes manifests in a simple, easy-to-use format.


Image Tag Update


To update the image tag for a particular component, edit the corresponding Deployment manifest in the cim-solution/kubernetes/cim/Deployments folder and change the image tag to the required value.


CODE
[root@devops230 Deployments]# more ef-reporting-connector-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ef.service: ef-reporting-connector
    ef: expertflow
  name: ef-reporting-connector
  namespace: expertflow

spec:
  replicas: 1
  selector:
    matchLabels:
      ef.service: ef-reporting-connector
  template:
    metadata:
      labels:
        ef.service: ef-reporting-connector
        ef: expertflow
    spec:
      imagePullSecrets:
      - name: expertflow-reg-cred
      containers:
        - image: gitlab.expertflow.com:9242/cim/reporting-connector/build:1.2
          name: ef-reporting-connector
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: config-volume-conf
              mountPath: /root/config/reporting-connector.conf
              subPath: reporting-connector.conf
            - name: config-volume-cron
              mountPath: /etc/crontabs/root
              subPath: root
      volumes:
         - name: config-volume-conf
           configMap:
             name: ef-reporting-connector-conf
             optional: true
         - name: config-volume-cron
           configMap:
             name: ef-reporting-connector-cron
             items:
               - key: reporting-connector-cron
                 path: root

      restartPolicy: Always
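
For example, to move the reporting connector to a newer build, only the image line changes (the 1.3 tag below is a hypothetical placeholder; use the tag you actually need):

CODE
          # before
          image: gitlab.expertflow.com:9242/cim/reporting-connector/build:1.2
          # after (1.3 is an illustrative tag only)
          image: gitlab.expertflow.com:9242/cim/reporting-connector/build:1.3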






To apply the change, delete the previous deployment using

CODE
kubectl delete -f Deployments/ef-<component>-deployment.yaml


and then re-apply the manifest using


CODE
kubectl apply -f Deployments/ef-<component>-deployment.yaml
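
Deleting the Deployment first briefly takes the component offline. If only the image tag changed, a single kubectl apply will also trigger a rolling update of the existing Deployment, and you can watch it complete with kubectl rollout status (a sketch, assuming the Deployment is already running):

CODE
kubectl apply -f Deployments/ef-<component>-deployment.yaml
kubectl -n expertflow rollout status deployment/ef-<component>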


Config-Maps


Config-Maps provide variable data that must be kept external to the container image, such as container environment variables and configuration files.

Currently, CIM uses two different types of Config-Maps:

Environment Config-Maps

These configure environment variables in key: value format, which are then passed to the container process via the Config-Map. The container process sees them as ordinary environment variables, much like variables set in a shell with the 'export VAR=VAL' command.

To create an environment-based Config-Map, follow these steps.



In this example we will add a variable called NEW_VARIABLES with the value "true" to the 360-Connector component's Config-Map, named 'ef-connector360-cm'.


Change to the ConfigMaps directory:

CODE
cd cim-solution/kubernetes/cim/ConfigMaps


Edit the ConfigMap, following the pattern of the existing entries:


CODE
apiVersion: v1
data:
  CCM_URL: http://ef-ccm-svc:8081/message/receive
  CONNECT_TIMEOUT: "3000"
  FILE_ENGINE_URL: https://ef-file-engine-svc:8443/
  READ_TIMEOUT: "10000"
  NEW_VARIABLES: "true"
kind: ConfigMap
metadata:
  labels:
    ef.service: 360-connector
    ef: expertflow
  name: ef-connector360-cm
  namespace: expertflow


A new variable named NEW_VARIABLES with the value "true" has been added.


Now edit the Deployment of the same CIM component whose environment Config-Map was changed, and extend the change to the Deployment so that the container can pick it up.


Go to the Deployments folder:

CODE
cd ../Deployments


Edit the corresponding component's Deployment file 


CODE
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ef.service: ef-360-connector
    ef: expertflow
  name: ef-360-connector
  namespace: expertflow

spec:
  replicas: 1
  selector:
    matchLabels:
      ef.service: ef-360-connector
  template:
    metadata:
      labels:
        ef.service: ef-360-connector
        ef: expertflow
    spec:
      imagePullSecrets:
      - name: expertflow-reg-cred
      containers:
        - env:
            - name: NEW_VARIABLES
              valueFrom:
                configMapKeyRef:
                  key: NEW_VARIABLES
                  name: ef-connector360-cm 
            - name: CCM_URL
              valueFrom:
                configMapKeyRef:
                  key: CCM_URL
                  name: ef-connector360-cm
            - name: CONNECT_TIMEOUT
              valueFrom:
                configMapKeyRef:
                  key: CONNECT_TIMEOUT
                  name: ef-connector360-cm
            - name: FILE_ENGINE_URL
              valueFrom:
                configMapKeyRef:
                  key: FILE_ENGINE_URL
                  name: ef-connector360-cm
            - name: READ_TIMEOUT
              valueFrom:
                configMapKeyRef:
                  key: READ_TIMEOUT
                  name: ef-connector360-cm
          image: gitlab.expertflow.com:9242/cim/360-connector/build:1.0-SR13_f-CIM-6938-1374e672ae6078fa7413aac858bad8fce91b5b48
          name: ef-360-connector
          imagePullPolicy: IfNotPresent
          ports:
           - containerPort: 8080
      restartPolicy: Always



Add the new environment variable entry to the Deployment and save the file. A newly added Config-Map entry always looks like this:

CODE
            - name: VARIABLE_NAME
              valueFrom:
                configMapKeyRef:
                  key: VARIABLE_NAME
                  name: CONFIG-MAP-NAME

where VARIABLE_NAME is the variable introduced in the first step and CONFIG-MAP-NAME is the name of the Config-Map edited in the first step.
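
As a side note, Kubernetes can also import every key of a Config-Map as environment variables in one block using envFrom. The CIM manifests currently reference each variable explicitly as shown above; a sketch of the shorter form, using the same Config-Map, would look like this:

CODE
      containers:
        - name: ef-360-connector
          envFrom:
            - configMapRef:
                name: ef-connector360-cm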


Once the Config-Map has been updated, the corresponding component's Pod must be re-created. Apply the changes as follows.


Delete the existing Config-Map:


CODE
kubectl delete -f ConfigMaps/<name-of-config-map>.yaml


Then delete the corresponding Pod:

CODE
kubectl delete -f Deployments/ef-<component>-deployment.yaml


Now apply the new Config-Map:


CODE
kubectl apply -f ConfigMaps/<name-of-config-map>.yaml


and re-create the Pod using:

CODE
kubectl apply -f Deployments/ef-<component>-deployment.yaml


The newly added variable will now be available to the container process. You can verify this by exec'ing into the container/Pod and issuing the 'env' command at the shell prompt.

CODE
kubectl -n expertflow exec -ti <POD-NAME> -- sh

# env
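
To check for a specific variable without an interactive shell, you can also run env non-interactively and filter the output (NEW_VARIABLES is the example variable added above):

CODE
kubectl -n expertflow exec <POD-NAME> -- env | grep NEW_VARIABLES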



File-System Config-Maps

This method stores configuration files (up to 1 MB per ConfigMap) as file-name: file-contents pairs and then projects those files into arbitrary locations inside the container's directory structure. These Config-Maps are mounted into the container as read-only files.
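
For reference, a file-based Config-Map stores the whole file under its data section, keyed by the file name. A minimal sketch (the file contents shown are placeholders, not the real reporting-connector.conf):

CODE
apiVersion: v1
kind: ConfigMap
metadata:
  name: ef-reporting-connector-conf
  namespace: expertflow
data:
  reporting-connector.conf: |
    # placeholder contents; the real file is supplied in the next step
    some_setting=value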


In this example we will store the file "reporting-connector.conf" in a Config-Map for the Reporting-Connector container and mount it at "/root/config/reporting-connector.conf" inside the container, so that it is always available to the container as a regular file.


Create the Config-Map from the file we want to add:

CODE
kubectl -n expertflow create configmap <config-map-name> --from-file=reporting-connector.conf
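
If you prefer to keep this Config-Map as a manifest file alongside the others in the ConfigMaps folder rather than creating it directly on the cluster, the same command can render the YAML locally first (the output file name is only a suggestion):

CODE
kubectl -n expertflow create configmap <config-map-name> --from-file=reporting-connector.conf --dry-run=client -o yaml > ConfigMaps/<config-map-name>.yaml
kubectl apply -f ConfigMaps/<config-map-name>.yaml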



Edit the component's Deployment in the Deployments folder under cim-solution/kubernetes/cim/Deployments:


CODE
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ef.service: ef-reporting-connector
    ef: expertflow
  name: ef-reporting-connector
  namespace: expertflow

spec:
  replicas: 1
  selector:
    matchLabels:
      ef.service: ef-reporting-connector
  template:
    metadata:
      labels:
        ef.service: ef-reporting-connector
        ef: expertflow
    spec:
      imagePullSecrets:
      - name: expertflow-reg-cred
      containers:
        - image: gitlab.expertflow.com:9242/cim/reporting-connector/build:1.2
          name: ef-reporting-connector
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: config-volume-conf
              mountPath: /root/config/reporting-connector.conf
              subPath: reporting-connector.conf
            - name: config-volume-cron
              mountPath: /etc/crontabs/root
              subPath: root
      volumes:
         - name: config-volume-conf
           configMap:
             name: ef-reporting-connector-conf
             optional: true
         - name: config-volume-cron
           configMap:
             name: ef-reporting-connector-cron
             items:
               - key: reporting-connector-cron
                 path: root

      restartPolicy: Always




Here is how it works.


First, we added these lines:


CODE
           volumeMounts:
            - name: config-volume-conf
              mountPath: /root/config/reporting-connector.conf
              subPath: reporting-connector.conf


This declares a mount in the container that references a volume named config-volume-conf; the subPath setting mounts only the single file reporting-connector.conf from that volume at /root/config/reporting-connector.conf, instead of replacing the whole directory.

The volume itself points to the actual Config-Map:

CODE
       volumes:
         - name: config-volume-conf
           configMap:
             name: ef-reporting-connector-conf
             optional: true 


The volume definition states that the volume contents are to be fetched from the ConfigMap called ef-reporting-connector-conf. Once both pieces are in place, the configuration is presented to the container as a file mounted at the /root/config/reporting-connector.conf path.
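
You can confirm that the file is in place by exec'ing into the Pod and reading it back (the Pod name is a placeholder, and this assumes the image ships a basic shell, as most do):

CODE
kubectl -n expertflow exec -ti <POD-NAME> -- cat /root/config/reporting-connector.conf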


As with environment Config-Maps, both the Config-Map and the running Pod must be deleted and re-created for the new information to reach the container.
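
Concretely, and assuming the Config-Map was created directly with kubectl as shown earlier (rather than from a manifest file), the refresh would look roughly like this:

CODE
kubectl -n expertflow delete configmap <config-map-name>
kubectl -n expertflow create configmap <config-map-name> --from-file=reporting-connector.conf
kubectl delete -f Deployments/ef-reporting-connector-deployment.yaml
kubectl apply -f Deployments/ef-reporting-connector-deployment.yaml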









