Monitoring Solution Deployment
Requirements
| vCPU | vRAM (GiB) | vDisk (GiB) | Comments |
|------|------------|-------------|----------|
| 2 | 4 | 150 | A dedicated node is recommended for the monitoring solution. |
This document covers the process of deploying the Monitoring solution stack for CIM. The stack consists of the following components:
Prometheus
Grafana
Alertmanager
Node Exporter
We will be using the Prometheus Operator to simplify the deployment and configuration of the monitoring stack on native Kubernetes.
The Prometheus Operator includes the following features:
Kubernetes Custom Resources: Use Kubernetes custom resources to deploy and manage Prometheus, Alertmanager, and related components.
Simplified Deployment Configuration: Configure the fundamentals of Prometheus like versions, persistence, retention policies, and replicas from a native Kubernetes resource.
Prometheus Target Configuration: Automatically generate monitoring target configurations based on familiar Kubernetes label queries; no need to learn a Prometheus-specific configuration language.
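As an illustration of the target-configuration feature, the sketch below shows a minimal ServiceMonitor. The names example-app, the app label, and the metrics port are hypothetical placeholders; the release label assumes the Helm release name used later in this document (kube-stack-prometheus) together with the chart's default selector settings.
# Minimal ServiceMonitor sketch (illustrative names; adjust to your workload).
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app                      # hypothetical name
  namespace: monitoring
  labels:
    release: kube-stack-prometheus       # assumes the chart's default ServiceMonitor selector
spec:
  selector:
    matchLabels:
      app: example-app                   # hypothetical label on the target Service
  endpoints:
  - port: metrics                        # named Service port exposing /metrics
    interval: 30s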
Prerequisites for the Monitoring Solution
1) Metrics Server is a required component; statistics will be incomplete unless Metrics Server is deployed and running.
2) Installation of the monitoring solution requires Helm.
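To confirm that Metrics Server is available before proceeding, the following checks can be used (assuming it runs as the usual metrics-server deployment in the kube-system namespace, which is the default on K3s and most distributions):
kubectl get deployment metrics-server -n kube-system
kubectl top nodes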
Clone the repository from GitLab
git clone -b <release_branch> https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/cim-solution.git
Insert the name of the release branch in place of <release_branch>, e.g. CIM-1.0-Beta-SR13.
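For example, for the CIM-1.0-Beta-SR13 release branch mentioned above:
git clone -b CIM-1.0-Beta-SR13 https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/cim-solution.git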
Creating the monitoring namespace
kubectl create namespace monitoring
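You can verify that the namespace was created:
kubectl get namespace monitoring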
Deploying Monitoring Solution
Navigate inside the repository that was cloned earlier and go to the monitoring directory.
cd cim-solution/kubernetes/monitoring
Single-node cluster
Run the following command to update the FQDN in the values file to your desired one. Replace <FQDN> in the command with your Master Node FQDN or VIP.
sed -i 's/devops.ef.com/<FQDN>/g' kube-prometheus-stack/values-small.yaml
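Optionally, confirm that the substitution succeeded; the following command should return no matches for the default hostname once the file has been updated:
grep -n "devops.ef.com" kube-prometheus-stack/values-small.yaml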
For K3s
sed -i 's/devops.ef.com/<FQDN>/g' Ingress/traefik/ef-grafana-monitroing-Ingress.yaml
For RKE
sed -i 's/devops.ef.com/<FQDN>/g' Ingress/nginx/ef-grafana-monitroing-Ingress.yaml
Helm Installation Command
Run the following Helm command to install the monitoring solution.
helm upgrade --namespace monitoring --install=true kube-stack-prometheus --values=kube-prometheus-stack/values-small.yaml kube-prometheus-stack
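After the release is installed, you can check its status:
helm list -n monitoring
helm status kube-stack-prometheus -n monitoring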
Multi-node cluster
Run the following command to update the FQDN in the values and ingress files to your desired one. Replace devops207.ef.com in the command with your Master Node FQDN or VIP.
sed -i 's/devops.ef.com/devops207.ef.com/g' kube-prometheus-stack/values-large.yaml Ingress/nginx/ef-grafana-monitroing-Ingress.yaml
Run the following Helm command to install the monitoring solution.
helm upgrade --namespace monitoring --install=true kube-stack-prometheus --values=kube-prometheus-stack/values-large.yaml kube-prometheus-stack
For K3s
Apply the ingress to access Grafana with your FQDN.
kubectl apply -f Ingress/traefik/ef-grafana-monitroing-Ingress.yaml
For RKE
Apply the ingress to access Grafana with your FQDN.
kubectl apply -f Ingress/nginx/ef-grafana-monitroing-Ingress.yaml
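To confirm that the ingress was created and carries your FQDN:
kubectl get ingress -n monitoring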
Verify that all the required resources are deployed and running.
kubectl get -n monitoring crds
kubectl get -n monitoring servicemonitor
kubectl get pods -n monitoring
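Optionally, wait for all pods in the namespace to become ready before proceeding:
kubectl -n monitoring wait --for=condition=Ready pods --all --timeout=300s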
Once verified, the deployment is complete.
Grafana will be accessible at https://{FQDN}/monitoring
The username and password for logging into Grafana are:
Username: admin
Password: Expertflow123#
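If the ingress is not reachable yet, Grafana can also be accessed through a port-forward. The service name kube-stack-prometheus-grafana below is an assumption derived from the release name; confirm it with kubectl get svc -n monitoring first.
kubectl -n monitoring port-forward svc/kube-stack-prometheus-grafana 3000:80
Grafana will then be available at http://localhost:3000 with the same credentials.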
Rancher Monitoring adjustments for Grafana
If the Rancher monitoring solution is deployed, use the snippet below to enable ingress access for Grafana.
grafana.ini:
  adminPassword: Expertflow123#
  auth:
    disable_login_form: false
  auth.anonymous:
    enabled: true
    org_role: Viewer
  auth.basic:
    enabled: false
  dashboards:
    default_home_dashboard_path: /tmp/dashboards/rancher-default-home.json
  security:
    allow_embedding: true
  users:
    auto_assign_org_role: Viewer
  server:
    domain: devops67.ef.com
    # root_url: "%(protocol)s://%(domain)s/"
    root_url: https://devops67.ef.com/monitoring/
    serve_from_sub_path: true
The ingress given below should work for exposing the Rancher monitoring Grafana at the /monitoring path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels: {}
  name: rancher-monitoring-grafana
  namespace: cattle-monitoring-system
spec:
  rules:
  - host: <FQDN>
    http:
      paths:
      - backend:
          service:
            name: rancher-monitoring-grafana
            port:
              number: 80
        path: /monitoring
        pathType: Prefix
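Save the manifest above to a file, e.g. rancher-grafana-ingress.yaml (hypothetical file name), substitute <FQDN> with your FQDN, and apply it:
kubectl apply -f rancher-grafana-ingress.yaml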
Grafana can then be accessed at https://<FQDN>/monitoring.
Grafana Password Problem
Sometimes the password is not picked up by the Grafana pod. In that case, it can be reset by executing the following command inside the Grafana pod:
kubectl -n cattle-monitoring-system exec -ti rancher-monitoring-grafana-0 -- grafana-cli admin reset-admin-password <mynewpassword>