Deployment of Monitoring Solution Components
This document covers the process of deploying the monitoring solution stack for CIM. The stack consists of the following components:
- Prometheus
- Grafana
- Alertmanager
- Node Exporter
We will be using the Prometheus Operator to simplify the deployment and configuration of the monitoring stack on native Kubernetes.
The Prometheus Operator includes the following features:
- Kubernetes Custom Resources: use Kubernetes custom resources to deploy and manage Prometheus, Alertmanager, and related components.
- Simplified Deployment Configuration: configure the fundamentals of Prometheus, such as versions, persistence, retention policies, and replicas, from a native Kubernetes resource.
- Prometheus Target Configuration: automatically generate monitoring target configurations based on familiar Kubernetes label queries, with no need to learn a Prometheus-specific configuration language (see the sketch after this list).
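For illustration, here is a minimal ServiceMonitor sketch showing the label-query approach; the application name, port name, and release label below are placeholders for illustration, not part of the CIM stack:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app              # hypothetical name, for illustration only
  namespace: monitoring
  labels:
    release: monitoring          # label the Prometheus custom resource is assumed to select on
spec:
  selector:
    matchLabels:
      app: example-app           # scrape any Service carrying this label
  endpoints:
  - port: web                    # named Service port that exposes /metrics
    interval: 30s

Prometheus then discovers every Service matching app: example-app and scrapes its web port, with no hand-written scrape configuration.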
Prerequisites for the Monitoring Solution
1) Metrics Server is a required component; stats are incomplete unless Metrics Server is deployed and running.
2) Installation of the monitoring solution requires Helm.
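Both prerequisites can be checked up front. Metrics Server commonly runs in the kube-system namespace; adjust the namespace if yours differs:

# Confirm Metrics Server is deployed and serving node stats
kubectl get deployment metrics-server -n kube-system
kubectl top nodes
# Confirm Helm is available
helm version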
Clone the repository from GitLab
git clone -b <release_branch> https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/cim-solution.git
Replace <release_branch> with the name of the release branch, e.g. CIM-1.0-Beta-SR13.
Creating the monitoring namespace
kubectl create namespace monitoring
Deploying Monitoring Solution
Navigate into the cloned repository and change to the monitoring directory.
cd cim-solution/kubernetes/monitoring
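The stack itself is deployed with Helm from this directory. The release name and values file below are assumptions; check the repository's own README or values files for the exact invocation:

# Sketch only: release name "monitoring" and values.yaml are assumed, not confirmed by the repository
helm upgrade --install monitoring . --namespace monitoring -f values.yaml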
For K3s
Apply the ingress to access Grafana with your FQDN:
kubectl apply -f Ingress/traefik/ef-grafana-monitroing-Ingress.yaml
For RKE
Apply the ingress to access Grafana with your FQDN:
kubectl apply -f Ingress/nginx/ef-grafana-monitroing-Ingress.yaml
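On either distribution, you can confirm that the ingress object was created:

kubectl get ingress -n monitoring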
Verify that all the required resources are deployed and running (CRDs are cluster-scoped, so no namespace flag is needed):
kubectl get crds
kubectl get -n monitoring servicemonitor
kubectl get pods -n monitoring
Once all resources are running, the deployment is complete.
Grafana will be accessible at: https://{FQDN}/monitoring
The username and password for logging into Grafana are:
Username: admin
Password: Expertflow123#
Rancher Monitoring adjustments for Grafana
If the Rancher monitoring solution is deployed, use the snippet below to enable ingress access for Grafana:
grafana.ini:
  adminPassword: Expertflow123#
  auth:
    disable_login_form: false
  auth.anonymous:
    enabled: true
    org_role: Viewer
  auth.basic:
    enabled: false
  dashboards:
    default_home_dashboard_path: /tmp/dashboards/rancher-default-home.json
  security:
    allow_embedding: true
  users:
    auto_assign_org_role: Viewer
  server:
    domain: devops67.ef.com
    #root_url: "%(protocol)s://%(domain)s/"
    root_url: https://devops67.ef.com/monitoring/
    serve_from_sub_path: true
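These values belong to the rancher-monitoring Helm chart and can be applied either through the Rancher UI (editing the rancher-monitoring app's values) or with a Helm upgrade. A sketch follows; the repository alias and values file name are assumptions:

helm upgrade rancher-monitoring rancher-charts/rancher-monitoring \
  --namespace cattle-monitoring-system \
  --reuse-values \
  -f grafana-values.yaml   # assumed file containing the grafana.ini snippet above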
The ingress definition below should route traffic to the Rancher monitoring Grafana:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels: {}
  name: rancher-monitoring-grafana
  namespace: cattle-monitoring-system
spec:
  rules:
  - host: <FQDN>
    http:
      paths:
      - backend:
          service:
            name: rancher-monitoring-grafana
            port:
              number: 80
        path: /monitoring
        pathType: Prefix
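Save the manifest to a file and apply it; the filename here is an assumption:

kubectl apply -f rancher-grafana-ingress.yaml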
Grafana can then be accessed at https://<FQDN>/monitoring.
Grafana Password Problem
Sometimes the admin password is not picked up by the Grafana pod. It can be reset by running grafana-cli inside the pod:
kubectl -n cattle-monitoring-system exec -ti rancher-monitoring-grafana-0 -- grafana-cli admin reset-admin-password <mynewpassword>