Set Up Alerts via Gmail
Prerequisites
The monitoring solution must be deployed before setting up the alerts. Follow this guide to deploy the monitoring solution.
Change the directory:
cd cim-solution/kubernetes
Update the variables
Open values-small.yaml:
vi monitoring/kube-prometheus-stack/values-small.yaml
Update the following values:
additionalScrapeConfigs:
  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [icmp]  # Use the icmp module defined in blackbox.yml
    static_configs:
      - targets:
          - <Target IP>  # Replace with the IP you want to ping
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: prometheus-blackbox-exporter:9115  # Blackbox exporter address
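The relabeling above rewrites each scrape so Prometheus queries the blackbox exporter instead of the target itself: the original address becomes the `target` query parameter of the exporter's /probe endpoint, while the scrape address is replaced with `prometheus-blackbox-exporter:9115`. A minimal sketch of the probe URL Prometheus ends up requesting (the target IP here is a placeholder):

```python
from urllib.parse import urlencode

def probe_url(target: str, module: str = "icmp",
              exporter: str = "prometheus-blackbox-exporter:9115") -> str:
    """Build the /probe URL that results from the relabel_configs above."""
    query = urlencode({"module": module, "target": target})
    return f"http://{exporter}/probe?{query}"

print(probe_url("8.8.8.8"))
# http://prometheus-blackbox-exporter:9115/probe?module=icmp&target=8.8.8.8
```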
additionalPrometheusRulesMap:
  rule-name:
    groups:
      - name: my_group
        rules:
          - alert: InstanceDown
            expr: probe_success == 0
            for: 30s
            labels:
              severity: critical
            annotations:
              summary: "Instance {{ $labels.instance }} is down"
              description: "Instance {{ $labels.instance }} has been down for more than 30 seconds."
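The `for: 30s` clause means `probe_success == 0` must hold continuously for 30 seconds before InstanceDown fires; a single failed probe only moves the alert into a pending state. A rough illustration of that pending-to-firing behaviour (not Prometheus code, just a sketch):

```python
from typing import Optional

def alert_state(failure_start: Optional[float], now: float, hold: float = 30.0) -> str:
    """failure_start: timestamp when probe_success first became 0,
    or None if the probe is currently succeeding."""
    if failure_start is None:
        return "inactive"
    return "firing" if now - failure_start >= hold else "pending"

print(alert_state(None, 100.0))   # inactive
print(alert_state(90.0, 100.0))   # pending (down for 10s)
print(alert_state(60.0, 100.0))   # firing  (down for 40s)
```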
alertmanager:
  config:
    global:
      smtp_smarthost: 'smtp.gmail.com:<smtp port number>'
      smtp_require_tls: false
      smtp_auth_username: '<email ID>'
      smtp_auth_password: '<auth token/password>'
      smtp_from: '<email ID>'
      resolve_timeout: 5m
    inhibit_rules:
      - source_matchers:
          - 'severity = critical'
        target_matchers:
          - 'severity =~ warning|info'
        equal:
          - 'namespace'
          - 'alertname'
      - source_matchers:
          - 'severity = warning'
        target_matchers:
          - 'severity = info'
        equal:
          - 'namespace'
          - 'alertname'
      - source_matchers:
          - 'alertname = InfoInhibitor'
        target_matchers:
          - 'severity = info'
        equal:
          - 'namespace'
    route:
      group_by: ['namespace']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 15m
      receiver: 'email'
      routes:
        - receiver: 'email'
          matchers:
            - severity =~ "critical"
        - receiver: 'null'
          matchers:
            - alertname =~ "InfoInhibitor|Watchdog"
    receivers:
      - name: 'null'
      - name: 'email'
        email_configs:
          - to: '<receiver email ID>'
            from: '<email ID>'
            smarthost: 'smtp.gmail.com:<smtp port number>'
            require_tls: false
            auth_username: '<email ID>'
            auth_password: '<auth token/password>'
    templates:
      - '/etc/alertmanager/config/*.tmpl'
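With this routing tree, alerts are grouped by namespace and delivered to the `email` receiver, while `InfoInhibitor` and `Watchdog` alerts are silently discarded via the `null` receiver. A simplified sketch of how the matchers select a receiver (illustration only, not Alertmanager's actual implementation):

```python
import re

def pick_receiver(labels: dict) -> str:
    """Mimic the route tree above: the first matching child route wins,
    otherwise the root route's default receiver is used."""
    if re.fullmatch("critical", labels.get("severity", "")):
        return "email"
    if re.fullmatch("InfoInhibitor|Watchdog", labels.get("alertname", "")):
        return "null"
    return "email"  # default receiver on the root route

print(pick_receiver({"alertname": "InstanceDown", "severity": "critical"}))  # email
print(pick_receiver({"alertname": "Watchdog"}))                              # null
```

Note that Gmail requires an app password (generated under Google Account security settings) rather than the account password when two-factor authentication is enabled; use it as the `smtp_auth_password` value above.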
grafana:
  enabled: true
  namespaceOverride: ""
  #EXPERTFLOW
  grafana.ini:
    smtp:
      enabled: true
      host: smtp.gmail.com:<smtp port number>
      user: <email ID>
      password: <auth token/password>
      skip_verify: true
      from_address: <email ID>
      from_name: Grafana
    server:
      domain: <FQDN>
      #root_url: "%(protocol)s://%(domain)s/"
      root_url: https://<FQDN>/monitoring/
Update the monitoring solution
Update the monitoring solution using the following commands:
cd monitoring/
helm upgrade --namespace monitoring --install=true kube-stack-prometheus --values=kube-prometheus-stack/values-small.yaml kube-prometheus-stack
Pull the blackbox Helm chart
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm pull prometheus-community/prometheus-blackbox-exporter --untar
Open the values.yaml file:
vi prometheus-blackbox-exporter/values.yaml
Update the following values:
config:
  modules:
    http_2xx:
      prober: http
      timeout: 5s
      http:
        valid_http_versions: ["HTTP/1.1", "HTTP/2"]
        valid_status_codes: []  # Defaults to 2xx
        follow_redirects: true
    icmp:
      prober: icmp
      timeout: 5s
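Note that ICMP probes need raw-socket privileges inside the exporter pod. Recent versions of the prometheus-blackbox-exporter chart expose an allowIcmp flag for this; verify the exact key in your chart version's values.yaml before relying on it:

```yaml
# Grants the exporter the capability needed by the icmp prober.
# The key name may differ between chart versions -- check values.yaml.
allowIcmp: true
```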
Deploy the blackbox exporter
Deploy the blackbox exporter using the following command:
helm upgrade --namespace monitoring --install=true prometheus-blackbox-exporter --values=prometheus-blackbox-exporter/values.yaml prometheus-blackbox-exporter