Deployment of Logging Operator
Architecture
The logging-operator is a log aggregator that collects and aggregates logs from the EF-CX solution and routes them to a centralized console for analysis using Elasticsearch/Kibana (the ELK stack). The logging-operator is deployed once per Kubernetes cluster, and log routing is performed on a per-namespace basis.
This guide assumes that you have already deployed the ELK stack by following ELK for Logs Analysis. Use the appropriate credentials.
Add the Expertflow Helm repository:
helm repo add expertflow https://expertflow.github.io/charts/
Update the Helm repository:
helm repo update expertflow
Install the logging-operator:
helm upgrade --install --namespace=logging --create-namespace logging-operator expertflow/logging-operator
Verify that the pods are running:
kubectl -n logging get pods
Logging-operator configuration
Create the Fluent Bit configuration:
kubectl apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: expertflow-fluentbit-agent
  namespace: logging
spec:
  filterKubernetes:
    Kube_URL: "https://kubernetes.default.svc:443"
  bufferStorage:
    storage.path: /buffers
  bufferStorageVolume:
    hostPath:
      path: ""
  bufferVolumeImage: {}
  inputTail:
    storage.type: filesystem
  positiondb:
    hostPath:
      path: ""
  resources: {}
  updateStrategy: {}
EOF
Create the Fluentd configuration:
kubectl apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: expertflow-fluentd-logging
  namespace: logging
spec:
  enableRecreateWorkloadOnImmutableFieldChange: true
  controlNamespace: logging
  fluentd:
    scaling:
      drain:
        enabled: true
      replicas: 1
    bufferStorageVolume:
      pvc:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 40Gi
          volumeMode: Filesystem
EOF
Repeat the steps below for every namespace from which logs are to be collected and sent to the ELK stack.
Create the Elasticsearch password secret
Execute the following command to create the secret in each namespace. Replace <ELASTICSEARCH_PASSWORD> with the password that was used to set up ELK for Logs Analysis:
kubectl -n <namespace> create secret generic elastic-password --from-literal=password=<ELASTICSEARCH_PASSWORD>
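When several namespaces ship logs, the secret can be created in a loop. A minimal sketch, assuming example namespaces ef-cx and ef-external (replace with the namespaces used in your cluster):

```shell
# Example namespaces only -- substitute your own.
# "echo" prints each command for review; remove it to actually run them.
for NS in ef-cx ef-external; do
  echo kubectl -n "$NS" create secret generic elastic-password \
    --from-literal=password="<ELASTICSEARCH_PASSWORD>"
done
```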
Create the logging Flow for the logging-operator.
Replace the placeholders below with actual values before applying the manifest:
<NAMESPACE>
<CLUSTER_NAME>
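Placeholder substitution can also be scripted instead of edited by hand. A minimal sed-based sketch; the namespace and cluster name used here are examples, not values from this guide:

```shell
# Hypothetical helper: replace <NAMESPACE> and <CLUSTER_NAME> on stdin.
# Pipe a saved copy of the manifest through it, then into kubectl apply -f -
render() {
  sed -e "s/<NAMESPACE>/$1/g" -e "s/<CLUSTER_NAME>/$2/g"
}
printf 'name: <NAMESPACE>-flow-es\ncluster: <CLUSTER_NAME>\n' | render ef-cx prod-cluster
# prints:
# name: ef-cx-flow-es
# cluster: prod-cluster
```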
kubectl apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: <NAMESPACE>-flow-es
  namespace: <NAMESPACE>
  labels:
    ef: expertflow
spec:
  filters:
    - record_modifier: # useful if you e.g. have multiple clusters
        records:
          - cluster: "<CLUSTER_NAME>"
    - record_transformer:
        enable_ruby: true
        records:
          - message: ${record["message"].gsub(/\e\[([;\d]+)?m/, '')}
    # replaces dots in labels and annotations with dashes to avoid mapping issues
    # (app=foo (text) vs. app.kubernetes.io/name=foo (object));
    # fixes error: existing mapping for [kubernetes.labels.app] must be of type object but found [text]
    - dedot:
        de_dot_separator: "-"
        de_dot_nested: true
  localOutputRefs:
    - <NAMESPACE>-output-es
EOF
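The record_transformer filter above uses Ruby's gsub to strip ANSI colour escape sequences (such as \e[31m) from the message field, so coloured container logs do not pollute the Elasticsearch index. The same transformation, sketched in shell with sed for illustration only:

```shell
# Strip ANSI colour codes the same way the gsub filter does.
ESC=$(printf '\033')                               # the escape character, \e
printf '\033[31merror\033[0m occurred\n' | sed "s/${ESC}\[[0-9;]*m//g"
# prints: error occurred
```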
Create the Output for the logging-operator.
Replace these placeholders before applying the manifest:
<NAMESPACE>
<INDEX_NAME>
<ELASTICSEARCH_HOST_IP>
<PORT>
kubectl apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: <NAMESPACE>-output-es
  namespace: <NAMESPACE>
  labels:
    ef: ef-external
spec:
  elasticsearch:
    host: <ELASTICSEARCH_HOST_IP>
    port: <PORT> # e.g. 30920
    user: elastic
    index_name: <INDEX_NAME>
    password:
      valueFrom:
        secretKeyRef:
          name: elastic-password # the secret created earlier in this namespace
          key: password
    scheme: https
    ssl_verify: false
    # logstash_format: true # this creates its own index, so don't enable it
    include_timestamp: true
    reconnect_on_error: true
    reload_on_failure: true
    buffer:
      flush_at_shutdown: true
      type: file
      chunk_limit_size: 4M # determines the HTTP payload size
      total_limit_size: 1024MB # maximum total buffer size
      flush_mode: interval
      flush_interval: 10s
      flush_thread_count: 2 # parallel sending of logs
      overflow_action: block
      retry_forever: true # never discard buffer chunks
      retry_type: exponential_backoff
      retry_max_interval: 60s
      # enables logging of bad-request reasons in the fluentd log file (inside the pod, /fluentd/log/out)
      log_es_400_reason: true
EOF
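As a sanity check on the buffer settings above: with a 4 MB chunk limit and a 1024 MB total limit, the file buffer holds at most 256 chunks on disk before overflow_action: block pauses ingestion:

```shell
# total_limit_size / chunk_limit_size = maximum number of buffered chunks
echo $((1024 / 4))
# prints: 256
```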
Verify the status of all logging resources created:
kubectl get logging-all -A