How to configure DaemonSet Fluentd to read from custom log file and console

Time:12-02

I'm trying to configure EFK stack in my local minikube setup. I have followed this tutorial.

Everything is working fine (I can see all my console logs in Kibana and Elasticsearch). But I have another requirement: a Node.js application that writes its logs as files to the custom path /var/log/services/dev inside the pod.

File Tree:

/var/log/services/dev/# ls -l
total 36
-rw-r--r--    1 root     root         28196 Nov 27 18:09 carts-service-dev.log.2021-11-27T18.1
-rw-r--r--    1 root     root          4483 Nov 27 18:09 carts-service-dev.log.2021-11-27T18

How can I configure Fluentd to read all my console logs and also the logs written to this custom path?

My App Deployment File:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: carts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: carts
  template:
    metadata:
      labels:
        app: carts
    spec:
      containers:
        - name: app
          image: carts-service
          resources:
            limits:
              memory: "1024Mi"
              cpu: "500m"
          ports:
            - containerPort: 4000

My Fluentd DaemonSet File:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch.default"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"

I know that log files written to the custom path /var/log/services/dev will be lost if the pod crashes, so I would have to use a persistent volume to mount this path.

But I don't know how to create a persistent volume for that path, or how to point Fluentd at it.

Thanks in advance.

CodePudding user response:

If a pod crashes, all logs that have already been shipped will still be accessible in EFK. There is no need to add a persistent volume to your application pod just for storing log files.

The main question is how to get the logs out of this file. There are two main approaches, both suggested in the Kubernetes documentation:

  1. Use a sidecar container.

    Containers in a pod can share volumes, so a sidecar container can stream the logs from the file to its own stdout and/or stderr (depending on the implementation); the logs are then picked up by the kubelet like any other container output.

    See "Streaming sidecar container" in the Kubernetes documentation for an example of how this works.

  2. Use a sidecar container with a logging agent.

    See "Sidecar container with a logging agent" in the Kubernetes documentation for a configuration example using fluentd. In this case the logs are collected directly by fluentd and won't be available via kubectl logs, since the kubelet is not responsible for them.
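For option 1, a streaming sidecar for the carts deployment above could look roughly like the sketch below. This is only an illustration: the busybox image, the emptyDir volume name, and the concrete log file name are assumptions (the real file names in the question are date-stamped and rotated, so you would need to adjust the tail target or use a wildcard-aware tool):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: carts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: carts
  template:
    metadata:
      labels:
        app: carts
    spec:
      containers:
        - name: app
          image: carts-service
          ports:
            - containerPort: 4000
          volumeMounts:
            - name: service-logs
              mountPath: /var/log/services/dev
        # Sidecar: tails the custom log file and writes it to its own stdout,
        # where the kubelet (and therefore the existing Fluentd DaemonSet)
        # picks it up like any other container log.
        - name: carts-logs
          image: busybox:1.35
          args:
            - /bin/sh
            - -c
            - tail -n +1 -F /var/log/services/dev/carts-service-dev.log
          volumeMounts:
            - name: service-logs
              mountPath: /var/log/services/dev
              readOnly: true
      volumes:
        # emptyDir shared between the app and the sidecar; it lives as long as
        # the pod does, which is enough because Fluentd ships logs continuously.
        - name: service-logs
          emptyDir: {}
```

Note that the application container must also mount the shared volume at /var/log/services/dev, otherwise the sidecar cannot see the files.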

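If you instead go with option 2 (a logging-agent sidecar running fluentd in the same pod), a minimal fluentd.conf sketch for tailing the custom path might look like this. The tag name, pos_file location, and wildcard pattern are assumptions; the Elasticsearch host and port are taken from the DaemonSet env vars in the question:

```
# Hypothetical fluentd.conf for a logging-agent sidecar (option 2).
<source>
  @type tail
  # Wildcard to match the rotated, date-stamped files shown in the listing.
  path /var/log/services/dev/*.log*
  pos_file /var/log/services/dev/carts.log.pos
  tag carts.dev
  read_from_head true
  <parse>
    @type none
  </parse>
</source>

<match carts.dev>
  @type elasticsearch
  host elasticsearch.default
  port 9200
  logstash_format true
</match>
```

The sidecar would mount the same shared volume as the app container, exactly as in the streaming-sidecar case, but ship the logs to Elasticsearch itself instead of relaying them through stdout.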