Mount Pod Logs to Desired Volume directory

Time: 11-11

I am trying to mount my Pod logs directory from /var/log/pods to a local node volume /var/data10.

Deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-counter
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      nodeSelector:
        kubernetes.io/hostname: kworker3
      containers:
      - name: count
        image: busybox
        args: [/bin/sh, -c,
          'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
        ports:
        - containerPort: 80
        volumeMounts:
        - name: dirvol
          mountPath: "/var/log/containers"
          readOnly: true
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
      volumes:
      - name: dirvol
        persistentVolumeClaim:
          claimName: nginx-pvc
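For reference, the container's command is just a counter loop writing to stdout. A bounded version (three iterations here, so it terminates) can be run locally to see what ends up in the container log:

```shell
# Bounded version of the container's counter loop
# (the Deployment uses `while true` to run it forever).
i=0
while [ "$i" -lt 3 ]; do
  echo "$i: $(date)"
  i=$((i+1))
done
```

Each of these lines is what the container runtime captures on the node as the pod's log stream.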

PV PVC file:

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: nginx-pv
  namespace: default
spec:
  storageClassName: nginx-sc
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/var/data10"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: nginx-sc
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nginx-sc
  namespace: default
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---

Terminal Window:

us@kworker3:~$ cd /var/log/pods/default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b/
us@kworker3:/var/log/pods/default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b$ cd count/
us@kworker3:/var/log/pods/default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b/count$ ls
0.log
us@kworker3:/var/log/pods/default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b/count$ cd
us@kworker3:~$ 
us@kworker3:~$ 
us@kworker3:~$ 
us@kworker3:~$ cd /var/data10
us@kworker3:/var/data10$ cd default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b/
us@kworker3:/var/data10/default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b$ ls
us@kworker3:/var/data10/default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b$ ls

I am trying to take the log file 0.log and place it in the persistent volume /var/data10, but as you can see, it is empty.

I know I could have used a logging agent like fluentd to grab my container logs, but I am trying to get them this way instead.

Please note that I am trying to apply this scenario to a real web application. Kubernetes pods normally write their logs to the /var/log/containers directory on the node, and my goal is to mount the container's log file onto the host disk (/var/data10), so that when the pod is deleted I still have the logs inside my volume.

CodePudding user response:

Symbolic links do not work with hostPath (the files under /var/log/containers are symlinks). Use tee to make a copy in the pod (echo ... | tee /pathInContainer/app.log), where /pathInContainer is mounted on the /var/data10 hostPath volume. If tee is not ideal, your best bet is running a log agent as a sidecar.
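A minimal sketch of that suggestion, reusing the question's existing nginx-pvc/hostPath pair. The container mountPath /data and the file name app.log are illustrative assumptions, not from the original manifests:

```yaml
containers:
- name: count
  image: busybox
  args: [/bin/sh, -c,
    'i=0; while true; do echo "$i: $(date)" | tee -a /data/app.log; i=$((i+1)); sleep 1; done']
  volumeMounts:
  - name: dirvol
    mountPath: /data   # backed by nginx-pvc -> hostPath /var/data10; must NOT be readOnly
```

Because tee also forwards each line to stdout, `kubectl logs` keeps working while a copy accumulates in /var/data10/app.log on the node.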

Note that your PV's hostPath.path: "/var/data10" will not contain any data, because your stdout is not saved there. Mounting this hostPath into the container at "/var/log/containers" serves no purpose: container logs are written by the runtime on the node, not by your process inside the container.
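To see why tee is the suggested workaround: it duplicates the stream, so the line still reaches stdout (which the kubelet captures) while a copy is appended to the file. The path /tmp/app.log here is illustrative:

```shell
# tee writes the line to stdout AND appends it to the file.
echo "0: hello" | tee -a /tmp/app.log
# The copy is now on disk as well.
cat /tmp/app.log
```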
