I am trying to add a sidecar container to an existing pod (webapp-1) to save the logs. However, I am getting an error after creating the pod: the pod keeps crashing and its status changes to Error.
For the question below I have added my YAML file. Please let me know if it is correct.
Add a sidecar container to the running pod logging-pod with the below specification:
The image of the sidecar container is busybox and the container outputs the logs as below:
tail -n 1 /var/log/k8slog/application.log
The container shares the volume logs with the application container, which mounts it at the directory /var/log/k8slog.
Do not alter the application container, and verify the logs are written properly to the file.
Here is the YAML file. I don't understand where I am making a mistake.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-10-25T07:54:07Z"
  labels:
    name: webapp-1
  name: webapp-1
  namespace: default
  resourceVersion: "3241"
  uid: 8cc29748-7879-4726-ac60-497ee41f7bd6
spec:
  containers:
  - image: kodekloud/event-simulator
    imagePullPolicy: Always
    name: simple-webapp
    command:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/k8slog/application.log
        echo "$(date) INFO $i" >>;
        i=$((i 1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n 1 /var/log/k8slog/application.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    ports:
    - containerPort: 8080
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-fgstk
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: controlplane
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: varlog
    mountPath: /var/log
  - name: default-token-fgstk
    secret:
      defaultMode: 420
      secretName: default-token-fgstk
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-10-25T07:54:07Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
CodePudding user response:
First of all, you need to create the log directory and the logfile itself. If the count-log-1 container spins up first, it will have nothing to read and will exit with an error. A good practice for this is to use an Init Container: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
Second, the containers need to share a volume on which the logfile will be present. If there is no need to persist the data, an emptyDir volume is enough: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
Finally, you had some errors in the shell commands. Full .yaml file:
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: webapp-1
  name: webapp-1
  namespace: default
spec:
  # Init container for creating the log directory and file
  # on the emptyDir volume, which will be passed to the containers
  initContainers:
  - name: create-log-file
    image: busybox
    command:
    - sh
    - -c
    - |
      #!/bin/sh
      mkdir -p /var/log/k8slog
      touch /var/log/k8slog/application.log
    # Mount varlog volume to the Init container
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  containers:
  - image: kodekloud/event-simulator
    imagePullPolicy: Always
    name: simple-webapp
    command:
    - sh
    - -c
    - |
      i=0
      while true; do
        echo "$i: $(date)" >> /var/log/k8slog/application.log
        echo "$(date) INFO $i"
        i=$((i+1))
        sleep 1
      done
    # Mount varlog volume to simple-webapp container
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox
    command:
    - sh
    - -c
    - |
      tail -f -n 1 /var/log/k8slog/application.log
    # Mount varlog volume to count-log-1 container
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  # Define an emptyDir shared volume
  volumes:
  - name: varlog
    emptyDir: {}
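Note that you cannot add a container to a pod that is already running, so delete the old pod and recreate it from the fixed manifest, then check that the sidecar can read the file. A rough sketch of the verification, assuming the manifest above is saved as webapp-1.yaml (the filename is just an example):

# Recreate the pod (container specs of a running pod cannot be patched in place)
kubectl replace --force -f webapp-1.yaml

# Follow the sidecar's output, which tails the shared log file
kubectl logs webapp-1 -c count-log-1 -f

# Or inspect the file directly inside the sidecar container
kubectl exec webapp-1 -c count-log-1 -- tail -n 5 /var/log/k8slog/application.log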