I have an application designed to run as a K8s application, and it imports some dependencies (that I don't own) that run exec.Cmds. This is fine, except I want to capture those logs. For some reason, when I do:
r := bufio.NewReader(os.Stdout)
...
line, err := r.ReadString('\n')
An error is thrown saying that /dev/stdout is a bad file descriptor. How can this be? Isn't that the standard local destination for console output?
kubectl logs seems to be able to capture the output, and more specifically, our central log forwarder is able to capture it as well. But trying to capture logs from the kube API server inside the container that's actually generating those logs seems kinda silly... Is there a better way to do this?
CodePudding user response:
You should consider a multi-container pod with a main/app container and a log container as a sidecar that reads the logs from the main/app container. Consider the example from the link below, which shows how to tail logs from the main container:
https://learnk8s.io/sidecar-containers-patterns
CodePudding user response:
Generally, stdin is a read-only stream for retrieving input written to your program, while stdout is a write-only stream for sending output written by your program. In other words, nobody can read from /dev/stdout, except Chuck Norris.
By default, stdout is "pointing" to your terminal. But it is possible to redirect stdout from your terminal to a file. This redirection is set up before your program is started.
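You can reproduce the error from the question locally with a few lines of Go. This is only a minimal sketch: running it with stdout redirected (e.g. go run main.go > /dev/null, an illustrative command, not what the container runtime literally does) makes fd 1 write-only, so the read fails.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Try to read from stdout, exactly as in the question.
	r := bufio.NewReader(os.Stdout)
	_, err := r.ReadString('\n')

	// With "go run main.go > /dev/null" this prints something like:
	//   read from stdout: read /dev/stdout: bad file descriptor
	// because the redirected fd 1 is opened for writing only.
	fmt.Fprintln(os.Stderr, "read from stdout:", err)
}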
What usually happens is the following: the container runtime redirects the stdout of your container's process to a file on the node where your container is running (e.g., /var/log/containers/<container-name>-<container-id>.log). When you request logs with kubectl logs, kubectl connects to the kube-apiserver, which connects to the kubelet on the node running your container and asks it to send back the content of that log file.
Also take a look at https://kubernetes.io/docs/concepts/cluster-administration/logging/ which explains the various logging design approaches.
A solution which, from a security and portability perspective, you should definitely NOT implement, is to add a hostPath mount to your container that mounts the node's /var/log/containers directory, and to read the container log directly from there.
A proper solution might be to change the command of your image so that it writes output both to the stdout of your container and to a local file inside the container. This can be achieved with the tee command. Your application can then read the log back from this file. Keep in mind, though, that without proper rotation the log file will grow until your container is terminated.
apiVersion: v1
kind: Pod
metadata:
  name: log-to-stdout-and-file
spec:
  containers:
  - image: bash:latest
    name: log-to-stdout-and-file
    command:
    - bash
    - -c
    - '(while true; do date; sleep 10; done) | tee /tmp/test.log'
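With this in place, the application inside the container can read the log back from the local file. Here is a minimal Go sketch of that reader side; the /tmp/test.log path is taken from the manifest above, everything else (names, error handling) is illustrative. A real implementation would keep re-reading or follow the file instead of stopping at the first EOF.

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
)

func main() {
	// Open the file that tee writes to inside the container.
	f, err := os.Open("/tmp/test.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, "open log file:", err)
		os.Exit(1)
	}
	defer f.Close()

	// Read the captured output line by line up to the current end of file.
	r := bufio.NewReader(f)
	for {
		line, err := r.ReadString('\n')
		if len(line) > 0 {
			fmt.Print("captured: ", line)
		}
		if err != nil {
			if err != io.EOF {
				fmt.Fprintln(os.Stderr, "read log file:", err)
			}
			break
		}
	}
}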
A slightly more complex solution would be to replace the log file in the container with a named pipe created with mkfifo. This avoids the growing file size problem (as long as your application continuously reads the log from the named pipe).
apiVersion: v1
kind: Pod
metadata:
  name: log-to-stdout-and-file
spec:
  # the init container creates the fifo in an emptyDir mount
  initContainers:
  - image: bash:latest
    name: create-fifo
    command:
    - bash
    - -c
    - mkfifo /var/log/myapp/log
    volumeMounts:
    - name: ed
      mountPath: /var/log/myapp
  # the actual app uses tee to write the log to stdout and to the fifo
  containers:
  - image: bash:latest
    name: log-to-stdout-and-fifo
    command:
    - bash
    - -c
    - '(while true; do date; sleep 10; done) | tee /var/log/myapp/log'
    volumeMounts:
    - name: ed
      mountPath: /var/log/myapp
  # this sidecar container is only for testing purposes, it reads the
  # content written to the fifo (this is usually done by the app itself)
  #- image: bash:latest
  #  name: log-reader
  #  command:
  #  - bash
  #  - -c
  #  - cat /var/log/myapp/log
  #  volumeMounts:
  #  - name: ed
  #    mountPath: /var/log/myapp
  volumes:
  - name: ed
    emptyDir: {}
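The reader side for the named-pipe variant looks almost the same from the application's point of view. The sketch below assumes the /var/log/myapp/log fifo created by the init container above; note that opening a FIFO for reading blocks until a writer (here: tee) has opened it as well.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Opening the FIFO blocks until the writing side (tee) opens it too.
	pipe, err := os.Open("/var/log/myapp/log")
	if err != nil {
		fmt.Fprintln(os.Stderr, "open fifo:", err)
		os.Exit(1)
	}
	defer pipe.Close()

	// Lines arrive as the main process writes them; Scan returns false
	// once the writer closes its end of the pipe.
	scanner := bufio.NewScanner(pipe)
	for scanner.Scan() {
		fmt.Println("captured:", scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read fifo:", err)
	}
}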