Background: I use glog to register a signal handler, but it cannot kill the init process (PID 1) with the kill syscall. As a result, even when a deadly signal such as SIGABRT is raised, the Kubernetes controller manager has no way of knowing that the pod is no longer functioning, and therefore never kills the pod and starts a new one.
My idea is to add logic to my readiness/liveness probe: check the log output of the current container and decide whether it is in a healthy state.
I've tried looking at the logs on the container's local filesystem under /var/log, but haven't found anything useful there.
I'm wondering if it's possible to issue an HTTP request to somewhere to get the complete log? I assume it's stored somewhere.
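For reference, this is roughly the kind of check I have in mind: a probe-side sketch in Go that pulls the container's own log through the Kubernetes API server (the same endpoint `kubectl logs` uses), assuming the pod's service account is allowed to read pods/log and that the POD_NAMESPACE / POD_NAME / CONTAINER_NAME variables are injected via the Downward API and the container spec (those names and the "SIGABRT" check are illustrative, not from any existing setup):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchOwnLogs reads this pod's container log from the API server:
//   GET /api/v1/namespaces/{ns}/pods/{pod}/log?container={name}&tailLines=100
// It authenticates with the pod's service account token, which requires
// RBAC permission to get the pods/log subresource.
func fetchOwnLogs(namespace, pod, container string) (string, error) {
	token, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		return "", err
	}
	url := fmt.Sprintf(
		"https://kubernetes.default.svc/api/v1/namespaces/%s/pods/%s/log?container=%s&tailLines=100",
		namespace, pod, container)
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Authorization", "Bearer "+strings.TrimSpace(string(token)))

	// For brevity this skips loading the cluster CA
	// (/var/run/secrets/kubernetes.io/serviceaccount/ca.crt);
	// a real probe should verify it instead of skipping TLS checks.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	logs, err := fetchOwnLogs(os.Getenv("POD_NAMESPACE"), os.Getenv("POD_NAME"), os.Getenv("CONTAINER_NAME"))
	if err != nil || strings.Contains(logs, "SIGABRT") {
		os.Exit(1) // report unhealthy so the kubelet restarts the container
	}
	os.Exit(0)
}
```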
CodePudding user response:
You can find the Kubernetes container logs on the node that runs the pod at:
/var/log/pods
and, if the node uses the Docker runtime, also under:
/var/lib/docker/containers
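Note that those directories live on the node's filesystem, not inside your container, so a probe can only read them if the directory is mounted in (for example via a hostPath volume). A rough sketch of such a file-based check, assuming the host's /var/log/pods is mounted into the container at the same path and using the `<namespace>_<podname>_<uid>/<container>/*.log` layout written by the kubelet (env var names and the "SIGABRT" check are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Layout under /var/log/pods is roughly:
	//   /var/log/pods/<namespace>_<podname>_<uid>/<container>/0.log
	// This assumes the host directory is mounted into the probe's container.
	pattern := fmt.Sprintf("/var/log/pods/%s_%s_*/%s/*.log",
		os.Getenv("POD_NAMESPACE"), os.Getenv("POD_NAME"), os.Getenv("CONTAINER_NAME"))
	files, err := filepath.Glob(pattern)
	if err != nil || len(files) == 0 {
		os.Exit(1) // no log file found: treat as unhealthy
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue
		}
		if strings.Contains(string(data), "SIGABRT") {
			os.Exit(1) // fatal signal seen in the log: report unhealthy
		}
	}
	os.Exit(0)
}
```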
CodePudding user response:
Containers are Ephemeral
Docker containers emit logs to the stdout and stderr output streams. Because containers are stateless, the logs are stored on the Docker host in JSON files by default.
The default logging driver is json-file. The logs are then annotated with the log origin, either stdout or stderr, and a timestamp. Each log file contains information about only one container.
As @Uri Loya said, you can find these JSON log files in the /var/lib/docker/containers/ directory on a Linux Docker host. Here's how you can access them:
/var/lib/docker/containers/<container id>/<container id>-json.log
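Each line in that file is a small JSON object with log, stream, and time fields. A minimal sketch of reading such a file, assuming you substitute the real container id for the placeholder path:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// jsonLogLine models one line of Docker's json-file driver output, e.g.
// {"log":"some message\n","stream":"stderr","time":"2022-01-01T00:00:00.000000000Z"}
type jsonLogLine struct {
	Log    string `json:"log"`
	Stream string `json:"stream"`
	Time   string `json:"time"`
}

func main() {
	// Placeholder path; replace <container id> with the actual id.
	f, err := os.Open("/var/lib/docker/containers/<container id>/<container id>-json.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		var line jsonLogLine
		if err := json.Unmarshal(scanner.Bytes(), &line); err != nil {
			continue // skip lines that are not valid JSON
		}
		fmt.Printf("[%s %s] %s", line.Time, line.Stream, line.Log)
	}
}
```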
You can collect the logs with a log aggregator and store them in a place where they'll be available forever. It's dangerous to keep logs on the Docker host because they can build up over time and eat into your disk space. That's why you should use a central location for your logs and enable log rotation for your Docker containers.