
When I run

$ kubectl logs <container>

I get the logs of my pods.

But where are the files for those logs?

Some sources say /var/log/containers/, others say /var/lib/docker/containers/, but I couldn't find my actual application's or pod's logs in either.

– gcstr

6 Answers

35 votes

Short Answer:

If you're using Docker, the stdout from each container is stored in /var/lib/docker/containers. But Kubernetes also creates a directory structure to help you find logs based on Pods, so you can find the container logs for each Pod running on a node at /var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/.
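
For instance, on a node you might see something along these lines (the namespace, Pod name, and UID below are made up):

$ ls /var/log/pods/
default_my-app-5d9c7b6f4-x2x9z_0a1b2c3d-4e5f-6789-abcd-ef0123456789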


Longer Answer:

Docker captures the stdout logs from each container and stores them in /var/lib/docker/containers on the host. If Kubernetes uses Docker as the container runtime, Docker will also store the container logs in that location on the Kubernetes node. But since we don't run containers directly in Kubernetes (we run Pods), Kubernetes also creates the /var/log/pods/ and /var/log/containers/ directories to help us better organize the log files based on Pods.
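
With Docker's default json-file log driver, each captured line of stdout/stderr is stored as one JSON object per line. A sample (the contents here are illustrative):

$ sudo head -n 1 /var/lib/docker/containers/<container_id>/<container_id>-json.log
{"log":"hello world\n","stream":"stdout","time":"2017-12-21T18:00:00.000000000Z"}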

Each directory within /var/log/pods/ stores the logs for a single Pod, and each is named using the structure <namespace>_<pod_name>_<pod_id>.

You can get the ID of a Pod by running:

$ kubectl get pod -n core gloo-76dffbd956-rmvdz -o jsonpath='{.metadata.uid}'

If you're used to using yq, you may find this more straightforward:

$ kubectl get pod <pod_name> -o yaml | yq r - metadata.uid

Within each /var/log/pods/<namespace>_<pod_name>_<pod_id>/ directory are further directories, one for each container in the Pod, each named after its container. Lastly, when we look inside a /var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/ directory, we'll find symbolic links to the log files stored by Docker inside /var/lib/docker/containers.
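
A quick way to verify this on the node (placeholders as above; output trimmed, and on Kubernetes 1.14+ the file is typically named 0.log, counting up with restarts):

$ ls -l /var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/
0.log -> /var/lib/docker/containers/<container_id>/<container_id>-json.log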

Similarly, inside the /var/log/containers/ directory are symlinks into the /var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/ directories. These symlinks are named using the structure <pod_name>_<namespace>_<container_name>-<container_id>.log.
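
Again on the node, those symlinks look roughly like this (output trimmed, placeholders as above):

$ ls -l /var/log/containers/
<pod_name>_<namespace>_<container_name>-<container_id>.log -> /var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/0.log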

– d4nyll
    What tool on Kubernetes side manages the creation of /var/log/pods and /var/log/containers, and symlinks as containers are created/deleted? – Kevin Burke Aug 25 '22 at 18:46
7 votes

Do you see anything in those directories?

In my clusters, the stdout/stderr logs from each pod are in /var/log/containers, however there is some linking/redirection:

/var/log/containers/<pod-name>_<namespace>_<container-name-container-id>.log -> /var/log/pods/<some-uuid>/<container-name>_0.log

And that log is actually linked into /var/lib/docker:

<container-name>_0.log -> /var/lib/docker/containers/<container-id>/<container-id>-json.log
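
If you want to resolve the whole chain in one go, readlink -f follows the links down to the actual Docker file (placeholders as above):

$ readlink -f /var/log/containers/<pod-name>_<namespace>_<container-name-container-id>.log
/var/lib/docker/containers/<container-id>/<container-id>-json.log
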
– erk
5 votes

The on-disk filename comes from

docker inspect $pod_name_or_sha | jq -r '.[0].LogPath'

assuming the docker daemon's configuration is the default {"log-driver": "json-file"}, which is almost guaranteed to be true if kubectl logs behaves correctly.
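
You can check the daemon-wide driver directly; if this doesn't print json-file, LogPath may well be empty (as happens with the journald driver, per the comments below):

$ docker info --format '{{.LoggingDriver}}'
json-file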

This may also go without saying, but you must be on the Node the Pod was scheduled onto for either docker inspect, or sniffing around for log files on disk, to do anything helpful. kubectl describe pod $pod_name will render the Node name, or, as you might suspect, it'll be in kubectl get -o json pod $pod_name if you wish to acquire it programmatically.
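
For example, to pull out just the node name, a jsonpath query saves you parsing the JSON yourself:

$ kubectl get pod $pod_name -o jsonpath='{.spec.nodeName}'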

– mdaniel
  • Hey Matthew. Thanks a lot for your answer! But in my inspect, `LogPath` is empty. Do you know if I'm missing some config? How can I set the `LogPath` properly? – gcstr Dec 21 '17 at 18:24
  • That's super weird; can you see what the logging config is for that container? `docker inspect pod-or-sha | jq '.[0].HostConfig.LogConfig'`; separately, it might even be interesting to know if that one differs from the rest, so: `docker inspect $(docker ps -aq) | jq '.[].HostConfig.LogConfig'` – mdaniel Dec 21 '17 at 21:04
  • `.[0].HostConfig.LogConfig` is set to `{"Type": "journald","Config": {}}` – gcstr Dec 21 '17 at 21:28
  • Ah, that'll do it. Fancy, I had no idea that kubelet was smart enough to get logs from anywhere but on disk. Anyway, in which case the answer to your question is that the **files** are (likely) in `/var/log/journal/$(cat /etc/machine-id)` but that's just the pedantic answer since those files are binary; the log content is accessible via `journalctl -u ${container_unit}` but I have no experience to know what the `${container_unit}` would be. `systemctl list-units` should provide the container unit names. – mdaniel Dec 21 '17 at 21:46
  • Hi Matthew. I managed to make it work by changing the docker's log driver from journal to json. Now I can see my application log files. Thank you very much for your support! – gcstr Dec 22 '17 at 17:40
5 votes

It depends on the Kubernetes version:

  • before 1.14: /var/log/pods/<pod_id>/<name>/<num>.log
  • 1.14 or later: /var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/<num>.log (see this PR)

The files above are both symbolic links to the Docker log files, e.g. /var/lib/docker/containers/<container-id>/<container-id>-json.log.

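For example, on a 1.14+ node the chain might look like this (all names and IDs here are hypothetical):

$ ls /var/log/pods/default_my-app-5d9c7b6f4-x2x9z_0a1b2c3d-4e5f-6789-abcd-ef0123456789/my-app/
0.log
$ readlink /var/log/pods/default_my-app-5d9c7b6f4-x2x9z_0a1b2c3d-4e5f-6789-abcd-ef0123456789/my-app/0.log
/var/lib/docker/containers/<container-id>/<container-id>-json.log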

– NiYanchun
0 votes

Inside my pods there's no log directory under /var/; instead there are directories named spool, www, etc. Where do I find the logs, given that I can get them by running kubectl logs?

  • Your answer could be improved with additional supporting information. Please [edit] to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers [in the help center](/help/how-to-answer). – Community Dec 14 '22 at 11:33
  • This does not really answer the question. If you have a different question, you can ask it by clicking [Ask Question](https://stackoverflow.com/questions/ask). To get notified when this question gets new answers, you can [follow this question](https://meta.stackexchange.com/q/345661). Once you have enough [reputation](https://stackoverflow.com/help/whats-reputation), you can also [add a bounty](https://stackoverflow.com/help/privileges/set-bounties) to draw more attention to this question. - [From Review](/review/late-answers/33379841) – Koedlt Dec 14 '22 at 21:22
-1 votes

Logs are managed by the kubelet on each node. When you run kubectl logs <pod>, the request is passed to the kubelet on the node where your pod is running, which reads the associated log file.

You can see the architecture here
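
One way to see this path in action: kubectl logs is essentially a call to the pods/log subresource of the API server, which in turn fetches the logs from the kubelet on the pod's node. The raw call below should return the same output as kubectl logs:

$ kubectl get --raw "/api/v1/namespaces/<namespace>/pods/<pod_name>/log"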

  • Neither your answer nor your link says much, if anything, about the file paths or their filenames. – philraj Jun 25 '20 at 20:31
  • Logs are not managed by the kubelet; rather, the json driver (which is the default) reads the stdout stream and dumps it to a location that is later picked up by agents. In the case of kubectl logs, the kubelet reads the logs from that dumped location and displays them on the terminal. – PhiberOptixz Sep 14 '21 at 12:27