
I have a Python (2.7) app which is started in my dockerfile:

CMD ["python","main.py"]

main.py prints some strings when it is started and goes into a loop afterwards:

print "App started"
while True:
    time.sleep(1)

As long as I start the container with the -it flag, everything works as expected:

$ docker run --name=myapp -it myappimage
> App started

And I can see the same output via logs later:

$ docker logs myapp
> App started

If I try to run the same container with the -d flag, the container seems to start normally, but I can't see any output:

$ docker run --name=myapp -d myappimage
> b82db1120fee5f92c80000f30f6bdc84e068bafa32738ab7adb47e641b19b4d1
$ docker logs myapp
$ (empty)

But the container still seems to be running:

$ docker ps
Container Status ...
myapp     up 4 minutes ... 

Attach does not display anything either:

$ docker attach --sig-proxy=false myapp
(working, no output)

Any ideas what's going wrong? Does "print" behave differently when run in the background?

Docker version:

Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.2
Git commit (client): a8a31ef
OS/Arch (client): linux/arm
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.2
Git commit (server): a8a31ef
Chris Martin
jpdus

13 Answers


Finally I found a solution to see Python output when running daemonized in Docker, thanks to @ahmetalpbalkan over at GitHub. Answering it here myself for future reference:

Using unbuffered output with

CMD ["python","-u","main.py"]

instead of

CMD ["python","main.py"]

solves the problem; you can now see the output (both stderr and stdout) via

docker logs myapp

Why -u? From the linked reference:

- print is indeed buffered, and docker logs will eventually give you that output, once enough of it has piled up
- executing the same script with python -u gives instant output, as said above
- import logging + logging.warning("text") gives the expected result even without -u

What python -u means, per python --help | grep -- -u:

-u     : force the stdout and stderr streams to be unbuffered;
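You can reproduce the effect of -u outside Docker with a pipe. A minimal sketch of my own (not from the answer): os._exit() skips the interpreter's exit-time buffer flush, standing in for a long-running daemon whose output sits in the buffer.

```python
import subprocess
import sys

# Child program: print to stdout, then hard-exit.
# os._exit() bypasses the normal exit-time flush, so anything
# still sitting in the stdout buffer is lost.
code = "import os\nprint('App started')\nos._exit(0)\n"

# Without -u: stdout attached to a pipe is fully buffered, so the text is lost
buffered = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True)

# With -u: stdout is unbuffered, so the text reaches the pipe immediately
unbuffered = subprocess.run([sys.executable, "-u", "-c", code],
                            capture_output=True, text=True)

print(repr(buffered.stdout))    # ''
print(repr(unbuffered.stdout))  # 'App started\n'
```

The same loss happens inside a container started with -d: the buffered copy of "App started" never reaches docker logs.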
Nam G VU
jpdus

  • -u seems to work for me, but is there some documentation somewhere with a description of what it actually does? – Little geek Dec 06 '16 at 10:56
  • As suggested by other answers, you can try setting environment variable `ENV PYTHONUNBUFFERED=0` in case the `-u` flag does not work. – Farshid T Jan 12 '17 at 11:32
  • This was my problem too. For a more detailed explanation, see http://stackoverflow.com/a/24183941/562883 – Jonathan Stray Apr 15 '17 at 21:27
  • Some more about `-u` here: https://stackoverflow.com/questions/107705/disable-output-buffering – cardamom Mar 20 '18 at 13:24
  • Works like a dream on python3, while setting PYTHONUNBUFFERED=0 wasn't helping. – Lech Migdal Dec 21 '18 at 20:20
  • Be aware `PYTHONUNBUFFERED=0` is no silver bullet. It comes with a performance penalty. – Rotareti Nov 21 '19 at 21:29
  • This worked for me using conda. What was weird was that if the script failed, I would get the exception printed to the terminal. If the script ran fine, I got no output from my prints. This fixed that! – Jesse H. Jun 09 '21 at 13:59
  • Thanks, this helped us start debugging on prod – Yash Gupta Jun 10 '22 at 07:59

In my case, running Python with -u didn't change anything. What did the trick, however, was setting PYTHONUNBUFFERED=1 as an environment variable:

docker run --name=myapp -e PYTHONUNBUFFERED=1 -d myappimage

[Edit]: Updated PYTHONUNBUFFERED=0 to PYTHONUNBUFFERED=1 after Lars's comment. This doesn't change the behavior and adds clarity.
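You can verify locally that the environment variable behaves like -u. A sketch of my own (not part of the answer), using the same hard-exit trick to mimic a long-running process whose output is stuck in the buffer:

```python
import os
import subprocess
import sys

# Child prints and hard-exits; os._exit() skips the exit-time buffer flush
code = "import os\nprint('App started')\nos._exit(0)\n"

# Any non-empty value of PYTHONUNBUFFERED enables unbuffered mode
env = dict(os.environ, PYTHONUNBUFFERED="1")
proc = subprocess.run([sys.executable, "-c", code],
                      capture_output=True, text=True, env=env)

print(repr(proc.stdout))  # 'App started\n' -- the line survives the hard exit
```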

Victor

  • In my case, adding `-e PYTHONUNBUFFERED=0` helps. – David Ng Dec 08 '15 at 09:50
  • Thank you! I was banging my head off a wall for hours, and couldn't get logs to work even with `-u`. Your solution fixed it for me on Docker for Mac with Django – Someguy123 Sep 21 '16 at 19:34
  • I think this is a better solution, since we don't have to rebuild the docker image to see the outputs – FF0605 Oct 29 '18 at 22:28
  • This is great, thanks. It's worth mentioning that this just needs to be a non-empty string to work, according to the docs: [PYTHONUNBUFFERED](https://docs.python.org/2/using/cmdline.html#envvar-PYTHONUNBUFFERED) – A Star Nov 19 '18 at 21:23
  • Worked for the docker-compose interface. Would have never guessed – deepelement Dec 30 '18 at 01:30
  • `PYTHONUNBUFFERED=0` is misleading b/c it suggests that unbuffering is disabled. Instead it's enabled b/c python looks for a _non-empty_ string. That said, better use `PYTHONUNBUFFERED=1` which has the same effect but doesn't lead to wrong assumptions. – Lars Blumberg Mar 03 '21 at 16:21
  • The value of `0` didn't work on Python 3.8. I had to use `PYTHONUNBUFFERED=1` to make it work. – Ali Tou Apr 08 '21 at 10:32

If you want your print output to appear in your Flask output when running docker-compose up, add the following to your Docker Compose file:

web:
  environment:
    - PYTHONUNBUFFERED=1

https://docs.docker.com/compose/environment-variables/

Rich Hildebrand

See this article, which explains the reason for this behavior in detail:

There are typically three modes for buffering:

  • If a file descriptor is unbuffered then no buffering occurs whatsoever, and function calls that read or write data occur immediately (and will block).
  • If a file descriptor is fully-buffered then a fixed-size buffer is used, and read or write calls simply read or write from the buffer. The buffer isn’t flushed until it fills up.
  • If a file descriptor is line-buffered then the buffering waits until it sees a newline character. So data will buffer and buffer until a \n is seen, and then all of the data that buffered is flushed at that point in time. In reality there’s typically a maximum size on the buffer (just as in the fully-buffered case), so the rule is actually more like “buffer until a newline character is seen or 4096 bytes of data are encountered, whichever occurs first”.

And GNU libc (glibc) uses the following rules for buffering:

Stream               Type          Behavior
stdin                input         line-buffered
stdout (TTY)         output        line-buffered
stdout (not a TTY)   output        fully-buffered
stderr               output        unbuffered

So, if you use -t, Docker allocates a pseudo-TTY (per the Docker documentation), stdout becomes line-buffered, and docker run --name=myapp -it myappimage shows the one-line output.

If you use only -d, no TTY is allocated, stdout is fully buffered, and the single line App started cannot fill (and therefore flush) the buffer.

So the fix is either to use -dt to make stdout line-buffered, or to add -u to python to disable buffering.
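The difference between fully-buffered and line-buffered is easy to observe with a plain file. A sketch of my own: the file size on disk stands in for what docker logs would see.

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# Fully buffered: the write sits in an 8 KiB buffer until flushed
f = open(path, "w", buffering=8192)
f.write("App started\n")
before_flush = os.path.getsize(path)   # 0 -- nothing written through yet
f.flush()
after_flush = os.path.getsize(path)    # 12 -- flush pushed it to the file
f.close()

# Line buffered: the trailing newline itself triggers the flush
f = open(path, "w", buffering=1)
f.write("App started\n")
line_buffered = os.path.getsize(path)  # 12 -- already on disk, no flush call
f.close()

os.remove(path)
print(before_flush, after_flush, line_buffered)
```

This is exactly why App started gets stuck with -d: it is only 12 bytes into a 4096-byte (or larger) buffer.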

atline

Since I haven't seen this answer yet:

You can also flush stdout after you print to it:

import time

if __name__ == '__main__':
    while True:
        print('cleaner is up', flush=True)
        time.sleep(5)
tycl

Try adding these two environment variables to your solution: PYTHONUNBUFFERED=1 and PYTHONIOENCODING=UTF-8.

Lukasz Dynowski

You can see logs from a detached container if you change print to logging.

main.py:

import time
import logging
print "App started"
logging.warning("Log app started")
while True:
    time.sleep(1)

Dockerfile:

FROM python:2.7-stretch
ADD . /app
WORKDIR /app
CMD ["python","main.py"]
The Hog

  • nice. tip: use Python 3. – adhg Sep 06 '19 at 15:05
  • The question is in Python 2 (print statement without parentheses), therefore I am using 2 here. Although it is exactly the same behaviour on Python 3.6, so thanks for the tip ;) – The Hog Sep 07 '19 at 07:43

If you are running the Python application with conda, you should add --no-capture-output to the command, since conda buffers stdout by default.

ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "my-app", "python", "main.py"]
Elinoter99

I had to use PYTHONUNBUFFERED=1 in my docker-compose.yml file to see the output from django runserver.

Raddish IoW

As a quick fix, try this:

from __future__ import print_function
import sys

# some code
print("App started", file=sys.stderr)

This works for me when I encounter the same problem. But, to be honest, I don't know why this error happens.
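The likely reason this works: stderr is unbuffered (and line-buffered at worst in recent Python 3), so messages are delivered immediately even while stdout's copy sits in a full buffer. A sketch of my own, in Python 3 syntax:

```python
import subprocess
import sys

# The same hard-exit trick: the block-buffered stdout copy is lost,
# while the stderr copy is delivered immediately
code = ("import os, sys\n"
        "print('to stdout')\n"
        "print('to stderr', file=sys.stderr)\n"
        "os._exit(0)\n")
proc = subprocess.run([sys.executable, "-c", code],
                      capture_output=True, text=True)

print(repr(proc.stdout))                # '' -- buffered and lost
print("to stderr" in proc.stderr)       # True -- got through before the exit
```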

Vitaly Isaev
  • Thanks for the tip! I tried replacing all prints with your version; unfortunately it did not work for me, I still can't get any output via docker logs (changing between sys.stderr / sys.stdout has no visible result). Is this a docker bug? – jpdus Apr 16 '15 at 11:07
  • See [my answer](https://stackoverflow.com/a/57801848/6394722), the reason is: stderr was unbuffered, so you can make it fix with your solution. – atline Sep 05 '19 at 09:02

If you aren't using docker-compose, just normal docker, you can add this to the Dockerfile that is hosting your Flask app:

ARG FLASK_ENV="production"
ENV FLASK_ENV="${FLASK_ENV}" \
    PYTHONUNBUFFERED="true"

CMD [ "flask", "run" ]
G Wayne

When using python manage.py runserver for a Django application, adding the environment variable PYTHONUNBUFFERED=1 solved my problem. print('helloworld', flush=True) also works for me.

However, python -u doesn't work for me.

Gulessness

Usually, we redirect output to a specific file (by mounting a volume from the host and writing to that file).

Adding a TTY using -t is also fine; you then need to pick the output up in docker logs.

With large log outputs, I did not have any issue with the buffer holding everything back without it reaching docker's log.

Edward Aung