
Is it possible to get the following `subprocess.check_call` invocation:

 logPath="log.txt"
 with open(logPath,"w") as log:
       subprocess.check_call(command, stdout = log, stderr=subprocess.STDOUT )

to write stdout and stderr to the file continuously?

On my machine, the output is written to the file only after subprocess.check_call has finished.

To achieve this, could we perhaps modify the buffer size of the log file stream?

  • First thing: `open(logPath, 'w', 0)` for no buffering on the parent side. However, the client process may still do its own buffering, which you may or may not be able to influence. I.e. if `command` buffers its output and you cannot switch the command to not buffer, you will get data in chunks. – dhke Jul 09 '15 at 14:17

1 Answer

Not without some OS tricks.

That happens because output is usually line-buffered (i.e. the buffer is flushed after each newline character) when it goes to a terminal, but block-buffered when it goes to a file or pipe. In the block-buffered case you won't see the output written "continuously"; instead it is written whenever a block of 1 KB, 4 KB, or whatever the block size is, has accumulated.

This is the default behavior of libc, so if the subprocess is written in C and uses printf()/fprintf(), it will check whether the output is a terminal or a file and change the buffering mode accordingly.
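
Python programs behave the same way by default. A minimal sketch (not from the original answer) that makes the difference visible is to run this small script once directly in a terminal and once with its output redirected to a file:

    import sys
    import time

    # Line-buffered when stdout is a terminal: each line appears immediately.
    # Block-buffered when stdout is redirected to a file or pipe: the lines
    # typically appear only when the buffer fills or the process exits.
    print("stdout is a tty:", sys.stdout.isatty())
    for i in range(5):
        print("line", i)
        time.sleep(1)

Run directly, it prints one line per second; run with output redirected to a file and watched with `tail -f`, the lines usually show up in one chunk at the end.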

The concept of buffering is (better) explained at http://www.gnu.org/software/libc/manual/html_node/Buffering-Concepts.html

This is done for performance reasons (see the answer to this question).

If you can modify the subprocess's code, you can add a call to flush() after each line, or wherever needed.
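
If the child happens to be a Python program you control, for instance, the explicit flush looks like this (a sketch; the message strings are just placeholders):

    import sys

    # flush=True pushes each line to the log file immediately,
    # regardless of whether stdout is a terminal, a file, or a pipe.
    print("progress: step 1", flush=True)

    # equivalent explicit form:
    sys.stdout.write("progress: step 2\n")
    sys.stdout.flush()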

Otherwise there are external tools to force line buffering mode (by tricking programs into believing the output is a terminal):
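
Commonly used tools of this kind are `stdbuf` (from GNU coreutils) and `unbuffer` (from the expect package); whether these are exactly the tools meant here is an assumption. A sketch of wrapping the command with `stdbuf`, using a hypothetical `command` list:

    import subprocess

    logPath = "log.txt"
    command = ["./my_program", "--verbose"]  # hypothetical command

    with open(logPath, "w") as log:
        # "stdbuf -oL" asks the child's libc to line-buffer its stdout; this
        # only helps for programs that rely on the default stdio buffering.
        subprocess.check_call(["stdbuf", "-oL"] + command,
                              stdout=log, stderr=subprocess.STDOUT)

`unbuffer` works similarly but runs the command under a pseudo-terminal, so it also affects programs that explicitly check whether their output is a terminal.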
