113

I have a small script, which is called daily by crontab using the following command:

/homedir/MyScript &> some_log.log

The problem with this method is that some_log.log is only created after MyScript finishes. I would like to flush the output of the program into the file while it's running so I could do things like

tail -f some_log.log

and keep track of the progress, etc.

Nic
olamundo
    We'll need to have a description -or if possible code- of what your small script does exactly... – ChristopheD Sep 15 '09 at 22:37
    To unbuffer python scripts you could use "python -u". To unbuffer perl scripts, see Greg Hewgill reply below. And so on... – Eloici May 28 '14 at 10:12
  • If you can edit the script, you can usually flush the output buffer explicitly within a script, for example in python with `sys.stdout.flush()`. – drevicko Feb 09 '17 at 09:48

13 Answers

117

I found a solution to this here. Using the OP's example you basically run

stdbuf -oL /homedir/MyScript &> some_log.log

and then the buffer gets flushed after each line of output. I often combine this with nohup to run long jobs on a remote machine.

stdbuf -oL nohup /homedir/MyScript &> some_log.log

This way your process doesn't get cancelled when you log out.

John Kugelman
Martin Wiebusch
    Could you add a link to some documentation for `stdbuf`? Based on [this comment](http://stackoverflow.com/questions/1429951/bash-how-to-flush-output-to-a-file-while-running?answertab=votes#comment4951700_4520994) it seems like it's not available on some distros. Could you clarify? – Nic Jun 15 '15 at 12:31
    stdbuf -o adjusts stdout buffering. The other options are -i and -e for stdin and stderr. L sets line buffering. One could also specify buffer size, or 0 for no buffering. – Seppo Enarvi Feb 25 '16 at 12:17
    that link is no longer available. – john-jones Oct 31 '17 at 18:45
    @NicHartley: `stdbuf` is part of GNU coreutils, [documentation can be found at gnu.org](https://www.gnu.org/software/coreutils/manual/html_node/stdbuf-invocation.html#stdbuf-invocation) – Thor Mar 27 '19 at 06:37
    In case it helps anyone, [use](http://stackoverflow.com/questions/35348559/ddg#35348702) `export -f my_function` and then `stdbuf -oL bash -c "my_function -args"` if you need to run a function instead of a script – LHeng Jan 01 '20 at 02:52
  • Didn't work for me on GitHub Actions. For something as simple as `apk add coreutils && stdbuf -o0 -e0 -i0 sh -xc 'set && whoami && pwd && ls && exit 1'`. With [`unbuffer`](https://stackoverflow.com/a/11337310/52499) it works. – x-yuri Aug 09 '20 at 13:53
  • Did not work for me on Ubuntu 20, inside tmux at least – Bersan Oct 04 '21 at 12:38
44

script -c <PROGRAM> -f OUTPUT.txt

The key option is -f. Quoting from man script:

-f, --flush
     Flush output after each write.  This is nice for telecooperation: one person
     does 'mkfifo foo; script -f foo', and another can supervise real-time what is
     being done using 'cat foo'.

Run in background:

nohup script -c <PROGRAM> -f OUTPUT.txt
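
Applied to the question's example (a sketch using the paths from the question), this might look like:

script -c /homedir/MyScript -f some_log.log

Then, from another terminal, tail -f some_log.log follows the progress.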
John Kugelman
user3258569
  • Wow! A solution that works in `busybox`! (my shell freezes afterwards, but whatever) – Victor Sergienko Jul 06 '18 at 23:19
  • what is `-c` for? – Bersan Oct 04 '21 at 12:42
  • From `man script`: `-c`, `--command command` Run the command rather than an interactive shell. This makes it easy for a script to capture the output of a program that behaves differently when its stdout is not a tty. Another useful argument is `-q`, which can be combined with `-c` like this: `-qc`. This suppresses the start and done messages on standard output. – Lissanro Rayen Nov 16 '21 at 00:00
32

bash itself will never actually write any output to your log file. Instead, the commands it invokes as part of the script will each individually write output and flush whenever they feel like it. So your question is really how to force the commands within the bash script to flush, and that depends on what they are.
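
For illustration, here is a minimal sketch of that idea (not part of the original answer; the command names and paths are made up), wrapping the individual commands inside the script so they line-buffer their output:

#!/bin/bash
# Hypothetical contents of /homedir/MyScript
stdbuf -oL grep 'ERROR' /var/log/syslog           # force this stdio-based tool to line-buffer its stdout
long_running_tool | stdbuf -oL sed 's/^/step: /'  # the same trick works inside a pipeline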

Chris Dodd
    I really do not understand this answer. – Alfonso Santiago Aug 21 '14 at 12:35
    For a better idea why standard output behaves like this, check out http://stackoverflow.com/a/13933741/282728. A short version—by default, if redirected to a file, stdout is fully buffered; it's written to a file only after a flush. Stderr is not—it's written after every '\n'. One solution is to use the 'script' command recommended by user3258569 below, to have stdout flushed after every line end. – Alex Dec 18 '15 at 05:59
    Stating the obvious, and ten years later, but this is a comment, not an answer, and it shouldn't be the accepted answer. – RealHandy Nov 13 '20 at 15:59
  • Additionally, this answer is not entirely accurate: stdout can still lag behind stderr even when the stdout-producing commands come before the stderr-producing ones, so it is not only a question of `depends on what they are`; the buffering also depends on things outside of bash and the bash script itself. – Andry Jan 01 '22 at 13:01
8

You can use tee to write to the file without the need for flushing.

/homedir/MyScript 2>&1 | tee some_log.log > /dev/null
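
If the script's output is still block-buffered when piped (as a commenter observed), one possible variation, a sketch rather than part of the original answer, is to combine this with stdbuf:

stdbuf -oL /homedir/MyScript 2>&1 | tee some_log.log > /dev/null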
fracz
crenate
    This still buffers the output, at least in my Ubuntu 18.04 environment. The contents eventually get written to the file either way, but I think the OP is asking for a method where they can monitor the progress more accurately before the file is finished writing, and this method doesn't allow for that any more than output redirection does. – mltsy Aug 23 '18 at 20:42
6

This isn't a function of bash, as all the shell does is open the file in question and then pass the file descriptor as the standard output of the script. What you need to do is make sure output is flushed from your script more frequently than it currently is.

In Perl for example, this could be accomplished by setting:

$| = 1;

See perlvar for more information on this.
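
As a quick way to see the effect (a sketch, not part of the answer), a Perl one-liner with autoflush enabled writes each line to the log as it is produced:

perl -e '$| = 1; for (1..5) { print "step $_\n"; sleep 1 }' > some_log.log &
tail -f some_log.log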

Greg Hewgill
5

Would this help?

tail -f access.log | stdbuf -oL cut -d ' ' -f1 | uniq 

This will immediately display unique entries from access.log using the stdbuf utility.

Gray
Ondra Žižka
3

Buffering of output depends on how your program /homedir/MyScript is implemented. If you find that output is getting buffered, you have to force it in your implementation. For example, use sys.stdout.flush() if it's a python program or use fflush(stdout) if it's a C program.
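
If editing the program is not an option, Python can also be told not to buffer from the outside (a sketch; the script name here is hypothetical):

python3 -u /homedir/MyScript.py &> some_log.log
# or, equivalently, via the environment:
PYTHONUNBUFFERED=1 python3 /homedir/MyScript.py &> some_log.log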

Midas
2

Thanks @user3258569, script may be the only thing that works in busybox!

The shell was freezing for me afterwards, though. Looking for the cause, I found this big red warning, "don't use in non-interactive shells", in the script manual page:

script is primarily designed for interactive terminal sessions. When stdin is not a terminal (for example: echo foo | script), then the session can hang, because the interactive shell within the script session misses EOF and script has no clue when to close the session. See the NOTES section for more information.

True. script -c "make_hay" -f /dev/null | grep "needle" was freezing the shell for me.

Contrary to the warning, I thought that echo "make_hay" | script WILL pass an EOF, so I tried

echo "make_hay; exit" | script -f /dev/null | grep 'needle'

and it worked!

Note the warnings in the man page. This may not work for you.

Victor Sergienko
1

As just noted here, the problem is that you have to wait for the programs that you run from your script to finish their jobs.
If in your script you run a program in the background, you can try something more.

In general, a call to sync before you exit flushes the file system buffers and can help a little.

If in the script you start some programs in the background (&), you can wait for them to finish before you exit from the script. To get an idea of how this can work, see below:

#!/bin/bash
#... some stuff ...
program_1 &          # start program 1 in the background
PID_PROGRAM_1=${!}   # remember its PID
#... some other stuff ...
program_2 &          # start program 2 in the background
wait ${!}            # wait for it to finish (not really useful here)
#... some other stuff ...
daemon_1 &           # we will not wait for this one to finish
program_3 &          # start program 3 in the background
PID_PROGRAM_3=${!}   # remember its PID
#... last other stuff ...
sync
wait $PID_PROGRAM_1
wait $PID_PROGRAM_3  # program 2 has already ended
# ...

Since wait works with jobs as well as with PID numbers, a lazy solution is to put this at the end of the script:

for job in `jobs -p`
do
   wait $job 
done

The situation is more difficult if you run something that itself starts something else in the background, because then you have to find and wait for (if appropriate) the end of all the child processes: for example, if you run a daemon, it probably does not make sense to wait for it to finish :-).

Note:

  • wait ${!} means "wait until the last background process has completed", where $! is the PID of the last background process. So putting wait ${!} just after program_2 & is equivalent to running program_2 directly, without sending it to the background with &.

  • From the help of wait:

    Syntax    
        wait [n ...]
    Key  
        n A process ID or a job specification
    
Hastur
0

An alternative to stdbuf is to pipe the output through awk '{ print; fflush() }', which flushes after every line. I wish there were a bash builtin to do this. Normally it shouldn't be necessary, but with older versions there might be bash synchronization bugs on file descriptors.
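
Applied to the question's command, a minimal sketch (assuming an awk that supports fflush(), such as gawk or mawk) might look like:

/homedir/MyScript 2>&1 | awk '{ print; fflush() }' > some_log.log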

Brian Chrisman
-2

I had this problem with a background process in Mac OS X using the StartupItems. This is how I solved it:

If I run sudo ps aux I can see that mytool is launched.

I found that (due to buffering) when Mac OS X shuts down mytool never transfers the output to the sed command. However, if I execute sudo killall mytool, then mytool transfers the output to the sed command. Hence, I added a stop case to the StartupItems that is executed when Mac OS X shuts down:

start)
    if [ -x /sw/sbin/mytool ]; then
      # run the daemon
      ConsoleMessage "Starting mytool"
      (mytool | sed .... >> myfile.txt) & 
    fi
    ;;
stop)
    ConsoleMessage "Killing mytool"
    killall mytool
    ;;
Freeman
  • This really is not a good answer Freeman since it is very specific to your environment. The OP wants to monitor output not kill it. – Gray Sep 24 '19 at 15:50
-4

I don't know if it would work, but what about calling sync?

forkandwait
    `sync` is a low-level filesystem operation and is unrelated to buffered output at the application level. – Greg Hewgill Sep 15 '09 at 22:48
    `sync` writes any dirty filesystem buffers to physical storage, if necessary. This is internal to the OS; applications running on top of the OS always see a coherent view of the filesystem whether or not the disk blocks have been written to physical storage. For the original question, the application (script) is probably buffering the output in a buffer internal to the application, and the OS won't even know (yet) that the output is actually destined to be written to stdout. So a hypothetical "sync"-type operation wouldn't be able to "reach into" the script and pull the data out. – Greg Hewgill Sep 16 '09 at 01:06
-4

Well, like it or not, this is how redirection works.

In your case the output of your script is redirected to that file, and it only appears there once your script has finished.

What you want to do is add those redirections inside your script.
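
For example, a minimal sketch (not from the original answer; the command names are made up) of redirecting inside the script with exec; note that this alone does not change how the individual commands buffer their output:

#!/bin/bash
# Hypothetical /homedir/MyScript: redirect everything that follows to the log.
exec > some_log.log 2>&1
echo "starting"      # output written by the shell itself appears immediately
long_running_tool    # its own stdio buffering still applies when stdout is not a tty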