
Suppose I have a console program that outputs trace/debug lines on stdout, and I want to run it on a server.

I start it with:

./serverapp > someoutputfile

If I need to see how the program's doing, I would just log into the server and do:

tail -f someoutputfile

Understandably, though, someoutputfile gets pretty big over time.

Is there a way to limit someoutputfile to a certain size, so that it only ever contains the most recent output?

I mean, the hard way would be to make a custom script/program that cycles the output between different files, but that seems like overkill.

kamziro

2 Answers


You can truncate the log file. One way to do this is to type:

>someoutputfile

at the shell command line. It's a redirection with nothing to output, so it truncates the file to zero length, erasing all of its contents.
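If GNU coreutils is installed, the truncate utility does the same thing explicitly:

truncate -s 0 someoutputfile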

The tricky bit here is that any program already writing to that file will keep writing at its old file offset (unless it opened the file in append mode). So the file immediately gains a "hole" from 0 to X bytes, where X is that output position.

On most Linux file systems these holes result in sparse files, which don't actually use any disk space for the hole. So the file may appear to contain many gigabytes of zeros at the beginning but only use 500 KB on disk.
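If you want to see this on your own file, comparing the apparent size with the actual disk usage makes the hole visible (someoutputfile here is just the file from the question):

ls -l someoutputfile    # apparent size, including the hole
du -h someoutputfile    # blocks actually allocated on disk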

Another way to do fast logging is to memory-map a file of fixed size on disk, 16 MB for example. The logging code then writes through a pointer into the mapping and wraps around to the front of the file when it reaches the size limit. It's a good idea to keep some kind of write-position marker; I use <====>, for example. I find this method ridiculously fast and great for debug logging.
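To make that concrete, here is a minimal sketch of such a ring log in C. The 16 MB size and the <====> marker come from the description above; the file name, function names and (lack of) error handling are just my own illustrative assumptions, not the author's actual code:

/* Minimal sketch of a fixed-size, memory-mapped ring log. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LOG_SIZE (16 * 1024 * 1024)   /* fixed 16 MB log file */
#define MARKER   "<====>"             /* marks the current write position */

static char  *log_base;               /* start of the mapping */
static size_t log_pos;                /* current write offset */

int ringlog_open(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, LOG_SIZE) < 0) {            /* fix the file size */
        close(fd);
        return -1;
    }
    log_base = mmap(NULL, LOG_SIZE, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
    close(fd);                                    /* mapping stays valid */
    return log_base == MAP_FAILED ? -1 : 0;
}

void ringlog_write(const char *msg)
{
    size_t len = strlen(msg);

    if (len > LOG_SIZE - sizeof(MARKER))          /* refuse oversized records */
        return;
    if (log_pos + len + sizeof(MARKER) > LOG_SIZE)
        log_pos = 0;                              /* wrap to the front */

    memcpy(log_base + log_pos, msg, len);
    log_pos += len;
    memcpy(log_base + log_pos, MARKER, sizeof(MARKER));  /* position marker */
}

int main(void)
{
    if (ringlog_open("ringlog.bin") < 0)
        return 1;
    for (int i = 0; i < 1000000; i++) {           /* enough lines to wrap */
        char line[64];
        snprintf(line, sizeof(line), "debug line %d\n", i);
        ringlog_write(line);
    }
    munmap(log_base, LOG_SIZE);
    return 0;
}

Because the mapping is MAP_SHARED, the kernel flushes the dirty pages to disk in the background; you can add an msync() call if you need the data on disk at a particular point.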

Zan Lynx

I haven't used it myself, but it gets good reviews here on SO: try logrotate
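As a rough sketch (the path and the numbers are just placeholders), a logrotate rule for this situation could look something like the following; copytruncate is the interesting directive here, because the server keeps the file open while it writes:

/path/to/someoutputfile {
    size 10M
    rotate 5
    copytruncate
    compress
    missingok
}

That keeps at most five compressed old copies and truncates the live file in place instead of renaming it, so the running program can keep writing to the same file descriptor.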

A more general discussion of managing output files may show you that a custom script/solution is not out of the question ;-): Problem with Bash output redirection

I hope this helps.

shellter