I run a subprocess from python like this (not my script):
```python
import contextlib
import subprocess
import sys

with contextlib.redirect_stdout(log_file):
    # ....
    processResult = subprocess.run(args,
                                   stdout=sys.stdout,
                                   stderr=sys.stderr,
                                   timeout=3600)
```
and sometimes the process goes crazy (due to an intermittent bug) and dumps so many errors into stdout/the log file that it grows to 40 GB and fills up the disk.
What would be the best way to protect against that? Being a Python newbie, I have two ideas:
1. Piping the subprocess output through something like `head` that aborts it if the output grows beyond a limit (not sure if this is possible with `subprocess.run`, or if I have to go the low-level `Popen` way; a rough sketch of what I mean follows below).
2. Finding or writing some handy IO wrapper class, say `IOLimiter`, which would throw an error after a given size (I couldn't find anything like this in the stdlib, and I'm not even sure where to look for it).
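For idea 1, here's a rough sketch of the kind of thing I have in mind, using `Popen` directly (`run_limited`, `MAX_BYTES` and the 64 KiB chunk size are just names/numbers I made up, and `log_file` would have to be opened in binary mode):

```python
import subprocess

MAX_BYTES = 100 * 1024 * 1024      # made-up cap: 100 MB of output

def run_limited(args, log_file, max_bytes=MAX_BYTES):
    """Stream the child's combined stdout/stderr into log_file (opened in
    binary mode), killing the child if it writes more than max_bytes."""
    proc = subprocess.Popen(args,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    written = 0
    try:
        while True:
            chunk = proc.stdout.read(64 * 1024)   # copy 64 KiB at a time
            if not chunk:                         # EOF: child closed its output
                break
            written += len(chunk)
            if written > max_bytes:
                proc.kill()                       # stop the runaway process
                raise RuntimeError("output limit exceeded, child killed")
            log_file.write(chunk)
    finally:
        proc.stdout.close()
        proc.wait()                               # reap the child in all cases
    return proc.returncode
```

This doesn't reproduce the `timeout=3600` from the original call, since `proc.stdout.read()` blocks while the child is silent; I guess I'd need a separate watchdog (e.g. a `threading.Timer` that calls `proc.kill()`) for that, which is part of why I'm asking whether there's a cleaner way.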
I suspect there is some smarter/cleaner way?