I've got a Python application which uses a lot of memory. That should be fine, since I handle allocations inside a loop with try / except MemoryError. Unfortunately, the MemoryError exception is never raised: before that can happen, Python is killed (on Debian Linux) by the OOM killer.
The question is why... and how I can catch the error within Python. If I can catch it, I have an easy mitigation; without the exception, I never get the chance to invoke it.
For info, the application is processing videos, with each frame being a numpy array of roughly 15 MB. If I run out of memory, I'm happy to reduce the frame rate and try again.
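Simplified, the pattern I'm aiming for looks like this (`load_frames`, `process`, and `process_video` are placeholder names standing in for my actual pipeline):

```python
def load_frames(path, frame_rate):
    """Placeholder: decode the video at `frame_rate`, yielding ~15 MB numpy frames."""
    yield from ()  # the real implementation decodes the video here

def process(frames):
    """Placeholder: the actual per-video analysis."""

def process_video(path, frame_rate=30):
    while frame_rate >= 1:
        try:
            frames = list(load_frames(path, frame_rate))
            return process(frames)
        except MemoryError:
            # In practice this handler is never reached -- the OOM killer
            # terminates the whole process before Python can fail an
            # allocation and raise MemoryError.
            frames = None      # release references before retrying
            frame_rate //= 2   # back off to a lower frame rate
    raise RuntimeError("out of memory even at the lowest frame rate")
```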
I've also tried tracking memory usage as I load each frame, using psutil.virtual_memory().available, but the process is killed while ~350 MB (of 2 GB total) is still showing as available. I assume that's a fragmentation problem.
The problem is that any limit I set is arbitrary: e.g. "if available memory drops below 500 MB, start again at a lower frame rate". That doesn't feel robust. If the application, the OS, or the hardware changes, I might find that next time it crashes out at 501 MB remaining, or something... that's why I'd rather handle it via the MemoryError exception.
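For reference, the threshold check I'm doing before loading each frame looks roughly like this (the 500 MB cutoff is exactly the arbitrary part I'd like to get rid of):

```python
import psutil

LOW_MEMORY_THRESHOLD = 500 * 1024 * 1024  # 500 MB -- the arbitrary cutoff

def memory_is_low():
    # System-wide available memory; in my tests the OOM killer still
    # fired with ~350 MB reported available, presumably due to
    # fragmentation.
    return psutil.virtual_memory().available < LOW_MEMORY_THRESHOLD
```

When this returns True mid-video, I bail out and restart at a lower frame rate, the same fallback as in the sketch above.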
Sadly this doesn't seem to be a common issue: searching for "python invoked oom-killer exception" gives me just two pages of Google results! The usual answer on here has been "don't use so much memory", which isn't very helpful. In my case I want to use as much memory as is available, and am happy to use less when I need to; Python just never gives me the opportunity before the process is killed.
Any thoughts greatly appreciated.