I have a program that creates an object for each file in a directory (sub)tree. In these days of larger and larger disks, there is no way to know how many files that will be, especially not a few years (months?) from now.
My program is not enterprise-critical; it is a tool for a user to analyze that subtree. So it is acceptable to tell the user that there is not enough memory in this environment to operate on that subtree. He could possibly do what he wants by choosing subtrees of that subtree.
But it is not acceptable for the program to just die, or throw a stacktrace, or other things only a programmer can love. I would like the program to give the user some reasonable feedback and let him control what he does about it.
I have read a number of the posts here on StackOverflow about OOM exceptions, and in the main I agree with many of the points made: badly designed apps, memory leaks, etc., are all problems that need to be thought about. But in this case, someone might attempt to use my tool on a 10 TB disk that simply has more files than the program is prepared to analyze. And I'm not trying to write the tool so that it operates on every possible subtree.
I have seen suggestions that OOM can just be caught "like any other exception"; unfortunately, this is not a robust way to do things. By the time an OutOfMemoryError is thrown, some thread may already have died, we cannot tell in advance which one it will be, and we can't restart it. If it happens to be one critical to Swing, the Event Dispatch Thread for instance, then we are out of luck.
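To make the concern concrete, here is a minimal single-threaded demonstration of that catch (my own sketch, not from any of those posts). It happens to "work" here only because a single thread receives the error; that is exactly the assumption that breaks down in a multithreaded Swing app:

```java
import java.util.ArrayList;
import java.util.List;

public class NaiveOomCatch {
    /** Exhausts the heap on purpose, then recovers in this one thread. */
    public static String run() {
        List<long[]> hog = new ArrayList<>();
        try {
            while (true) {
                hog.add(new long[1_000_000]); // ~8 MB per iteration
            }
        } catch (OutOfMemoryError e) {
            hog.clear(); // drop references so the GC can reclaim the heap
            return "caught OOM in this thread";
        }
    }

    public static void main(String[] args) {
        // In a single thread this looks safe; in a Swing app the error
        // could just as easily land on the Event Dispatch Thread instead,
        // killing it before this catch ever runs.
        System.out.println(run());
    }
}
```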
So my current thinking is that my program will need to check, at least occasionally, the amount of free memory available and stop itself if that drops below some threshold. I can experiment to find a threshold that leaves enough headroom to show a dialog box with a message and then drop all my references to my objects.
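A minimal sketch of such a check, assuming a plain Java tool; the class name and the 32 MB threshold are my own placeholders and would need tuning by experiment:

```java
public class MemoryGuard {
    // Hypothetical threshold: stop building per-file objects once the heap
    // the JVM can still obtain drops below this many bytes. Tune it so there
    // is room left to show a dialog and release references afterwards.
    static final long THRESHOLD_BYTES = 32L * 1024 * 1024;

    /** True while it still looks safe to keep allocating. */
    public static boolean enoughMemory() {
        Runtime rt = Runtime.getRuntime();
        // Unused heap already allocated, plus heap the JVM may still
        // request before reaching the -Xmx limit.
        long obtainable = rt.freeMemory() + (rt.maxMemory() - rt.totalMemory());
        return obtainable > THRESHOLD_BYTES;
    }

    public static void main(String[] args) {
        // Typically true on a freshly started JVM with a default-size heap.
        System.out.println(enoughMemory());
    }
}
```

The scan loop would call `enoughMemory()` every N files and, when it returns false, stop scanning, clear its collections, and put up the dialog while there is still memory to do so.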
But if I'm missing something, or there's a better way to go about things, I'd like to know it.