
Our application generates images. Allocating a BufferedImage throws an out-of-memory error:

java.lang.OutOfMemoryError: Java heap space

This happens with the following line:

BufferedImage result = new BufferedImage(2540, 2028, BufferedImage.TYPE_INT_ARGB);

Checking free memory just before this instruction shows I have 108 MB free. The approach I use to check memory is:

Runtime rt = Runtime.getRuntime();
rt.gc();
long maxMemory = rt.maxMemory();
long usedMemory = rt.totalMemory() - rt.freeMemory();
long freeMem = maxMemory - usedMemory;

We don't understand how the BufferedImage can consume more than 100 MB of memory. It should use 2540 * 2028 * 4 bytes, which is ~20 MB.
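For reference, the expected footprint can be verified with a quick calculation (a minimal sketch; the class name is made up for illustration):

```java
public class ImageSizeEstimate {
    public static void main(String[] args) {
        // TYPE_INT_ARGB stores one 4-byte int per pixel
        long bytes = 2540L * 2028L * 4L;
        System.out.println(bytes + " bytes");                        // 20604480 bytes
        System.out.printf("%.1f MB%n", bytes / (1024.0 * 1024.0));   // ~19.7 MB
    }
}
```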

Why is so much memory consumed when creating the BufferedImage? What can we do to reduce this?

  • Please don't add *thanks, I appreciate your comments* or similar lines in questions, they're just noise. – Luiggi Mendoza Feb 10 '15 at 20:05
  • Thanks for all the editing guys; would help me much more (and probably others) if someone would have a hint on this. Thanks ! – Tom Feb 10 '15 at 20:13

1 Answer


Asking Runtime for the amount of free memory is not really reliable in a multithreaded environment, as the memory could be used up by another thread right after you measure. Also, you are using maxMemory - usedMemory, which is not the amount of free memory but rather what the VM thinks it can make available at most - it may be that the host system cannot satisfy a request for more memory while the VM still believes it can enlarge the heap.

It's also entirely possible that your VM has 108 MB free, but no contiguous 20 MB chunk is available. The type of BufferedImage you are trying to create is ultimately backed by an int[] array, which must be allocated as a single contiguous memory block. That means if no contiguous 20 MB block is available on the heap, you will get an OutOfMemoryError no matter how much total free memory there is otherwise. The situation is further complicated by the garbage collector in use - each GC has different strategies for memory allocation, and a sizable portion of the heap may be set aside for thread-local allocation buffers.
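You can see the single backing array directly (a small sketch; the class name is invented, and it assumes the allocation itself succeeds):

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class BackingArrayDemo {
    public static void main(String[] args) {
        // A much smaller image than in the question, so this runs on any heap
        BufferedImage img = new BufferedImage(100, 50, BufferedImage.TYPE_INT_ARGB);
        // TYPE_INT_ARGB images are backed by a DataBufferInt holding one int per pixel
        int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
        System.out.println(pixels.length);      // 100 * 50 = 5000 ints, one contiguous array
    }
}
```

For the 2540 x 2028 image in the question, that array holds 5,151,120 ints (about 20 MB) and must fit in one contiguous heap block.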

Without any information about how large the heap is in total and which GC you are using (and which VM, for that matter), there are too many variables to point to a culprit.


Edit: Find out which GC is used (Java 7 (JDK 7) garbage collection and documentation on G1) and have a glance at its specific pros and cons - especially what capabilities it offers in terms of heap compaction and how large its generations are by default. Those would be the parameters to play with. Running the application with GC logging enabled may also provide insight into what's going on.
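For example, on Oracle/OpenJDK 7 you could launch with GC logging turned on (a config sketch - replace MyApp with your actual main class):

```shell
# -verbose:gc prints a line per collection; the PrintGC* flags add
# per-generation sizes and timestamps so you can see fragmentation/compaction
java -Xmx900m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps MyApp
```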

Considering your heap is only 900 MB in size, 100 MB free means you're pretty close to the limit already - my first go-to cure would be to simply assign the VM a much larger heap, let's say 2 GB. If you need to conserve memory, your only bet is tuning the GC parameters (possibly selecting another GC) - and to be honest, I have no experience with that. There are plenty of articles on the topic of GC tuning available, though.

  • Thanks. To clarify some elements: I'm using Oracle JRE 7 default garbage collector. System has 8GB of RAM with plenty empty and JRE is launched with 900MB of heap (-Xmx900m). Application is not multi-threaded at that point, with the exception of the EDT thread (Swing application) but which is not doing anything particular at this stage (no GUI change). I was hoping that calling gc() would free a continuous block of memory as I've read in some places. – Tom Feb 10 '15 at 20:16
  • @Tom I *believe* filling the heap to 90% is cutting it a little too close, but that's just a gut feeling; I have no in-depth knowledge of GC specifics. – Durandal Feb 10 '15 at 20:40