
I'm trying to predict how my Java application's heap memory requirements will change when I run it unchanged on a JVM configured to use more than 32GB of memory.

I expect significant memory overhead for the same set of "useful" objects I keep in memory, just from raising the -Xmx parameter from 32GB to 64GB.

I've tried to simulate and estimate the difference by applying -XX:-UseCompressedOops on my local machine with a small heap (8GB), but I haven't been able to reach a conclusion yet. According to runtime calculations, my objects take the same amount of memory in both cases. The used heap with the optimization switched off tends to be a bit larger, but nowhere near twice as large, as some explanations led me to expect.
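
For reference, here is roughly how I measure (a simplified sketch; Pojo is a stand-in for my real classes, and the System.gc() call makes the numbers approximate at best). I run it once with -XX:+UseCompressedOops and once with -XX:-UseCompressedOops and compare the per-object figures:

    import java.util.ArrayList;
    import java.util.List;

    public class OopsFootprint {
        static class Pojo {             // stand-in for my real POJOs
            long id;
            int count;
            Object a, b, c;             // the reference fields are what should grow
        }

        public static void main(String[] args) {
            List<Pojo> retained = new ArrayList<>(1_000_000);
            long before = usedHeap();
            for (int i = 0; i < 1_000_000; i++) {
                retained.add(new Pojo());
            }
            long after = usedHeap();
            System.out.printf("~%d bytes per object%n",
                    (after - before) / retained.size());
        }

        static long usedHeap() {
            System.gc();                // best effort only
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }
    }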

In my use case I simply keep a large number of relatively big POJOs (roughly 100 bytes to 1KB each) on the heap for the entire lifetime of the program.

Is there a rule of thumb for how memory requirements grow once the heap crosses the 32GB limit (when the 32-bit pointer optimizations no longer apply)?

Sergey Shcherbakov
  • So the question is what happens when you turn off CompressedOops? Or are there any other optimizations that the JVM applies when you have <32GB heap memory? – Thilo Jan 16 '15 at 07:49
  • I'm only testing with this one parameter set/unset; the rest are default HotSpot 1.7 parameters. I'm also trying to compare the situation when only the max heap is increased from less than 32GB to more than that. – Sergey Shcherbakov Jan 16 '15 at 08:05
  • It is said (http://stackoverflow.com/a/13549938/14955) that losing CompressedOops means that it makes no sense to have a heap size between 32GB and 48GB. You need to go beyond 48GB to make up for the longer pointers. – Thilo Jan 16 '15 at 12:45
  • And another thread (http://stackoverflow.com/a/11054851/14955) mentions an intriguing new Java 8 option (-XX:ObjectAlignmentInBytes) that lets you align objects at 16 bytes instead of 8, so that you can use CompressedOops with up to 64GB instead of just 32GB (see the flag-check sketch after this comment thread). – Thilo Jan 16 '15 at 12:51
  • Thanks @Thilo! I've heard the 32-48GB argument before. That's actually what scares me, though I can't reproduce the bad impact yet. – Sergey Shcherbakov Jan 16 '15 at 14:28
  • @Thilo, a very interesting Java 8 option indeed. But it won't save me; I need a solution for bigger heaps (wdyt about a maximum heap size of 96-128GB for the GC to keep up?). – Sergey Shcherbakov Jan 16 '15 at 14:30
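
As a follow-up to the comments above, here is a small sketch for checking which of these flags is actually in effect at runtime. It uses the HotSpot-specific diagnostic MXBean, so it assumes a HotSpot JVM, and ObjectAlignmentInBytes only exists on Java 8 and later:

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class FlagCheck {
        public static void main(String[] args) {
            HotSpotDiagnosticMXBean hs = ManagementFactory
                    .getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            // Asking for a flag the JVM doesn't know throws
            // IllegalArgumentException (e.g. ObjectAlignmentInBytes on Java 7)
            for (String flag : new String[] {
                    "UseCompressedOops", "ObjectAlignmentInBytes", "MaxHeapSize" }) {
                System.out.println(flag + " = " + hs.getVMOption(flag).getValue());
            }
        }
    }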

1 Answer


but nowhere near twice as large, as some explanations led me to expect.

My understanding is that disabling CompressedOops only doubles the size of references (pointers), not of primitive data, in particular not the contents of Strings, byte arrays and the like. So if your heap is dominated by arrays of primitive types, the increase may be hard to notice.
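
To make that concrete, here is a back-of-envelope layout for a hypothetical POJO, assuming the usual 64-bit HotSpot rules (12-byte header with compressed oops vs. 16-byte without, 4- vs. 8-byte references, 8-byte object alignment; the actual field order is chosen by the JVM):

    // Hypothetical POJO, purely for illustration
    class Order {
        long id;              // 8 bytes in both modes
        int quantity;         // 4 bytes in both modes
        String customer;      // reference: 4 bytes compressed, 8 uncompressed
        String product;       // reference: 4 bytes compressed, 8 uncompressed
    }
    // compressed oops:  12 (header) + 8 + 4 + 4 + 4 = 32 bytes
    // no compression:   16 (header) + 8 + 4 + 8 + 8 = 44 -> padded to 48 bytes

Only the header and the two references grew; the primitives did not.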

Alignment requirements also make the size difference less straightforward, because the larger pointers may simply end up filling space that was previously alignment padding.
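
For example (same assumed layout rules), an object whose only field is a long costs exactly the same in both modes, because the compressed-oops layout needs 4 bytes of padding in front of the 8-byte field anyway:

    class Holder {
        long value;
    }
    // compressed oops:  12 (header) + 4 (gap to 8-align the long) + 8 = 24 bytes
    // no compression:   16 (header) + 8 = 24 bytes -- no growth at all

If you want real numbers instead of estimates, the OpenJDK JOL tool (org.openjdk.jol) can print the actual layout, e.g. ClassLayout.parseClass(Holder.class).toPrintable().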

the8472
  • This answer sums it up. Oracle details: https://wikis.oracle.com/display/HotSpotInternals/CompressedOops – spudone Jan 16 '15 at 18:00
  • Right, but that's not a rule of thumb for estimation. I was thinking about using jmap, counting the number of objects, and deriving the numbers from that. – Sergey Shcherbakov Jan 19 '15 at 08:25
  • That will only give you a lower bound, since each object must itself be referred to by at least one pointer. At the other end of the scale, your heap would almost exactly double if it consisted solely of reference arrays. – the8472 Jan 19 '15 at 09:40
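
A sketch of that back-of-envelope arithmetic (a hypothetical helper, not an existing tool; it assumes the header grows from 12 to 16 bytes and references from 4 to 8, and it ignores padding absorption):

    // instances could come from a jmap -histo count; avgRefFields is your
    // estimate of the number of reference fields per instance
    static long extraBytesWithoutCompressedOops(long instances, double avgRefFields) {
        long headerGrowth = 4;  // assumed 12 -> 16 byte header
        long refGrowth = 4;     // assumed 4 -> 8 byte references
        // "+ 1": each live object is itself the target of at least one reference
        double perInstance = headerGrowth + refGrowth * (avgRefFields + 1);
        return (long) (instances * perInstance);
    }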