In Java, I have heard time and time again that too much memory can impact performance negatively. When I say memory, I mean virtual memory. Specifically, I hear this most often in connection with the `-Xmx` setting.

I understand that assigning too much virtual memory is a problem on systems with a low amount of RAM, but other engineers I have talked with have told me that even on higher-powered machines you still want to make sure `-Xmx` is not overzealous. Is there a standard calculation everyone uses to determine this, such as: on a 16 GB machine you only use 1/4 for `-Xmx` and 1/8 for the page file? I am having a hard time grasping why too much memory is a bad thing.

I have been searching all over the internet and so far my google-ninja skills have not been sufficient to find a good answer.

I have looked at Virtual Memory Usage from Java under Linux, too much memory used, and it does not really appear to answer my question (or I am just slow and need to re-read it a couple of times). I understand that systems will reserve a chunk of memory for themselves, and I understand increasing `-Xmx` when out-of-memory errors appear.

My understanding is that `-Xmx` should not simply be given a higher number (e.g. 8096) unless the application requires it. I fail to understand what is so bad about assigning 8096 even if the application runs fine with 4096.
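For reference, here is a minimal sketch (the class name `HeapInfo` is my own) showing how the JVM itself reports the limit that `-Xmx` sets, using the standard `Runtime` API. Running it with different `-Xmx` values makes the mapping concrete:

```java
// Minimal sketch: inspect the JVM's own view of its heap limits.
// Run with e.g. `java -Xmx512m HeapInfo` and compare the first line to -Xmx.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() roughly reflects -Xmx; totalMemory() is what is
        // currently committed; freeMemory() is the unused part of that.
        System.out.println("max (~ -Xmx)  : " + rt.maxMemory()   / mb + " MB");
        System.out.println("committed     : " + rt.totalMemory() / mb + " MB");
        System.out.println("free committed: " + rt.freeMemory()  / mb + " MB");
    }
}
```

Watching `totalMemory()` grow toward `maxMemory()` under load is one rough way to see how much heap an application actually needs before picking an `-Xmx` value.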

IT_User
  • As always, this depends on your implementation, but Java will tend to use all the memory you give it before doing its most expensive kind of garbage collection. Fewer, larger garbage collections may be more expensive than more, smaller collections. Also, the cache hit ratio might be lower. – antlersoft Apr 05 '16 at 21:23
  • @eliasah So from reading what you tagged, GC is the reason everyone says not to be overzealous when assigning **-Xmx**? – IT_User Apr 05 '16 at 21:29
  • GC and OOPs = ordinary object pointers, actually. GC can be very expensive on intensive applications. – eliasah Apr 05 '16 at 21:31
  • @eliasah Thank you for the information. I will look more into both of those tonight when I have additional free time. – IT_User Apr 05 '16 at 21:32
  • @antlersoft Thank you very much for the information. Greatly appreciated :) – IT_User Apr 05 '16 at 21:32
  • I also suggest that you take a look at the Oracle documentation about this issue. It's very well explained. (I lost the link, sorry.) – eliasah Apr 05 '16 at 21:33
  • The GC needs random access to memory and assumes all of the used heap is in main memory. If a portion of the heap is on disk, this can slow a GC by a factor of 1000x or more and kill the machine, i.e. prevent anything else happening on it. – Peter Lawrey Apr 06 '16 at 10:36
  • Accessing main memory can take around 70 nanoseconds, but accessing data swapped to disk can take 8 milliseconds, which is more than 10,000x slower. – Peter Lawrey Apr 06 '16 at 10:37
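The GC-cost point in the comments above can be made tangible with a rough sketch (class name `GcCost` and the sizes are my own, and this is not a rigorous benchmark): the collector must trace every live object, so the bigger the live set, and the more of it that is out of cache or swapped out, the longer a full collection takes.

```java
import java.util.ArrayList;
import java.util.List;

// Rough illustration, not a benchmark: time an explicit full GC after
// building up a live set plus some garbage. With the heap resident in RAM
// this finishes in milliseconds; if part of the heap were swapped to disk,
// the same walk over the object graph would be orders of magnitude slower.
public class GcCost {
    public static void main(String[] args) {
        List<byte[]> live = new ArrayList<>();
        for (int i = 0; i < 200; i++) {
            live.add(new byte[1024 * 1024]);  // 1 MB chunks the GC must trace
            if (i % 2 == 0) live.remove(0);   // plus garbage for it to reclaim
        }
        long t0 = System.nanoTime();
        System.gc();  // request (hint for) a full collection
        long ms = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("Full GC over ~" + live.size() + " MB live set took " + ms + " ms");
    }
}
```

This is also why an oversized `-Xmx` on a machine that then swaps is worse than a smaller heap that stays in RAM: the collection still has to touch all of it.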

0 Answers