
I'm having trouble with a Jetty 9 server application that seems to go into some kind of resting state after a long period of idleness. Normally the Java process uses ~500 MB of memory, but after being idle for a while it drops to less than 50 MB. The first request that then comes in takes up to several seconds to respond, whereas requests normally complete in tens of milliseconds. After one or two requests the application is back to its normal responsive state.

I'm running on the 32-bit Oracle Java 8 JVM. My JVM configuration is very basic:

java -server -jar start.jar

I was hoping that this issue might be solvable through JVM configuration. Does anyone know if there's any particular parameter to disable this type of behavior?

Edit: Based on the comment from Ivan, I was able to identify the source of the issue: Windows was swapping parts of the Java process out to disk. See my own answer below for a description of my solution.

Rune Aamodt
  • You could try adding `-Xms500m` – Elliott Frisch Apr 25 '17 at 08:33
  • 3
    It looks like your memory was swapped out(or other OS level stuff). See the same issues here http://stackoverflow.com/questions/43464971/jvm-jit-deoptimization-after-idle/43466815#43466815 For more information please provide OS info and swap usage. – Ivan Mamontov Apr 25 '17 at 13:44

1 Answer


Based on the comment from Ivan, I was able to identify the source of the issue: Windows was swapping parts of the Java process out to disk. This was clearly visible when comparing the private working set to the commit size in Task Manager.

My solution was two-fold. First, I added a simple scheduled job inside the server app that runs every minute and does a quick test run of the important services, so that they never sit inactive for long periods. The hope is that this keeps Windows from regarding the related memory pages as inactive.
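A minimal sketch of such a keep-alive job, using a ScheduledExecutorService; the Runnable probe here is a hypothetical stand-in for whatever cheap call exercises your services, not code from my actual app:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class KeepAlive {
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private final Runnable probe; // cheap operation that exercises the services

        public KeepAlive(Runnable probe) {
            this.probe = probe;
        }

        public void start() {
            // Run the probe once a minute so the relevant pages stay "hot"
            // and are less likely to be paged out during idle periods.
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    probe.run();
                } catch (Exception e) {
                    // An uncaught exception would silently cancel the task.
                    e.printStackTrace();
                }
            }, 1, 1, TimeUnit.MINUTES);
        }

        public void stop() {
            scheduler.shutdown();
        }
    }

You would start it at application startup with something like new KeepAlive(() -> importantService.healthCheck()).start(), where healthCheck() is whatever lightweight self-test fits your services.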

Second, I noticed that the process was running with "Below normal" priority, so I changed the script that starts the server to run it at "High" priority going forward. Priority seems likely to affect swapping behavior, and this change may well have been enough to resolve the issue on its own, but since I only found it after deploying the first fix, I can't say for certain. In any case, everything is working as it should now.
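For reference, the priority change is a one-liner if the server is launched from a Windows cmd script (a sketch of my change; start /high is the built-in switch for launching a process at high priority, and the empty quotes supply the window title that start otherwise expects):

    start "" /high java -server -jar start.jar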

Rune Aamodt