Update 2019: See "Docker memory limit causes SLUB unable to allocate with large page cache"
I mentioned in "Docker support in Java 8 — finally!" last May (2019) that new evolutions from Java 10, backported to Java 8, mean the JVM now reports the memory available to a Docker container more accurately.
As mbluke adds in the comments:
The resource issues have been addressed in later versions of Java.
As of Java SE 8u131, and in JDK 9, the JVM is transparently Docker-aware with respect to CPU limits.
Starting with JDK 8u131 and JDK 9, there is an experimental VM option that allows the JVM ergonomics to read the memory values from cgroups.
To enable it, you must explicitly set the parameters -XX:+UnlockExperimentalVMOptions and -XX:+UseCGroupMemoryLimitForHeap on the JVM.
Java 10 enables this behavior by default, so the flags are no longer needed there.
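For illustration, a minimal sketch of passing those flags, assuming a Java 8u131+ image and a jar at /app/app.jar (both names are hypothetical):

```
# Cap the container at 256 MB; with the experimental flags set, the JVM
# derives its default heap from the cgroup limit instead of the host's RAM.
docker run -m 256m my-java8-image \
    java -XX:+UnlockExperimentalVMOptions \
         -XX:+UseCGroupMemoryLimitForHeap \
         -jar /app/app.jar
```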
January 2018: original answer
As with any trade-off, it depends on your situation and release cycle.
But also consider that Java might be ill-suited to a Docker environment in the first place, depending on the nature of the application.
See "Nobody puts Java in a container"
So we have finished developing our JVM-based application, and now we package it into a Docker image and test it locally on our notebook. All works great, so we deploy 10 instances of that container onto our production cluster. All of a sudden the application is throttling and not achieving the same performance we saw on our test system. And our test system is even a high-performance machine with 64 cores…
What has happened?
In order to allow multiple containers to run isolated side by side, we have specified each container to be limited to one CPU (or the equivalent ratio in CPU shares). Unfortunately, the JVM will see the overall number of cores on that node (64) and use that value to initialize the default thread counts we have seen earlier. As we started 10 instances, we end up with:
10 * 64 JIT compiler threads
10 * 64 garbage collection threads
10 * 64 ….
And our application, being limited in the number of CPU cycles it can use, is mostly busy switching between different threads and cannot get any actual work done.
All of a sudden, the promise of containers, “package once, run anywhere”, seems violated…
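One way to observe this effect is to print the flag values the JVM computes inside a CPU-limited container. A sketch, assuming a pre-container-aware JDK 8 image tag (such as openjdk:8u121) is still pullable:

```
# Even under --cpus=1, an older JVM sizes ParallelGCThreads and
# CICompilerCount from the host's full core count (e.g. 64).
docker run --rm --cpus=1 openjdk:8u121 sh -c \
    "java -XX:+PrintFlagsFinal -version | grep -E 'ParallelGCThreads|CICompilerCount'"
```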
So to be specific: how do you cope with the amount of data generated when you build an image per release? If you build your app every time on top of the Tomcat image, the disk space needed to store the images will grow quickly, right?
2 techniques:
- multi-stage builds, to make sure your application image does not include anything but what is needed at runtime (and no compilation artifacts). See my answer here, and the Dockerfile sketch after this list;
- bind mounts: you could simply copy your WARs into a volume mounted by a single Tomcat container (see the run command sketched below).
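To illustrate the first technique, here is a minimal multi-stage Dockerfile sketch; the base image tags, the Maven build step, and the artifact path are assumptions about your project, not prescriptions:

```
# Build stage: uses the full JDK and Maven (names/tags are illustrative).
FROM maven:3-jdk-8 AS build
WORKDIR /src
COPY . .
RUN mvn -q package

# Runtime stage: only the produced WAR is copied over, so sources,
# dependencies, and build tools never end up in the final image layers.
FROM tomcat:8
COPY --from=build /src/target/app.war /usr/local/tomcat/webapps/app.war
```

For the second technique, a single long-lived Tomcat container mounts a host directory, and each release just drops a new WAR there (the host path is hypothetical):

```
docker run -d --name tomcat \
    -v /srv/wars:/usr/local/tomcat/webapps \
    tomcat:8
```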