I'm troubleshooting a memory issue in a Spring Boot app, and I'm fairly new to deploying Java apps in Docker containers. The app runs on Google Cloud's Cloud Run.
The container has 4 GB of memory available, but when I add some logging to my main Spring Boot class:
```java
logger.info("heap size: " + Runtime.getRuntime().totalMemory());
logger.info("max heap size: " + Runtime.getRuntime().maxMemory());
```
I see that the JVM running inside my container only reports a max heap size of ~1 GB. Digging into this, it seems the default behavior since JDK 10 is that a container-aware JVM sets the max heap to 1/4 of the memory available to the container (the `-XX:MaxRAMPercentage` flag, which defaults to 25).
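The arithmetic lines up with what I'm seeing. A quick sketch (the 25% figure is the documented default of `-XX:MaxRAMPercentage`; the 4 GiB limit is my Cloud Run setting):

```java
public class HeapDefault {
    public static void main(String[] args) {
        long containerLimitBytes = 4L * 1024 * 1024 * 1024; // 4 GiB Cloud Run memory limit
        double defaultMaxRamPercentage = 25.0;              // JVM default since JDK 10

        // Expected max heap = container limit * MaxRAMPercentage / 100
        long expectedMaxHeap = (long) (containerLimitBytes * defaultMaxRamPercentage / 100);
        System.out.println(expectedMaxHeap / (1024 * 1024) + " MiB"); // ~1 GiB, matching the logs
    }
}
```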
Why is this the default behavior? It seems wasteful not to let the heap grow any further. Is it so that the rest of the image has enough RAM for what it needs? My base image is:
```dockerfile
FROM eclipse-temurin:17-jdk-alpine
```
I'm only running one Spring Boot API app in the container, but it frequently needs to stream massive amounts of JSON to the browser (hundreds of MB at a time). I think I can override the default `-Xmx` heap setting by passing a `JAVA_OPTS` environment variable to the container and using it in my Dockerfile, but how do I know what a safe value is? Do base images typically document how much RAM they need to maintain basic functionality?
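For context, this is the kind of override I have in mind (a sketch only — the jar path is made up for illustration, and the 75% value is a guess; how much headroom to leave for metaspace, thread stacks, and off-heap buffers is exactly what I'm unsure about):

```dockerfile
FROM eclipse-temurin:17-jdk-alpine
COPY target/app.jar /app.jar

# Let the heap use up to 75% of the container limit instead of the 25% default.
# Unlike a fixed -Xmx, MaxRAMPercentage scales with the container's memory limit.
ENV JAVA_OPTS="-XX:MaxRAMPercentage=75.0"

# Shell form so $JAVA_OPTS is expanded at startup.
ENTRYPOINT java $JAVA_OPTS -jar /app.jar
```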