
First, my setup details: I have a remote Linux server running my containerised Docker application, which runs Java on JDK 11 and uses Photon OS as the base image (https://vmware.github.io/photon/assets/files/html/3.0/Introduction.html).
Queries

  1. While scanning the container's memory footprint with the `docker stats` command, I see the following values:
CONTAINER ID   NAME                                   CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O        PIDS
5f683918eab7   some-random-agent                      0.31%     474.2MiB / 476.8MiB   99.44%    71.3MB / 2.51MB   45.4MB / 197kB   58

     Here we can see memory consumption of approximately 99.44%. My first question: what does this 99.44% signify? From my understanding, it is all the memory consumed inside the container, i.e. the sum of the memory consumption of all the applications running in the container. Am I correct? (See the first sketch after this list.)

  2. To gain further insight, I took a heap dump of my Java application. However, the results are a bit perplexing:

[screenshot: heap dump overview showing a footprint of roughly 70 MB]

[screenshot: dominator tree, with system class loaders as the largest entries]

  3. From the above screenshots the memory footprint is only around 70 MB, and I cannot account for the difference of 474.2 MB - 70 MB (I understand the 474.2 MB is for the complete container, but even inside the container the Java application is the top memory-consuming process, at about 470 MB). What is taking up this much space? In the screenshots, too, the major contributors are the system class loaders; I searched for my own components in the dominator tree and they come to less than 10 MB in total. So is this expected behaviour in Java? (See the second sketch after this list.)

  4. One of my guesses is that the JDK kicks off something extra in the container as soon as the Java application starts, and that this is what takes up the space. Is there some way to analyse that part? Note that I already tried looking into the /proc files; there I see some random addresses with huge memory consumptions, but I wasn't able to understand what they are or where they belong, as they weren't annotated properly. Their sum was also almost 474 MB. (See the third sketch after this list.)

  5. On the other hand, with the same base image I ran a few of my Go application builds (consider them to have the same business logic as the Java app), and they fell in the range of 20-30 MB in the `docker stats` memory footprint. So I am quite confused about where the memory goes: everything lines up to say Java is the culprit here, but I am not able to understand which part of Java is responsible.
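For question 1, a minimal sketch of how to check what the 99.44% covers from inside the container. `docker stats` reports the container cgroup's memory usage against its limit, which covers every process in the container, not just the JVM. The paths below assume cgroup v1 and that procps is available in the image; adjust for cgroup v2 if needed:

```
# The cgroup counters that `docker stats` reads from (cgroup v1 paths):
cat /sys/fs/cgroup/memory/memory.usage_in_bytes   # current usage, ~474 MiB here
cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # the 476.8 MiB limit

# Per-process resident set size (RSS), to confirm which process dominates:
ps -eo pid,rss,comm --sort=-rss | head
```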
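For question 3, note that a heap dump shows only Java heap objects (and typically only the live ones), while the JVM's resident memory also includes committed-but-unused heap, metaspace, thread stacks, the JIT code cache, GC bookkeeping and direct buffers, none of which appear in the dump. JDK 11's Native Memory Tracking can break this down. A sketch, where `-jar app.jar` stands in for the actual start command and `<pid>` is the JVM's process id:

```
# Start the JVM with Native Memory Tracking enabled (adds a small overhead):
java -XX:NativeMemoryTracking=summary -Xmx357m -jar app.jar

# At runtime, ask the JVM for a reserved/committed breakdown per area
# (Java Heap, Class, Thread, Code, GC, Internal, ...):
jcmd <pid> VM.native_memory summary
```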
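For question 4, the un-annotated addresses in the /proc files are most likely the JVM's own anonymous memory mappings (heap, metaspace, thread stacks) rather than a separate process spawned by the JDK. A sketch of how to inspect them, assuming `pmap` from procps is installed in the container:

```
# Largest mappings of the Java process, sorted by resident size (column 3, in KB);
# the big anonymous regions are usually the heap and metaspace:
pmap -x <pid> | sort -n -k3 | tail -n 20

# Cross-check: sum the resident size of all mappings from the kernel's view:
awk '/^Rss:/ {sum += $2} END {print sum " kB"}' /proc/<pid>/smaps
```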

Any help on the above would be much appreciated.

Vaibhav Jain
  • Does this answer your question? [Java 11 application as lightweight docker image](https://stackoverflow.com/questions/53669151/java-11-application-as-lightweight-docker-image) – Valerij Dobler Jun 06 '23 at 07:07
  • Can you share your JVM args for the process? – xyz Jun 06 '23 at 07:08
  • It would also help to see a picture of the used heap over some period of time – xyz Jun 06 '23 at 07:18
  • Try both of these: 1) run your application on the host without Docker and compare the memory; 2) run another sample Java application to check whether it also consumes this much memory – Hana Bzh Jun 06 '23 at 07:40
  • Thanks @xyz for the response; these are the args: `-Dopentracing.spring.cloud.log.enabled=false -Dopentracing.jaeger.udp-sender.max-packet-size=1499 -Dopentracing.jaeger.log-spans=false -Dlog4j2.formatMsgNoLookups=true -Xmx357m`. And answering your second question, I collected these after 48 hours of runtime :) – Vaibhav Jain Jun 06 '23 at 08:47
  • @HanaBzh, I tried that already: I connected my application to VisualVM while running it locally. In that scenario I could capture the heap memory too, but the results were exactly the same as the heap dump above, i.e. memory in the range of 70 to 100 MB. I didn't see any runtime spikes – Vaibhav Jain Jun 06 '23 at 08:51
  • `-Xmx357m` configures the JVM with up to 357 MB of heap memory, plus whatever native memory the JVM needs. I'm guessing this adds up to 400 MB+ – aled Jun 06 '23 at 12:45
  • @aled, but doesn't `-Xmx357m` mean the maximum amount of memory that can be allocated to the Java heap? I believe it's not a fixed allocation to the heap, so I don't see how this could be the problem. But let me give it a try – Vaibhav Jain Jun 12 '23 at 08:04
  • And @aled, even if I accept that `-Xmx357m` is causing the memory spike, why isn't it reflected in the heap dump? – Vaibhav Jain Jun 12 '23 at 08:05
  • The heap can start with less memory and grow up to the maximum. Non-heap memory doesn't appear in a heap dump. – aled Jun 12 '23 at 10:01
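A quick way to observe the distinction aled describes, assuming the JDK tools are available inside the container (`<pid>` is the Java process id): the heap's committed capacity can grow toward the `-Xmx357m` ceiling and count against the container limit even while the used portion that a heap dump reflects stays small.

```
# Sample GC statistics every 5 seconds; compare the capacity columns (EC, OC,
# in KB) with the corresponding utilisation columns (EU, OU):
jstat -gc <pid> 5000

# One-off view of the heap's current committed size versus usage:
jcmd <pid> GC.heap_info
```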
