I am working on a problem with OptaPlanner and have been reading about what the memory footprint should look like during solving, because an OOM error keeps coming up when I scale up the problem.
The documentation shows RAM usage during solving, and I understand that memory grows somewhat over the baseline because of the dataset. However, a snapshot of the memory evolution in my problem during solving shows it growing steadily over time:
I have also been profiling with VisualVM, and the objects that accumulate come from Drools, but I could not dig any deeper than that.
So my first question is about the theoretical memory footprint: what do the "peaks" correspond to? Are they directly related to solver steps?
Also, could the continuous memory growth in my problem be caused by inefficiently formulated Drools constraints, or should I be focusing elsewhere?
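To make that last question concrete, here is a simplified, illustrative sketch of the kind of DRL rule I mean, using the classic scoreHolder pattern (the Lesson class, its fields, and the constraint itself are made up for the example, not my actual domain):

```
global org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScoreHolder scoreHolder;

// Illustrative rule only: penalize two lessons sharing the same room and timeslot.
rule "roomConflict"
when
    $left : Lesson($room : room, $timeslot : timeslot, $leftId : id)
    // id > $leftId avoids matching a lesson against itself and counting each pair twice
    $right : Lesson(room == $room, timeslot == $timeslot, id > $leftId)
then
    scoreHolder.addHardConstraintMatch(kcontext, -1);
end
```

My real rules are joins in roughly this style, just over larger fact counts.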