
I am working on a problem with OptaPlanner and have been reading about what the memory footprint should look like during solving. I have been doing this because an OOM error keeps coming up when I scale the problem up.

The documentation shows RAM usage during solving; I understand that memory grows a little over a baseline because of the dataset. In my problem, however, a snapshot of the memory evolution during solving shows it growing incrementally:

[Image: Memory Evolution during Local Search Solving for my problem]

I have also profiled with VisualVM, and the accumulating objects come from Drools, but I could not dig any deeper than that.
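To put numbers next to the VisualVM view, a minimal sketch like the following can log used heap from inside the JVM while the solver runs (the class name and the one-second sampling interval are arbitrary choices; the solver call is elided):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeapSampler {

    public static void main(String[] args) {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        ScheduledExecutorService sampler = Executors.newSingleThreadScheduledExecutor(runnable -> {
            Thread thread = new Thread(runnable, "heap-sampler");
            thread.setDaemon(true); // do not keep the JVM alive once solving ends
            return thread;
        });
        // Log used heap (in MiB) once per second while the solver runs.
        sampler.scheduleAtFixedRate(() -> {
            long usedMiB = memoryBean.getHeapMemoryUsage().getUsed() / (1024 * 1024);
            System.out.println("used heap: " + usedMiB + " MiB");
        }, 0, 1, TimeUnit.SECONDS);

        // ... build the Solver and call solver.solve(problem) here (a blocking call) ...

        sampler.shutdown();
    }
}
```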

  1. My first question relates to the theoretical memory footprint. What do the "peaks" correspond to? Are they directly related to solving steps?

  2. Also, could this continuous memory growth in my problem be caused by inefficiently formulated Drools constraints (see the sketch after this list)? Or should I be focusing elsewhere?
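My constraints are written in DRL, but to illustrate the kind of inefficiency I mean, here is the same idea in OptaPlanner's Constraint Streams API: a pair join that filters after the fact, versus one that pushes the equality tests into the join. Lesson, getRoom(), and getTimeslot() are hypothetical names; the underlying cross-product concern applies equally to DRL patterns that match pairs on unconstrained fields.

```java
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.Joiners;

public class ExampleConstraints {

    // Hypothetical planning entity, for illustration only.
    public static class Lesson {
        private String room;
        private String timeslot;
        public String getRoom() { return room; }
        public String getTimeslot() { return timeslot; }
    }

    // Inefficient: pairs every Lesson with every other Lesson, then filters.
    // The engine has to materialize the full cross product before filtering.
    Constraint roomConflictSlow(ConstraintFactory factory) {
        return factory.fromUniquePair(Lesson.class)
                .filter((a, b) -> a.getRoom().equals(b.getRoom())
                        && a.getTimeslot().equals(b.getTimeslot()))
                .penalize("Room conflict (slow)", HardSoftScore.ONE_HARD);
    }

    // Better: the equality tests go into the join itself, so the engine
    // can index on room and timeslot instead of building a cross product.
    Constraint roomConflictFast(ConstraintFactory factory) {
        return factory.fromUniquePair(Lesson.class,
                        Joiners.equal(Lesson::getRoom),
                        Joiners.equal(Lesson::getTimeslot))
                .penalize("Room conflict", HardSoftScore.ONE_HARD);
    }
}
```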

pineapplw
  • Could be a number of things. It will be hard to solve this through StackOverflow - this is something for an actual, deep support engagement. To answer the questions: 1. The peaks will likely occur just before the GC collects garbage (probably stop-the-world GCs); they are not related to steps. 2. The growth could indicate a memory leak - anywhere from your constraint code, how OptaPlanner's planning cloning interacts with your code, how Drools interacts with your code, or other causes. – Geoffrey De Smet Dec 01 '20 at 13:31
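Following up on that comment: one way to tell the normal GC sawtooth from a genuine leak is to force a full collection between solver runs and watch whether the post-GC floor keeps rising. A minimal sketch of that check (System.gc() is only a request to the JVM, so treat the numbers as indicative; the solver call is elided):

```java
import java.lang.management.ManagementFactory;

public class LeakCheck {

    // Returns used heap in MiB after requesting a full GC.
    // System.gc() is only a hint; most JVMs honor it, but it is not guaranteed.
    static long usedHeapAfterGcMiB() {
        System.gc();
        long used = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed();
        return used / (1024 * 1024);
    }

    public static void main(String[] args) {
        long baseline = usedHeapAfterGcMiB();

        // ... run solver.solve(problem) one or more times here ...

        long afterSolve = usedHeapAfterGcMiB();
        // A post-GC floor that keeps climbing across repeated solves suggests
        // retained objects (a leak) rather than the normal GC sawtooth.
        System.out.println("baseline=" + baseline + " MiB, afterSolve=" + afterSolve + " MiB");
    }
}
```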

0 Answers