
The JVM memory pressure of my AWS Elasticsearch cluster has been increasing consistently. The pattern I have seen for the last 3 days is that it increases by 1.1% every hour. This is on one of the 3 master nodes I have provisioned.

All other metrics seem to be in the normal range. The CPU is under 10% and there are barely any indexing or search operations being performed.

I have tried clearing the fielddata cache for all indices as mentioned in this document, but that has not helped.
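
For reference, this is roughly the call I made to clear the fielddata cache (a minimal sketch in Python; the domain endpoint is a placeholder and request signing/authentication is omitted):

    import requests

    # Placeholder endpoint; substitute your own AWS Elasticsearch domain endpoint.
    ENDPOINT = "https://search-my-domain.us-east-1.es.amazonaws.com"

    # Clear only the fielddata cache across all indices.
    resp = requests.post(f"{ENDPOINT}/_cache/clear", params={"fielddata": "true"})
    resp.raise_for_status()
    print(resp.json())  # shard-level summary of the cache clear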

Can anyone help me understand what might be the reason for this?

[Chart: JVM memory pressure increasing in a step pattern]

Pratik Mandrekar
  • Ideally this should not happen when there is no activity on the cluster. Can you double-check whether anything from https://aws.amazon.com/premiumsupport/knowledge-center/high-jvm-memory-pressure-elasticsearch/ is causing the leak? If nothing helps, you need to talk to AWS Support. – Prabhakar Reddy Aug 22 '20 at 17:19

1 Answer


I got this answer from AWS Support:

I checked the particular metric and can also see the JVM memory pressure increasing over the last few days. However, I do not think this is an issue, as JVM memory pressure is expected to increase over time. Also, garbage collection in ES runs once JVM memory pressure reaches 75% (currently it's around 69%), after which you would see a drop in the JVM metric of your cluster. If JVM memory pressure stays continuously above 75% and does not come down after GCs, that is a problem and should be investigated.
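
One way to verify that behaviour is to pull the master JVM pressure metric from CloudWatch and check whether it steps back down after a collection. A sketch, assuming boto3 and the standard MasterJVMMemoryPressure metric in the AWS/ES namespace (the domain name, account ID, and region are placeholders):

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/ES",
        MetricName="MasterJVMMemoryPressure",
        Dimensions=[
            {"Name": "DomainName", "Value": "my-domain"},    # placeholder domain
            {"Name": "ClientId", "Value": "123456789012"},   # placeholder account ID
        ],
        StartTime=datetime.utcnow() - timedelta(days=1),
        EndTime=datetime.utcnow(),
        Period=3600,           # one datapoint per hour
        Statistics=["Maximum"],
    )

    # A healthy pattern rises towards ~75% and then drops after a GC.
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Maximum"])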

The other thing you mentioned, that clearing the fielddata cache for all indices did not help in reducing JVM memory pressure, is because the dedicated master nodes do not hold any index data or the related caches. Clearing caches should help reduce JVM memory pressure on the data nodes.
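
To see where heap is actually being held on the data nodes, the node stats API reports per-node heap usage and fielddata cache size. Another sketch with a placeholder endpoint and authentication omitted:

    import requests

    ENDPOINT = "https://search-my-domain.us-east-1.es.amazonaws.com"  # placeholder

    # Per-node JVM heap usage and fielddata cache size.
    stats = requests.get(f"{ENDPOINT}/_nodes/stats/jvm,indices").json()

    for node_id, node in stats["nodes"].items():
        heap_pct = node["jvm"]["mem"]["heap_used_percent"]
        fielddata_bytes = node["indices"]["fielddata"]["memory_size_in_bytes"]
        print(f"{node['name']}: heap {heap_pct}%, fielddata {fielddata_bytes} bytes")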

Pratik Mandrekar