
I have seen lots of `java.lang.OutOfMemoryError: Java heap space` errors in Elasticsearch, but I couldn't find any help page that describes the possible reasons behind them. I am getting errors like:

    2015-04-09 13:56:47,527 DEBUG action.index [Emil Blonsky] observer: timeout notification from cluster service. timeout setting 1m, time since start 1m
    Caused by: java.lang.OutOfMemoryError: Java heap space
– Ashish Pancholi
  • It would have been better if whoever down-voted had dropped even a single link describing the reasons for `java.lang.OutOfMemoryError: Java heap space` in Elasticsearch in the comments. – Ashish Pancholi Jun 12 '15 at 12:48
  • The downvotes are because your question is like going to a doctor and asking him what the possible causes of a headache are... – Stephen C Jun 12 '15 at 12:53
  • So what would you do if you googled before going to the doctor and didn't find a single page that matched your symptoms? Would you take medicine, or go to the doctor? Stephen, I am not a developer of Elasticsearch. It is an open-source, distributed, RESTful search and analytics engine. – Ashish Pancholi Jun 12 '15 at 13:09
  • When I go to the doctor, I tell him what the symptoms are. You need to explain what you are doing. – Stephen C Jun 12 '15 at 14:42

2 Answers


Possible reasons (some of them; see the diagnostic commands after the list):

  1. loading too much data into the heap, most often because of fielddata (used mostly for sorting and aggregations)
  2. a configuration mistake, where you thought you set the heap size somewhere, but the setting was wrong or in the wrong place, so the node starts with the default (min 256MB, max 1GB), and that is not enough
  3. pushing too much data at once through very heavy indexing, for example a bulk size that's way too large
  4. querying with a very large "size" parameter (how large is too large depends on how much memory you have, but a size of 2 billion will surely bring the cluster down)
  5. especially for master-eligible nodes that don't have enough memory, the cluster state is a likely culprit. The cluster state can get very large if a lot of aliases are defined for each index.
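
If you want to narrow down which of these is the culprit, the node exposes the relevant numbers over its REST API. A minimal sketch, assuming a node listening on the default localhost:9200:

    # heap usage per node: look at jvm.mem.heap_used_percent
    curl -s 'localhost:9200/_nodes/stats/jvm?pretty'

    # fielddata memory, broken down per node and field
    curl -s 'localhost:9200/_cat/fielddata?v'

    # rough size of the cluster state in bytes
    curl -s 'localhost:9200/_cluster/state' | wc -c

If `_cat/fielddata` shows a handful of fields holding most of the heap, reason 1 is the likely one; a `heap_used_percent` that is high right after startup points at reason 2.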

An OOMed node needs to be restarted, btw.

– Andrei Stefan

I can't speak to your question directly, but there are a couple of approaches to this type of problem that I've found useful in the past:

  1. Use JVisualVM to inspect the contents of the heap. JVisualVM is a free tool that ships with the JDK. It lets you inspect details of running JVMs, including taking a full dump of the heap (see the jmap sketch after this list).

  2. If you suspect the error is simply due to the JVM not having enough memory available, you can increase the heap manually via the JVM heap parameters (`-Xms` and `-Xmx`) or Elasticsearch's own heap setting (sketched below).
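
For the heap-dump route in option 1, you don't need the GUI running on the server itself; the standard JDK command-line tools can write a dump that you open in JVisualVM later. A minimal sketch (the PID and file path are placeholders):

    # find the PID of the Elasticsearch JVM
    jps -l

    # write the live heap objects to a file, then open it in JVisualVM
    jmap -dump:live,format=b,file=/tmp/es-heap.hprof <pid>

Starting the JVM with `-XX:+HeapDumpOnOutOfMemoryError` also makes it write a dump automatically at the moment the error is thrown, which is often the most useful snapshot.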
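
And for option 2, a sketch of how the heap is typically raised on the Elasticsearch 1.x line that the question's log comes from (`4g` is just an example value, not a recommendation):

    # ES_HEAP_SIZE sets both -Xms and -Xmx for the node's JVM
    export ES_HEAP_SIZE=4g
    ./bin/elasticsearch

The usual guidance is to give the heap about half of the machine's RAM, leaving the rest for the filesystem cache, and to stay below ~32GB so the JVM can keep using compressed object pointers.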

– T.D. Smith