
I have configured Apache Hadoop 2.7.x with 3 workers and have run into an exception like:

java.lang.OutOfMemoryError: GC overhead limit exceeded

After some searching, I found that I should increase my JVM heap size. There are three relevant parameters in Hadoop: 1. mapred.child.java.opts, 2. mapreduce.map.java.opts, 3. mapreduce.reduce.java.opts. As I understand it, the last two set the heap for the map and reduce task JVMs, i.e. the JVMs that run the jar I submit to Hadoop. I have set these two to about 80% of mapreduce.map.memory.mb and mapreduce.reduce.memory.mb respectively.
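
For concreteness, here is a minimal sketch of what that looks like in my mapred-site.xml; the container sizes (2048 MB and 4096 MB) are illustrative values, not my real ones:

```xml
<!-- Illustrative values only: 2048 MB map and 4096 MB reduce containers,
     with each task heap set to roughly 80% of its container. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value> <!-- ~80% of 2048 MB -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276m</value> <!-- ~80% of 4096 MB -->
</property>
```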

Now my questions are: what is the maximum value I can set for mapred.child.java.opts, and where is it actually used? Also, is the GC overhead error being reported against this setting or against the map/reduce heap?

Also, what is the relation of the map/reduce JVM to its container in terms of resources, especially memory? What maximum value can I give for mapred.child.java.opts? By analogy with mapreduce.map.java.opts, I assume it should not be more than mapreduce.map.memory.mb (see the sketch below). I have reviewed a similar post, but unfortunately it did not clarify my understanding.
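
My current (possibly wrong) understanding is that mapred.child.java.opts acts as a fallback for both task JVMs when the per-task *.java.opts are unset, so it would have to stay below the smaller of the two container sizes. A sketch of that assumption, using the same illustrative sizes as above:

```xml
<!-- My assumption: this is only a default, used when mapreduce.map.java.opts
     and mapreduce.reduce.java.opts are not set, so it must also fit
     inside the container that launches the task JVM. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1638m</value> <!-- kept below the smaller (map) container of 2048 MB -->
</property>
```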

Hafiz Muhammad Shafiq
