
I am using Hadoop 2.8 and running a streaming job which reads a 100 MB CSV and performs some logic on it, but I got this error in the console:

Container [pid=20975,containerID=container_1502190583079_0006_01_000002] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 1.7 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1502190583079_0006_01_000002 :

I have no idea how to fix this. Can anyone help? Thanks in advance.

1 Answer


You should set the map/reduce java.opts to about 75%–80% of the map/reduce container memory. If that still doesn't work, increase the memory of the map/reduce container.

Configuration for map/reduce container memory (mapred-site.xml):

  mapreduce.map.memory.mb: 1024
  mapreduce.reduce.memory.mb: 2048

Configuration for the heap size of the map/reduce Java process (mapred-site.xml):

  mapreduce.map.java.opts: -Xmx768m
  mapreduce.reduce.java.opts: -Xmx1536m
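
Since the error shows the map container hitting its 1 GB physical limit, one option is to raise the map container to 2048 MB and keep the heap at roughly 75% of that. In mapred-site.xml these settings are XML properties; the values below are illustrative, not required:

```xml
<!-- mapred-site.xml: illustrative values; heap (java.opts) kept at ~75% of container memory -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1536m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1536m</value>
</property>
```

The gap between the container size and -Xmx leaves room for JVM overhead (metaspace, thread stacks, native buffers), which is why the heap should not fill the whole container.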

You can find more details in this answer.
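
If you don't want to change mapred-site.xml cluster-wide, the same properties can be passed per-job as generic -D options on the streaming command line (they must come before the streaming-specific options). The jar path, input/output paths, and mapper/reducer script names below are placeholders, a sketch rather than your exact command:

```shell
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
  -D mapreduce.map.memory.mb=2048 \
  -D mapreduce.map.java.opts=-Xmx1536m \
  -D mapreduce.reduce.memory.mb=2048 \
  -D mapreduce.reduce.java.opts=-Xmx1536m \
  -input /user/me/input.csv \
  -output /user/me/output \
  -mapper mapper.py \
  -reducer reducer.py \
  -file mapper.py \
  -file reducer.py
```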