
We are processing 50 million records, and at the end of the pipeline we apply the RANK function in a Pig script. The Pig job fails while executing RANK with the error below: "org.apache.hadoop.mapreduce.counters.LimitExceededException: Too many counters: 121 max=120"

We have tried setting the limit in the Pig script with the command below, but we still get the same error:

set mapreduce.job.counters.max 1000;

I would really appreciate it if anyone could help me resolve this error, or suggest an alternative way to use the RANK function on 50+ million processed records.
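For reference, a minimal sketch of how the override is typically placed in a Pig script (the relation and file names are hypothetical). The `set` statement should appear before the statements it affects and end with a semicolon; note that a cluster-side limit in mapred-site.xml can still cap the value you set here:

```pig
-- Hypothetical sketch: raise the counter limit before the job's statements.
SET mapreduce.job.counters.max 1000;

-- Load, rank, and store (names are placeholders, not from the original question).
data   = LOAD 'input' AS (id:chararray, score:int);
ranked = RANK data BY score DESC;
STORE ranked INTO 'output';
```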

1 Answer

Check the counter limit value in mapred-site.xml. Most likely the limit is set to 120 in that file. The file is located in your Hadoop configuration directory, e.g. $HADOOP_HOME/conf/mapred-site.xml:

<property>
    <name>mapreduce.job.counters.limit</name>
    <value>1000</value> <!-- most likely this is set to 120 in your case -->
</property>

In Hadoop 2.x the property is named mapreduce.job.counters.max:

<property>
    <name>mapreduce.job.counters.max</name>
    <value>1000</value> <!-- most likely this is set to 120 in your case -->
</property>
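To confirm which value your cluster actually has configured, you can grep the config file directly. A small self-contained sketch (it writes a sample file under /tmp; on a real cluster you would point the grep at $HADOOP_HOME/conf/mapred-site.xml instead):

```shell
# Hypothetical sketch: inspect the counter-limit value in a mapred-site.xml.
# We create a sample file here; on a cluster, grep the real config instead.
cat > /tmp/mapred-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>mapreduce.job.counters.max</name>
    <value>120</value>
  </property>
</configuration>
EOF

# Show the property name and the line after it (the value).
grep -A1 'counters' /tmp/mapred-site-sample.xml
```

If the value printed is 120, raise it in the file and restart the affected MapReduce services so the new limit takes effect.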