
I am running the famous wordcount example. I have a local and a prod Hadoop setup. The same example works in prod, but it is not working locally. Can someone tell me what I should look for? The job is getting stuck. The task logs are:

~/tmp$ hadoop jar wordcount.jar WordCount /testhistory /outputtest/test
Warning: $HADOOP_HOME is deprecated.

13/08/29 16:12:34 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/08/29 16:12:35 INFO input.FileInputFormat: Total input paths to process : 3
13/08/29 16:12:35 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/08/29 16:12:35 WARN snappy.LoadSnappy: Snappy native library not loaded
13/08/29 16:12:35 INFO mapred.JobClient: Running job: job_201308291153_0015
13/08/29 16:12:36 INFO mapred.JobClient:  map 0% reduce 0%

Locally, Hadoop is running in pseudo-distributed mode. All three processes (namenode, datanode, jobtracker) are running. Let me know if some extra information is required.
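
For reference, a quick way to verify which daemons are actually up is jps, which lists the running JVM processes by class name. A sketch of what that looks like here (PIDs are illustrative):

~/tmp$ jps
4825 NameNode
4991 DataNode
5209 JobTracker
5378 Jps

Note that TaskTracker does not appear in this list.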

Abhishek Kumar
  • Anything interesting in the JT/TT logs? – Tariq Aug 30 '13 at 06:09
  • JobTracker logs: `http://pastebin.com/jY1CAQaA` I don't see any issues in the log file. – Abhishek Kumar Aug 30 '13 at 06:21
  • Thank you for providing the info. What about h/w? Is it the same as the prod cluster? Try to monitor and see if there is some h/w-related issue, most probably RAM. – Tariq Aug 30 '13 at 06:29
  • There are hardware differences, but I can't see why that would be an issue. This simple task runs on a very small file (2-3 KB), which is far smaller than the available RAM. – Abhishek Kumar Aug 30 '13 at 06:35
  • Oh, absolutely. One more thing: you have written that NN, DN and JT are running fine. What about the TT? – Tariq Aug 30 '13 at 06:39
  • There is no tasktracker running. I don't have any details about this, as I have just started learning Hadoop. I am googling for more details, but suggestions and details about the tasktracker from you would be welcome. – Abhishek Kumar Aug 30 '13 at 06:53
  • You must have a running TT. The TT is the daemon that actually runs your mappers and reducers; without it, you can't go ahead. Please make sure it is running fine, or show me the TT logs. – Tariq Aug 30 '13 at 06:55
  • Awesome, it worked. I started the tasktracker and everything worked. Thanks :). If you want, you can post it as a solution and I will accept it. – Abhishek Kumar Aug 30 '13 at 07:03

5 Answers

The tasktracker seems to be missing.

Try:

hadoop tasktracker &
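
That runs the daemon in the foreground (the trailing & backgrounds it in your shell). In Hadoop 1.x you can also start it as a managed daemon, with output going to the usual Hadoop log directory:

hadoop-daemon.sh start tasktracker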
user31986

In Hadoop 2.x this problem can also be caused by memory misconfiguration; see MapReduce in Hadoop 2.2.0 not working.

nanounanue

I had the same problem and this page helped me: http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide/

Basically I solved my problem with the following three steps; a sketch of the resulting XML is shown after step 3. The catch is that I had to configure much more memory than I actually have.

1) yarn-site.xml

  • yarn.resourcemanager.hostname = hostname_of_the_master
  • yarn.nodemanager.resource.memory-mb = 4000
  • yarn.nodemanager.resource.cpu-vcores = 2
  • yarn.scheduler.minimum-allocation-mb = 4000

2) mapred-site.xml

  • yarn.app.mapreduce.am.resource.mb = 4000
  • yarn.app.mapreduce.am.command-opts = -Xmx3768m
  • mapreduce.map.cpu.vcores = 2
  • mapreduce.reduce.cpu.vcores = 2

3) Copy these files to all nodes.
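
For reference, a sketch of how the yarn-site.xml entries above look in XML (mapred-site.xml follows the same <property> pattern; the values are the ones from the list above, not general recommendations):

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hostname_of_the_master</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4000</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>4000</value>
  </property>
</configuration>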

mountrix

Apart from starting the tasktracker (hadoop tasktracker &) and the other issues mentioned, please check your code and make sure there is no infinite loop or other bug; a hang can come from your own mapper or reducer as well!

GoingMyWay

If this problem comes up when running Hive queries, check whether you are joining two very big tables without leveraging partitions, as in the sketch below. Not using partitions can lead to long-running full table scans, and hence a job stuck at map 0% reduce 0%.
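
A minimal sketch, assuming two hypothetical tables orders and customers that are both partitioned by a dt column; pinning the join to one partition lets Hive prune the scan instead of reading every partition:

SELECT o.order_id, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id
WHERE o.dt = '2013-08-29'
  AND c.dt = '2013-08-29';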

abhiieor