
I have a Hadoop cluster set up with the rmr2 and rhdfs packages installed. I've been able to run some sample MR jobs through the CLI and through R scripts. For example, this works:

#!/usr/bin/env Rscript
require('rmr2')

# Write the integers 1..1000 to HDFS, square each value in a map-only
# job, then read the result back into a local data frame
small.ints = to.dfs(1:1000)
out = mapreduce(input = small.ints, map = function(k, v) keyval(v, v^2))
df = as.data.frame(from.dfs(out))
colnames(df) = c('n', 'n2')
str(df)

Final Output:

DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

'data.frame':   1000 obs. of  2 variables:
 $ n : int  1 2 3 4 5 6 7 8 9 10 ...
 $ n2: num  1 4 9 16 25 36 49 64 81 100 ...

I'm now trying to move on to the next step of writing my own MR job. I have a file (`/user/michael/batsmall.csv`) with some batting statistics:

aardsda01,2004,1,SFN,NL,11,11,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,11
aardsda01,2006,1,CHN,NL,45,43,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,45
aardsda01,2007,1,CHA,AL,25,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2
aardsda01,2008,1,BOS,AL,47,5,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,5
aardsda01,2009,1,SEA,AL,73,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
aardsda01,2010,1,SEA,AL,53,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0

(batsmall.csv is an extract of a much larger file, but really I'm just trying to prove I can read and analyze a file from HDFS.)
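For reference, a small rhdfs sketch of how such an extract can be copied into HDFS and checked (the local path here is just an example, not necessarily what the asker used):

#!/usr/bin/env Rscript
require('rhdfs')

hdfs.init()

# Copy the local extract into HDFS, then list the directory to confirm
# (adjust the local path; this one is an assumption)
hdfs.put("/home/michael/r/batsmall.csv", "/user/michael/batsmall.csv")
hdfs.ls("/user/michael")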

Here's the script I have:

#!/usr/bin/env Rscript

require('rmr2');
require('rhdfs');

hdfs.init()
hdfs.rmr("/user/michael/rMean")

findMean = function (input, output) {
  mapreduce(input = input,
            output = output,
            input.format = 'csv',
            map = function(k, fields) {
              myField <- fields[[5]]
              keyval(fields[[0]], myField)
            },
            reduce = function(key, vv) {
              keyval(key, mean(as.numeric(vv)))
            }
    )
}

from.dfs(findMean("/home/michael/r/Batting.csv", "/home/michael/r/rMean"))
print(hdfs.read.text.file("/user/michael/batsmall.csv"))

This fails every time, and looking at the Hadoop logs it appears to be a broken-pipe error. I can't figure out what's causing it. Since other jobs run fine, I'd think it's an issue with my script rather than my configuration, but I can't pin it down. I am admittedly an R novice and relatively new to Hadoop.

Here's the job output:

[michael@hadoop01 r]$ ./rtest.r
Loading required package: rmr2
Loading required package: Rcpp
Loading required package: RJSONIO
Loading required package: methods
Loading required package: digest
Loading required package: functional
Loading required package: stringr
Loading required package: plyr
Loading required package: rhdfs
Loading required package: rJava

HADOOP_CMD=/usr/bin/hadoop

Be sure to run hdfs.init()
Deleted hdfs://hadoop01.dev.terapeak.com/user/michael/rMean
[1] TRUE
packageJobJar: [/tmp/Rtmp2XnCL3/rmr-local-env55d1533355d7, /tmp/Rtmp2XnCL3/rmr-global-env55d119877dd3, /tmp/Rtmp2XnCL3/rmr-streaming-map55d13c0228b7, /tmp/Rtmp2XnCL3/rmr-streaming-reduce55d150f7ffa8, /tmp/hadoop-michael/hadoop-unjar5464463427878425265/] [] /tmp/streamjob4293464845863138032.jar tmpDir=null
12/12/19 11:09:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/12/19 11:09:41 INFO mapred.FileInputFormat: Total input paths to process : 1
12/12/19 11:09:42 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-michael/mapred/local]
12/12/19 11:09:42 INFO streaming.StreamJob: Running job: job_201212061720_0039
12/12/19 11:09:42 INFO streaming.StreamJob: To kill this job, run:
12/12/19 11:09:42 INFO streaming.StreamJob: /usr/lib/hadoop/bin/hadoop job  -Dmapred.job.tracker=hadoop01.dev.terapeak.com:8021 -kill job_201212061720_0039
12/12/19 11:09:42 INFO streaming.StreamJob: Tracking URL: http://hadoop01.dev.terapeak.com:50030/jobdetails.jsp?jobid=job_201212061720_0039
12/12/19 11:09:43 INFO streaming.StreamJob:  map 0%  reduce 0%
12/12/19 11:10:15 INFO streaming.StreamJob:  map 100%  reduce 100%
12/12/19 11:10:15 INFO streaming.StreamJob: To kill this job, run:
12/12/19 11:10:15 INFO streaming.StreamJob: /usr/lib/hadoop/bin/hadoop job  -Dmapred.job.tracker=hadoop01.dev.terapeak.com:8021 -kill job_201212061720_0039
12/12/19 11:10:15 INFO streaming.StreamJob: Tracking URL: http://hadoop01.dev.terapeak.com:50030/jobdetails.jsp?jobid=job_201212061720_0039
12/12/19 11:10:15 ERROR streaming.StreamJob: Job not successful. Error: NA
12/12/19 11:10:15 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, in.folder = if (is.list(input)) { :
  hadoop streaming failed with error code 1
Calls: findMean -> mapreduce -> mr
Execution halted

And a sample exception from the job tracker:

java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:572)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:136)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:393)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
Ilion

1 Answer


You need to inspect the stderr of the failed attempt; the JobTracker web UI is the easiest way to get there. My educated guess is that `fields` is a data frame and you are accessing it like a list (possible, but unusual), and the errors follow indirectly from that.
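To illustrate (a rough sketch, not a tested fix): with the csv input format, rmr2 hands your map function a data frame, and R indexes columns from 1, so `fields[[0]]` fails outright. Something along these lines is closer to idiomatic use; the sep and stringsAsFactors arguments, and the choice of column 5, are assumptions about your data:

findMean = function (input, output) {
  mapreduce(input = input,
            output = output,
            # read.table-style options; comma separator assumed
            input.format = make.input.format("csv", sep = ",",
                                             stringsAsFactors = FALSE),
            map = function(k, fields) {
              # fields is a data frame; columns start at 1 in R
              keyval(fields[[1]], fields[[5]])
            },
            reduce = function(key, vv) {
              keyval(key, mean(as.numeric(vv)))
            }
    )
}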

We also have a debugging document on the RHadoop wiki with this and many more suggestions.

Finally, there is a dedicated RHadoop Google group where you can interact with a large number of enthusiastic users. Or you can be on your own on SO.

piccolbo
  • Not sure I'm on my own on SO; that seems like a weird comment. But you did get it right that I wasn't accessing fields correctly. I changed the input.format to text and then did the split myself (roughly as in the sketch after these comments), and everything works. I'll have to learn more about the csv input.format. Thanks! – Ilion Dec 19 '12 at 20:21
  • @Ilion: Doesn't seem like a weird comment to me. Antonio is just saying that there is a dedicated venue for questions about 'rmr' and 'RHadoop', and he thinks it could be more effective for you to post there and that you might get further useful tips by searching that archive. – IRTFM Dec 19 '12 at 20:41
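For reference, the text-format workaround Ilion describes above might look roughly like this (a sketch only; it assumes rmr2's "text" format delivers one character string per input line, and the column positions come from the sample data in the question):

require('rmr2')

out = mapreduce(input = "/user/michael/batsmall.csv",
                input.format = "text",
                map = function(k, lines) {
                  # split each raw line on commas ourselves
                  parts = strsplit(lines, ",")
                  keys  = sapply(parts, `[`, 1)              # player id
                  vals  = as.numeric(sapply(parts, `[`, 5))  # assumed stat column
                  keyval(keys, vals)
                },
                reduce = function(key, vv) {
                  keyval(key, mean(vv))
                })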