I'm currently working on a project that uses Hadoop (2.7.0). I have a two-node cluster configured and working (for the most part). I can run mapper/reducer jobs manually without any problems, but when I try to start a job with hadoopy I get an error. After debugging, I see it originates from the following command that hadoopy executes:
hadoop fs -mkdir _hadoopy_tmp
This yields the error:
mkdir: '_hadoopy_tmp': No such file or directory
When I run it manually, mkdir works fine as long as I start the directory name with a '/'. If I leave out the '/', I get the same error as above. The same goes for the ls command: ls / gives me a result, while ls . gives me an error that there is no such file or directory. I'm guessing I screwed up the Hadoop configuration somewhere; I just can't figure out where.
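For reference, this is roughly what the terminal session looks like (the directory names are just examples):

hadoop fs -mkdir /somedir    # works: absolute path
hadoop fs -mkdir somedir     # fails: mkdir: `somedir': No such file or directory
hadoop fs -ls /              # works: lists the HDFS root
hadoop fs -ls .              # fails: ls: `.': No such file or directory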
EDIT: to clarify: I'm aware that you should call mkdir with an absolute path (i.e. with a '/' in front of it). When interacting with Hadoop through the terminal, I do this. However, the hadoopy framework doesn't seem to do it (it throws the error shown above). My question is: is there a fix/workaround for this in hadoopy, or do I have to rewrite its source code?
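For context, this is roughly the kind of call I'm making (the paths and script name below are just placeholders, not my real job); the _hadoopy_tmp directory is something hadoopy creates internally, not something I pass in:

import hadoopy

# Placeholder paths and script name, just to illustrate the call.
# While launching, hadoopy runs "hadoop fs -mkdir _hadoopy_tmp" itself,
# which is where the error above comes from.
hadoopy.launch('/user/me/input', '/user/me/output', 'job.py')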