I have a Hadoop cluster set up and working under a common default username "user1". I want to put files into Hadoop from a remote machine which is not part of the Hadoop cluster. I configured the Hadoop files on the remote machine so that when

hadoop dfs -put file1 ...

is called from the remote machine, it puts file1 onto the Hadoop cluster.

The only problem is that I am logged in as "user2" on the remote machine, and that doesn't give me the result I expect. In fact, the above command can only be executed on the remote machine as:

hadoop dfs -put file1 /user/user2/testFolder

However, what I really want is to be able to store the file as:

hadoop dfs -put file1 /user/user1/testFolder

If I try to run the last command, Hadoop throws an error because of access permissions. Is there any way that I can specify the username within the hadoop dfs command?

I am looking for something like:

hadoop dfs -username user1 file1 /user/user1/testFolder
Amit Joshi
reza

5 Answers

If you use the HADOOP_USER_NAME environment variable, you can tell HDFS which user name to operate with. Note that this only works if your cluster isn't using security features (e.g. Kerberos). For example:

HADOOP_USER_NAME=hdfs hadoop dfs -put ...
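
Applied to the scenario in the question, a minimal sketch (assuming the cluster has no Kerberos and that /user/user1/testFolder already exists):

  # run just this command with the HDFS identity user1, while staying logged in as user2
  HADOOP_USER_NAME=user1 hadoop dfs -put file1 /user/user1/testFolder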
Derek Chen-Becker

This may not matter to anybody, but I am using a small hack for this.

I export HADOOP_USER_NAME in .bash_profile, so that every time I log in, the user is set.

Just add the following line of code to .bash_profile:

export HADOOP_USER_NAME=<your hdfs user>
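
For example, a sketch of the full setup on the remote machine, using the user1 name from the question (the exact profile file may vary by shell and OS):

  # persist the setting for future logins, then load it into the current shell
  echo 'export HADOOP_USER_NAME=user1' >> ~/.bash_profile
  source ~/.bash_profile

  # subsequent HDFS commands now run as user1
  hadoop dfs -put file1 /user/user1/testFolder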
bioShark

By default, authentication and authorization are turned off in Hadoop. According to Hadoop: The Definitive Guide (by the way, a nice book that I would recommend buying):

The user identity that Hadoop uses for permissions in HDFS is determined by running the whoami command on the client system. Similarly, the group names are derived from the output of running groups.

So you can create a new whoami command which returns the required username and put it in the PATH appropriately, so that your whoami is found before the actual whoami that comes with Linux. Similarly, you can play with the groups command.

This is a hack and won't work once authentication and authorization have been turned on.
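
A minimal sketch of this hack, following the steps discussed in the comments below (the ~/fakebin directory name is just an illustrative choice; user1 comes from the question):

  # create a fake whoami that always reports user1
  mkdir -p ~/fakebin
  printf '#!/bin/sh\necho user1\n' > ~/fakebin/whoami
  chmod +x ~/fakebin/whoami

  # put it ahead of the real whoami on the PATH (e.g. in ~/.bashrc)
  export PATH="$HOME/fakebin:$PATH"

  # verify, then retry the put
  whoami    # should print user1
  hadoop dfs -put file1 /user/user1/testFolder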

Praveen Sripati
  • Yes - read somewhere that Hadoop was initially used among a small set of trusted users and security was not really a concern; later, as usage grew, security was added on top of Hadoop. Actually, security should be a concern from the ground up in software design and not an afterthought. Just my 2c. – Praveen Sripati Jul 07 '12 at 01:39
  • thanks. could you please elaborate on how I should create a new "whoami" command and put it in the path? maybe with an example. thanks – reza Jul 09 '12 at 16:34
  • create a text file `whoami` with `echo yourname` and give it executable permissions. Add the folder of the `whoami` as the first thing to the PATH variable in the .bashrc file. – Praveen Sripati Jul 10 '12 at 01:27
  • nice hack but it doesn't work. I created the whoami file and updated my path. now when I run whoami it returns user1. But when I try to put files into hadoop using: "hadoop dfs -put file1 /user/user1/testFolder" it throws an error due to permissions and specifies the username as user2 :( – reza Jul 10 '12 at 20:35
  • For some reason Hadoop is not picking the `whoami` which you created. Set the path properly and it should work. – Praveen Sripati Jul 11 '12 at 01:20
  • Could you please elaborate on how I should set the path properly? I've set the path through ~/.profile and when executing whoami it works as expected. Any idea on why Hadoop is not picking the whoami? – reza Jul 12 '12 at 01:13
  • post another query in SO and someone will help you – Praveen Sripati Jul 12 '12 at 01:20
  • I wonder, in this case, who the actual client calling "whoami" is. I believe it's in the hadoop Shell wrapper class. That wrapper is probably called either by the data node which is attempting to create a file or by the client itself. – jayunit100 Apr 11 '13 at 15:19
  • I think there's a caveat with the idea that it's running 'groups' to get the group to use in the hdfs file. I'm climbing the learning curve, but here's an example: Right now my regular account does not belong to any hadoop-related groups (e.g., hdfs, hive, or hadoop). When I -put a file with myuser:mygroup owner:group into hdfs, it shows up with myuser:myuser there. Any thoughts? – Matthew Cornell Nov 18 '13 at 19:29

Shell/Command way:

Set the HADOOP_USER_NAME variable, and execute the hdfs commands:

  export HADOOP_USER_NAME=manjunath
  hdfs dfs -put <source>  <destination>

Pythonic way:

  import os
  # must be set in this process's environment before the HDFS client or subprocess is started
  os.environ["HADOOP_USER_NAME"] = "manjunath"
Manju N

There's another post with something similar to this that could provide a workaround for you, using streaming via ssh:

cat file.txt | ssh user1@clusternode "hadoop fs -put - /path/in/hdfs/file.txt"

See putting a remote file into hadoop without copying it to local disk for more information.

Chris White
  • thanks. but that is my own post too. After trying that, I noticed that not using piping is faster. In fact, copying files to one of the hadoop machines using "scp" and then using "ssh" to copy the files from the local drive into hadoop turned out to be faster. I am not sure about the reason, but it probably has to do with limitations in terms of the amount of available buffer. Anyway, I am trying to skip both these steps and just use "hadoop" directly from a remote machine. It works in terms of copying files, but I am facing having files under two different usernames. – reza Jul 09 '12 at 18:19