37

I am writing a shell script to put data into Hadoop as soon as it is generated. I can ssh to my master node, copy the files to a folder over there, and then put them into Hadoop. I am looking for a shell command that avoids copying the file to the master node's local disk first. To better explain what I need, here is what I have so far:

1) copy the file to the master node's local disk:

scp test.txt username@masternode:/folderName/

I have already set up an SSH connection using keys, so no password is needed to do this.

2) I can use ssh to remotely execute the hadoop put command:

ssh username@masternode "hadoop dfs -put /folderName/test.txt hadoopFolderName/"

What I am looking for is how to pipe/combine these two steps into one and skip saving a local copy of the file on the master node's disk.

In other words, I want to pipe several commands together so that the data goes into Hadoop without ever being written to the master node's local disk.

Thanks.

reza

5 Answers

42

Try this (untested):

cat test.txt | ssh username@masternode "hadoop dfs -put - hadoopFoldername/test.txt"

I've used similar tricks to copy directories around:

tar cf - . | ssh remote "(cd /destination && tar xvf -)"

This sends the output of local-tar into the input of remote-tar.
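
For completeness, here is a hedged sketch of how the same stdin trick could be wrapped into a small script that ships files into HDFS as they appear, which is what the question is after. The username@masternode address, the hadoopFolderName/ destination, and the hadoop dfs -put - form are taken from this page; the source directory, file names, and the loop itself are illustrative assumptions, untested like the rest of this answer.

#!/bin/sh
# Sketch: stream each generated file straight into HDFS over ssh,
# without ever writing it to the master node's local disk.
# Assumes passwordless ssh keys are already set up (as in the question).

SRC_DIR=/data/outgoing            # illustrative: where files are generated
REMOTE=username@masternode        # from the question
HDFS_DIR=hadoopFolderName         # from the question

for f in "$SRC_DIR"/*; do
    [ -f "$f" ] || continue
    name=$(basename "$f")
    # cat feeds the file into ssh; the remote put reads it from stdin ("-")
    cat "$f" | ssh "$REMOTE" "hadoop dfs -put - $HDFS_DIR/$name" \
        && rm -f "$f"             # drop the local copy only if the put succeeded
done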

sarnold
  • Nope, this doesn't work, for two reasons: 1) hadoop dfs -put /dev/stdin doesn't exist; 2) my files are in binary format; in fact it is test.bin rather than test.txt. – reza Jun 30 '12 at 00:59
  • Binary wouldn't matter -- `ssh` doesn't mangle 8-bit contents. Try `-` in place of `/dev/stdin`? – sarnold Jun 30 '12 at 01:01
  • So in that case, the only problem is that hadoop dfs -put /dev/stdin doesn't work. In fact, I just tried it and it fails :( – reza Jun 30 '12 at 01:02
  • It says: put: /dev/stdin (No such device or address) – reza Jun 30 '12 at 01:02
  • How about `-` in place of `/dev/stdin`? – sarnold Jun 30 '12 at 01:03
  • (and what kind of horrible system doesn't have `/dev/stdin`?) – sarnold Jun 30 '12 at 01:04
  • Oh great, using - instead of /dev/stdin solved the problem. So I am using the following command and it works fine: cat test.txt | ssh username@masternode "hadoop dfs -put - hadoopFolderName/test.txt" – reza Jul 02 '12 at 15:47
  • Piping problem is solved. However, the performance of piping is much slower than copying files first to the local disk of the master node and then copying them to Hadoop. Any idea? – reza Jul 02 '12 at 18:51
  • Which is slower? The entire operation or the specific `put`? – sarnold Jul 02 '12 at 22:09
  • The specific put. Copying a single file to the master node's local drive and then putting it into Hadoop via ssh remote is faster than piping cat | ssh remote. – reza Jul 02 '12 at 23:51
  • There is a nice solution here: http://one-line-it.blogspot.dk/2013/05/hadoop-copy-directly-to-hdfs-from.html – serup Feb 23 '16 at 10:27
  • Is there a size limit for transferring files using this approach? – amith murakonda May 23 '19 at 15:13
  • @amithmurakonda, I do not know if hadoop has an input limit in this fashion. ssh certainly doesn't, but the longer the ssh connection is held open, the more likely it is the connection may be dropped due to errors. Many of us have ssh connections or irc connections open for months, but at some point a disruption of a stateful firewall may cause the whole thing to fail. `rsync` would know how to resume such a thing, if both source and destination are files or directory trees. You may get better results asking a new question, though, with the details of your problem. Thanks. – sarnold May 23 '19 at 18:45
  • This solution worked with a little modification; just add the filename to the hdfs path: cat test.txt | ssh username@masternode "hdfs dfs -put - hadoopFoldername/test.txt" – DollyShukla Feb 04 '20 at 05:12

10

Is the node where you generate the data able to reach each of your cluster nodes (the name node and all of the data nodes)?

If you do have that connectivity, then you can just execute the hadoop fs -put command from the machine where the data is generated (assuming you have the Hadoop binaries installed there too):

#> hadoop fs -fs masternode:8020 -put test.bin hadoopFolderName/
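
As a hedged alternative to passing -fs on every call, the client machine's Hadoop configuration can be pointed at the cluster once, after which a plain put works. The property name is fs.default.name on Hadoop 1.x (fs.defaultFS on newer releases); the masternode:8020 address comes from the answer above, while the configuration path is an assumption that depends on your installation.

# Point the local client at the cluster once (conf path varies by version/install):
cat > "$HADOOP_HOME/conf/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://masternode:8020</value>
  </property>
</configuration>
EOF

# Now a plain put from the data-generating machine goes straight to HDFS:
hadoop fs -put test.bin hadoopFolderName/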
Chris White
3

Hadoop provides a couple of REST interfaces. Check out Hoop and WebHDFS. Using them, you should be able to copy the file from a non-Hadoop environment without first copying it to the master node.
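
For example, a WebHDFS upload is a two-step HTTP PUT: the name node answers the CREATE request with a redirect to a data node, and the file bytes are then sent to that location. This is a hedged sketch based on the WebHDFS documentation; the host name, the classic 50070 port, and the paths are illustrative.

# Step 1: ask the name node to create the file; it replies with a 307
# redirect whose Location header points at a data node.
curl -i -X PUT "http://masternode:50070/webhdfs/v1/hadoopFolderName/test.txt?op=CREATE"

# Step 2: send the actual data to the Location URL returned above.
curl -i -X PUT -T test.txt "<Location-URL-from-step-1>"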

Praveen Sripati
  • this should work: https://hadoop.apache.org/docs/r1.0.4/webhdfs.html#File+and+Directory+Operations – Babu Jul 04 '16 at 05:17
1

Create a named pipe and then do the transfer through the pipe. This way the file is never stored locally.

mkfifo transfer_pipe

scp remote_file transfer_pipe | hdfs dfs -put transfer_pipe <hdfs_path>
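
Spelled out a little more, the two ends of the pipe have to run at the same time: the scp writes into the FIFO while the put reads from it. A hedged sketch, assuming (as this answer does) that hdfs dfs -put can read from a named pipe; the host and file names are illustrative.

mkfifo transfer_pipe
# writer end: pull the file from the node that generated it into the FIFO
scp username@datasource:/folderName/test.txt transfer_pipe &
# reader end: the put reads until the writer finishes; nothing lands on local disk
hdfs dfs -put transfer_pipe hadoopFolderName/test.txt
wait
rm -f transfer_pipe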
Prashant
0

(untested)

Since the node where you create your data has network access, you could install the Hadoop client software on it and add it to the cluster as a temporary node. Then do a normal hadoop fs -put, disconnect, and remove the temporary node; Hadoop will then automatically replicate your file's blocks inside the cluster.
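
A hedged sketch of what that could look like, using classic Hadoop 1.x-style commands; it assumes the cluster's configuration files are already present on the temporary node, and the decommissioning step (the excludes file) depends on how the cluster is administered.

# On the data-generating node, after installing Hadoop with the cluster's config:
hadoop-daemon.sh start datanode        # temporarily join the cluster
hadoop fs -put test.bin hadoopFolderName/

# To leave again: add this host to the namenode's excludes file, then
hadoop dfsadmin -refreshNodes          # decommission; blocks get re-replicated
hadoop-daemon.sh stop datanode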

serup