
I am trying to save a file to Hadoop (HDFS) using Python 2.7. I searched online and found code that saves data to Hadoop, but it uploads an entire folder (every file in the folder gets saved). I only need to save one specific file.

Here is the link showing how to put a folder on HDFS: http://www.hadoopy.com/en/latest/tutorial.html#putting-data-on-hdfs

What I need now is to save a particular file to Hadoop, e.g. abc.txt.

Here is my code:

import hadoopy
hdfs_path = 'hdfs://192.168.x.xxx:xxxx/video/py5'

def main():
    local_path = open('abc.txt').read()
    hadoopy.writetb(hdfs_path, local_path)


if __name__ == '__main__':
    main()

Here I am getting the error "need more than one value to unpack".

Any help would be appreciated.

Eric O. Lebigot
Mulagala

2 Answers


hadoopy.writetb seems to expect an iterable of (key, value) pairs as its second argument, not a bare string. Try:

hadoopy.writetb(hdfs_path, [("abc.txt", open("abc.txt").read())])
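To make the shape of that second argument explicit, here is a small sketch. The helper name single_file_kv is mine, not part of hadoopy's API; the actual hadoopy.writetb call is left as a comment because it needs a reachable HDFS address:

```python
def single_file_kv(local_path):
    # Wrap one local file as the single (key, value) pair that
    # hadoopy.writetb expects in its kvs iterator: the file name
    # is the key, the raw file contents are the value.
    with open(local_path, 'rb') as f:
        return (local_path, f.read())

# Usage (assumes hadoopy is installed and HDFS is reachable):
#   import hadoopy
#   hadoopy.writetb('hdfs://192.168.x.xxx:xxxx/video/py5',
#                   [single_file_kv('abc.txt')])
```

Passing local_path = open('abc.txt').read() as in the question hands writetb a string; iterating it yields single characters, which cannot be unpacked into a key and a value, hence the "need more than one value to unpack" error.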
supakeen

http://www.hadoopy.com/en/latest/api.html?highlight=hadoopy.writetb#hadoopy.writetb

writetb requires its second argument to be kvs – an iterator of (key, value) pairs.

As per the link you have given, you forgot to copy the read_local_dir function into your code.
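A sketch of what that helper does (modeled on the tutorial's read_local_dir, so the exact body may differ from the original): it walks a directory and yields the (key, value) pairs writetb consumes.

```python
import os

def read_local_dir(local_path):
    # Yield (file_name, file_bytes) pairs for each regular file in a
    # directory -- the kvs iterator shape hadoopy.writetb expects.
    for fn in os.listdir(local_path):
        path = os.path.join(local_path, fn)
        if os.path.isfile(path):
            with open(path, 'rb') as f:
                yield fn, f.read()

# Whole folder:   hadoopy.writetb(hdfs_path, read_local_dir('videos/'))
# Single file:    hadoopy.writetb(hdfs_path,
#                     [('abc.txt', open('abc.txt', 'rb').read())])
```

For a single file you do not need the directory walk at all; a one-element list of one (name, contents) pair is enough, as the other answer shows.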

GodMan