
Up until a few days ago I had a working Hadoop cluster. I copied over a very large data set and maxed out all of my datanodes. Running any sort of HDFS command gives me:

>hdfs dfs -ls
>ls: Call From hadoop-n2/XXX.XXX.XX.XX to hadoop-n2:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

So I can't go in and delete the files that are taking up space. I tried wiping HDFS clean as described here, but I don't see any tmp property in hdfs-site.xml. I also tried reducing the replication factor from 3 to 1, but that didn't seem to change anything.
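For reference, the replication change was just the standard dfs.replication property in hdfs-site.xml (reproduced here from memory, so treat the snippet as approximate):

    <property>
      <name>dfs.replication</name>
      <value>1</value>  <!-- previously 3 -->
    </property>

From what I understand, dfs.replication only applies to newly written files and doesn't re-replicate or shrink existing blocks, which may be why it had no effect on the used space.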

I'm running Hadoop v2.6.0 with Cloudera CDH v5.6.0.
