
When I start up Hadoop with the ./start-all.sh command, I run into some problems.

I have looked at Hadoop cluster setup - java.net.ConnectException: Connection refused

and There are 0 datanode(s) running and no node(s) are excluded in this operation

When I run ./start-all.sh, I get

  WARNING: Attempting to start all Apache Hadoop daemons as snagaraj in 10 
  seconds.
  WARNING: This is not a recommended production deployment configuration.
  WARNING: Use CTRL-C to abort.
  Starting namenodes on [localhost]
  pdsh@greg: localhost: ssh exited with exit code 1
  Starting datanodes
  Starting secondary namenodes [greg.bcmdc.bcm.edu]
  Starting resourcemanager
  Starting nodemanagers
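
As a first check (a standard diagnostic, not something from the logs above), jps lists the Java daemons that actually came up; on a healthy single-node setup all five should appear:

  # List running Hadoop JVMs; a healthy pseudo-distributed setup shows
  # NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager
  jps

Given the pdsh/ssh error above, the NameNode is likely missing from this list.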

When I run a Python script that uses Hadoop/HDFS, I get the error

  org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
  /user/data/...._COPYING_ could only be written to 0 of the 
  1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) 
  are excluded in this operation.

I have tried reformatting the NameNode with hdfs namenode -format, but that does not help.
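
For reference, reformatting alone often leaves old DataNode storage behind with a stale clusterID, which by itself can cause the "0 datanode(s) running" error. A minimal clean-reformat sequence would look roughly like this (the data directory path is an assumption; check dfs.datanode.data.dir or hadoop.tmp.dir in your configs):

  # Stop everything before touching HDFS storage
  ./stop-all.sh

  # Remove old DataNode storage so its clusterID matches the freshly
  # formatted NameNode (path is an assumption; defaults live under
  # hadoop.tmp.dir, typically /tmp/hadoop-<user>)
  rm -rf /tmp/hadoop-$USER/dfs/data

  # Reformat and restart
  hdfs namenode -format
  ./start-all.sh

  # Confirm at least one live DataNode is registered
  hdfs dfsadmin -report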

The configuration in my XML files seems to be right, and my path to JAVA_HOME is correct. I am happy to provide more information as needed.
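
In case it helps, the effective values the daemons see can be checked from the shell rather than by eyeballing the XML; hdfs getconf reads the same configuration the daemons load (the two keys below are the usual suspects for a single-node setup):

  # URI that clients and DataNodes use to reach the NameNode
  hdfs getconf -confKey fs.defaultFS

  # Replication factor; should be 1 on a single-node cluster
  hdfs getconf -confKey dfs.replication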


1 Answer


Execute ssh localhost on your master node or NameNode server.

If it is able to connect, the issue above should be resolved. The pdsh@greg: localhost: ssh exited with exit code 1 line means the start script could not SSH into localhost, so the NameNode never started; with no NameNode, the DataNodes cannot register, which is why 0 datanodes are available when your Python script tries to write.
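
If ssh localhost prompts for a password or is refused, the usual fix is passwordless SSH; this is the standard sequence from the Hadoop single-node setup guide (assuming an RSA key and default paths):

  # Generate a passphrase-less key if you do not already have one
  ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

  # Authorize that key for logins to this machine
  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  chmod 0600 ~/.ssh/authorized_keys

  # This should now succeed without a password prompt
  ssh localhost

Once that works, stop and re-run ./start-all.sh so the NameNode starts this time.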
