
I have updated the /conf/slaves file on the Hadoop master node with the hostnames of my slave nodes, but I'm not able to start the slaves from the master. I have to start each slave individually, and only then is my 5-node cluster up and running. How can I start the whole cluster with a single command from the master node?

Also, a SecondaryNameNode is running on every slave. Is that a problem? If so, how can I stop it from running on the slaves? I think there should be only one SecondaryNameNode in a cluster with one NameNode, am I right?

Thank you!


1 Answer


In Apache Hadoop 3.0, the slaves file was renamed: use the $HADOOP_HOME/etc/hadoop/workers file instead, listing one slave hostname per line.
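A minimal sketch of the setup (the hostnames worker1 through worker3 are placeholders, not from the question):

```
# $HADOOP_HOME/etc/hadoop/workers on the master node
# one slave/worker hostname per line
worker1
worker2
worker3
```

Assuming passwordless SSH is configured from the master to each worker, the whole cluster can then be started from the master with the standard scripts:

```
# run on the master node
$HADOOP_HOME/sbin/start-dfs.sh    # starts the NameNode, SecondaryNameNode, and the DataNodes on the workers
$HADOOP_HOME/sbin/start-yarn.sh   # starts the ResourceManager and the NodeManagers on the workers
```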

  • I have added the slave server IP in $HADOOP_HOME/etc/hadoop/workers on the master server. When I run start-all.sh on the master, it starts SecondaryNameNode, ResourceManager, and NameNode, but on the slave server it only starts NodeManager, not DataNode. – Venus Jul 02 '21 at 03:26