
I am trying to set up a Hadoop cluster with one namenode and two datanodes (slave1 and slave2), so I downloaded the zip file from the Apache Hadoop site and unzipped it on the namenode and on one of the datanodes (slave1).

I made all the configuration changes (and formatted the namenode) on the master and slave1, and successfully set up slave1 with the master, which means I am able to submit a job and see the datanode instance in the admin UI.

I then zipped the whole Hadoop installation on slave1, unzipped it on slave2, and changed a few property values such as the tmp directory and environment variables like JAVA_HOME. I didn't touch the master URL (fs.defaultFS) in core-site.xml.

When I try to start the datanode on slave2, I get this error:

java.io.IOException: Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured

It is strange that I didn't specify these properties on slave1 either and can start its datanode without any problem, yet slave2 throws this error even though all the configurations are the same.
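A quick way to compare what each node actually resolves from the configuration (just a sketch, assuming the hadoop binaries are on the PATH and HADOOP_CONF_DIR points at the directory holding core-site.xml):

    # run on both slave1 and slave2 and compare the output
    hdfs getconf -namenodes
    hdfs getconf -confKey fs.defaultFS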

I found these links related to this problem, but their solutions don't work in my environment:

  1. javaioioexception-incorrect
  2. dfs-namenode-servicerpc-address-or-dfs-namenode-rpc-address-is-not-configured
  3. incorrect-configuration-namenode-address-dfs-namenode-rpc-address-is-not-config

I am using Hadoop 2.4.1 and JDK 1.7 on CentOS.

It would be very helpful if someone who has already had this problem and figured it out could share some information.

Thanks.

– user826323
  • Just try copying all the config files you modified on the `master` node to the `slave1` and `slave2` nodes, then restart the Hadoop services. – Rajesh N Jul 15 '15 at 08:40

6 Answers


These steps solved the problem for me:

  1. export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
  2. echo $HADOOP_CONF_DIR
  3. hdfs namenode -format
  4. hdfs getconf -namenodes
  5. start-dfs.sh

Then Hadoop should start properly.
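A consolidated sketch of those steps (assuming HADOOP_HOME points at the 2.4.1 installation and this is run on the namenode):

    # point Hadoop at the directory holding core-site.xml and hdfs-site.xml
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    echo $HADOOP_CONF_DIR

    # re-format the namenode; this wipes existing HDFS metadata, so only do it on a fresh cluster
    $HADOOP_HOME/bin/hdfs namenode -format

    # confirm the namenode address can now be resolved from the configuration
    $HADOOP_HOME/bin/hdfs getconf -namenodes

    # start the HDFS daemons
    $HADOOP_HOME/sbin/start-dfs.sh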

– Hamdi Charef

I encountered the same problem as you.

When I used the scp command to copy the configs from the master to the slaves, it did not really replace the existing files. The problem was solved after I removed the entire hadoop folder first and copied it again.
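A sketch of that clean re-copy (assuming passwordless SSH to slave2 and the same HADOOP_HOME path on both machines; adjust to your layout):

    # remove the stale installation on slave2 first, then push a fresh copy from slave1
    ssh slave2 "rm -rf $HADOOP_HOME"
    scp -r $HADOOP_HOME slave2:$HADOOP_HOME

    # alternatively, rsync with --delete makes the remote copy match exactly
    # rsync -a --delete $HADOOP_HOME/ slave2:$HADOOP_HOME/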

I suggest you check the configs one more time on your slave2.

Good luck.

– xiao sun

If your hostname includes the character "_", e.g. "my_host", change it to "myhost". Do not forget to change the hostnames in core-site.xml as well.
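As a rough check (a sketch, assuming HADOOP_CONF_DIR is set):

    # underscores are not valid in hostnames and can break Hadoop's URI parsing
    hostname
    grep -n "_" $HADOOP_CONF_DIR/core-site.xml $HADOOP_CONF_DIR/slaves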

– nik

All you need to do is configure core-site.xml as follows:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://slave:9000</value>
</property>

Setting fs.defaultFS in effect configures dfs.namenode.rpc-address, which is what the error is complaining about.


One mistake I have run into, which produced the same error message the question describes, was a wrong spelling of the property name:

<property>
    <name>fs.defautFS</name>
    <value>hdfs://slave:9000</value>
</property>
– cutd

I also encountered the same problem. The cause was that I had configured the wrong files: hadoop-env.sh, hdfs-site.xml, core-site.xml and mapred-site.xml are now placed in the /usr/local/hadoop/etc/hadoop directory instead of /usr/local/hadoop/conf as in previous versions. In addition, that directory only contains a mapred-site.xml.template file, so you have to copy it to mapred-site.xml with a command like:

hduser@localhost:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

I suggest you follow this guide to re-configure your system: Install hadoop on single node. From there you can work out the problem with your multi-node installation.

– Trinhkhoi
In hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/data/namenode</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/data/datanode</value>
    </property>
</configuration>

Then you have to give the user permission to access those folders.
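For example, a minimal sketch (assuming the HDFS daemons run as user hduser in group hadoop; substitute your own user and group):

    # create the directories referenced by dfs.namenode.name.dir / dfs.datanode.data.dir
    sudo mkdir -p /data/namenode /data/datanode

    # hand them over to the user that starts the HDFS daemons
    sudo chown -R hduser:hadoop /data/namenode /data/datanode
    sudo chmod 755 /data/namenode /data/datanode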

– Naren