12

I am getting this error when I try to boot up a DataNode. From what I have read, the RPC parameters are only used for an HA configuration, which I am not setting up (I think).

2014-05-18 18:05:00,589 INFO  [main] impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(572)) - DataNode metrics system shutdown complete.
2014-05-18 18:05:00,589 INFO  [main] datanode.DataNode (DataNode.java:shutdown(1313)) - Shutdown complete.
2014-05-18 18:05:00,614 FATAL [main] datanode.DataNode (DataNode.java:secureMain(1989)) - Exception in secureMain
java.io.IOException: Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
    at org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddresses(DFSUtil.java:840)
    at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:151)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:745)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:278)

My files look like:

[root@datanode1 conf.cluster]# cat core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>
 <name>fs.defaultFS</name>
 <value>hdfs://namenode:8020</value>
</property>

</configuration>

[root@datanode1 conf.cluster]# cat hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<property>
 <name>dfs.datanode.data.dir</name>
 <value>/hdfs/data</value>
</property>
<property>
 <name>dfs.permissions.superusergroup</name>
 <value>hadoop</value>
</property>
</configuration>

I am using the latest CDH5 distro.

Installed Packages
Name        : hadoop-hdfs-datanode
Arch        : x86_64
Version     : 2.3.0+cdh5.0.1+567
Release     : 1.cdh5.0.1.p0.46.el6

Any helpful advice on how to get past this?

EDIT: Just use Cloudera Manager.

aaa90210
  • Is [this other question](http://stackoverflow.com/questions/14531590/dfs-namenode-servicerpc-address-or-dfs-namenode-rpc-address-is-not-configured) useful? – nelsonda May 19 '14 at 21:30

14 Answers

24

I too was facing the same issue and finally found that there was a space in the fs.default.name value. Trimming the space fixed the issue. The core-site.xml above doesn't seem to have a space, so the issue may be different from what I had. My 2 cents.
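
For illustration, a sketch of the kind of stray whitespace being described, reusing the hostname and port from the question; the first property is broken, the second is the fix:

<!-- broken: leading space inside <value> corrupts the NameNode address -->
<property>
 <name>fs.defaultFS</name>
 <value> hdfs://namenode:8020</value>
</property>

<!-- fixed: no whitespace around the URI -->
<property>
 <name>fs.defaultFS</name>
 <value>hdfs://namenode:8020</value>
</property>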

QADeveloper
  • Not sure why this was downvoted. Removing spaces between properties did solve my problem. I was following the digital ocean guide. – Divyanshu Das Mar 29 '16 at 16:27
  • Had the same issue, (caused by copying the code from [the tutorial](http://www.tutorialspoint.com/hadoop/hadoop_enviornment_setup.htm) without cleaning up) – Keerthi Bandara Jul 02 '16 at 14:27
  • I had a similar issue. Rather than a space, I simply had invalid information in the same place for fs.default.name. Everything worked fine after I fixed it. – Michael Galaxy Apr 24 '19 at 03:33
7

These steps solved the problem for me:

  • export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
  • echo $HADOOP_CONF_DIR
  • hdfs namenode -format
  • hdfs getconf -namenodes
  • ./start-dfs.sh
Hamdi Charef
1

Check core-site.xml under the $HADOOP_INSTALL/etc/hadoop directory. Verify that the property fs.default.name (fs.defaultFS in newer releases) is configured correctly.
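
If you want to double-check which value the daemons will actually pick up, one quick way (assuming the hdfs command is on your PATH) is to ask Hadoop to print the resolved setting:

hdfs getconf -confKey fs.defaultFS
hdfs getconf -namenodes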

OPMendeavor
1

Your core-site.xml has a configuration error.

<property>
 <name>fs.defaultFS</name>
 <value>hdfs://namenode:8020</value>
</property>

Your <name>fs.defaultFS</name> is set to <value>hdfs://namenode:8020</value>, but your machine's hostname is datanode1, so just changing namenode to datanode1 should make it OK.
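
Whichever hostname you end up putting in fs.defaultFS, it has to resolve from the DataNode. A quick sanity check, using the hostname from the question, might look like:

getent hosts namenode
ping -c 1 namenode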

cutd
0

I had the exact same issue. I found a resolution by checking the environment on the Data Node:

$ sudo update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster 50
$ sudo update-alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster

Make sure that the alternatives are set correctly on the Data Nodes.
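
To confirm which configuration directory the alternatives system currently points at, you can display the hadoop-conf alternative:

$ sudo update-alternatives --display hadoop-conf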

Manjunath Ballur
0

In my case, I had wrongly set HADOOP_CONF_DIR to another Hadoop installation.

Add to hadoop-env.sh:

export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop/
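
A quick way to verify that the variable now points at the installation you intend (the path here is just the one from the line above) is:

echo $HADOOP_CONF_DIR
ls $HADOOP_CONF_DIR/core-site.xml $HADOOP_CONF_DIR/hdfs-site.xml
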
Yousef Irman
0

Configuring the full hostname in core-site.xml, masters, and slaves solved the issue for me.

Old: node1 (failed)

New: node1.krish.com (succeeded)
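
As a sketch, with the fully qualified name from this answer and the port from the question, core-site.xml would then look something like:

<property>
 <name>fs.defaultFS</name>
 <value>hdfs://node1.krish.com:8020</value>
</property>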

Devas
0

Creating the dfs.name.dir and dfs.data.dir directories and configuring the full hostname in core-site.xml, masters, and slaves solved my issue.
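
A minimal sketch of what that could look like, assuming example paths under /hdfs (the data path is the one from the question, the name path is hypothetical): create the directories first, make sure the HDFS daemon user owns them, then point hdfs-site.xml at them:

sudo mkdir -p /hdfs/name /hdfs/data

<property>
 <name>dfs.name.dir</name>
 <value>/hdfs/name</value>
</property>
<property>
 <name>dfs.data.dir</name>
 <value>/hdfs/data</value>
</property>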

0

In my situation, I fixed it by changing the hostnames in /etc/hosts to lower case.

LoranceChen
0

This type of problem mainly arises if there is a space in the name or value of a property in any one of the following files: core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml.

Just make sure you did not put any spaces or line breaks between the opening and closing name and value tags.

Code:

<property>
 <name>dfs.name.dir</name>
 <value>file:///home/hadoop/hadoop_tmp/hdfs/namenode</value>
 <final>true</final>
</property>
ranubwj
0

I was facing the same issue, and formatting HDFS solved it. Don't format HDFS if you have important metadata.
The command to format HDFS: hdfs namenode -format

(Screenshots: the namenode not working before, and after formatting HDFS.)

James Wong
0

Check your '/etc/hosts' file:
There must be a line like the one below (if not, add it):
127.0.0.1 namenode
Replace 127.0.0.1 with your namenode IP.

0

Add the line below to hadoop-env.cmd:

set HADOOP_HOME_WARN_SUPPRESS=1
0

I am aware that I am answering this late.

Check the following to fix it:

  1. The master is able to ping the slave by name and vice versa, if you have used hostnames in the configuration instead of IP addresses. If it cannot ping, review /etc/hosts on both the master and the slave, and add entries for all nodes on all nodes.
  2. After all configuration changes are done on the master, execute the following on the master (change <USER_> and <SLAVE_NODE> to actual values):

scp $HADOOP_HOME/etc/hadoop/* <USER_>@<SLAVE_NODE>:$HADOOP_HOME/etc/hadoop/

Rohit Verma