
I am using Ubuntu 10.04 and have installed Hadoop in my home directory as a single-node (pseudo-distributed) setup.

~-desktop:~$ hadoop/bin/hadoop version
Hadoop 1.2.0
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473
Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
This command was run using /home/circar/hadoop/hadoop-core-1.2.0.jar

conf/core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/circar/hadoop/dataFiles</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

</configuration>
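
(I haven't set dfs.name.dir or dfs.data.dir, so, as I understand the Hadoop 1.x defaults, they resolve under hadoop.tmp.dir:)

dfs.name.dir -> ${hadoop.tmp.dir}/dfs/name = /home/circar/hadoop/dataFiles/dfs/name
dfs.data.dir -> ${hadoop.tmp.dir}/dfs/data = /home/circar/hadoop/dataFiles/dfs/data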

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

</configuration>

hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>

</configuration>

I've formatted the namenode twice:

~-desktop:~$ hadoop/bin/hadoop namenode -format

Then I start Hadoop with:

~-desktop:~$ hadoop/bin/start-all.sh

which shows:

starting namenode, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-namenode-circar-desktop.out
circar@localhost's password:
localhost: starting datanode, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-datanode-circar-desktop.out
circar@localhost's password:
localhost: starting secondarynamenode, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-secondarynamenode-circar-desktop.out
starting jobtracker, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-jobtracker-circar-desktop.out
circar@localhost's password:
localhost: starting tasktracker, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-tasktracker-circar-desktop.out

But /logs/hadoop-circar-datanode-circar-desktop.log shows the following error:

2013-06-24 17:32:47,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = circar-desktop/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.2.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May  6 06:59:37 UTC 2013
STARTUP_MSG:   java = 1.6.0_26
************************************************************/
2013-06-24 17:32:47,315 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-06-24 17:32:47,324 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-06-24 17:32:47,325 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-06-24 17:32:47,325 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-06-24 17:32:47,447 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-06-24 17:32:47,450 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-06-24 17:32:49,265 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/circar/hadoop/dataFiles/dfs/data: namenode namespaceID = 186782509; datanode namespaceID = 1733977738
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:412)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:319)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1698)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1637)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1655)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1781)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1798)

2013-06-24 17:32:49,266 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at circar-desktop/127.0.1.1
************************************************************/

jps shows (note that no DataNode process is running):

~-desktop:~$ jps
8084 Jps
7458 JobTracker
7369 SecondaryNameNode
7642 TaskTracker
6971 NameNode

When I try to stop it, it shows:

~-desktop:~$ hadoop/bin/stop-all.sh 
stopping jobtracker
circar@localhost's password: 
localhost: stopping tasktracker
stopping namenode
circar@localhost's password: 
localhost: no datanode to stop
circar@localhost's password: 
localhost: stopping secondarynamenode

What am I doing wrong? Can anyone help me?

  • Similar to http://stackoverflow.com/questions/3425688/why-does-the-hadoop-incompatible-namespaceids-issue-happen – Tariq Jun 24 '13 at 14:51
  • http://stackoverflow.com/questions/13062636/datanode-failing-in-hadoop-on-single-machine – Tariq Jun 24 '13 at 14:52
  • http://stackoverflow.com/questions/10097246/no-data-nodes-are-started – Tariq Jun 24 '13 at 14:52

2 Answers


Ravi is correct. But also make sure that the clusterID in both ${dfs.data.dir}/current/VERSION and ${dfs.name.dir}/current/VERSION matches. If not, change the datanode's clusterID to match the namenode's. After making the changes, follow the steps Ravi mentioned.
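
A quick way to compare the ID lines in both files, assuming the storage directories take their defaults under hadoop.tmp.dir (/home/circar/hadoop/dataFiles in this setup; adjust the paths if dfs.name.dir or dfs.data.dir is set explicitly):

# Print the *ID lines (namespaceID, storageID, ...) from both VERSION files:
grep 'ID' /home/circar/hadoop/dataFiles/dfs/name/current/VERSION
grep 'ID' /home/circar/hadoop/dataFiles/dfs/data/current/VERSION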


The namenode generates a new namespaceID every time you format HDFS, and datanodes bind themselves to the namenode through that namespaceID.

Follow these steps to fix the problem:

a) Stop the problematic DataNode.
b) Edit the value of namespaceID in ${dfs.data.dir}/current/VERSION to match the corresponding value in the current NameNode's ${dfs.name.dir}/current/VERSION.
c) Restart the DataNode.
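
A shell sketch of those steps, assuming the default directory layout under hadoop.tmp.dir (/home/circar/hadoop/dataFiles here); back up the datanode's VERSION file before editing it:

# a) Stop the datanode (hadoop-daemon.sh manages a single daemon):
hadoop/bin/hadoop-daemon.sh stop datanode

# b) Copy the namenode's namespaceID into the datanode's VERSION file:
NS_ID=$(grep '^namespaceID=' /home/circar/hadoop/dataFiles/dfs/name/current/VERSION | cut -d= -f2)
sed -i "s/^namespaceID=.*/namespaceID=${NS_ID}/" /home/circar/hadoop/dataFiles/dfs/data/current/VERSION

# c) Restart the datanode and check that it stays up:
hadoop/bin/hadoop-daemon.sh start datanode
jps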

Magham Ravi