I am using Hadoop 1.2.1. For some unknown reason, my NameNode went down, and the following log information was obtained:
2017-07-28 15:04:47,422 INFO org.apache.hadoop.hdfs.server.common.Storage: Start loading image file /home/hpcnl/crawler/hadoop-1.2.1/tmp/dfs/name/current/fsimage
2017-07-28 15:04:47,423 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:881)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:834)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:378)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2017-07-28 15:04:47,428 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:881)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:834)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:378)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
Then I searched on the internet and found advice saying that you should stop the cluster and run the following command:
hadoop namenode -format
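In case the exact steps matter, the sequence I ran was roughly the following (assuming the standard Hadoop 1.x control scripts in $HADOOP_HOME/bin are on my PATH; this is a reconstruction from memory, not an exact shell history):

# roughly what I ran, based on the advice I found
stop-all.sh                # stop all HDFS and MapReduce daemons
hadoop namenode -format    # reformat the NameNode metadata directory
start-all.sh               # restart the cluster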
After this, when I restarted the cluster, the data no longer appeared in the respective folders in HDFS. Can I recover my data? And how should I handle such situations in the future if my NameNode goes down?
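For the future, I am wondering whether pointing dfs.name.dir at more than one directory (e.g. a second disk or an NFS mount) would have kept a usable copy of the fsimage. Something like the following in hdfs-site.xml, where the first path is my current metadata directory from the log above and the second path is just a placeholder for a backup location:

<property>
  <name>dfs.name.dir</name>
  <!-- comma-separated list: the NameNode writes fsimage/edits to every listed directory -->
  <!-- /mnt/backup/dfs/name is a hypothetical second location, not part of my current setup -->
  <value>/home/hpcnl/crawler/hadoop-1.2.1/tmp/dfs/name,/mnt/backup/dfs/name</value>
</property>

Would that be the right approach, or is there a better practice for protecting NameNode metadata on Hadoop 1.x?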