
Our cluster runs two core nodes with very little HDFS capacity, and that capacity needs to be increased.

I attached a new 500 GB volume to the core node instance, mounted it at /mnt1, and updated hdfs-site.xml on both the master and core nodes:

  <property>
    <name>dfs.datanode.dir</name>
    <value>/mnt/hdfs,/mnt/hdfs1</value>
  </property>

Then I restarted both the hadoop-hdfs-namenode and hadoop-hdfs-datanode services, but the DataNode keeps shutting down because of the new volume:

2018-06-19 11:25:05,484 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode (DataNode: [[[DISK]file:/mnt/hdfs/, [DISK]file:/mnt/hdfs1]] heartbeating to ip-10-60-12-232.ap-south-1.compute.internal/10.60.12.232:8020): Initialization failed for Block pool (Datanode Uuid unassigned) service to ip-10-60-12-232.ap-south-1.compute.internal/10.60.12.232:8020.
Exiting. org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 1, volumes configured: 2, volumes failed: 1, volume failures tolerated: 0
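As I understand it, the shutdown is triggered because dfs.datanode.failed.volumes.tolerated defaults to 0, so a single volume failing its disk check is fatal. For reference, that setting lives in hdfs-site.xml (raising it would only let the DataNode start with the bad volume ignored, not fix the volume itself):

```xml
<!-- Number of volumes allowed to fail before the DataNode shuts down;
     defaults to 0. Shown for reference only. -->
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>
</property>
```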

While searching, I found suggestions to format the NameNode so that a block pool ID gets assigned to both volumes. How can I fix this issue?
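For context, here is a minimal sketch of the checks I can run on the core node. The paths come from my config above; the hdfs:hadoop owner and mode 700 (the dfs.datanode.data.dir.perm default) are assumptions based on typical EMR layouts:

```shell
# Hypothetical diagnostics for the failing volume; adjust paths/user to your cluster.
df -h /mnt/hdfs1                 # is the new volume actually mounted here?
stat -c '%U:%G %a' /mnt/hdfs1    # DataNode dirs usually need hdfs:hadoop, mode 700
sudo -u hdfs touch /mnt/hdfs1/.probe \
  && sudo -u hdfs rm /mnt/hdfs1/.probe   # can the hdfs user write to it at all?
```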
