I followed a tutorial to set up Apache Hadoop for Windows, which can be found here. I am now having an issue where the Datanode, ResourceManager, and Yarn cmd windows all shut down seconds after opening, with only the Namenode continuing to run. Here is what I have tried so far:
- run CMD as admin
- use the command
start-all.cmd
(this opens the Namenode, Datanode, Yarn, and ResourceManager cmd windows) - Datanode, Yarn, and ResourceManager all give shutdown messages almost immediately after they start:
SHUTDOWN_MSG: Shutting down ResourceManager at thood-alienware/...
SHUTDOWN_MSG: Shutting down NodeManager at thood-alienware/...
SHUTDOWN_MSG: Shutting down DataNode at thood-alienware/...
- Interestingly enough, only the Datanode window gives an error as a reason for shutting down:
2019-03-26 00:07:03,382 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
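For context, hdfs-site.xml points the datanode at a local folder along these lines (the property names are standard Hadoop keys, but the paths below are illustrative placeholders following the tutorial's layout, not necessarily my exact values):

```xml
<!-- hdfs-site.xml (illustrative placeholders: standard Hadoop
     property names, paths per the tutorial's layout) -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>C:\hadoop\data\namenode</value>
  </property>
  <property>
    <!-- this is the single volume that DiskChecker reports as failed -->
    <name>dfs.datanode.data.dir</name>
    <value>C:\hadoop\data\datanode</value>
  </property>
</configuration>
```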
I know that I can raise the number of tolerated volume failures (see the snippet below), but I'd like to actually fix whatever is causing this disk failure. When I open the datanode directory it's an empty folder, whereas my namenode directory has files in it that were created by `start-all.cmd`. Has anyone worked with Hadoop on Windows before? I'm totally at a loss for where to go from here, because most of the online help is for Linux systems.
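For completeness, the workaround I mentioned would be the `dfs.datanode.failed.volumes.tolerated` property in hdfs-site.xml, but as far as I can tell it wouldn't even help here, since Hadoop requires the tolerated count to be lower than the number of configured volumes (which is 1 in my case):

```xml
<!-- The workaround I'd rather avoid: tolerating failed volumes.
     With only one configured volume, a value of 1 should be rejected
     as invalid anyway (it must be less than the volume count), so
     fixing the underlying disk error seems like the only real option. -->
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>
</property>
```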