
I have followed a tutorial to set up Apache Hadoop for Windows, which can be found here. I am now having an issue where the Datanode, Resource Manager, and Yarn cmd windows all show that they shut down seconds after opening, with only the Namenode continuing to run. Here is the process I have tried so far:

  1. Run CMD as admin
  2. Use the command `start-all.cmd` (this opens the Namenode, Datanode, Yarn, and ResourceManager cmd windows)
  3. Datanode, Yarn, and ResourceManager all give shutdown messages almost immediately after they start

SHUTDOWN_MSG: Shutting down ResourceManager at thood-alienware/...

SHUTDOWN_MSG: Shutting down NodeManager at thood-alienware/...

SHUTDOWN_MSG: Shutting down DataNode at thood-alienware/...

  4. Interestingly enough, only the Datanode window gives an error as a reason for shutting down:

2019-03-26 00:07:03,382 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0

I know that I can edit the number of tolerated failures, but I'd like to actually fix whatever is causing this disk failure. When I open the datanode directory, it's an empty folder; however, my namenode directory has files present within it that were created by `start-all.cmd`. Has anyone worked with Hadoop on Windows before? I'm totally at a loss for where to go from here because most online help is for Linux systems.
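For reference, the datanode volume and the failure tolerance mentioned above are both set in `hdfs-site.xml`. A minimal sketch is below; the `C:/hadoop/data/...` paths are placeholders and should match wherever your tutorial told you to create the namenode and datanode folders. A common cause of the `DiskChecker$DiskErrorException` on Windows is a `dfs.datanode.data.dir` path that doesn't exist, isn't writable, or uses backslashes without the `file:///` scheme.

```xml
<!-- hdfs-site.xml: paths below are examples, not your actual config -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///C:/hadoop/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <!-- This directory must exist and be writable by the user running Hadoop -->
    <value>file:///C:/hadoop/data/datanode</value>
  </property>
  <property>
    <!-- 0 means a single failed volume shuts the DataNode down -->
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>0</value>
  </property>
</configuration>
```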

Tom Hood

1 Answer


Did you put the following bin files into your Hadoop directory?

https://github.com/s911415/apache-hadoop-3.1.0-winutils
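To expand on this: Hadoop on Windows needs the native Windows binaries (notably `winutils.exe` and `hadoop.dll`) from a winutils repository like the one linked above, placed in `%HADOOP_HOME%\bin`. Assuming `HADOOP_HOME` is set (an assumption; adjust to your setup), you can check from a cmd window:

```cmd
rem Verify the native Windows binaries are present in the Hadoop bin folder
cd /d %HADOOP_HOME%\bin
dir winutils.exe hadoop.dll
```

If either file is missing, copy them in from the matching winutils release for your Hadoop version and restart the daemons.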

Devarshi Mandal