
I'm trying to set up a small Hadoop cluster on AWS. I've gotten to the point where I start the cluster, but it keeps failing on the secondary namenode, which is supposed to run on the primary namenode. I can access the datanodes via the web browser on hostname:50075, but the namenode cannot be accessed via hostname:50070.

Here are the two errors I spotted in the .log and .out files.

.log

[screenshot of the error in the .log file]

and here is the .out

[screenshot of the error in the .out file]

The guide I'm following has us set up our secondary namenode on our primary namenode.

I've tried a few things, including changing the formatting in the /etc/hosts file. I think the problem has to do with one of these config files.

/etc/hosts

[screenshot of /etc/hosts]

I've put the public IP followed by the private IP provided by AWS.
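For context, here is a sketch of the kind of /etc/hosts layout that usually works on AWS (the IPs and hostnames below are placeholders, not from my setup). On EC2 the public IP is NAT'ed and is not actually assigned to any network interface on the instance, so if a hostname resolves to the public IP, a daemon trying to bind to that address gets a BindException. Mapping the cluster hostnames to the private IPs avoids that:

```
# /etc/hosts -- placeholder private IPs and hostnames
127.0.0.1    localhost
172.31.0.10  namenode-host    # primary (and secondary) namenode, private IP
172.31.0.11  datanode1-host
172.31.0.12  datanode2-host
```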

/hadoop/etc/hadoop/masters

[screenshot of the masters file]

This is the masters file, which tells the cluster where the secondary namenode is.
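For comparison, the masters file is just one hostname per line naming the host that should run the secondary namenode (again a placeholder hostname, matching whatever /etc/hosts resolves):

```
# hadoop/etc/hadoop/masters -- one hostname per line
namenode-host
```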

Here is how it looks when I run the start-dfs.sh script. I'm not sure whether the namenode should show 0.0.0.0 like it does.

[screenshot of start-dfs.sh output]

Any help is appreciated as I don't know what else to check.

franchyze923
  • Why not consider the AWS EMR service? It will bring up the instances and the cluster for you, without much manual configuration. – abiydv Feb 06 '18 at 17:44
  • I'm trying to learn how to stand up a cluster from scratch – franchyze923 Feb 06 '18 at 17:45
  • I'm not too familiar with Hadoop, but judging from the bind exception, something might already be running on that port. What does netstat say about this port? – abiydv Feb 06 '18 at 17:50
  • Thanks for the help. I ran `sudo netstat -tulnp | grep 9000` and got no output. Does this mean that port is not open on my server? How can I fix that? – franchyze923 Feb 06 '18 at 17:57
  • That would mean nothing is listening on that port. Did you try specifying another port instead of 9000, maybe 9001 or 9002? Does that give the same error? – abiydv Feb 06 '18 at 18:07
  • Yes, I tried changing it to 9001 earlier today. Same result. So something in the Hadoop config is incorrect? The Hadoop config needs to tell the server to listen, right? – franchyze923 Feb 06 '18 at 18:10
  • See if this helps - https://stackoverflow.com/questions/30012822/cannot-assign-requested-address – abiydv Feb 06 '18 at 18:16
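Following up on the port-9000 discussion: the address and port the namenode tries to listen on come from fs.defaultFS in core-site.xml. A minimal sketch (the hostname here is a placeholder; the key point is that it must resolve to the instance's private IP, not the public one, or the bind will fail):

```xml
<!-- hadoop/etc/hadoop/core-site.xml (sketch; hostname is a placeholder) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:9000</value>
  </property>
</configuration>
```

Once the namenode starts successfully, `sudo netstat -tulnp | grep 9000` should show a java process listening on that port.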
