
I have installed Hadoop 2.2 on my laptop running Ubuntu as a single-node cluster and ran the word count example. After that I installed Hive, and Hadoop started to give an error, i.e.

hdfs dfs -ls throws an IOException: local host is "utbuntu/127.0.1.1" and destination host is "localhost:9000"

I found the two entries below in my hosts file:

127.0.0.1 localhost
127.0.1.1 ubuntu
#and some IPv6 entries...
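
A few shell commands make it easy to see which address the hostname resolves to and which NameNode URI the HDFS client is contacting (an illustrative sketch, assuming the default single-node setup; the exact output will differ per machine):

hostname                                 # what the machine calls itself (e.g. ubuntu)
getent hosts $(hostname)                 # which /etc/hosts entry that name resolves to
hdfs getconf -confKey fs.default.name    # the NameNode URI the client will contact
sudo netstat -tlnp | grep 9000           # check whether the NameNode is listening on port 9000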

My question is: why is it giving this error after configuring Hive, and what is the solution? Any help is really appreciated.

Thanks!

asi24
  • Try commenting out the second entry in your /etc/hosts file (the 127.0.1.1 line), and try again (if that fails, restart your hdfs services and try once more; restart commands are sketched after these comments) – Chris White Nov 23 '13 at 14:38
  • Hi, many thanks! I commented out the second entry, but the error is still the same. I think there is something associated with the Hive installation. I found something related [here](http://ria101.wordpress.com/2010/01/28/setup-hbase-in-pseudo-distributed-mode-and-connect-java-client/) but still couldn't get rid of this error. – asi24 Nov 23 '13 at 22:12
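
For reference, restarting the HDFS daemons after editing /etc/hosts can be done with the stock Hadoop 2.2 scripts (a rough sketch; the /usr/local/hadoop path is taken from the answer below and may differ on your install):

/usr/local/hadoop/sbin/stop-dfs.sh     # stop NameNode, DataNode and SecondaryNameNode
/usr/local/hadoop/sbin/start-dfs.sh    # start them again so they pick up the new hosts entries
hdfs dfs -ls /                         # retry the command that was failing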

2 Answers


There seems to be a typo 'utbuntu' in your original IOException. Can you please check whether that's the right hostname or a copy-paste error?

The /etc/hosts configuration took a bit of trial and error to figure out for a Hadoop 2.2.0 cluster setup, but what I did was remove all 127.0.1.1 assignments to the hostname, assign the actual IP to the machine name, and it works. e.g.

192.168.1.101 ubuntu

I have a 2-node cluster, so my /etc/hosts for the master (NameNode) looks like:

127.0.0.1   localhost
#127.0.1.1  myhostname
192.168.1.100   myhostname
192.168.1.100   master

And /usr/local/hadoop/etc/hadoop/core-site.xml has the following:

<property>
   <name>fs.default.name</name>
   <value>hdfs://master:9000</value>
</property>

The main thing to note is that I've commented out the 127.0.1.1 entry for myhostname.
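
As a side note, in Hadoop 2.x the fs.default.name key is deprecated in favor of fs.defaultFS (the old key is still honored through a deprecation mapping), so an equivalent entry would look like this:

<property>
   <name>fs.defaultFS</name>
   <value>hdfs://master:9000</value>
</property>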

Vishal
  • Hi, yes it was a typo. What were your core-site.xml entries? I mean, on which IP was Hadoop configured to listen, ubuntu or localhost? i.e. fs.default.name is hdfs://localhost:9000 on my machine. – asi24 Nov 24 '13 at 00:48
  • I've updated my answer with details. I have a 2-node cluster, so in /usr/local/hadoop/etc/hadoop/core-site.xml I have: fs.default.name hdfs://master:9000 – Vishal Nov 24 '13 at 02:24
  • Hi, thanks. I was able to solve the issue by generating a new public and private key pair. I don't know how it works behind the scenes, but to reproduce the issue: 1) create a new Linux user for Hadoop and configure Hadoop, which includes key pair generation and an ssh ping; 2) log out, log in as root, and change the password of the hadoop user; 3) log in to the hadoop account with the new password and generate the key pair again, and the issue will show up; 4) changing the password back to the original and generating the key pair again removes the issue (a rough sketch of these key steps is below). – asi24 Nov 27 '13 at 22:24
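
For anyone trying to reproduce this, a rough sketch of the key regeneration steps described in the comment above (assuming the Hadoop user is called hadoop and the default ~/.ssh layout) could look like:

su - hadoop                                   # switch to the hadoop user
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa      # generate a fresh passwordless key pair
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost                                 # verify passwordless ssh works again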

I also had this issue because php-fpm had already been started on port 9000 on my machine, so I killed php-fpm and restarted, and then it was OK.
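
A quick way to check for this kind of port conflict before starting HDFS (illustrative commands, assuming the default NameNode port 9000):

sudo netstat -tlnp | grep :9000    # show which process, if any, is bound to port 9000
sudo lsof -i :9000                 # alternative check with lsof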

Bo Cheng