
I installed Hadoop on Debian (not in a VM), and it was working fine. Then I restarted the computer, and it started showing a problem. I run

hadoop fs -ls /user/hduser

which returns an error like this:

16/06/15 10:48:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Call From localhost/127.0.0.1 to mylocalcomp:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

On the other hand, jps shows things are running:

6022 SecondaryNameNode
5840 DataNode
7290 Jps
6413 NodeManager
6309 ResourceManager

My core-site.xml config

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://mylocalcomp:9000</value>
  </property>
</configuration>
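
(As an aside: fs.default.name is the deprecated 1.x name for this key; in Hadoop 2.x the preferred key is fs.defaultFS with the same value, i.e.:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mylocalcomp:9000</value>
  </property>
</configuration>

The old name still works in 2.7.2, so this shouldn't be the cause of the error.)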

My /etc/hosts has this -

127.0.0.1       localhost mylocalcomp

My Hadoop version is 2.7.2, if that helps. I tried the suggestions in a couple of similar questions, but they didn't work, and I'm kind of confused now.

In my .profile file -

HADOOP_PREFIX=/usr/local/hadoop
JAVA_HOME=/usr/local/java
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_PREFIX/sbin:$HADOOP_PREFIX/bin
export HADOOP_PREFIX
export JAVA_HOME
export PATH
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"

EDIT: I start Hadoop with start-dfs.sh and start-yarn.sh, and started the NameNode with /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode. The NameNode appears in jps for only a few minutes, then disappears.

EDIT 2: I guess the problem is with the namenode configuration in hdfs-site.xml (or the lack of it)...

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

Somewhat related to this one (?) - Namenode not getting started

I'm not sure what namenode configuration to use for my case, or where to put it...

EDIT 3: Log file: /usr/local/hadoop/logs/hadoop-hduser-namenode-mylocalcomp.log

Namenode log:
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = mylocalcomp/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.2

1 Answer

9000 is the NameNode's RPC port (the one set in your core-site.xml). Your NameNode isn't running yet; notice that it is missing from your jps output. Try

sbin/hadoop-daemon.sh start namenode

I suspect the namenode tried to start once when your computer restarted, and failed. So if the command above doesn't work, take a look at the namenode's log.
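
For reference, a quick way to do that (a minimal sketch, assuming the default log location under $HADOOP_PREFIX/logs with HADOOP_PREFIX=/usr/local/hadoop as in your .profile; the file name follows the hadoop-<user>-namenode-<host>.log pattern):

# Show the tail of the NameNode log; the glob matches the single
# hadoop-<user>-namenode-<host>.log file on this machine
tail -n 100 /usr/local/hadoop/logs/hadoop-*-namenode-*.log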

waltersu
  • Thanks! Started the namenode as suggested. Interestingly, the namenode appears in jps for only a few minutes, then disappears... And I'm getting the same error again.. :( – tired and bored dev Jun 15 '16 at 06:05
  • Post the namenode log. Usually the log is present in /var/log/hadoop – Paul Jun 15 '16 at 06:30
  • It was actually an issue with the namenode. I formatted the namenode, and it started working again... I guess something happens on reboot that affects the namenode... I might get the same error if I reboot.. – tired and bored dev Jun 15 '16 at 07:46
  • @user1478061 could you post the log please? – waltersu Jun 15 '16 at 08:47
  • @waltersu, Thanks! Updated the question with the log file. Hope that's what you expected. Just a thought - is it related to the IP? (127.0.1.1)... – tired and bored dev Jun 15 '16 at 09:03
  • The IP is fine. The log snippet you posted is useless. My pure guess is that your data dir is set to somewhere under /tmp, and your computer cleans the /tmp directory every time it restarts. So please try configuring *dfs.datanode.data.dir* and *dfs.namenode.name.dir* to some place other than /tmp – waltersu Jun 15 '16 at 09:29
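
For reference, a minimal hdfs-site.xml along the lines of that last comment might look like this (the /usr/local/hadoop_data paths are example locations, not anything from the question; any directory outside /tmp that the hduser account can write to should work):

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <!-- Example path outside /tmp, so NameNode metadata survives reboots -->
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop_data/hdfs/namenode</value>
  </property>
  <property>
    <!-- Example path outside /tmp for DataNode block storage -->
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop_data/hdfs/datanode</value>
  </property>
</configuration>

After pointing dfs.namenode.name.dir at a fresh directory, the NameNode has to be formatted once with hdfs namenode -format (as the asker did above) before it will start; note that formatting wipes any existing HDFS metadata.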