
I'm trying to run HBase in standalone mode following this tutorial: http://hbase.apache.org/book.html#quickstart

I get the following exception when I try to run

create 'test', 'cf'

in the HBase shell

ERROR: org.apache.hadoop.hbase.PleaseHoldException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

I've seen questions here regarding this error, but the solutions haven't worked for me.

What is perhaps more troubling, and what may be at the heart of the matter, is that when I stop HBase, I get the following message over and over in the log:

INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.200.1/192.168.200.1:54310. Already tried <n> time(s)

I don't know what server it's trying to connect to (that's not my computer's IP address), and like I said, I'm trying to run HBase in standalone mode.

I would really appreciate if someone could help me understand this log output.
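(If anyone else hits this: one way to hunt for where a stale address like that lives is to grep the local Hadoop/HBase config directories for it. The directory paths below are guesses for a typical install; adjust them to your own.)

```shell
# Grep likely Hadoop/HBase config locations for the stale IP.
# The directories here are assumptions; point them at your own conf dirs.
stale_ip="192.168.200.1"
hits=0
for dir in /etc/hadoop /usr/local/hadoop/etc/hadoop "$HOME/hbase/conf"; do
  if [ -d "$dir" ]; then
    matches=$(grep -rl "$stale_ip" "$dir" 2>/dev/null)
    if [ -n "$matches" ]; then
      echo "$matches"
      hits=$((hits + 1))
    fi
  fi
done
if [ "$hits" -eq 0 ]; then
  echo "no config file containing $stale_ip found in the searched dirs"
fi
```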

My /etc/hosts file:

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1       localhost
127.0.0.1       j.gloves

ifconfig -a output:

lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
    options=3<RXCSUM,TXCSUM>
    inet6 ::1 prefixlen 128 
    inet 127.0.0.1 netmask 0xff000000 
    inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 
    nd6 options=1<PERFORMNUD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    options=10b<RXCSUM,TXCSUM,VLAN_HWTAGGING,AV>
    ether 10:9a:dd:60:de:3d 
    nd6 options=1<PERFORMNUD>
    media: autoselect (none)
    status: inactive
fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078
    lladdr 70:cd:60:ff:fe:4c:07:7a 
    nd6 options=1<PERFORMNUD>
    media: autoselect <full-duplex>
    status: inactive
en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    ether 10:9a:dd:b6:b4:7d 
    inet6 fe80::129a:ddff:feb6:b47d%en1 prefixlen 64 scopeid 0x6 
    inet 192.168.1.161 netmask 0xffffff00 broadcast 192.168.1.255
    nd6 options=1<PERFORMNUD>
    media: autoselect
    status: active
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
    ether 02:9a:dd:b6:b4:7d 
    media: autoselect
    status: inactive

My hbase-site.xml:

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>file:///Users/j.gloves/trynutch/hbase</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/Users/j.gloves/trynutch/zookeeper</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>false</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
</configuration>
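(A quick sanity check on that config is to make sure the directories it points at exist and are writable; the paths below are copied from the file above, so substitute your own.)

```shell
# Check the hbase.rootdir and zookeeper dataDir paths from hbase-site.xml.
# These paths are the ones from my config; substitute your own.
bad=0
for d in /Users/j.gloves/trynutch/hbase /Users/j.gloves/trynutch/zookeeper; do
  if [ -d "$d" ] && [ -w "$d" ]; then
    echo "ok: $d"
  else
    echo "missing or not writable: $d"
    bad=$((bad + 1))
  fi
done
echo "$bad problem path(s)"
```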
jgloves
  • Please post output of /etc/hosts and ifconfig -a – Sergei Rodionov Jun 09 '15 at 17:28
  • Look [here](http://stackoverflow.com/questions/8872807/hadoop-datanodes-cannot-find-namenode) for a similar problem. Please find the troubleshooting mechanisms mentioned [here](http://wiki.apache.org/hadoop/ServerNotAvailable) – Ramzy Jun 09 '15 at 18:16
  • @SergeiRodionov, what do you need from ifconfig -a? – jgloves Jun 09 '15 at 18:27
  • @Ramzy, your second link looks promising, but I'm not sure how to troubleshoot Hadoop. I never installed it myself, so I don't know how to access its configuration files. – jgloves Jun 09 '15 at 18:54
  • @jgloves - The obvious, 192.168.200.1. There must be a reason for this IP address to appear in client log. – Sergei Rodionov Jun 09 '15 at 19:02
  • @SergeiRodionov - posted. – jgloves Jun 09 '15 at 19:08
  • The issue is clearly as mentioned in the [link above](http://stackoverflow.com/questions/8872807/hadoop-datanodes-cannot-find-namenode): the slaves are not able to find the master. Verify the /etc/hosts files for both the master and slave VMs. Once any changes are made to those files, please stop and restart all daemons and verify the namenode status. – Ramzy Jun 09 '15 at 19:09
  • 1
    Is there perhaps another host with short name 'j' that resolves to '192.168.200.1'? And if the server is on a bridged network could it be that its IP address was '192.168.200.1' some time ago and then changed '192.168.1.161' (dhcp). – Sergei Rodionov Jun 09 '15 at 19:25
  • @SergeiRodionov, yes, its IP address was 192.168.200.1 and then changed, I just found out. Do you know how I can resolve this within HBase? – jgloves Jun 09 '15 at 19:30
  • @jgloves - I would specify localhost in configuration files to avoid addressing issues. You can try setting 'distributed' to false and restarting all processes. – Sergei Rodionov Jun 09 '15 at 19:36
  • @SergeiRodionov, thank you for your suggestion. I tried it and edited my post to show the changes in hbase-site.xml (was that what you meant by setting distributed to false?), but I am still having the same problem. – jgloves Jun 09 '15 at 19:59
  • I would stop everything except HDFS processes and verify that /hbase is HEALTHY. `hadoop fsck /hbase/ -openforwrite` – Sergei Rodionov Jun 09 '15 at 20:33
  • this gives me java.net.ConnectException: Connection refused – jgloves Jun 09 '15 at 20:40
  • Post the result for `jps` command executed as hadoop user. Also, change j.gloves IP in `/etc/hosts` to point to your system's IP. – Rajesh N Jun 10 '15 at 04:17
  • `8341 Jps` `8295 HMaster` `553 Main` – jgloves Jun 10 '15 at 12:24

1 Answer


Thank you to everyone who offered help in the comments.

My boss was able to fix the problem. It turned out there was an older version of Hadoop on my machine whose configuration was referencing the old IP address. Once it was removed from my PATH and from the machine, HBase worked as expected.
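For anyone who lands here with the same symptom, a quick way to check for leftover installs shadowing each other on the PATH is something like this (standard commands; the locations printed will vary by machine):

```shell
# List every hadoop/hbase binary visible on the PATH; more than one
# entry for a command can mean a stale install is shadowing the new one.
for cmd in hadoop hbase; do
  found=$(which -a "$cmd" 2>/dev/null)
  if [ -n "$found" ]; then
    echo "$cmd found at:"
    echo "$found"
  else
    echo "$cmd not on PATH"
  fi
done
```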

jgloves