
While executing any command in the HBase shell, I receive the following error: "ERROR: KeeperErrorCode = NoNode for /hbase/master".

Starting HBase:

    HOSTCHND:hbase-2.0.0 gvm$ ./bin/start-hbase.sh
    localhost: running zookeeper, logging to /usr/local/Cellar/hbase-2.0.0/bin/../logs/hbase-gvm-zookeeper-HOSTCHND.local.out
    running master, logging to /usr/local/Cellar/hbase-2.0.0/logs/hbase-gvm-master-HOSTCHND.local.out
    : running regionserver, logging to /usr/local/Cellar/hbase-2.0.0/logs/hbase-gvm-regionserver-HOSTCHND.local.out

Checking the status in the HBase shell:

    hbase(main):001:0> status

    ERROR: KeeperErrorCode = NoNode for /hbase/master

    Show cluster status. Can be 'summary', 'simple', 'detailed', or 'replication'. The
    default is 'summary'. Examples:

    hbase> status
    hbase> status 'simple'
    hbase> status 'summary'
    hbase> status 'detailed'
    hbase> status 'replication'
    hbase> status 'replication', 'source'
    hbase> status 'replication', 'sink'

   Took 9.4096 seconds                                                             
   hbase(main):002:0> 

hbase-site.xml

<configuration>
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
</property>
<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
</property>
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/Cellar/hbase-2.0.0/hbasestorage/zookeeper</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
</property>
</configuration>

Can anyone explain why this error happens while executing HBase commands?

user7413163

8 Answers


In my case I was receiving "ERROR: KeeperErrorCode = NoNode for /hbase/master" because the HMaster process was not running.

Check with the jps command:

hdusr@hdp-master-1:$ jps
27504 Main
32755 DataNode
23316 HQuorumPeer
27957 Jps
646 SecondaryNameNode
27097 HMaster
23609 HRegionServer
1562 Master
1722 Worker
911 ResourceManager
32559 NameNode
1167 NodeManager

If you don't see the HMaster process in the list above, that is the reason for "ERROR: KeeperErrorCode = NoNode" in the hbase shell.

Check hbase-***-master.log in the $HBASE_HOME/logs directory for the specific error.
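The check above can be scripted; a minimal sketch, assuming $HBASE_HOME points at your HBase install (the log filename pattern may differ on your setup):

```shell
# Report whether HMaster is up; if not, show the most recent
# lines of the master log to find the specific error.
if jps | grep -q HMaster; then
    echo "HMaster is running"
else
    echo "HMaster is not running; checking master log"
    tail -n 50 "$HBASE_HOME"/logs/hbase-*-master.log
fi
```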

In my case there were two reasons.

First :

WARN  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection timed out

I solved this by replacing 'localhost' with my machine's hostname in hbase-site.xml, as suggested in another answer.
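For example, the quorum entry in hbase-site.xml would change from localhost to the machine's hostname (my-hostname below is a placeholder for your actual hostname):

```xml
<property>
    <name>hbase.zookeeper.quorum</name>
    <!-- replace my-hostname with the output of the `hostname` command -->
    <value>my-hostname</value>
</property>
```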

Second :

    ERROR [master/spark-hdp-master-1:16000:becomeActiveMaster] master.HMaster: Failed to become active masterorg.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled.  Available:[TOKEN]

This was because the HDFS port in hbase-site.xml was different from the one in Hadoop's core-site.xml.

akshay naidu

Replace or add the following configuration in the hbase-site.xml file in HBase's conf directory, then rerun the "hbase shell" command and run "list" to view the tables present.

<?xml version="1.0" encoding="utf-8" ?>

<!--Keeper Error fix-->
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:8020/hbase</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2182</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/var/lib/hbase/zookeeper</value>
    </property>
</configuration>

Lahray

First, make sure that the IP address and hostname mapping has been set up in the hosts file.

Second, move the HBase temporary directory. It defaults to /tmp, which is emptied regularly; change it in hbase-site.xml:

<property>
        <name>hbase.tmp.dir</name>
        <value>/hbase/tmp</value>
        <description>Temporary directory on the local filesystem.</description>
</property>

If that doesn't work, clean the HBase data directory, also clean the metadata in ZooKeeper, and restart HBase again.

What is more, check your NTP synchronization and firewall settings.
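A sketch of that cleanup, assuming the rootdir and dataDir from the question and a ZooKeeper 3.5+ CLI (older releases use rmr instead of deleteall); adjust the paths to your setup:

```shell
# Stop HBase before touching its data
"$HBASE_HOME"/bin/stop-hbase.sh

# Clean the HBase data directory in HDFS (rootdir from the question)
hdfs dfs -rm -r /hbase

# Clean the local ZooKeeper data directory (dataDir from the question)
rm -rf /usr/local/Cellar/hbase-2.0.0/hbasestorage/zookeeper

# Or, with an external ZooKeeper, delete the /hbase znode instead:
# zkCli.sh -server localhost:2181 deleteall /hbase

# Restart HBase
"$HBASE_HOME"/bin/start-hbase.sh
```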

HbnKing
  • Thanks for your reply. I created a directory /hbase/tmp and configured this property in hbase-site.xml, and I cleaned both the data and zookeeper directories. But I am still receiving the same exception. – user7413163 Jun 05 '18 at 09:26
  • @user7413163 If you need more help, please show your logs, not only the error messages. – HbnKing Jun 06 '18 at 04:54

I know the question is not about Spark, but I was getting the following errors in a Spark job:

Caused by: java.io.IOException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/meta-region-server

Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/meta-region-server

Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/hbasei

Setting the hbase.rootdir configuration resolved my problems when creating the HBaseContext:

val config = HBaseConfiguration.create
config.set("hbase.zookeeper.quorum", "addresses")
config.setInt("hbase.zookeeper.property.clientPort", port)

config.set("hbase.rootdir","/apps/hbase/data") // adding this one resolves my problems

val hbaseContext = new HBaseContext(sc, config)

So you may try adding that configuration to hbase-site.xml as well.

altayseyhan

I was facing a similar issue; follow the steps below.

People may face other issues as well, so I am mentioning them here:

  1. Set up ZooKeeper as well: https://www.tutorialspoint.com/zookeeper/zookeeper_installation.htm
  2. You may face permission issues, so run chmod -R 777 on /tmp/zookeeper, /usr/local/var/hbase, and /usr/local/Cellar/hbase
  3. Add admin.enableServer=false in bin/zoo.cfg to avoid address-bind exceptions
  4. Try to run both HBase and ZooKeeper without sudo.
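Steps 2 and 3 above as shell commands; a sketch assuming the Homebrew-style paths from the list (the location of zoo.cfg varies by ZooKeeper install):

```shell
# Step 2: relax permissions on the directories ZooKeeper and HBase write to
sudo chmod -R 777 /tmp/zookeeper /usr/local/var/hbase /usr/local/Cellar/hbase

# Step 3: disable the ZooKeeper admin server to avoid address-bind exceptions
# (adjust the path to wherever zoo.cfg lives in your install)
echo "admin.enableServer=false" >> zoo.cfg
```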

If it happened suddenly while HBase had been running as expected and the error gives no clue, try removing the HBase temporary directory (hbase.tmp.dir) and the ZooKeeper metadata, then restarting both HBase and ZooKeeper.


Synchronizing the time worked for me.

  1. Check the cause in less ../logs/hbase-hdoop-master-hadoop-master.log
  2. If you see a line like
Exception: Server hdw3.example.com,16020,1501570274049 has been rejected; Reported time is too far out of sync with master. Time difference of 311432ms
 > max allowed of 30000ms
 at org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:388)
 at org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:262)
 at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:348)
  3. Update the time using the available tools (ntpd/chrony/timedatectl). On Ubuntu, run:
timedatectl

It should display output like the following. Wait a minute and run the command again.

              Local time: Mon 2022-06-13 17:48:20 U…
          Universal time: Mon 2022-06-13 17:48:20 U…
                RTC time: Sat 2022-06-11 02:09:42   
               Time zone: Etc/UTC (UTC, +0000)      
System clock synchronize… no                        
             NTP service: inactive                  
         RTC in local TZ: no      

You will get this output:

               Local time: Mon 2022-06-13 17:49:54 UTC
           Universal time: Mon 2022-06-13 17:49:54 UTC
                 RTC time: Mon 2022-06-13 17:49:19    
                Time zone: Etc/UTC (UTC, +0000)       
System clock synchronized: yes                        
              NTP service: n/a                        
          RTC in local TZ: no      
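If the clock does not sync on its own, NTP can be (re)enabled via timedatectl; a sketch assuming a systemd system with systemd-timesyncd available:

```shell
# Enable NTP synchronization, then confirm the result
sudo timedatectl set-ntp true
timedatectl | grep "System clock synchronized"
```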
MUNGAI NJOROGE

My solution will only work if HBase was running properly before and you get this issue intermittently.

I usually observe this when I put my system to sleep without stopping the HBase and Hadoop services properly. To resolve the error, stop ZooKeeper using zookeeper-installation/zkServer.sh stop, then run zkCli.sh. In the ZooKeeper shell, do an ls / and check for the hbase node, then delete it using deleteall /hbase. Now restart the Hadoop daemons and HBase; it should work fine. If not, try formatting your Hadoop NameNode using hadoop namenode -format.

Hope this helps.
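The steps above can be sketched as follows, assuming the ZooKeeper and Hadoop scripts are on your PATH and a ZooKeeper 3.5+ zkCli.sh (older releases use rmr rather than deleteall). Note that zkCli.sh needs a running server, so ZooKeeper is restarted before deleting the znode:

```shell
# Restart ZooKeeper cleanly
zkServer.sh stop
zkServer.sh start

# Inspect the tree and remove HBase's stale znode
zkCli.sh -server localhost:2181 <<'EOF'
ls /
deleteall /hbase
EOF

# Restart the Hadoop daemons and HBase
start-dfs.sh
start-hbase.sh
```

As a last resort, hadoop namenode -format wipes HDFS metadata, so only use it on a cluster whose data you can afford to lose.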