
I installed Hadoop 2.2.0 and HBase 0.98.0, and here is what I did:

$ ./bin/start-hbase.sh 

$ ./bin/hbase shell

2.0.0-p353 :001 > list

then I got this:

ERROR: Can't get master address from ZooKeeper; znode data == null

Why am I getting this error? Another question: do I need to run ./sbin/start-dfs.sh and ./sbin/start-yarn.sh before I run HBase?

Also, what are ./sbin/start-dfs.sh and ./sbin/start-yarn.sh used for?

Here are some of my configuration files:

hbase-site.xml

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://127.0.0.1:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>

    <property>
        <name>hbase.tmp.dir</name>
        <value>/Users/apple/Documents/tools/hbase-tmpdir/hbase-data</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/Users/apple/Documents/tools/hbase-zookeeper/zookeeper</value>
    </property>
</configuration>

core-site.xml

<configuration>

  <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
      <description>The name of the default file system.</description>
  </property>

  <property>
      <name>hadoop.tmp.dir</name>
      <value>/Users/micmiu/tmp/hadoop</value>
      <description>A base for other temporary directories.</description>
  </property>

  <property>
      <name>io.native.lib.available</name>
      <value>false</value>
  </property>

</configuration>

yarn-site.xml

<configuration>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

</configuration>
merours
Rickie Lau

8 Answers


If you just want to run HBase standalone, without getting into ZooKeeper management, then remove all the property blocks from hbase-site.xml except the one named hbase.rootdir.

Now run /bin/start-hbase.sh. HBase ships with its own ZooKeeper, which is started by /bin/start-hbase.sh; that will suffice if you are just trying things out for the first time. Later you can add distributed-mode ZooKeeper configuration.

You only need to run /sbin/start-dfs.sh for HBase because the value of hbase.rootdir is set to hdfs://127.0.0.1:9000/hbase in your hbase-site.xml. If you change it to some location on the local filesystem, such as file:///some_location_on_local_filesystem, then you don't even need to run /sbin/start-dfs.sh.

hdfs://127.0.0.1:9000/hbase says the root directory is a place on HDFS, and /sbin/start-dfs.sh starts the NameNode and DataNode, which provide the underlying API for accessing the HDFS filesystem. To learn about YARN, please look at http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/YARN.html.
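For example, a minimal standalone hbase-site.xml (the local path below is just a placeholder) might look like this:

```xml
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <!-- Local filesystem instead of HDFS, so start-dfs.sh is not needed -->
        <value>file:///Users/apple/Documents/tools/hbase-rootdir</value>
    </property>
</configuration>
```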

slm
Chandra kant
  • Thanks for your patient answer, which is really helpful to me. I was also wondering: if I want to connect to HBase with the Java API and do some CRUD operations in Java code locally, can standalone mode satisfy that requirement? How would I do that? I already have some code, but I just don't know how to configure it; I googled it but found no examples online. – Rickie Lau Mar 27 '14 at 01:54
  • Can you put your code here so that I can see if there is anything wrong? Or better, make a new question, where I can give you feedback on the code. – Chandra kant Mar 27 '14 at 02:45
  • Thanks again; here is the new question where I attached my code: http://stackoverflow.com/questions/22680433/fail-to-connect-to-hbase-with-java-api – Rickie Lau Mar 27 '14 at 07:01

This can also happen if the VM or the host machine is put to sleep; ZooKeeper will not stay live. Restarting the VM should solve the problem.

Gru
  • Thanks, I was facing this issue because the machine had been put to sleep. A simple vagrant halt; vagrant up solved the problem :) – Bunny Rabbit Dec 28 '16 at 04:08
  • Is there any alternative to restarting the machine? In my case it's Docker, and I need to stop the instance, run "docker rm", and recreate the image every time my host goes to sleep. – santosh Aug 12 '22 at 08:17

You need to start ZooKeeper and then run the HBase shell:

{HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper

You may also want to check this property in hbase-env.sh:

# Tell HBase whether it should manage its own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=false

Refer to Source - Zookeeper
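Putting the steps together, a sketch might look like this (assuming HBASE_MANAGES_ZK=false, i.e. HBase does not manage ZooKeeper itself):

```shell
# Start the ZooKeeper daemon that ships with HBase
${HBASE_HOME}/bin/hbase-daemons.sh start zookeeper

# Then start HBase and open the shell
${HBASE_HOME}/bin/start-hbase.sh
${HBASE_HOME}/bin/hbase shell
```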

Ronak Patel

One quick solution is to restart HBase:

1) stop-hbase.sh
2) start-hbase.sh
Reagan Ochora

I had the exact same error. The Linux firewall was blocking connectivity. You can test the ports via telnet. A quick check is to turn off the firewall and see whether that fixes it:

Completely disable the firewall on all of your nodes. Note: this command will not survive a reboot of your machines.

systemctl stop firewalld

The long-term fix is to configure the firewall to allow the HBase ports.

Note that your version of HBase may use different ports: https://issues.apache.org/jira/browse/HBASE-10123
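As a sketch of that long-term fix (the port numbers below are the HBase 0.98 defaults, firewalld is assumed, and "master-host" is a placeholder; adjust all of these to your setup):

```shell
# Test connectivity to the master and ZooKeeper from another node
telnet master-host 60000   # HMaster RPC (16000 on HBase 1.0+, per HBASE-10123)
telnet master-host 2181    # ZooKeeper client port

# Permanently open the ports with firewalld instead of disabling it
firewall-cmd --permanent --add-port=60000/tcp
firewall-cmd --permanent --add-port=2181/tcp
firewall-cmd --reload
```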

Chris

The output from the HBase shell is quite high-level, and many different misconfigurations can cause this message. To help yourself debug, it is much better to look into the HBase logs in

/var/log/hbase 

to figure out the root cause of the issue.

I had the same problem too. In my case, the root cause was hadoop-kms having a conflicting port number with my hbase-master: both of them use port 16000, so my HMaster didn't even get started when I invoked hbase shell. After I fixed that, HBase worked.

Again, a kms port conflict may not be your root cause. I strongly suggest looking into /var/log/hbase to find the root cause.
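A quick way to check for that kind of port conflict (16000 is the kms/HMaster port from my case; yours may differ, and the log path may differ per install) is something like:

```shell
# Which process, if any, is already listening on the master port?
netstat -tlnp | grep 16000        # or: lsof -i :16000

# Look for bind failures in the master log
grep -iE "BindException|Address already in use" /var/log/hbase/*.log
```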

YufengJ

In my case, with the same error when running HBase, I had not included the ZooKeeper properties in hbase-site.xml and still got the above error messages (per the Apache HBase guide, only two properties are essential: hbase.rootdir and hbase.cluster.distributed).

I could also see from the output of the jps command that my HRegionServer and HMaster were indeed not properly up and running.

After a stop and start (like a reset), I had these two up and running and could run HBase properly.
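For reference, on a healthy pseudo-distributed setup the jps output should include entries like the following (the PIDs are made up, and HQuorumPeer appears only when HBase manages its own ZooKeeper):

```shell
jps
# 12345 HMaster
# 12346 HRegionServer
# 12347 HQuorumPeer
```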

r poon

If this is happening in VMware or VirtualBox, please restart Cloudera with the command init 1. Check that you have root privileges, then retry; hope it helps :)

hbase shell

Shekh Firoz Alam