
I'm having a problem with HiveServer that I don't understand. I've just set up a Hadoop cluster and want to access it through a Hive service. My first attempt was to run the Hive server on one of the cluster machines.

Everything worked nicely, but I wanted to move the Hive service to another machine outside the Hadoop cluster.

So I started a new machine outside the Hadoop cluster, installed Hive (plus the Hadoop libraries) and copied the Hadoop config from the cluster. When I run HiveServer, almost everything goes fine: I can connect with the Hive CLI from a different machine to my HiveServer, create new tables in the Hive warehouse on the HDFS filesystem of the Hadoop cluster, query them, and so on.
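
For reference, the copied Hadoop config essentially just points the new machine at the cluster's HDFS; the relevant part looks something like this (the NameNode host below is a placeholder, not my real hostname):

```xml
<!-- core-site.xml copied from the cluster; the NameNode host is a placeholder -->
<configuration>
  <property>
    <!-- called fs.default.name on older Hadoop releases -->
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
</configuration>
```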

The thing I don't understand is that HiveServer does not seem to recognize the old tables that were created during my first attempt.

Some notes about my setup: all tables are managed by Hive and stored in HDFS, and the Hive configuration is the default one. I suppose it has to do with my Hive metastore, but I couldn't say exactly what.
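
Since I haven't overridden the metastore settings, as far as I know Hive is using its stock embedded Derby metastore, which keeps its files in a local `metastore_db` directory on the machine (and working directory) where Hive runs; only the table data itself goes to HDFS. A sketch of what I understand the relevant defaults to be:

```xml
<!-- hive-site.xml defaults (sketch): embedded Derby metastore, local to the machine -->
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  </property>
  <!-- table data lives in HDFS under the warehouse directory -->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
</configuration>
```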

Thank you!!

Ivan Fernandez
  • Recently, I ran into a similar problem where I was not able to access the Hive tables created using `Shark` from a `Scala` prompt. I was able to fix it by changing the Metastore to MySQL. See this - http://stackoverflow.com/questions/23565853/accessing-shark-tables-hive-from-scala-shark-shell – visakh May 16 '14 at 06:36
  • Thanks visakh, I was wondering if there's some way to regenerate a new metastore from hdfs data without saving metadata previously (as in a relational db) – Ivan Fernandez May 17 '14 at 21:07
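
A minimal `hive-site.xml` sketch of the shared MySQL-backed metastore that visakh suggests (the host, database name and credentials below are placeholders, not values from this setup):

```xml
<!-- hive-site.xml: remote MySQL-backed metastore (all values are placeholders) -->
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://metastore-host:3306/hive_metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive_password</value>
  </property>
</configuration>
```

With the metastore in an external database, every HiveServer pointed at it sees the same tables. It does not, however, rebuild metadata that only ever lived in the old embedded Derby database, so tables created during the first attempt would still have to be re-declared (for example as external tables over their existing warehouse directories).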

0 Answers