
I have been struggling with this for quite some time.

Step 1

I created a Hive table and loaded data into it as follows:

create external table if not exists productstorehtable2
(
  device string,
  date string,
  word string,
  count int
)
row format delimited fields terminated by ','
location 'hdfs://quickstart.cloudera:8020/user/cloudera/hadoop/hive/warehouse/VerizonProduct2';

LOAD DATA INPATH 'hdfs://quickstart.cloudera:8020/user/cloudera/hadoop/input/productstore' INTO TABLE productstorehtable2;
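
As a sanity check on the data itself (a sketch, using the same table location as above), the directory can be read directly from spark-shell, where sc is predefined, bypassing the metastore entirely:

// Read the raw CSV files straight out of the external table's location.
// If rows print here, the data and the path are fine; only the
// metastore lookup is in question.
val raw = sc.textFile("hdfs://quickstart.cloudera:8020/user/cloudera/hadoop/hive/warehouse/VerizonProduct2")
raw.take(5).foreach(println)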

Step 2: I wrote a simple Spark script as a sanity check:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext
import org.apache.log4j.{Level, Logger}

object HivePortStreamer {

  // Query the Hive metastore through the given HiveContext and print
  // whatever tables it can see.
  def readFromHiveTable(hivecontext: HiveContext) = {
    //val productDF = hivecontext.sql("select * from productstorehtable2")
    //productDF.show()
    println("PRINTING THE HIVE TABLES")
    hivecontext.sql("show tables").show()
  }

  def main(args: Array[String]) {
    val rootLogger = Logger.getRootLogger()
    rootLogger.setLevel(Level.ERROR)

    // Point Spark at the same warehouse directory the external table uses
    val conf = new SparkConf()
      .setAppName("hivePortStreamer")
      .setMaster("local[*]")
      .set("spark.sql.warehouse.dir",
        "hdfs://quickstart.cloudera:8020/user/cloudera/hadoop/hive/warehouse/VerizonProduct2")
    val sparkcontext = new SparkContext(conf)
    val hivecontext = new HiveContext(sparkcontext)
    readFromHiveTable(hivecontext)

    sparkcontext.stop()
  }
}

When I run this script, 'show tables' just comes back empty. I don't get it; I have given the correct warehouse directory location. The same happens with the 'show databases' command.
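
To rule out a bad path, the warehouse directory can also be listed through the Hadoop FileSystem API (a sketch, meant to run inside the script after sparkcontext is created):

import org.apache.hadoop.fs.Path

// List the external table's directory directly; this only proves the
// files exist on HDFS, independent of what the metastore reports.
val warehousePath = new Path("hdfs://quickstart.cloudera:8020/user/cloudera/hadoop/hive/warehouse/VerizonProduct2")
val fs = warehousePath.getFileSystem(sparkcontext.hadoopConfiguration)
fs.listStatus(warehousePath).foreach(status => println(status.getPath))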

Is it some issue with how Spark and Hive are configured on my system?

I use sbt. I tried the same code in spark-shell and got the same output.
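
For completeness, the Spark 2.0 equivalent of the script above using SparkSession (a sketch; HiveContext is deprecated in 2.0) would be:

import org.apache.spark.sql.SparkSession

object HivePortStreamer2 {
  def main(args: Array[String]) {
    // SparkSession with Hive support replaces SparkContext + HiveContext
    val spark = SparkSession.builder()
      .appName("hivePortStreamer")
      .master("local[*]")
      .config("spark.sql.warehouse.dir",
        "hdfs://quickstart.cloudera:8020/user/cloudera/hadoop/hive/warehouse/VerizonProduct2")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("show tables").show()
    spark.stop()
  }
}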

Edit 1: Spark is unable to discover the existing Hive tables. I tried the command

hivecontext.sql("create table dummytable(id int)")

and it creates the Hive table as expected.
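
To check where that table actually ends up (a sketch, using the same hivecontext), its detailed description should include a Location field; if that points somewhere local rather than at the Cloudera warehouse, the script is talking to a different metastore than the Hive CLI:

// Print the detailed table description; the Location entry reveals
// which warehouse (and hence which metastore) this table went into.
hivecontext.sql("describe extended dummytable").collect().foreach(println)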

Kindly help.

Thanks

Background: CentOS, Cloudera QuickStart VM, Spark 2.0
