
I'm trying to load a managed Hive table stored in ORC format with Spark SQL:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.hive.HiveContext;

SparkConf conf = new SparkConf().setAppName(ConnectionTest.class.getName()).setMaster(master);
JavaSparkContext context = new JavaSparkContext(conf);

SQLContext sqlContext = new HiveContext(context);

sqlContext.sql("SELECT * FROM schema.tableName").show(20);

But I'm getting this error:

Exception in thread "main" java.lang.RuntimeException: serious problem
    at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1021)
    at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
    at org.apache.spark.rdd.HadoopRDD$HadoopMapPartitionsWithSplitRDD.getPartitions(HadoopRDD.scala:381)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:190)
    at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
    at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
    at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
    at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
    at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
    at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
    at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
    at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
    at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
    at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
    at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
    at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
    at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:350)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:311)
    at com.daimler.dbdp.spark.ConnectionTest.run(ConnectionTest.java:45)
    at com.daimler.dbdp.spark.ConnectionTest.main(ConnectionTest.java:26)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NullPointerException
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$BISplitStrategy.getSplits(OrcInputFormat.java:560)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1010)
        ... 49 more

This seems to be related to the ORC format. What is the best way to access Hive tables stored as ORC?

Thanks!

Spark 1.6.2, Java 8, Hortonworks distribution.

Sam
josele
    Did you try `sqlContext.table("schema.tableName").show()` – Thiago Baldim Apr 19 '17 at 17:00
  • just did it. But bad luck. Thanks anyway – josele Apr 20 '17 at 08:07
  • I faced exactly the same issue. The table was created with the transactional=true property. I set it to false and the error went away, but I would prefer to keep it true. – Bala May 25 '17 at 11:57
  • Please let us know a workaround for this; I am facing the same issue. It occurs mostly with **ORC tables** that have **transactional=true**. Any solution would be helpful. Thanks. Also, please let me know which other table formats support transactions or ACID operations on Hive tables. – Sam Oct 12 '17 at 11:04
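The workaround described in the comments (dropping the transactional property) can be sketched in HiveQL. Since Hive does not allow simply flipping an existing ACID table back to non-transactional, one approach is to create a non-transactional copy; the table names below are placeholders:

```sql
-- Sketch (assumed table names): create a non-transactional ORC copy of the
-- ACID table so that older Spark versions can read it.
CREATE TABLE schema.tableName_copy
STORED AS ORC
TBLPROPERTIES ('transactional'='false')
AS SELECT * FROM schema.tableName;
```

This trades away ACID semantics on the copy, so it is only a workaround, not a fix.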

1 Answer


You can try setting the following parameter in Spark:

scala> sql("set spark.sql.hive.convertMetastoreOrc=true")
// output = res3: org.apache.spark.sql.DataFrame = [key: string, value: string]

Then execute the query on the ORC table in Spark.

If the issue persists after setting the parameter above, you can also try:

scala> sql("set spark.sql.orc.impl=native")
// output = res4: org.apache.spark.sql.DataFrame = [key: string, value: string]
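For a Java application like the one in the question, the same properties can also be passed at submit time instead of from the shell. A sketch, with the jar and class names assumed (note that `spark.sql.orc.impl` only exists in newer Spark releases, so it may have no effect on Spark 1.6):

```
spark-submit \
  --class com.daimler.dbdp.spark.ConnectionTest \
  --conf spark.sql.hive.convertMetastoreOrc=true \
  --conf spark.sql.orc.impl=native \
  connection-test.jar
```

Settings passed via `--conf` apply to the whole application, so no code changes are needed.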
axnet