
The SparkContext and SnappySession are created as below:

SparkConf sparkConf = new SparkConf().setAppName(args[0]);
// Build the SparkSession with the SnappyData connection, then wrap its SparkContext in a SnappySession
SparkSession spark = new SparkSession.Builder().config(sparkConf).config("spark.snappydata.connection", "localhost:1527").getOrCreate();
SnappySession snappySes = new SnappySession(spark.sparkContext());

Read the SnappyData table:

snappySes.table("SNAPPY_COL_TABLE").show(10);

The job is submitted as below:

/usr/hdp/2.6.2.0-205/spark2/bin/spark-submit --conf snappydata.connection=localhost:1527 --conf spark.ui.port=0 --master local[*] --driver-memory 2g --jars  --deploy-mode client --conf spark.driver.extraClassPath=/root/snappydata-1.0.1-bin/jars/* --conf spark.executor.extraClassPath=/root/snappydata-1.0.1-bin/jars/* --class myclass

The job is connecting to SnappyData, as the logs below show:

Initializing SnappyData in cluster mode: Smart connector mode: sc = org.apache.spark.SparkContext@164d01ba, url = jdbc:snappydata://localhost[1527]/

But it fails with "table not found". The job is pointing to a different store; different tables are listed.
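
For reference, the tables the failing session actually sees can be listed through the standard Spark catalog API. This is only a diagnostic sketch, assuming the snappySes object created above:

// Diagnostic: list the tables visible to this session's catalog
snappySes.catalog().listTables().show(false);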

If the same job is submitted with SnappyData's spark-submit, it works as expected. The only change is the spark-submit used to submit the job:

/usr/hdp/2.6.2.0-205/spark2/bin/spark-submit -- fails
/root/snappydata-1.0.1-bin/bin/spark-submit  -- passes

1 Answer


Presumably you are running two SnappyData clusters, and somehow your localhost is not resolving uniformly. If you stop the SnappyData cluster, do you get an exception when you submit with the HDP spark-submit?
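
One way to check which store the session resolved is to print the connection setting it is actually using. This is a diagnostic sketch (not part of the original answer), assuming the snappySes object from the question:

// Diagnostic: print the SnappyData connection property the session was configured with
System.out.println(snappySes.conf().get("spark.snappydata.connection", "not set"));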

jagsr
  • I have found the root cause: the hive-site.xml in the Spark conf directory was the issue. After removing it, the connection went to the existing/running SnappyData cluster. But this conf file cannot be removed, so I switched the code to a JDBC connection. – satish sidnakoppa May 17 '18 at 13:05
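
The comment mentions switching to a plain JDBC connection. Below is a minimal sketch of what that could look like, using the SnappyData client driver class and URL format described in the SnappyData documentation; the table name is the one from the question and the rest is illustrative:

import java.sql.*;

public class SnappyJdbcExample {
    public static void main(String[] args) throws Exception {
        // SnappyData thin-client JDBC driver (class name per SnappyData docs)
        Class.forName("io.snappydata.jdbc.ClientDriver");
        try (Connection conn = DriverManager.getConnection("jdbc:snappydata://localhost:1527/");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM SNAPPY_COL_TABLE")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // print the first column of each row
            }
        }
    }
}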