
I am trying to configure SSL between Spark and Cassandra. Passing a local file path for the trustStore works, but passing an HDFS file path does not: it throws a File Not Found error in both YARN client and cluster mode.

sparkConf.set("spark.cassandra.connection.ssl.enabled", "true");
sparkConf.set("spark.cassandra.connection.ssl.trustStore.password", "password");
sparkConf.set("spark.cassandra.connection.ssl.trustStore.path", "jks file path");

Any idea why this happens? The same file works fine through sc.textFile().

Exception:
About to save to Cassandra.
16/07/22 08:56:55 ERROR org.apache.spark.streaming.scheduler.JobScheduler: Error running job streaming job 1469177810000 ms.0
java.io.FileNotFoundException: hdfs:/abc/ssl.jks (No such file or directory)
    at java.io.FileInputStream.open0(Native Method)

Thanks
Hema


1 Answer


This happens because the SSL parameters are consumed by the Cassandra Java driver, which knows nothing about HDFS — it opens the trustStore path with a plain java.io.FileInputStream, hence the FileNotFoundException. You need to place the truststore (and keystore, if used) on every node at the same local filesystem path, and point the config parameters at that local path.

I'll flag this issue to the developers.
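One common way to avoid copying the file to every node by hand is to let Spark distribute it: pass the truststore via `--files`, so YARN places it in each executor's working directory, and reference it by its bare filename. A sketch (the file name `ssl.jks` and the local path are placeholders — substitute your own; in client mode the driver still needs the file at a local path it can read):

```shell
# Ship the truststore to every executor's working directory via YARN,
# then reference it by bare filename in the connector's SSL config.
spark-submit \
  --master yarn \
  --files /local/path/ssl.jks \
  --conf spark.cassandra.connection.ssl.enabled=true \
  --conf spark.cassandra.connection.ssl.trustStore.password=password \
  --conf spark.cassandra.connection.ssl.trustStore.path=ssl.jks \
  --class com.example.MyApp myapp.jar   # class/jar names are hypothetical
```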

Alex Ott