
I am running an Ubuntu instance on Azure (an N-series instance) to run a calculation. After the calculation I try to write to an Azure Blob container using a wasb-style URL

wasb://containername/path

I am trying to write the DataFrame with the PySpark command

df.write.save('wasb://containername/path', format='json', mode='append')

But I receive a Java IO exception from Spark saying it doesn't support the wasb file system. Does anyone know how to write to a wasb address without using an HDInsight instance?

GLalor
1 Answer


I haven't done it with PySpark, but here is how I did it using Scala and Spark.

Add the dependency in sbt

"org.apache.hadoop" % "hadoop-azure" % "2.7.3"

Then define the file system to be used in the underlying Hadoop configuration.

val spark = SparkSession.builder().appName("read azure storage").master("local[*]").getOrCreate()

spark.sparkContext.hadoopConfiguration.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
spark.sparkContext.hadoopConfiguration.set("fs.azure.account.key.yourAccount.blob.core.windows.net", "yourKey ")

val baseDir = "wasbs://BlobStorageContainer@yourAccount.blob.core.windows.net/" // use wasb:// or wasbs://

Now write the DataFrame to the Blob container:

resultDF.write.mode(SaveMode.Append).json(baseDir + outputPath)
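
For the PySpark side of the question, translating the same steps gives roughly the following. This is only a sketch I haven't run with PySpark; spark is the session created earlier, result_df stands for your DataFrame, and the account, key, container and output path are placeholders:

# Point the wasb scheme at NativeAzureFileSystem and supply the storage key,
# mirroring the two hadoopConfiguration.set calls above.
hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
hconf.set("fs.azure.account.key.yourAccount.blob.core.windows.net", "yourKey")

base_dir = "wasbs://BlobStorageContainer@yourAccount.blob.core.windows.net/"

# Append the DataFrame as JSON, the counterpart of
# resultDF.write.mode(SaveMode.Append).json(baseDir + outputPath)
result_df.write.mode("append").json(base_dir + "outputPath")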

Hope this is helpful. Here was the working program.

koiralo
  • The link you have shared shows only reading data from Azure, not writing it. – Nandha Jun 07 '19 at 07:03
  • 1
    Once you add the file system configs you should able to read and write. There is example above to write the dataframe. – koiralo Jun 07 '19 at 08:29