I want to read Azure Blob Storage files into Spark on Databricks, but I do not want to hardcode a specific file, or write a * wildcard for each level of nesting.
The standard recursive glob **/* is not working.
These work just fine:
val df = spark.read.format("avro").load("dbfs:/mnt/foo/my_file/0/2019/08/24/07/54/10.avro")
val df = spark.read.format("avro").load("dbfs:/mnt/foo/my_file/*/*/*/*/*/*")
But this one:

val df = spark.read.format("avro").load("dbfs:/foo/my_file/test/**/*")

fails with:

java.io.FileNotFoundException: No Avro files found. If files don't have .avro extension, set ignoreExtension to true
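For reference, the only alternative I know of (and I'm not sure it applies to my Spark version, since recursiveFileLookup was only added in Spark 3.0) would be to skip globbing entirely and let the reader walk the directory tree; the path below is the same example mount as above:

```scala
// Sketch, assuming Spark 3.0+: recursiveFileLookup disables partition
// discovery and scans every nested directory under the given path,
// so no per-level * wildcards are needed.
val df = spark.read
  .format("avro")
  .option("recursiveFileLookup", "true")
  .load("dbfs:/mnt/foo/my_file/")
```

But I would still like to understand why the **/* glob itself does not match.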