I'm using fairly generic S3 read code in Spark; it reads all the files under a specified directory into a single DataFrame:
val df = spark.read
  .option("sep", "\t")
  .option("inferSchema", "true") // redundant here: the explicit schema below takes precedence
  .option("encoding", "UTF-8")
  .schema(sch)
  .csv("s3://my-bucket/my-directory/")
What would be the best way (if any) to get the number of files that were read from this path?
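
One approach I've been looking at, as a minimal sketch: Spark 2.0+ exposes DataFrame.inputFiles, which is documented as a best-effort snapshot of the files backing the DataFrame, so the count may not be exact in every case. For a precise count, input_file_name() tags each row with its source file, and counting the distinct values gives the exact number at the cost of a full scan.

// Best-effort: list the files Spark considers to back this DataFrame.
val fileCount = df.inputFiles.length
println(s"Files read (best effort): $fileCount")

// Exact but slower: tag each row with its source file, then count
// the distinct file names. This forces a scan of the data.
import org.apache.spark.sql.functions.input_file_name
val exactCount = df.select(input_file_name()).distinct().count()
println(s"Files read (exact): $exactCount")

Is inputFiles reliable enough for this, or is the input_file_name() scan the safer option?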