I need to read 150 files from my S3 bucket:
df1 = spark.read.json('s3://mybucket/f1')
df2 = spark.read.json('s3://mybucket/f2')
...
df150 = spark.read.json('s3://mybucket/f150')
How can I automate this process?
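For reference, a minimal sketch of one way to automate this with a list comprehension, assuming the paths really are the literal f1 through f150 and the files share a schema (the union step is optional):

from functools import reduce
from pyspark.sql import DataFrame, SparkSession

spark = SparkSession.builder.getOrCreate()

# One DataFrame per numbered file
dfs = [spark.read.json(f"s3://mybucket/f{i}") for i in range(1, 151)]

# Optionally fold them into a single DataFrame
combined = reduce(DataFrame.unionByName, dfs)

If a single combined DataFrame is the goal, spark.read.json also accepts a list of paths directly, which avoids the union step.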
spark.read.json produces a Spark DataFrame.
If I try what Oscar suggested:
import spark
your_dfs_list = [spark.read.json("s3://cw-mybuc/RECORDS/FULL_RECEIVED/2020/07/01/00"+str(x)) for x in range(1,38)]
AttributeError: module 'spark' has no attribute 'read'
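That error is expected: import spark pulls in whatever unrelated module happens to be named spark, not a Spark session. In PySpark, spark is normally a SparkSession instance, so a sketch of the fix (keeping the path construction from the question as-is) would be:

from pyspark.sql import SparkSession

# Create (or reuse) the session that provides .read
spark = SparkSession.builder.getOrCreate()

your_dfs_list = [
    spark.read.json("s3://cw-mybuc/RECORDS/FULL_RECEIVED/2020/07/01/00" + str(x))
    for x in range(1, 38)
]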