Writing a DataFrame to S3 with Spark usually creates a directory containing two kinds of files: a _SUCCESS marker and one or more files whose names start with part-, which hold the actual data. How do I load that data into a pandas DataFrame, given that the full path changes on every run because the part- file name is different each time?
For example, the write looks like this:

df.coalesce(1).write.csv("s3://<bucket>/testfolder.csv")

and the files stored in that directory are _SUCCESS and a data file whose name starts with part-00.
I have a Python job that reads the file into a pandas DataFrame:

pd.read_csv("s3://..........what is the path to specify here?.................")
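Is something like the following the right idea? This is just a rough sketch of what I have in mind, not working code; the s3fs-based lookup and the placeholder bucket name are my own guesses.

import pandas as pd
import s3fs

fs = s3fs.S3FileSystem()

# The part- file name changes on every run, so glob for it
# instead of hard-coding it. <bucket> is a placeholder.
part_files = fs.glob("s3://<bucket>/testfolder.csv/part-*")

# After coalesce(1) there should be exactly one part file.
with fs.open(part_files[0], "rb") as f:
    df = pd.read_csv(f)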