I want to implement the logic below in Azure Databricks using PySpark. I have a file containing multiple sheets, stored on ADLS Gen2. I want to read the data from all sheets and write it out as a single file to another location on ADLS Gen2.
Note: all sheets share the same schema (Id, Name).
My final output file should contain the data from every sheet, and it should also have an additional column that stores the sheet name.
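One way to sketch this, assuming the workbook is an `.xlsx` file and that pandas with the `openpyxl` engine is available on the cluster: `pd.read_excel(..., sheet_name=None)` returns a dict of `{sheet_name: DataFrame}`, so each frame can be tagged with its sheet name before unioning. The in-memory workbook below stands in for the ADLS Gen2 file; the `abfss://` paths in the comments are placeholders, not real locations.

```python
import io
import pandas as pd

# Build a small in-memory workbook with two sheets to stand in for the file
# on ADLS Gen2 (in a real job you would pass the mounted/abfss path instead).
buf = io.BytesIO()
with pd.ExcelWriter(buf, engine="openpyxl") as writer:
    pd.DataFrame({"Id": [1, 2], "Name": ["a", "b"]}).to_excel(
        writer, sheet_name="Sheet1", index=False)
    pd.DataFrame({"Id": [3], "Name": ["c"]}).to_excel(
        writer, sheet_name="Sheet2", index=False)
buf.seek(0)

# sheet_name=None reads every sheet into a dict of {sheet_name: DataFrame}.
sheets = pd.read_excel(buf, sheet_name=None, engine="openpyxl")

# Tag each frame with the sheet it came from, then union them all.
frames = [df.assign(sheetName=name) for name, df in sheets.items()]
combined = pd.concat(frames, ignore_index=True)

# On Databricks you could then convert to Spark and write back to ADLS Gen2,
# e.g. (paths are placeholders):
# spark_df = spark.createDataFrame(combined)
# spark_df.write.mode("overwrite").parquet("abfss://container@account.dfs.core.windows.net/output/")
print(combined)
```

This keeps the Excel parsing on the driver, which is fine for modestly sized workbooks; for very large files, a Spark-native Excel reader library would be worth considering instead.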