I have a Spark DataFrame in the format below:
+--------------------+
|value |
+--------------------+
|Id,date |
|000027,2017-11-14 |
|000045,2017-11-15 |
|000056,2018-09-09 |
|C000056,2018-07-01 |
+--------------------+
I need to go through each row, split it on the comma (,), and place the values into two separate columns (Id and date).
I am new to Spark and not sure whether this can be done with a lambda function. Any suggestions would be appreciated.
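
For reference, here is a rough sketch of what I have been trying to get working with the built-in `split` function from `pyspark.sql.functions` (the sample data below is just a hand-built copy of the frame above, and the header handling is my own assumption):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import split, col

spark = SparkSession.builder.appName("split-example").getOrCreate()

# Hand-built sample mirroring the single-column frame shown above.
df = spark.createDataFrame(
    [("Id,date",),
     ("000027,2017-11-14",),
     ("000045,2017-11-15",),
     ("000056,2018-09-09",),
     ("C000056,2018-07-01",)],
    ["value"],
)

# Drop the embedded header row, then split on the comma into two columns.
result = (
    df.filter(col("value") != "Id,date")
      .withColumn("Id", split(col("value"), ",").getItem(0))
      .withColumn("date", split(col("value"), ",").getItem(1))
      .drop("value")
)

result.show()
```

Is this a reasonable approach, or is there a more idiomatic way (e.g. with an RDD map/lambda) to do it?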