I have a dataframe df with the following schema:
ts:double
key:long
val:long
I want to convert the entire ts field to a Spark Timestamp without dropping any columns. I know how to do it with a select; something like:

import org.apache.spark.sql.types.TimestampType
import spark.implicits._ // for the $"..." column syntax

val new_df = df.select($"ts".cast(TimestampType))
However, new_df has only one column (as expected). I could join that column back onto the original dataframe, but that's probably not a good approach; a sketch of what that would look like is below.
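Roughly, that join workaround would look like this (a sketch, assuming key uniquely identifies rows):

// Cast ts in a separate dataframe, then join it back on key.
val casted = df.select($"key", $"ts".cast(TimestampType))
val joined = df.drop("ts").join(casted, Seq("key"))

Instead, I would like to do something like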
val new_df = df.map(udf(col("ts"))) // pseudocode
that would generate a new_df with the columns ts (correctly cast), key, and val.
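For reference, a minimal sketch of the kind of one-liner I am after, assuming withColumn with an existing column name replaces that column in place and keeps the rest:

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.TimestampType

// Replace ts in place; key and val are carried through unchanged.
val new_df = df.withColumn("ts", col("ts").cast(TimestampType))

new_df.printSchema() // ts should now be timestamp, with key and val unchanged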