I'm trying to convert two string columns to timestamp columns. The strings look like this:
| c1 | c2 |
|---|---|
| 2019-12-10 10:07:54.000 | 2019-12-13 10:07:54.000 |
| 2020-06-08 15:14:49.000 | 2020-06-18 10:07:54.000 |
This is what I tried:

```python
from pyspark.sql.functions import col, udf, to_timestamp

joined_df.select(to_timestamp(joined_df.c1, '%Y-%m-%d %H:%M:%S.%SSSS').alias('dt')).collect()
joined_df.select(to_timestamp(joined_df.c2, '%Y-%m-%d %H:%M:%S.%SSSS').alias('dt')).collect()
```
Once the columns are converted, I also want a new column with the date difference in days, c2 - c1. In plain Python (pandas) I do it like this:
```python
# fill value must match the strptime format (year 0000 is not a valid datetime)
df['c1'] = df['c1'].fillna('0001-01-01 00:00:00.000000').apply(lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S.%f'))
df['c2'] = df['c2'].fillna('0001-01-01 00:00:00.000000').apply(lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S.%f'))
df['days'] = (df['c2'] - df['c1']).apply(lambda x: x.days)
```
How can I translate this to PySpark?