
I have a DataFrame in PySpark with a date column called "report_date".

I want to create a new column called "report_date_10" that adds 10 days to the original report_date column.

Below is the code I tried:

df_dc["report_date_10"] = df_dc["report_date"] + timedelta(days=10)

This is the error I got:

AttributeError: 'datetime.timedelta' object has no attribute '_get_object_id'

Help! thx

M Ismail
PineNuts0
  • How to do this is essentially the example provided in [how to create good reproducible apache spark dataframe examples](https://stackoverflow.com/questions/48427185/how-to-make-good-reproducible-apache-spark-dataframe-examples). – pault Jun 05 '18 at 15:39

1 Answer


It seems you are using the pandas syntax for adding a column. In Spark, you need withColumn to add a new column, and for adding days to a date there is the built-in date_add function:

import pyspark.sql.functions as F

# Reproducible example frame with a single date column
df_dc = spark.createDataFrame([['2018-05-30']], ['report_date'])

# date_add shifts the date forward by the given number of days
df_dc.withColumn('report_date_10', F.date_add(df_dc['report_date'], 10)).show()
+-----------+--------------+
|report_date|report_date_10|
+-----------+--------------+
| 2018-05-30|    2018-06-09|
+-----------+--------------+
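
As a side note (a sketch for contrast, not part of the original answer): the line from the question is the pandas idiom, where Series arithmetic with datetime.timedelta is supported, which is why the same expression raises an AttributeError on a Spark Column:

import pandas as pd
from datetime import timedelta

# The pandas equivalent of the attempted line; this works because pandas
# Series support arithmetic with datetime.timedelta:
pdf = pd.DataFrame({'report_date': pd.to_datetime(['2018-05-30'])})
pdf['report_date_10'] = pdf['report_date'] + timedelta(days=10)
print(pdf)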
Psidom
    How do I add a column value to current_date instead of a fixed value? For example: another column holding an integer value that needs to be added to the current date, so that each row gets a different date. – Innovator-programmer Aug 08 '22 at 13:25
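
Addressing the follow-up in the comment above, here is a minimal sketch, assuming a hypothetical integer column named n_days. Using expr lets the SQL date_add take a column for the day count, which also works on Spark versions where the Python F.date_add only accepts a literal int:

import pyspark.sql.functions as F

# Hypothetical frame: 'n_days' holds a per-row day offset (assumed name)
df = spark.createDataFrame([(1,), (5,), (30,)], ['n_days'])

# In SQL, date_add accepts a column for the number of days, so each row
# gets current_date() shifted by its own offset:
df.withColumn('future_date', F.expr('date_add(current_date(), n_days)')).show()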