60

I have a dataframe with the following column types:

date, string, string

I want to select rows with dates before a certain date. I have tried the following, with no luck:

 // format is a SimpleDateFormat for the yyyy-MM-dd pattern
 val format = new java.text.SimpleDateFormat("yyyy-MM-dd")
 data.filter(data("date") < new java.sql.Date(format.parse("2015-03-14").getTime))

I'm getting an error stating the following

org.apache.spark.sql.AnalysisException: resolved attribute(s) date#75 missing from date#72,uid#73,iid#74 in operator !Filter (date#75 < 16508);

As far as I can tell, the query is incorrect. Can anyone show me how the query should be formatted?

I checked that all entries in the dataframe have values - they do.

oluies
Steve

7 Answers

84

The following solutions are applicable since Spark 1.5:

For less than:

// filter data where the date is less than 2015-03-14
data.filter(data("date").lt(lit("2015-03-14")))

For greater than:

// filter data where the date is greater than 2015-03-14
data.filter(data("date").gt(lit("2015-03-14"))) 

For equality, you can use either equalTo or ===:

data.filter(data("date") === lit("2015-03-14"))

If your DataFrame date column is of type StringType, you can convert it using the to_date function:

// filter data where the date is greater than 2015-03-14
data.filter(to_date(data("date")).gt(lit("2015-03-14"))) 

You can also filter by year using the year function:

// filter data where the year is greater than or equal to 2016
data.filter(year($"date").geq(lit(2016)))
eliasah
  • Is there any option like `between` for a date column in Spark? Also, I have dates in 'dd/MM/yyyy' format. – Sivailango Nov 26 '15 at 12:41
  • @Sivailango Of course, it's a filter on between; check my answer [here](http://stackoverflow.com/questions/33938806/how-to-filter-date-range-using-data-bricks-in-apache-spark/33939650#33939650) – eliasah Nov 26 '15 at 13:14
  • `df.select(df("ID"), date_format(df("Week_Ending_Date"), "yyyy-MM-dd")) .filter(date_format(df("Week_Ending_Date"), "yyyy-MM-dd").between("2015-07-05", "2015-09-02"))` Is it right? Also, I am looking at your other answer here: http://stackoverflow.com/questions/33938806/how-to-filter-date-range-using-data-bricks-in-apache-spark – Sivailango Nov 26 '15 at 13:22
  • Is there any way to tell gt or lt to be like now - 5 months? Or do I just have to calculate that date and give it to the function as a string? – Raul H Oct 04 '16 at 20:41
  • If you want to use the current date with a date diff, comparing dates will be different; see the sketch after these comments. – eliasah Oct 04 '16 at 20:48
  • Can we find null values in a timestamp column? – Darshan Jul 30 '17 at 10:27
  • @Darshan If the column is null, you can examine it with the isNull predicate, whatever the time is. – eliasah Jul 30 '17 at 13:37
  • `lit` is in `pyspark.sql.functions` if you can't find it. – ijoseph Nov 14 '19 at 22:54
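
Following up on the comments above about between and relative dates, a hedged sketch (assuming Spark 1.5+ and a date-typed date column in the same data DataFrame):

import org.apache.spark.sql.functions.{add_months, current_date}

// keep rows whose date falls inside a fixed range
data.filter(data("date").between("2015-07-05", "2015-09-02"))

// keep rows whose date is after "now minus 5 months"
data.filter(data("date").gt(add_months(current_date(), -5)))
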
24

Don't use this as suggested in other answers

.filter(f.col("dateColumn") < f.lit('2017-11-01'))

But use this instead

.filter(f.col("dateColumn") < f.unix_timestamp(f.lit('2017-11-01 00:00:00')).cast('timestamp'))

This will use TimestampType instead of StringType, which will be more performant in some cases. For example, Parquet predicate pushdown will only work with the former (the timestamp comparison).

Edit: Both snippets assume this import:

from pyspark.sql import functions as f
Ruurtjan Pul
19

I find the most readable way to express this is using a SQL expression:

df.filter("my_date < date'2015-01-01'")

We can verify that this works correctly by looking at the physical plan from .explain().
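
For instance, calling explain on the filtered DataFrame (a sketch, assuming the same df and column as above):

df.filter("my_date < date'2015-01-01'").explain()

The resulting plan includes the date filter: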

+- *(1) Filter (isnotnull(my_date#22) && (my_date#22 < 16436))
RobinL
  • This didn't work for me, but .filter("effectivedate > to_date('1900-02-02')") did work (in my situation). Most likely I need some library loaded for the given solution to work. But all in all this was the best answer. – Harlan Nelson Apr 07 '21 at 13:53
  • Strange - it should work in vanilla pyspark. – RobinL Apr 07 '21 at 15:52
9

In PySpark (Python), one option is to have the column in unix_timestamp format. We can convert a string to a unix_timestamp and specify the format as shown below. Note that we need to import the unix_timestamp, to_date, and lit functions.

from pyspark.sql.functions import unix_timestamp, to_date, lit

df_cast = df.withColumn("tx_date", to_date(unix_timestamp(df["date"], "MM/dd/yyyy").cast("timestamp")))

Now we can apply the filters:

df_cast.filter(df_cast["tx_date"] >= lit('2017-01-01')) \
       .filter(df_cast["tx_date"] <= lit('2017-01-31')).show()
Prathap Kudupu
4
# keep rows where columnname is on or after 2020-01-13
df = df.filter(df["columnname"] >= '2020-01-13')
keikai
1

We can also use a SQL-like expression inside filter. Note that here I am showing two conditions and a date range for future reference:

ordersDf.filter("order_status = 'PENDING_PAYMENT' AND order_date BETWEEN '2013-07-01' AND '2013-07-31' ")
Abhishek Sengupta
0

IMHO it should be like this:

import java.util.Calendar

// current time as java.sql.Timestamp and java.sql.Date
val jDate = Calendar.getInstance().getTime()
val sqlDateTime = new java.sql.Timestamp(jDate.getTime())
val sqlDate = new java.sql.Date(jDate.getTime())

// filter rows whose date is after the current date / timestamp
data.filter(data("date").gt(sqlDate))
data.filter(data("date").gt(sqlDateTime))
iamcj