
Split each hour into 15-minute frames, and for every frame produce a row with the frame number and the respective sum of quantity.

Here I have used the window function from How to group by time interval in Spark SQL. Can someone help with how to add the hour_part column, or suggest an approach other than the window function?

Input:

id,datetime,quantity
1234,2018-01-01 12:00:21,10
1234,2018-01-01 12:01:02,20
1234,2018-01-01 12:10:23,10
1234,2018-01-01 12:20:19,25
1234,2018-01-01 12:25:20,25
1234,2018-01-01 12:28:00,25
1234,2018-01-01 12:47:25,10
1234,2018-01-01 12:58:00,40

Output:

id,date,hour_part,sum
1234,2018-01-01,1,40
1234,2018-01-01,2,75
1234,2018-01-01,3,0
1234,2018-01-01,4,50
  • Regarding `1234,2018-01-01,3,0`: I am investigating whether this row for the empty interval can be produced as part of the window-function output. – sathya Aug 12 '20 at 17:50
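
For the "approach other than the window function" part, one possible sketch: hour_part is just the 15-minute slot inside the hour, so it can be derived from the minute component of the timestamp. This assumes the sample rows above are loaded into a DataFrame named df with the columns id, datetime and quantity; the name df and the floor(minute/15) derivation are illustrative, not taken from the linked question.

import org.apache.spark.sql.functions._

// Assumes `df` holds the sample input above (datetime may stay a string;
// Spark casts it to a timestamp implicitly for minute()/to_date()).
val bucketed = df
  .withColumn("date", to_date($"datetime"))
  // minutes 0-14 -> 1, 15-29 -> 2, 30-44 -> 3, 45-59 -> 4
  .withColumn("hour_part", (floor(minute($"datetime") / 15) + 1).cast("int"))

bucketed
  .groupBy($"id", $"date", $"hour_part")
  .agg(sum($"quantity").as("sum"))
  .orderBy($"id", $"date", $"hour_part")
  .show()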

1 Answer


The code below might help with adding the hour_part column, but AFAIK a window function is the efficient way to solve this kind of running-aggregation problem.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val df = Seq(
  ("1234", "2018-01-01 12:00:21", 10),
  ("1234", "2018-01-01 12:01:02", 20),
  ("1234", "2018-01-01 12:10:23", 10),
  ("1234", "2018-01-01 12:20:19", 25),
  ("1234", "2018-01-01 12:25:20", 25),
  ("1234", "2018-01-01 12:28:00", 25),
  ("1234", "2018-01-01 12:47:25", 10),
  ("1234", "2018-01-01 12:58:00", 40)
).toDF("id", "datetime", "quantity")

// Constant partition/order key: every row lands in a single partition,
// which is fine for a small example but does not scale.
val windowSpec = Window.partitionBy(lit("A")).orderBy(lit("A"))

df.groupBy($"id", window($"datetime", "15 minutes")) // bucket rows into 15-minute windows
  .sum("quantity")
  .orderBy("window")
  .withColumn("hour_part", row_number.over(windowSpec)) // sequential bucket number
  .withColumn("date", to_date($"window.end"))
  .withColumn("sum", $"sum(quantity)")
  .drop($"window")
  .drop($"sum(quantity)")
  .show()

/*
+----+---------+----------+---+
|  id|hour_part|      date|sum|
+----+---------+----------+---+
|1234|        1|2018-01-01| 40|
|1234|        2|2018-01-01| 75|
|1234|        3|2018-01-01| 50|
+----+---------+----------+---+
*/
sathya
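
The expected output also contains the empty slot 1234,2018-01-01,3,0, which the aggregation above drops because no input row falls into that 15-minute window. One rough way to surface such empty slots, assuming the same df and deriving hour_part from the minute component as in the earlier sketch (the intermediate names slots and agg are illustrative):

import org.apache.spark.sql.functions._

// All four 15-minute slots for every (id, date) seen in the data.
val slots = df
  .select($"id", to_date($"datetime").as("date"))
  .distinct()
  .crossJoin(spark.range(1, 5).select($"id".cast("int").as("hour_part")))

// Per-slot sums, with the slot number taken from the minute of the timestamp.
// Note: this numbers slots within each hour; data spanning several hours
// would also need an hour column among the grouping keys.
val agg = df
  .withColumn("date", to_date($"datetime"))
  .withColumn("hour_part", (floor(minute($"datetime") / 15) + 1).cast("int"))
  .groupBy($"id", $"date", $"hour_part")
  .agg(sum($"quantity").as("sum"))

// Left join keeps the empty slots; fill their missing sum with 0.
slots.join(agg, Seq("id", "date", "hour_part"), "left")
  .na.fill(0, Seq("sum"))
  .orderBy($"id", $"date", $"hour_part")
  .show()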