I have a DataFrame A that looks like this:
+---+---+---+---+----------+
|key| c1| c2| c3| date|
+---+---+---+---+----------+
| k1| -1| 0| -1|2015-04-28|
| k1| 1| -1| 1|2015-07-28|
| k1| 1| 1| 1|2015-10-28|
| k2| -1| 0| 1|2015-04-28|
| k2| -1| 1| -1|2015-07-28|
| k2| 1| -1| 0|2015-10-28|
+---+---+---+---+----------+
Here is the code that creates A:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# the value columns are strings; only 'date' is cast to a date type
data = [('k1', '-1', '0', '-1', '2015-04-28'),
        ('k1', '1', '-1', '1', '2015-07-28'),
        ('k1', '1', '1', '1', '2015-10-28'),
        ('k2', '-1', '0', '1', '2015-04-28'),
        ('k2', '-1', '1', '-1', '2015-07-28'),
        ('k2', '1', '-1', '0', '2015-10-28')]
A = spark.createDataFrame(data, ['key', 'c1', 'c2', 'c3', 'date'])
A = A.withColumn('date', A.date.cast('date'))
I want the max of date for each of the columns c1 to c5 (only c1-c3 appear in this example) where the value equals 1 or -1. In the result, a column such as c1_1 holds the latest date on which c1 == 1, and c1_-1 holds the latest date on which c1 == -1. The expected result B:
+---+----------+----------+----------+----------+----------+----------+
|key| c1_1| c2_1| c3_1| c1_-1| c2_-1| c3_-1|
+---+----------+----------+----------+----------+----------+----------+
| k1|2015-10-28|2015-10-28|2015-10-28|2015-04-28|2015-07-28|2015-04-28|
| k2|2015-10-28|2015-07-28|2015-04-28|2015-07-28|2015-10-28|2015-07-28|
+---+----------+----------+----------+----------+----------+----------+
My previous solution was to compute each column separately with a pivot operation and then join the newly created DataFrames (see the sketch below). But in my real data there are too many columns, and the long chain of joins becomes a performance problem. So I am looking for another solution that avoids joining DataFrames.
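For reference, my previous approach looked roughly like this (a minimal sketch; pivot_one and the hard-coded column list are just for illustration):

from functools import reduce
from pyspark.sql import functions as F

# Pivot one value column: for each key, the max date per value ('1' or '-1').
def pivot_one(col):
    return (A.groupBy('key')
             .pivot(col, ['1', '-1'])   # restrict the pivot to the two values of interest
             .agg(F.max('date'))
             .withColumnRenamed('1', col + '_1')
             .withColumnRenamed('-1', col + '_-1'))

# One small DataFrame per column, then join them all back on 'key'.
parts = [pivot_one(c) for c in ['c1', 'c2', 'c3']]
B = reduce(lambda left, right: left.join(right, 'key'), parts)
B.show()

On the sample data this reproduces B, but with many columns the repeated joins are what kill performance.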