I have a DataFrame like:

val df = sc.parallelize(List((1, 2012, 3, 5), (2, 2012, 4, 7), (1, 2013, 1, 3), (2, 2013, 9, 5))).toDF("id", "year", "propA", "propB")

Using this code, inspired by Pivot Spark Dataframe:
import org.apache.spark.sql.functions._
import sqlContext.implicits._

val years = List("2012", "2013")
val numYears = years.length - 1

var query2 = "select id, "
for (i <- 0 until numYears) {
  query2 += "case when year = " + years(i) + " then propA else 0 end as propA" + years(i) + ", "
  query2 += "case when year = " + years(i) + " then propB else 0 end as propB" + years(i) + ", "
}
query2 += "case when year = " + years.last + " then propA else 0 end as propA" + years.last + ", "
query2 += "case when year = " + years.last + " then propB else 0 end as propB" + years.last + " from myTable"

df.registerTempTable("myTable")

val myDF1 = sqlContext.sql(query2)
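For reference, printing the generated query should give something like the following (the actual string is one line; breaks added by hand for readability):

```scala
println(query2)
// select id,
//   case when year = 2012 then propA else 0 end as propA2012,
//   case when year = 2012 then propB else 0 end as propB2012,
//   case when year = 2013 then propA else 0 end as propA2013,
//   case when year = 2013 then propB else 0 end as propB2013
// from myTable
```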
I managed to get:
+---+---------+---------+---------+---------+
| id|propA2012|propB2012|propA2013|propB2013|
+---+---------+---------+---------+---------+
|  1|        3|        5|        0|        0|
|  2|        4|        7|        0|        0|
|  1|        0|        0|        1|        3|
|  2|        0|        0|        9|        5|
+---+---------+---------+---------+---------+
I managed to reduce this to:

+---+---------+---------+---------+---------+
| id|propA2012|propB2012|propA2013|propB2013|
+---+---------+---------+---------+---------+
|  1|        3|        5|        1|        3|
|  2|        4|        7|        9|        5|
+---+---------+---------+---------+---------+

using:
val df2 = myDF1.groupBy("id").agg(
  "propA2012" -> "sum",
  "propA2013" -> "sum",
  "propB2012" -> "sum",
  "propB2013" -> "sum")
Is there a way to just iterate over all columns without specifying the column names?
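A sketch of the kind of iteration I have in mind, assuming Spark's `sum(columnName: String)` function and the varargs overload of `agg` (not tested):

```scala
import org.apache.spark.sql.functions.sum

// Build the aggregation list from the DataFrame's own column names
// instead of hard-coding them.
val sumCols = myDF1.columns
  .filter(_ != "id")        // keep every column except the grouping key
  .map(c => sum(c).as(c))   // sum each one, aliasing back to its original name

// agg takes one Column followed by varargs, hence the head/tail split
val df2 = myDF1.groupBy("id").agg(sumCols.head, sumCols.tail: _*)
```

The `.as(c)` alias keeps the result column names as `propA2012` etc. rather than the default `sum(propA2012)`.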