In Scala/Spark, given a DataFrame:
val dfIn = sqlContext.createDataFrame(Seq(
  ("r0", 0, 2, 3),
  ("r1", 1, 0, 0),
  ("r2", 0, 2, 2))).toDF("id", "c0", "c1", "c2")
I would like to compute a new column maxCol holding, for each row, the name of the column with the maximum value. With this example, the output should be:
+---+---+---+---+------+
| id| c0| c1| c2|maxCol|
+---+---+---+---+------+
| r0| 0| 2| 3| c2|
| r1| 1| 0| 0| c0|
| r2| 0| 2| 2| c1|
+---+---+---+---+------+
In practice, the DataFrame has more than 60 columns, so a generic solution is required.
The equivalent in Python pandas (yes, I know, I should compare with PySpark...) would be:
dfOut = pd.concat([dfIn, dfIn.drop('id', axis=1).idxmax(axis=1).rename('maxCol')], axis=1)
(the non-numeric id column is dropped first, since idxmax cannot compare strings with numbers).
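For reference, one direction I have sketched (a minimal, untested sketch assuming only the built-in greatest, when, coalesce, col, and lit functions from org.apache.spark.sql.functions; valueCols, rowMax, and maxColExpr are names I made up):

import org.apache.spark.sql.functions._

// every column except the row id
val valueCols = dfIn.columns.filter(_ != "id")
// row-wise maximum across the value columns
val rowMax = greatest(valueCols.map(col): _*)
// for each column, emit its name when its value equals the row max (null otherwise);
// coalesce picks the first match, so ties resolve to the leftmost column (c1 for r2)
val maxColExpr = coalesce(valueCols.map(c => when(col(c) === rowMax, lit(c))): _*)
val dfOut = dfIn.withColumn("maxCol", maxColExpr)

Is chaining when/coalesce like this the idiomatic way, or is there a better-performing alternative for 60+ columns?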