You can get the job done by making a detour to an RDD and using 'getValuesMap'.
val dfIn = Seq(
  ("first",  2.0, 1.0, 2.1, 5.4),
  ("test",   1.5, 0.5, 0.9, 3.7),
  ("choose", 7.0, 2.9, 9.1, 2.5)
).toDF("name", "column1", "column2", "column3", "column4")
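Note that toDF on a Seq needs the SQL implicits in scope; assuming you are on the same sqlContext used further below (e.g. in the spark-shell), that would be:

import sqlContext.implicits._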
The simple solution is:

val dfOut = dfIn.rdd
  .map(r => (
    // keep the name, and collect the numeric columns as a Map(columnName -> value)
    r.getString(0),
    r.getValuesMap[Double](r.schema.fieldNames.filter(_ != "name"))
  ))
  // pick the key (column name) of the largest value in each row's map
  .map { case (n, m) => (n, m.maxBy(_._2)._1) }
  .toDF("name", "max_column")
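For the sample data above, dfOut.show() should print something like the following (row order is not guaranteed):

+------+----------+
|  name|max_column|
+------+----------+
| first|   column4|
|  test|   column4|
|choose|   column3|
+------+----------+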
But if you want to keep all the columns from the original dataframe (as in Scala/Spark dataframes: find the column name corresponding to the max), you have to play a bit with merging rows and extending the schema:
import org.apache.spark.sql.types.{StructField, StringType}
import org.apache.spark.sql.Row

val dfOut = sqlContext.createDataFrame(
  dfIn.rdd
    // pair each original Row with a Map(columnName -> value) of its numeric columns
    .map(r => (r, r.getValuesMap[Double](r.schema.fieldNames.drop(1))))
    // append the name of the max column to the original Row
    .map { case (r, m) => Row.merge(r, Row(m.maxBy(_._2)._1)) },
  // extend the original schema with the new string column
  dfIn.schema.add(StructField("max_column", StringType))
)
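Again for the sample data, the resulting dfOut.show() should look roughly like this:

+------+-------+-------+-------+-------+----------+
|  name|column1|column2|column3|column4|max_column|
+------+-------+-------+-------+-------+----------+
| first|    2.0|    1.0|    2.1|    5.4|   column4|
|  test|    1.5|    0.5|    0.9|    3.7|   column4|
|choose|    7.0|    2.9|    9.1|    2.5|   column3|
+------+-------+-------+-------+-------+----------+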