Using Python's Pandas, one can do bulk operations on multiple columns in one pass like this:
# assuming we have a DataFrame with, among others, the following columns
cols = ['col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8']
df[cols] = df[cols].div(df['another_column'], axis=0)  # row-wise division of each column
Is there similar functionality in Spark with Scala?
Currently I end up doing:
val df2 = df.withColumn("col1", $"col1" / $"another_column")
.withColumn("col2", $"col2" / $"another_column")
.withColumn("col3", $"col3" / $"another_column")
.withColumn("col4", $"col4" / $"another_column")
.withColumn("col5", $"col5" / $"another_column")
.withColumn("col6", $"col6" / $"another_column")
.withColumn("col7", $"col7" / $"another_column")
.withColumn("col8", $"col8" / $"another_column")