
I have a dataframe like this (only two columns are shown here, but the original dataframe has many more):

data = [("ID1", 3, 5), ("ID2", 4, 12), ("ID3", 8, 3)]
df = spark.createDataFrame(data, ["ID", "colA", "colB"])
df.show()

+---+----+----+
| ID|colA|colB|
+---+----+----+
|ID1|   3|   5|
|ID2|   4|  12|
|ID3|   8|   3|
+---+----+----+

For each row, I want to extract the name of the column that holds the max value. Hence the expected output is:

+---+----+----+-------+
| ID|colA|colB|Max_col|
+---+----+----+-------+
|ID1|   3|   5|   colB|
|ID2|   4|  12|   colB|
|ID3|   8|   3|   colA|
+---+----+----+-------+

In case of a tie, where colA and colB have the same value, choose the first column.

How can I achieve this in PySpark?

Hardik Gupta
  • Possible duplicate of [Scala/Spark dataframes: find the column name corresponding to the max](https://stackoverflow.com/questions/42485108/scala-spark-dataframes-find-the-column-name-corresponding-to-the-max) – user10938362 May 31 '19 at 07:55
  • Possible duplicate of [how to get the name of column with maximum value in pyspark dataframe](https://stackoverflow.com/questions/46819405/how-to-get-the-name-of-column-with-maximum-value-in-pyspark-dataframe) – Suresh May 31 '19 at 16:59

5 Answers


You can use a UDF for row-wise computation and pass multiple columns to it with struct. Hope this helps.

from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType, StringType, StructType, StructField
from operator import itemgetter

data = [("ID1", 3, 5, 78), ("ID2", 4, 12, 45), ("ID3", 70, 3, 67)]
df = spark.createDataFrame(data, ["ID", "colA", "colB", "colC"])
df.show()

+---+----+----+----+
| ID|colA|colB|colC|
+---+----+----+----+
|ID1|   3|   5|  78|
|ID2|   4|  12|  45|
|ID3|  70|   3|  67|
+---+----+----+----+
cols = df.columns

# to get max of values in a row
maxcol = F.udf(lambda row: max(row), IntegerType())
maxDF = df.withColumn("maxval", maxcol(F.struct([df[x] for x in df.columns[1:]])))
maxDF.show()

+---+----+----+----+------+
| ID|colA|colB|colC|maxval|
+---+----+----+----+------+
|ID1|   3|   5|  78|    78|
|ID2|   4|  12|  45|    45|
|ID3|  70|   3|  67|    70|
+---+----+----+----+------+

# to get the max value & the corresponding column name

schema = StructType([
    StructField('maxval', IntegerType()),
    StructField('maxval_colname', StringType())
])

# max over (value, column name) structs; itemgetter(0) compares on the value
maxcol = F.udf(lambda row: max(row, key=itemgetter(0)), schema)
maxDF = df.withColumn('maxfield', maxcol(F.struct([F.struct(df[x], F.lit(x)) for x in df.columns[1:]]))) \
    .select(df.columns + ['maxfield.maxval', 'maxfield.maxval_colname'])
maxDF.show()

+---+----+----+----+------+--------------+
| ID|colA|colB|colC|maxval|maxval_colname|
+---+----+----+----+------+--------------+
|ID1|   3|   5|  78|    78|          colC|
|ID2|   4|  12|  45|    45|          colC|
|ID3|  70|   3|  67|    70|          colA|
+---+----+----+----+------+--------------+
Suresh

There are multiple options to achieve this. I am providing an example for one and a hint for the rest:

from pyspark.sql import functions as F
from pyspark.sql.window import Window as W
from pyspark.sql import types as T

data = [("ID1", 3, 5), ("ID2", 4, 12), ("ID3", 8, 3)]
df = spark.createDataFrame(data, ["ID", "colA", "colB"])
df.show()

+---+----+----+
| ID|colA|colB|
+---+----+----+
|ID1|   3|   5|
|ID2|   4|  12|
|ID3|   8|   3|
+---+----+----+

# Below, F.array builds an array of (column name, value) pairs like [['colA', 3], ['colB', 5]],
# then F.explode breaks that array into rows so each pair lands in its own row

df = df.withColumn(
    "max_val",
    F.explode(
        F.array([
            F.array([F.lit(cl), F.col(cl)]) for cl in df.columns[1:]
        ])
    )
)
df.show()
+---+----+----+----------+
| ID|colA|colB|   max_val|
+---+----+----+----------+
|ID1|   3|   5| [colA, 3]|
|ID1|   3|   5| [colB, 5]|
|ID2|   4|  12| [colA, 4]|
|ID2|   4|  12|[colB, 12]|
|ID3|   8|   3| [colA, 8]|
|ID3|   8|   3| [colB, 3]|
+---+----+----+----------+

# Then split each pair so the column name and the value end up in separate columns
df = df.select(
    "ID", 
    "colA", 
    "colB", 
    F.col("max_val").getItem(0).alias("col_name"),
    F.col("max_val").getItem(1).cast(T.IntegerType()).alias("col_value"),
)
df.show()
+---+----+----+--------+---------+
| ID|colA|colB|col_name|col_value|
+---+----+----+--------+---------+
|ID1|   3|   5|    colA|        3|
|ID1|   3|   5|    colB|        5|
|ID2|   4|  12|    colA|        4|
|ID2|   4|  12|    colB|       12|
|ID3|   8|   3|    colA|        8|
|ID3|   8|   3|    colB|        3|
+---+----+----+--------+---------+

# Rank the values within each ID in descending order
df = df.withColumn(
    "rank",
    F.rank().over(W.partitionBy("ID").orderBy(F.col("col_value").desc()))
)
df.show()
+---+----+----+--------+---------+----+
| ID|colA|colB|col_name|col_value|rank|
+---+----+----+--------+---------+----+
|ID2|   4|  12|    colB|       12|   1|
|ID2|   4|  12|    colA|        4|   2|
|ID3|   8|   3|    colA|        8|   1|
|ID3|   8|   3|    colB|        3|   2|
|ID1|   3|   5|    colB|        5|   1|
|ID1|   3|   5|    colA|        3|   2|
+---+----+----+--------+---------+----+

# Finally, filter rank = 1: the max value gets rank 1 because we ordered descending
df.where("rank=1").show()
+---+----+----+--------+---------+----+
| ID|colA|colB|col_name|col_value|rank|
+---+----+----+--------+---------+----+
|ID2|   4|  12|    colB|       12|   1|
|ID3|   8|   3|    colA|        8|   1|
|ID1|   3|   5|    colB|        5|   1|
+---+----+----+--------+---------+----+

Other options are:

  • Use a UDF on your base df that returns the column name holding the max value.
  • In the same example, after building the col_name and col_value columns, instead of ranking, group by ID to take max(col_value), then join back with the previous df (see the sketch below).
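
A minimal sketch of that second option, assuming df is the dataframe right after the col_name/col_value select step above (the max_per_id name is mine):

# max value per ID
max_per_id = df.groupBy("ID").agg(F.max("col_value").alias("col_value"))

# join back on (ID, col_value) to recover the column name that produced it
df.join(max_per_id, on=["ID", "col_value"], how="inner").show()

Note that, unlike the rank approach, a tie keeps a row for each tied column, so you would still need to deduplicate to honour the "choose the first column" rule.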
Rakesh Kumar

You can use the RDD API to add the new column:

from pyspark.sql import Row

df.rdd.map(lambda r: r.asDict())\
      .map(lambda r: Row(Max_col=max([i for i in r.items() if i[0] != 'ID'],
                                     key=lambda kv: kv[1])[0], **r))\
      .toDF().show()

Resulting in:

+---+-------+----+----+
| ID|Max_col|colA|colB|
+---+-------+----+----+
|ID1|   colB|   3|   5|
|ID2|   colB|   4|  12|
|ID3|   colA|   8|   3|
+---+-------+----+----+
ernest_k

Extending what Suresh has done... returning the appropriate column name:

from pyspark.sql import functions as f
from pyspark.sql.types import StringType

data = [("ID1", 3, 5, 78), ("ID2", 4, 12, 45), ("ID3", 68, 3, 67)]
df = spark.createDataFrame(data, ["ID", "colA", "colB", "colC"])
df.show()

cols = df.columns

# the struct row holds only the value columns, so shift the index by 1 to skip "ID"
maxcol = f.udf(lambda row: cols[row.index(max(row)) + 1], StringType())

maxDF = df.withColumn("Max_col", maxcol(f.struct([df[x] for x in df.columns[1:]])))
maxDF.show(truncate=False)

+---+----+----+----+-------+
|ID |colA|colB|colC|Max_col|
+---+----+----+----+-------+
|ID1|3   |5   |78  |colC   |
|ID2|4   |12  |45  |colC   |
|ID3|68  |3   |67  |colA   |
+---+----+----+----+-------+
Padmaraj Bhat

Try the following:

from pyspark.sql import functions as F

data = [("ID1", 3, 5), ("ID2", 4, 12), ("ID3", 8, 3)]
df = spark.createDataFrame(data, ["ID", "colA", "colB"])
# >= so that a tie picks the first column (colA), as the question requires
df.withColumn('max_col',
              F.when(F.col('colA') >= F.col('colB'), 'colA')
               .otherwise('colB')).show()

Yields:

+---+----+----+-------+
| ID|colA|colB|max_col|
+---+----+----+-------+
|ID1|   3|   5|   colB|
|ID2|   4|  12|   colB|
|ID3|   8|   3|   colA|
+---+----+----+-------+
Elior Malul
  • Hi Elior, the solution will work if I have just two columns; however, I have many columns. – Hardik Gupta May 31 '19 at 06:39
  • Hi Hardik, sorry I missed that. – Elior Malul May 31 '19 at 06:52
  • Hi Hardik, sorry I missed that. The schema of my proposed solution is this (sorry I couldn't code it; you need version 2.4 for this, which I don't have): 1. Add a column like so: `df.withColumn('arr', F.array('col1', 'col2', ... , 'coln'))` 2. Add a column `max_val`: `withColumn('max_val', F.array_max('arr'))` 3. Finally, using the `map` function (from the RDD), choose the column whose value equals the value in column `max_val`. – Elior Malul May 31 '19 at 07:02
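
For reference, a minimal sketch of that outline, assuming Spark 2.4+ for F.array_max; the coalesce/when step stands in for the RDD map and resolves ties in favour of the first column:

from pyspark.sql import functions as F

value_cols = df.columns[1:]

# 1. collect the value columns into an array and take its max (Spark 2.4+)
df2 = df.withColumn('arr', F.array(*[F.col(c) for c in value_cols])) \
        .withColumn('max_val', F.array_max('arr'))

# 2. pick the first column whose value equals max_val; coalesce returns
#    the first non-null result, so a tie goes to the first column
max_col = F.coalesce(*[F.when(F.col(c) == F.col('max_val'), F.lit(c)) for c in value_cols])
df2.withColumn('Max_col', max_col).drop('arr', 'max_val').show()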