
My dataframe looks like

+------+-------+-----+----+-----+
| Title| Status|Suite|ID  |Time |
+------+-------+-----+----+-----+
|KIM   | Passed|ABC  |123 |20   |
|KJT   | Passed|ABC  |123 |10   |
|ZXD   | Passed|CDF  |123 |15   |
|XCV   | Passed|GHY  |113 |36   |
|KJM   | Passed|RTH  |456 |45   |
|KIM   | Passed|ABC  |115 |47   |
|JY    | Passed|JHJK |8963|74   |
|KJH   | Passed|SNMP |256 |47   |
|KJH   | Passed|ABC  |123 |78   |
|LOK   | Passed|GHY  |456 |96   |
|LIM   | Passed|RTH  |113 |78   |
|MKN   | Passed|ABC  |115 |74   |
|KJM   | Passed|GHY  |8963|74   |
+------+-------+-----+----+-----+

which can be created using

df = sqlCtx.createDataFrame(
    [
        ('KIM', 'Passed', 'ABC', '123', 20),
        ('KJT', 'Passed', 'ABC', '123', 10),
        ('ZXD', 'Passed', 'CDF', '123', 15),
        ('XCV', 'Passed', 'GHY', '113', 36),
        ('KJM', 'Passed', 'RTH', '456', 45),
        ('KIM', 'Passed', 'ABC', '115', 47),
        ('JY', 'Passed', 'JHJK', '8963', 74),
        ('KJH', 'Passed', 'SNMP', '256', 47),
        ('KJH', 'Passed', 'ABC', '123', 78),
        ('LOK', 'Passed', 'GHY', '456', 96),
        ('LIM', 'Passed', 'RTH', '113', 78),
        ('MKN', 'Passed', 'ABC', '115', 74),
        ('KJM', 'Passed', 'GHY', '8963', 74),
    ],
    ('Title', 'Status', 'Suite', 'ID', 'Time')
)

I need to group by ID and aggregate Time, and the result should keep Title, Status & Suite along with ID.

My output should look like

+------+-------+-----+----+-----+
| Title| Status|Suite|  ID|Time |
+------+-------+-----+----+-----+
|KIM   | Passed|ABC  |123 |30.75|
|XCV   | Passed|GHY  |113 |57   |
|KJM   | Passed|RTH  |456 |70.5 |
|KIM   | Passed|ABC  |115 |60.5 |
|JY    | Passed|JHJK |8963|74   |
|KJH   | Passed|SNMP |256 |47   |
+------+-------+-----+----+-----+

I have tried the code below, but the result only contains the ID and Time columns:

from pyspark.sql.functions import mean

df.groupBy("ID").agg(mean("Time").alias("Time"))
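
For reference, that line keeps only the grouping key and the aggregate, which is why Title, Status and Suite disappear (output reconstructed from the sample data above; row order may differ):

df.groupBy("ID").agg(mean("Time").alias("Time")).show()
# +----+-----+
# |  ID| Time|
# +----+-----+
# | 113| 57.0|
# | 256| 47.0|
# | 456| 70.5|
# | 115| 60.5|
# |8963| 74.0|
# | 123|30.75|
# +----+-----+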
  • I'm performing group by on ID & mean on Time. I need the other columns along with ID & Time too. The values of Status & Title may vary; I just picked the first one per ID. – Krishna Jan 31 '18 at 13:47

1 Answer


With the modified expected output, you can get arbitrary values with first:

from pyspark.sql.functions import avg, first

df.groupBy("id").agg(  # column resolution is case-insensitive by default, so "id" matches "ID"
    first("Title"), first("Status"), first("Suite"), avg("Time")
).toDF("id", "Title", "Status", "Suite", "Time").show()
# +----+-----+------+-----+-----+
# |  id|Title|Status|Suite| Time|
# +----+-----+------+-----+-----+
# | 113|  XCV|Passed|  GHY| 57.0|
# | 256|  KJH|Passed| SNMP| 47.0|
# | 456|  KJM|Passed|  RTH| 70.5|
# | 115|  KIM|Passed|  ABC| 60.5|
# |8963|   JY|Passed| JHJK| 74.0|
# | 123|  KIM|Passed|  ABC|30.75|
# +----+-----+------+-----+-----+
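
Note that first in an aggregation is non-deterministic: without an ordering guarantee it returns an arbitrary value from each group, so the Title, Status and Suite you get may vary between runs.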

Original answer

It looks like you want to drop_duplicates:

df.drop_duplicates(subset=["ID"]).show()
# +-----+------+-----+----+
# |Title|Status|Suite|  ID|
# +-----+------+-----+----+
# |  XCV|Passed|  GHY| 113|
# |  KJH|Passed| SNMP| 256|
# |  KJM|Passed|  RTH| 456|
# |  KIM|Passed|  ABC| 115|
# |   JY|Passed| JHJK|8963|
# |  KIM|Passed|  ABC| 123|
# +-----+------+-----+----+

If you want to use a specific row, please refer to Find maximum row per group in Spark DataFrame
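
As a rough sketch of that approach (assuming you want the row with the largest Time per ID, with the group mean still attached):

from pyspark.sql.functions import avg, col, row_number
from pyspark.sql.window import Window

# rank rows within each ID by descending Time
w = Window.partitionBy("ID").orderBy(col("Time").desc())

(df.withColumn("rn", row_number().over(w))
   .where(col("rn") == 1)                 # keep the top row per group
   .drop("rn", "Time")                    # replace raw Time with the group mean
   .join(df.groupBy("ID").agg(avg("Time").alias("Time")), on="ID")
   .show())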

Alper t. Turker