What I'd like to know is whether the following is permissible in PySpark. Assume the following df:

+------+----+-----+-------+
| model|year|price|mileage|
+------+----+-----+-------+
|Galaxy|2017|27841|  17529|
|Galaxy|2017|29395|  11892|
|Novato|2018|35644|  22876|
|Novato|2018| 8765|  54817|
+------+----+-----+-------+


df.groupBy('model', 'year')\
  .agg({'price':'sum'})\
  .agg({'mileage':'sum'})\
  .withColumnRenamed('sum(price)', 'total_prices')\
  .withColumnRenamed('sum(mileage)', 'total_miles')

Hopefully resulting in

+------+----+-----+-------+------------+-----------+
| model|year|price|mileage|total_prices|total_miles|
+------+----+-----+-------+------------+-----------+
|Galaxy|2017|27841|  17529|       57236|      29421|
|Galaxy|2017|29395|  11892|       57236|      29421|
|Novato|2018|35644|  22876|       44409|      77693|
|Novato|2018| 8765|  54817|       44409|      77693|
+------+----+-----+-------+------------+-----------+
Thom Rogers
  • Check this too: https://stackoverflow.com/questions/34409875/how-to-get-other-columns-when-using-spark-dataframe-groupby – Praneeth Jun 22 '19 at 19:39

2 Answers

Using a pandas UDF (GROUPED_MAP), you can compute any number of aggregations:

import pyspark.sql.functions as F
from pyspark.sql.types import IntegerType,StructType,StructField,StringType
import pandas as pd

agg_schema = StructType(
    [StructField("model", StringType(), True),
     StructField("year", IntegerType(), True),
     StructField("price", IntegerType(), True),
     StructField("mileage", IntegerType(), True),
     StructField("total_prices", IntegerType(), True),
     StructField("total_miles", IntegerType(), True)
     ]
)

@F.pandas_udf(agg_schema, F.PandasUDFType.GROUPED_MAP)
def agg(pdf):
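    # Spark passes each (model, year) group to this function as a pandas
    # DataFrame; the returned DataFrame must match agg_schema.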
    total_prices = pdf['price'].sum()
    total_miles = pdf['mileage'].sum()
    pdf['total_prices'] = total_prices
    pdf['total_miles'] = total_miles
    return pdf

df = spark.createDataFrame(
    [('Galaxy', 2017, 27841, 17529),
     ('Galaxy', 2017, 29395, 11892),
     ('Novato', 2018, 35644, 22876),
     ('Novato', 2018, 8765,  54817)],
    ['model','year','price','mileage']
)
df.groupBy('model','year').apply(agg).show()

which results in

+------+----+-----+-------+------------+-----------+
| model|year|price|mileage|total_prices|total_miles|
+------+----+-----+-------+------------+-----------+
|Galaxy|2017|27841|  17529|       57236|      29421|
|Galaxy|2017|29395|  11892|       57236|      29421|
|Novato|2018|35644|  22876|       44409|      77693|
|Novato|2018| 8765|  54817|       44409|      77693|
+------+----+-----+-------+------------+-----------+
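Note: on Spark 3.x, the PandasUDFType.GROUPED_MAP style above is deprecated in favour of applyInPandas, which takes a plain function. A minimal sketch of the same aggregation, assuming Spark 3.0+ and reusing df and agg_schema from above (add_totals is just an illustrative name):

def add_totals(pdf):
    # same per-group logic as agg above
    pdf['total_prices'] = pdf['price'].sum()
    pdf['total_miles'] = pdf['mileage'].sum()
    return pdf

df.groupBy('model', 'year').applyInPandas(add_totals, schema=agg_schema).show()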
Ranga Vure

You are not actually looking for a groupBy; you are looking for a window function or a join, because you want to extend your rows with aggregated values.

Window:

from pyspark.sql import functions as F
from pyspark.sql import Window

df = spark.createDataFrame(
    [('Galaxy', 2017, 27841, 17529),
     ('Galaxy', 2017, 29395, 11892),
     ('Novato', 2018, 35644, 22876),
     ('Novato', 2018, 8765,  54817)],
    ['model','year','price','mileage']
)

w = Window.partitionBy('model', 'year')
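# no orderBy: the frame spans the whole partition, so every row
# in a (model, year) group receives the full group sum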

df = df.withColumn('total_prices', F.sum('price').over(w))
df = df.withColumn('total_miles', F.sum('mileage').over(w))
df.show()
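
The same window aggregation can also be written in Spark SQL; a sketch, assuming the DataFrame is registered under the arbitrary view name 'cars':

df.createOrReplaceTempView('cars')
spark.sql("""
    SELECT *,
           SUM(price)   OVER (PARTITION BY model, year) AS total_prices,
           SUM(mileage) OVER (PARTITION BY model, year) AS total_miles
    FROM cars
""").show()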

Join:

from pyspark.sql import functions as F

df = spark.createDataFrame(
    [('Galaxy', 2017, 27841, 17529),
     ('Galaxy', 2017, 29395, 11892),
     ('Novato', 2018, 35644, 22876),
     ('Novato', 2018, 8765,  54817)],
    ['model','year','price','mileage']
)

df = df.join(
    df.groupby('model', 'year').agg(
        F.sum('price').alias('total_prices'),   # aliases match the output columns
        F.sum('mileage').alias('total_miles')
    ),
    ['model', 'year']
)
df.show()

Output:

+------+----+-----+-------+------------+-----------+ 
| model|year|price|mileage|total_prices|total_miles| 
+------+----+-----+-------+------------+-----------+ 
|Galaxy|2017|27841|  17529|       57236|      29421| 
|Galaxy|2017|29395|  11892|       57236|      29421| 
|Novato|2018|35644|  22876|       44409|      77693| 
|Novato|2018| 8765|  54817|       44409|      77693| 
+------+----+-----+-------+------------+-----------+
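
Both variants shuffle by the grouping keys. If the aggregated side is small (few distinct model/year combinations), you can additionally hint a broadcast join so the join itself avoids a shuffle; a sketch building on the join above (totals is just an illustrative name):

totals = df.groupby('model', 'year').agg(
    F.sum('price').alias('total_prices'),
    F.sum('mileage').alias('total_miles')
)
# broadcast() asks Spark to ship the small totals table to every executor
df.join(F.broadcast(totals), ['model', 'year']).show()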
cronoik