I have a DataFrame similar to this one:
from pyspark.sql.functions import avg, first

rdd = sc.parallelize(
    [
        (0, "A", 223, "201603", "PORT"),
        (0, "A", 22, "201602", "PORT"),
        (0, "A", 22, "201603", "PORT"),
        (0, "C", 22, "201605", "PORT"),
        (0, "D", 422, "201601", "DOCK"),
        (0, "D", 422, "201602", "DOCK"),
        (0, "C", 422, "201602", "DOCK"),
        (1, "B", 3213, "201602", "DOCK"),
        (1, "A", 3213, "201602", "DOCK"),
        (1, "C", 3213, "201602", "PORT"),
        (1, "B", 3213, "201601", "PORT"),
        (1, "B", 3213, "201611", "PORT"),
        (1, "B", 3213, "201604", "PORT"),
        (3, "D", 3999, "201601", "PORT"),
        (3, "C", 323, "201602", "PORT"),
        (3, "C", 323, "201602", "PORT"),
        (3, "C", 323, "201605", "DOCK"),
        (3, "A", 323, "201602", "DOCK"),
        (2, "C", 2321, "201601", "DOCK"),
        (2, "A", 2321, "201602", "PORT"),
    ]
)
df_data = sqlContext.createDataFrame(rdd, ["id", "type", "cost", "date", "ship"])
and I need to aggregate by id and type, and for each group keep the ship value with the highest number of occurrences. For example,
grouped = df_data.groupby('id', 'type', 'ship').count()

produces a count column with the number of occurrences of each group:
+---+----+----+-----+
| id|type|ship|count|
+---+----+----+-----+
| 3| A|DOCK| 1|
| 0| D|DOCK| 2|
| 3| C|PORT| 2|
| 0| A|PORT| 3|
| 1| A|DOCK| 1|
| 1| B|PORT| 3|
| 3| C|DOCK| 1|
| 3| D|PORT| 1|
| 1| B|DOCK| 1|
| 1| C|PORT| 1|
| 2| C|DOCK| 1|
| 0| C|PORT| 1|
| 0| C|DOCK| 1|
| 2| A|PORT| 1|
+---+----+----+-----+
and I need to get
+---+----+----+-----+
| id|type|ship|count|
+---+----+----+-----+
| 0| D|DOCK| 2|
| 0| A|PORT| 3|
| 1| A|DOCK| 1|
| 1| B|PORT| 3|
| 2| C|DOCK| 1|
| 2| A|PORT| 1|
| 3| C|PORT| 2|
| 3| A|DOCK| 1|
+---+----+----+-----+
I tried a combination of

grouped.groupby('id', 'type', 'ship') \
    .agg({'count': 'max'}).orderBy('max(count)', ascending=False) \
    .groupby('id', 'type', 'ship').agg({'ship': 'first'})

but it fails. Is there a way to select, for each group of a group by, the row with the maximum count?
In pandas this one-liner does the job (on the grouped counts, not on the raw df_data):

df_pd = grouped.toPandas()
df_pd_t = df_pd[df_pd['count'] == df_pd.groupby(['id', 'type'])['count'].transform('max')]
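To make that comparison self-contained, the same filter can be reproduced from the raw rows with pandas alone, no Spark needed; `size()` is just one way to rebuild the grouped counts:

```python
# Self-contained pandas version of the one-liner above: rebuild the
# grouped counts, then keep each row whose count equals its group's max.
import pandas as pd

rows = [
    (0, "A", 223, "201603", "PORT"), (0, "A", 22, "201602", "PORT"),
    (0, "A", 22, "201603", "PORT"), (0, "C", 22, "201605", "PORT"),
    (0, "D", 422, "201601", "DOCK"), (0, "D", 422, "201602", "DOCK"),
    (0, "C", 422, "201602", "DOCK"), (1, "B", 3213, "201602", "DOCK"),
    (1, "A", 3213, "201602", "DOCK"), (1, "C", 3213, "201602", "PORT"),
    (1, "B", 3213, "201601", "PORT"), (1, "B", 3213, "201611", "PORT"),
    (1, "B", 3213, "201604", "PORT"), (3, "D", 3999, "201601", "PORT"),
    (3, "C", 323, "201602", "PORT"), (3, "C", 323, "201602", "PORT"),
    (3, "C", 323, "201605", "DOCK"), (3, "A", 323, "201602", "DOCK"),
    (2, "C", 2321, "201601", "DOCK"), (2, "A", 2321, "201602", "PORT"),
]
df_pd = pd.DataFrame(rows, columns=["id", "type", "cost", "date", "ship"])

counts = df_pd.groupby(["id", "type", "ship"]).size().reset_index(name="count")

# transform('max') broadcasts each (id, type) group's max back onto its
# rows, so the mask keeps every row that attains the maximum (ties are
# all kept, which is why this can return more rows than groups).
top = counts[counts["count"] == counts.groupby(["id", "type"])["count"].transform("max")]
print(top)
```

Unlike a `row_number()`-style pick, this keeps all tied rows, e.g. both PORT and DOCK for id 0, type C.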