
I have a DataFrame nameDF, shown below:

scala> val nameDF = Seq(("Johnny","A"), ("Johnny","B"), ("Johnny","C"), ("Johnny","D"), ("Bravo","E"), ("Bravo","F"), ("Bravo","G")).toDF("Name","Init")
nameDF: org.apache.spark.sql.DataFrame = [Name: string, Init: string]

scala> nameDF.show
+------+----+
|Name  |Init|
+------+----+
|Johnny|   A|
|Johnny|   B|
|Johnny|   C|
|Johnny|   D|
|Bravo |   E|
|Bravo |   F|
|Bravo |   G|
+------+----+

Without using SQL, I am trying to group the names and convert the multiple rows of each "Name" into a single row as given below:

+------+-------+
|Name  |Init   |
+------+-------+
|Johnny|A,B,C,D|
|Bravo |E,F,G  |
+------+-------+

I see that the aggregation options available for pivot are not suitable for String operations.

Is pivot the correct option in this case? If not, could anyone let me know how I can achieve this result?

Metadata
    Does this answer your question? [Spark SQL replacement for MySQL's GROUP\_CONCAT aggregate function](https://stackoverflow.com/questions/31640729/spark-sql-replacement-for-mysqls-group-concat-aggregate-function) – user10938362 Jan 23 '20 at 17:54

1 Answer


Try this:

import org.apache.spark.sql.functions._

nameDF.groupBy($"Name")
  .agg(concat_ws(",", sort_array(collect_list($"Init"))).as("Init"))
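For intuition, the per-group logic of collect_list, sort_array, and concat_ws can be mimicked with plain Scala collections. This is only an illustrative sketch of what the expression computes for each Name group, not how Spark actually executes it:

```scala
// The sample rows from the question, as plain (Name, Init) pairs.
val rows = Seq(("Johnny","A"), ("Johnny","B"), ("Johnny","C"), ("Johnny","D"),
               ("Bravo","E"), ("Bravo","F"), ("Bravo","G"))

val grouped: Map[String, String] = rows
  .groupBy(_._1)                              // like groupBy($"Name")
  .map { case (name, pairs) =>
    // collect the Init values per group (collect_list),
    // sort them (sort_array), then join with "," (concat_ws)
    name -> pairs.map(_._2).sorted.mkString(",")
  }

println(grouped("Johnny"))  // A,B,C,D
println(grouped("Bravo"))   // E,F,G
```

The difference is that Spark evaluates this lazily and in a distributed fashion, with collect_list gathering values per group across partitions.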
Raphael Roth
  • Raphael, your solution works but I have a doubt here: agg is an aggregate function, so how does it work on a column that has String content? – Metadata Jan 24 '20 at 06:25