7

I am trying to split a multi-value column, i.e. a List[String], into separate rows.

The initial dataset has the following type: Dataset[(Integer, String, Double, scala.List[String])]

+---+--------------------+-------+--------------------+
| id|       text         | value |    properties      |
+---+--------------------+-------+--------------------+
|  0|Lorem ipsum dolor...|    1.0|[prp1, prp2, prp3..]|
|  1|Lorem ipsum dolor...|    2.0|[prp4, prp5, prp6..]|
|  2|Lorem ipsum dolor...|    3.0|[prp7, prp8, prp9..]|

The resulting dataset should have the following type:

Dataset[(Integer, String, Double, String)]

and the properties should be split so that each one gets its own row:

+---+--------------------+-------+--------------------+
| id|       text         | value |    property        |
+---+--------------------+-------+--------------------+
|  0|Lorem ipsum dolor...|    1.0|        prp1        |
|  0|Lorem ipsum dolor...|    1.0|        prp2        |
|  0|Lorem ipsum dolor...|    1.0|        prp3        |
|  1|Lorem ipsum dolor...|    2.0|        prp4        |
|  1|Lorem ipsum dolor...|    2.0|        prp5        |
|  1|Lorem ipsum dolor...|    2.0|        prp6        |
– Adam

3 Answers

7

explode is often suggested, but it's from the untyped DataFrame API and, given that you use a Dataset, I think the flatMap operator might be a better fit (see org.apache.spark.sql.Dataset).

flatMap[U](func: (T) ⇒ TraversableOnce[U])(implicit arg0: Encoder[U]): Dataset[U]

(Scala-specific) Returns a new Dataset by first applying a function to all elements of this Dataset, and then flattening the results.

You could use it as follows:

val ds = Seq(
  (0, "Lorem ipsum dolor", 1.0, Array("prp1", "prp2", "prp3")))
  .toDF("id", "text", "value", "properties")
  .as[(Integer, String, Double, scala.List[String])] // the array column is decoded back to a List[String]

scala> ds.flatMap { t => 
  t._4.map { prp => 
    (t._1, t._2, t._3, prp) }}.show
+---+-----------------+---+----+
| _1|               _2| _3|  _4|
+---+-----------------+---+----+
|  0|Lorem ipsum dolor|1.0|prp1|
|  0|Lorem ipsum dolor|1.0|prp2|
|  0|Lorem ipsum dolor|1.0|prp3|
+---+-----------------+---+----+

// or, equivalently, using a for-comprehension
for {
  t <- ds
  prp <- t._4
} yield (t._1, t._2, t._3, prp)
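
The tuples come back with the generic column names _1 to _4. If you want the original names and a typed Dataset again, here is a minimal sketch (the column names mirror the question and are my assumption, not part of the answer above):

// Hedged sketch: restore meaningful column names after flatMap.
val flattened = ds
  .flatMap { t => t._4.map { prp => (t._1, t._2, t._3, prp) } }
  .toDF("id", "text", "value", "property")
  .as[(Integer, String, Double, String)]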
– Jacek Laskowski
  • You could use DataFrame as well, but it is better with Dataset of course. I borrowed your example and changed it for DataFrames: https://stackoverflow.com/questions/40397740/replicate-spark-row-n-times/54898966#54898966 – ruloweb Feb 27 '19 at 06:02
4

You can use explode:

df.withColumn("property", explode($"property"))

Example:

val df = Seq((1, List("a", "b"))).toDF("A", "B")   
// df: org.apache.spark.sql.DataFrame = [A: int, B: array<string>]

df.withColumn("B", explode($"B")).show
+---+---+
|  A|  B|
+---+---+
|  1|  a|
|  1|  b|
+---+---+
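
Applied to the dataset shape from the question, a minimal sketch (the column names and the final .as cast back to a typed Dataset are my additions):

// Sketch assuming a spark-shell session (otherwise import spark.implicits._).
import org.apache.spark.sql.functions.explode

val input = Seq(
  (0, "Lorem ipsum dolor", 1.0, List("prp1", "prp2", "prp3")))
  .toDF("id", "text", "value", "properties")

val result = input
  .withColumn("property", explode($"properties"))  // one row per array element
  .drop("properties")                              // drop the original multi-value column
  .as[(Integer, String, Double, String)]           // back to a typed Dataset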
– Psidom
1

Here's one way to do it using the RDD API:

val myRDD = sc.parallelize(Array(
  (0, "text0", 1.0, List("prp1", "prp2", "prp3")),
  (1, "text1", 2.0, List("prp4", "prp5", "prp6")),
  (2, "text2", 3.0, List("prp7", "prp8", "prp9"))
)).map{
  case (i, t, v, ps) => ((i, t, v), ps)   // pair up: key = (id, text, value), value = the property list
}.flatMapValues(x => x).map{              // emit one (key, property) pair per list element
  case ((i, t, v), p) => (i, t, v, p)     // flatten back into a 4-tuple
}
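
If you need a Dataset rather than an RDD afterwards, a short sketch of the conversion (assumes a SparkSession with import spark.implicits._ in scope; the column names are my assumption):

// Sketch: turn the flattened RDD back into a typed Dataset.
val flatDS = myRDD.toDF("id", "text", "value", "property")
  .as[(Int, String, Double, String)]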
– Leo C
  • Oh, no. Is this the RDD API? Why would people want to do it in the age of Dataset? – Jacek Laskowski Apr 22 '17 at 08:32
  • I think both RDD and Dataset [have their place](https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html), although in this case I agree directly working off the Dataset is a better approach. – Leo C Apr 22 '17 at 15:21