
I have a DataFrame that looks like this:

scala> data.show
+-----+---+---------+
|label| id| features|
+-----+---+---------+
|  1.0|  1|[1.0,2.0]|
|  0.0|  2|[5.0,6.0]|
|  1.0|  1|[3.0,4.0]|
|  0.0|  2|[7.0,8.0]|
+-----+---+---------+

I want to regroup the features based on "id" so I can get the following:

scala> data.show
+---------+---+-----------------+
|    label| id| features        |
+---------+---+-----------------+
|  1.0,1.0|  1|[1.0,2.0,3.0,4.0]|
|  0.0,0.0|  2|[5.0,6.0,7.0,8.0]|
+---------+---+-----------------+

This is the code I am using to generate the DataFrame above:

import org.apache.spark.mllib.linalg.Vectors

val rdd = sc.parallelize(List(
  (1.0, 1, Vectors.dense(1.0, 2.0)),
  (0.0, 2, Vectors.dense(5.0, 6.0)),
  (1.0, 1, Vectors.dense(3.0, 4.0)),
  (0.0, 2, Vectors.dense(7.0, 8.0))))
val data = rdd.toDF("label", "id", "features")

I have been trying different things with both RDDs and DataFrames. The most "promising" approach so far has been to filter based on "id":

data.filter($"id".equalTo(1))

+-----+---+---------+
|label| id| features|
+-----+---+---------+
|  1.0|  1|[1.0,2.0]|
|  1.0|  1|[3.0,4.0]|
+-----+---+---------+

But I have two bottlenecks now:

1) How to automate the filtering for all the distinct values that "id" can take?

The following generates an error:

data.select("id").distinct.foreach(x => data.filter($"id".equalTo(x)))
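The error arises because the closure passed to `foreach` runs on the executors, where the driver-side `data` DataFrame is not available. One possible workaround, sketched here under the assumption that the set of distinct ids is small enough to collect to the driver (the names `ids` and `byId` are illustrative, not from the original post):

// Sketch only: collect the distinct ids to the driver, then build
// one filtered DataFrame per id on the driver side.
val ids = data.select("id").distinct.collect().map(_.getInt(0))
val byId = ids.map(i => i -> data.filter($"id" === i)).toMap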

2) How to concatenate the "features" that share a given "id"? I have not tried much, since I am still stuck on 1).

Any suggestion is more than welcome

Note: for clarification, "label" is always the same for every occurrence of "id". Sorry for the confusion; a simple extension of my task would also be to group the "labels" (example updated).


1 Answer


I believe there is no efficient way to achieve what you want, and the additional ordering requirement doesn't make the situation any better. The cleanest approach I can think of is groupByKey, like this:

import org.apache.spark.mllib.linalg.{Vectors, Vector}
import org.apache.spark.sql.functions.monotonicallyIncreasingId
import org.apache.spark.sql.Row
import org.apache.spark.rdd.RDD

val pairs: RDD[((Double, Int), (Long, Vector))] = data
  // Add row identifiers so we can keep the desired order
  .withColumn("uid", monotonicallyIncreasingId)
  // Create a PairwiseRDD where (label, id) is the key
  // and (row-id, vector) is the value
  .map { case Row(label: Double, id: Int, v: Vector, uid: Long) =>
    ((label, id), (uid, v)) }

val rows = pairs.groupByKey.mapValues { xs =>
  val vs = xs
    .toArray
    .sortBy(_._1)                 // Sort by row id to keep the order
    .flatMap(_._2.toDense.values) // Flatten the vector values
  Vectors.dense(vs)               // Return the concatenated vector
}.map { case ((label, id), v) => (label, id, v) } // Reshape

val grouped = rows.toDF("label", "id", "features")

grouped.show

// +-----+---+-----------------+
// |label| id|         features|
// +-----+---+-----------------+
// |  0.0|  2|[5.0,6.0,7.0,8.0]|
// |  1.0|  1|[1.0,2.0,3.0,4.0]|
// +-----+---+-----------------+

It is also possible to use a UDAF similar to the one I proposed in SPARK SQL replacement for mysql GROUP_CONCAT aggregate function, but it is even less efficient than this.
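As a side note, on newer Spark versions (2.4+) a similar result can be sketched without dropping to the RDD API, by converting the vectors to plain arrays and aggregating with `collect_list`. This is an assumption about later APIs, not part of the original answer, and it produces an array column rather than a `Vector`:

// Sketch assuming Spark 2.4+ (flatten, sort_array over structs) and
// an ml.linalg.Vector "features" column; the column names are illustrative.
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions._

val toArray = udf((v: Vector) => v.toArray)

val grouped2 = data
  .withColumn("uid", monotonically_increasing_id())  // row id to keep order
  .withColumn("arr", toArray($"features"))           // Vector -> Array[Double]
  .groupBy($"label", $"id")
  // Sort the collected structs by uid, then flatten the nested arrays
  .agg(sort_array(collect_list(struct($"uid", $"arr"))).as("sorted"))
  .select($"label", $"id", flatten($"sorted.arr").as("features"))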

  • Thanks @zero323 ! I can confirm that your code performs the desired task. I am still trying to get into the details of all the involved steps. By the way I got an error when running the code in the spark-shell, I had to add ";" before Vectors.dense(vs). Do you have any recommendation regarding setting up an environment for testing this kind of prototypes? So far I have been using a simple text editor and the spark-shell but I wonder if there is a better approach. – Jacob Nov 26 '15 at 11:00
  • I've added some comments - I hope it helps (and please don't forget to upvote :)). Personally I use the REPL and my favorite editor (the last part varies from language to language). For complex, reusable components I sometimes use IntelliJ IDEA, but prototyping is so much faster in the REPL that the IDE is a last resort. – zero323 Nov 26 '15 at 18:02
  • I appreciate it @zero323, I have upvoted your answer. – Jacob Nov 26 '15 at 21:57
  • Thanks @Jacob. If you need any explanations feel free to ask. – zero323 Nov 26 '15 at 22:01