
I have this problem: I have an RDD[(String, Array[String])], and I would like to extract from it the values grouped by key:

e.g:

val x: RDD[(String, Array[String])] =
  RDD[("a", Array("ra", "re", "ri")),
      ("a", Array("ta", "te", "ti")),
      ("b", Array("pa", "pe", "pi"))]

I would like to get:

val result: RDD[(String, RDD[Array[String]])] =
  RDD[("a", RDD[Array("ra", "re", "ri"), Array("ta", "te", "ti")]),
      ("b", RDD[Array("pa", "pe", "pi"), ...]),
      ...]
Will

2 Answers


A simple reduceByKey should solve your issue

x.reduceByKey((prev, next) => prev ++ next)
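For reference, this is a minimal sketch of the same per-key concatenation on plain Scala collections, so the merge logic can be checked without a SparkContext (the `data` values mirror the example from the question):

```scala
// reduceByKey folds all values of a key with the given function;
// here that function is array concatenation (++).
// groupBy on a Seq preserves the original order of elements within each group.
val data = Seq(
  ("a", Array("ra", "re", "ri")),
  ("a", Array("ta", "te", "ti")),
  ("b", Array("pa", "pe", "pi"))
)

val merged: Map[String, Array[String]] =
  data.groupBy(_._1).map { case (k, pairs) =>
    k -> pairs.map(_._2).reduce(_ ++ _)
  }

merged("a").toList  // List(ra, re, ri, ta, te, ti)
merged("b").toList  // List(pa, pe, pi)
```

Note that in Spark the per-partition merge order is not guaranteed, so the concatenated arrays may come out in a different order than in this local sketch.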
Ramesh Maharjan

As far as I know, Spark doesn't support nested RDDs (see this StackOverflow discussion).

In case nested arrays are good for what you need, a simple groupByKey will do:

val x = sc.parallelize(Seq(
  ("a", Array( "ra", "re", "ri" )),
  ("a", Array( "ta", "te", "ti" )),
  ("b", Array( "pa", "pe", "pi" ))
))

val x2 = x.groupByKey

x2.collect.foreach(println)
(a,CompactBuffer([Ljava.lang.String;@75043e31, [Ljava.lang.String;@18656538))
(b,CompactBuffer([Ljava.lang.String;@2cf30d2e))

x2.collect.foreach{ case (a, b) => println(a + ": " + b.map(_.mkString(" "))) }
a: List(ra re ri, ta te ti)
b: List(pa pe pi)
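If the goal from the question is one flat Array[String] per key rather than a collection of arrays, the grouped result can be flattened with one more map (in Spark, `x2.mapValues(_.flatten.toArray)`). A plain-Scala sketch of that step, using a hypothetical `grouped` map shaped like the groupByKey output above:

```scala
// Each key maps to the sequence of arrays produced by the grouping step.
val grouped: Map[String, Seq[Array[String]]] = Map(
  "a" -> Seq(Array("ra", "re", "ri"), Array("ta", "te", "ti")),
  "b" -> Seq(Array("pa", "pe", "pi"))
)

// Flatten each key's arrays into a single Array[String].
val flattened: Map[String, Array[String]] =
  grouped.map { case (k, arrays) => k -> arrays.flatten.toArray }

flattened("a").mkString(" ")  // "ra re ri ta te ti"
flattened("b").mkString(" ")  // "pa pe pi"
```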
Leo C