
Let us say I have the following two RDDs, with the following key-value pairs.

rdd1 = [ (key1, [value1, value2]), (key2, [value3, value4]) ]

and

rdd2 = [ (key1, [value5, value6]), (key2, [value7]) ]

Now, I want to join them by key values, so for example I want to return the following

ret = [ (key1, [value1, value2, value5, value6]), (key2, [value3, value4, value7]) ] 

How can I do this in Spark, using Python or Scala? One way is to use join, but join would create a tuple inside the tuple. I want to have only one tuple per key-value pair.

MetallicPriest

2 Answers


Just use join and then map over the resulting RDD.

rdd1.join(rdd2).map { case (k, (ls, rs)) => (k, ls ++ rs) }
lmm
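For completeness, a minimal self-contained sketch of this approach using the question's sample data (the `local[*]` master, the object name, and the List-of-String value type are assumptions for illustration):

import org.apache.spark.{SparkConf, SparkContext}

object JoinConcat {
  def main(args: Array[String]): Unit = {
    // local master is an assumption, just to make the sketch runnable
    val sc = new SparkContext(
      new SparkConf().setAppName("JoinConcat").setMaster("local[*]"))

    val rdd1 = sc.parallelize(Seq(
      ("key1", List("value1", "value2")),
      ("key2", List("value3", "value4"))))
    val rdd2 = sc.parallelize(Seq(
      ("key1", List("value5", "value6")),
      ("key2", List("value7"))))

    // join yields (key, (leftList, rightList)); concatenating the two
    // lists flattens the nested tuple into a single list per key
    val ret = rdd1.join(rdd2).map { case (k, (ls, rs)) => (k, ls ++ rs) }

    ret.collect().foreach(println)
    // (key1,List(value1, value2, value5, value6))
    // (key2,List(value3, value4, value7))

    sc.stop()
  }
}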
  • I have a rdd of totals and rdd of counts. How would I join them by the same keys to create an average. Open to the possibility I'm doing it wrong. – Justin Thomas Nov 11 '15 at 18:02
  • 1
    This should be a separate question, but: if you have `values: RDD[(K, Float)]` and `counts: RDD[(K, Int)]` (map them into this shape if they're not) then you can do `values.join(counts)` to get an `RDD[(K, (Float, Int))]`, `map` away the `K`, and then you can do the average - there's probably a function for this already, but the hard way is `reduce {case ((v1, count1), (v2, count2)) => ((v1 * count1 + v2 * count2) / (count1 + count2), (count1 + count2))}` assuming my maths is right. – lmm Nov 16 '15 at 20:11
  • Yeah that's what ended up being the solution. Thanks! – Justin Thomas Nov 16 '15 at 21:15
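A minimal sketch of the averaging approach from the comment above, assuming `values` holds per-key averages and `counts` the matching counts (the variable names and sample numbers are made up for illustration):

// assumes sc: SparkContext, as in the sketch under the answer above
val values = sc.parallelize(Seq(("a", 2.0f), ("b", 4.0f))) // per-key averages
val counts = sc.parallelize(Seq(("a", 10), ("b", 30)))     // per-key counts

// join to RDD[(K, (Float, Int))], drop the keys, then merge pairwise;
// each step combines two (average, count) pairs, weighting by count
val (avg, total) = values.join(counts).values.reduce {
  case ((v1, c1), (v2, c2)) => ((v1 * c1 + v2 * c2) / (c1 + c2), c1 + c2)
}
// avg == (2.0f * 10 + 4.0f * 30) / 40 == 3.5f, total == 40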

I would union the two RDDs and do a reduceByKey to merge the values.

(rdd1 union rdd2).reduceByKey(_ ++ _)
maasg
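Reusing the `rdd1`/`rdd2` definitions from the sketch under the first answer, this looks like:

// union keeps every (key, list) pair from both RDDs; reduceByKey
// then concatenates the lists that share a key
val ret = (rdd1 union rdd2).reduceByKey(_ ++ _)
ret.collect().foreach(println)

One difference worth noting: `join` is an inner join, so keys present in only one of the two RDDs are dropped, whereas the union/reduceByKey version keeps them.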