
I have two RDDs with the same columns:
rdd1 :-

+-----------------+
|mid|uid|frequency|
+-----------------+
| m1| u1|        1|
| m1| u2|        1|
| m2| u1|        2|
+-----------------+

rdd2 :-

+-----------------+
|mid|uid|frequency|
+-----------------+
| m1| u1|       10|
| m2| u1|       98|
| m3| u2|       21|
+-----------------+

I want to calculate the sum of frequencies grouped by mid and uid. The result should look like this:

+-----------------+
|mid|uid|frequency|
+-----------------+
| m1| u1|       11|
| m2| u1|      100|
| m3| u2|       21|
+-----------------+

Thanks in advance.

EDIT: I also achieved the solution this way (using map-reduce):

from pyspark.sql.functions import col

data1 = [("m1","u1",1),("m1","u2",1),("m2","u1",2)]
data2 = [("m1","u1",10),("m2","u1",98),("m3","u2",21)]
df1 = sqlContext.createDataFrame(data1,['mid','uid','frequency'])
df2 = sqlContext.createDataFrame(data2,['mid','uid','frequency'])

df3 = df1.unionAll(df2)

# Key each row by (mid, uid) and add up the frequencies with reduceByKey.
# Going through .rdd also works on Spark 2.x, where DataFrame.map() was removed.
df4 = df3.rdd.map(lambda row: ((row['mid'], row['uid']), int(row['frequency']))) \
             .reduceByKey(lambda a, b: a + b)

# Flatten ((mid, uid), frequency) back into a three-column DataFrame.
p = df4.map(lambda kv: (kv[0][0], kv[0][1], kv[1])).toDF()

p = p.select(col("_1").alias("mid"),
             col("_2").alias("uid"),
             col("_3").alias("frequency"))

p.show()

Output:

+---+---+---------+
|mid|uid|frequency|
+---+---+---------+
| m2| u1|      100|
| m1| u1|       11|
| m1| u2|        1|
| m3| u2|       21|
+---+---+---------+
rootcss

2 Answers


You just need to group by mid and uid and perform a sum:

data1 = [("m1","u1",1),("m1","u2",1),("m2","u1",2)]
data2 = [("m1","u1",10),("m2","u1",98),("m3","u2",21)]
df1 = sqlContext.createDataFrame(data1,['mid','uid','frequency'])
df2 = sqlContext.createDataFrame(data2,['mid','uid','frequency'])

df3 = df1.unionAll(df2)

# groupBy plus sum() aggregates the only numeric column, frequency;
# the rename turns "sum(frequency)" back into "frequency".
df4 = df3.groupBy(df3.mid, df3.uid).sum() \
         .withColumnRenamed("sum(frequency)", "frequency")

df4.show()

# +---+---+---------+
# |mid|uid|frequency|
# +---+---+---------+
# | m1| u1|       11|
# | m1| u2|        1|
# | m2| u1|      100|
# | m3| u2|       21|
# +---+---+---------+
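
As a side note not from the original answer: on Spark 2.x (assuming a SparkSession named `spark`), the same aggregation can be written with `union` and an explicit `agg`/`alias`, which avoids the rename step. A minimal sketch, reusing `data1` and `data2` from above:

from pyspark.sql import functions as F

# Assumes a Spark 2.x+ SparkSession named `spark`.
df1 = spark.createDataFrame(data1, ['mid', 'uid', 'frequency'])
df2 = spark.createDataFrame(data2, ['mid', 'uid', 'frequency'])

result = df1.union(df2) \
            .groupBy('mid', 'uid') \
            .agg(F.sum('frequency').alias('frequency'))

result.show()

The alias makes the output column name explicit, so no withColumnRenamed is needed.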
eliasah

I also achieved the solution this way (using map-reduce):

from pyspark.sql.functions import col

data1 = [("m1","u1",1),("m1","u2",1),("m2","u1",2)]
data2 = [("m1","u1",10),("m2","u1",98),("m3","u2",21)]
df1 = sqlContext.createDataFrame(data1,['mid','uid','frequency'])
df2 = sqlContext.createDataFrame(data2,['mid','uid','frequency'])

df3 = df1.unionAll(df2)

# Key each row by (mid, uid) and add up the frequencies with reduceByKey.
# Going through .rdd also works on Spark 2.x, where DataFrame.map() was removed.
df4 = df3.rdd.map(lambda row: ((row['mid'], row['uid']), int(row['frequency']))) \
             .reduceByKey(lambda a, b: a + b)

# Flatten ((mid, uid), frequency) back into a three-column DataFrame.
p = df4.map(lambda kv: (kv[0][0], kv[0][1], kv[1])).toDF()

p = p.select(col("_1").alias("mid"),
             col("_2").alias("uid"),
             col("_3").alias("frequency"))

p.show()

Output:

+---+---+---------+
|mid|uid|frequency|
+---+---+---------+
| m2| u1|      100|
| m1| u1|       11|
| m1| u2|        1|
| m3| u2|       21|
+---+---+---------+
rootcss
  • The only issue with this solution is that you lose all the optimization done by the Tungsten project over `DataFrame`s. http://stackoverflow.com/questions/31780677/efficient-pairrdd-operations-on-dataframe-with-spark-sql-group-by – eliasah Apr 04 '17 at 07:11
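
To make that comment concrete (a sketch, not from the comment itself, assuming the `df3` DataFrame built above): the DataFrame aggregation exposes an optimized physical plan via `explain()`, whereas the reduceByKey path runs opaque Python lambdas that Catalyst/Tungsten cannot optimize:

# DataFrame route: the physical plan (e.g. a Tungsten/hash aggregate) is visible to the optimizer.
df3.groupBy('mid', 'uid').sum('frequency').explain()

# RDD route: Spark only sees opaque Python functions, so no Catalyst optimization applies.
df3.rdd.map(lambda r: ((r['mid'], r['uid']), int(r['frequency']))) \
       .reduceByKey(lambda a, b: a + b)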