
I have a DataFrame df1:

id     transactions
1      [1, 3, 3, 3, 2, 5]
2      [1, 2]

root
 |-- id: int (nullable = true)
 |-- transactions: array (nullable = false)
 |    |-- element: string (containsNull = true)

I have a DataFrame df2:

items           cost
[1, 3, 3, 5]    2
[1, 5]          1

root
 |-- items: array (nullable = false)
 |    |-- element: string (containsNull = true)
 |-- cost: int (nullable = true)
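
For reproduction, the two frames can be built like this (a hypothetical sketch; element type is string per the schemas above, though nullability flags may differ):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# sample data copied from the tables above
df1 = spark.createDataFrame(
    [(1, ["1", "3", "3", "3", "2", "5"]), (2, ["1", "2"])],
    "id int, transactions array<string>",
)
df2 = spark.createDataFrame(
    [(["1", "3", "3", "5"], 2), (["1", "5"], 1)],
    "items array<string>, cost int",
)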

I have to check whether the items are contained in the transactions, counting duplicates, and if so sum up the costs: [1,3,3,5] in [1,3,3,3,5] is True, while [1,3,3,5] in [1,2] is False, and so on.

The result should be:

id     transactions   score
1      [1,3,3,3,5]    3
2      [1,2]          null

I tried explode and join (inner, left_semi) approaches, but they all fail because of duplicates. The methods from Check all the elements of an array present in another array pyspark (issubset(), array_intersect()) also won't work.
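
array_intersect, for instance, returns only the distinct elements of the intersection, so duplicate counts are lost; a quick illustration (assuming Spark 2.4+ and the spark session from above):

spark.sql("SELECT array_intersect(array(1, 3, 3, 5), array(1, 3, 5))").show()
# -> [1, 3, 5]: the duplicate 3 is gone, so containment with duplicates can't be decided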

I came across Python - verifying if one list is a subset of the other and found that the following solves the problem efficiently: subtracting Counters keeps only positive counts, so the difference is empty exactly when the first multiset is contained in the second.

>>> from collections import Counter
>>> not Counter([1,3,3,3,5]) - Counter([1,3,3,4,5])
False
>>> not Counter([1,3,3,3,5]) - Counter([1,3,3,5])
False
>>> not Counter([1,3,3,5]) - Counter([1,3,3,3,5])
True

I tried the following:

@udf("boolean")
def contains_all(x, y):
if x is not None and y is not None:
    return not (lambda y: dict(Counter(y)))-(lambda x: dict(Counter(x)))


(df1
.crossJoin(df2).groupBy("id", "transactions")
.agg(sum_(when(
    contains_all("transactions", "items"), col("cost")
)).alias("score"))
.show())

but it throws an error:

  File "", line 39, in contains_all
TypeError: unsupported operand type(s) for -: 'function' and 'function'
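
The TypeError points at the actual bug: the code subtracts the two lambda objects themselves instead of calling them, so no Counter is ever built. A minimal corrected sketch of the same UDF, assuming the Counter import from above:

from collections import Counter
from pyspark.sql.functions import udf

@udf("boolean")
def contains_all(x, y):
    # the Counter difference is empty exactly when every element of y,
    # duplicates included, is covered by x
    if x is not None and y is not None:
        return not (Counter(y) - Counter(x))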

Is there any other way to achieve this?

priya

1 Answer


I just updated the UDF to account for duplicates; not sure about the performance:

from pyspark.sql.functions import udf,array_sort,sum as sum_,when,col

dff = df1.crossJoin(df2)

dff = dff.withColumn('transaction', array_sort('transaction')).\
      withColumn('items', array_sort('items'))  # sorting here, it's needed in the UDF

+---+---------------+------------+----+
| id|    transaction|       items|cost|
+---+---------------+------------+----+
|  1|[1, 2, 3, 3, 5]|[1, 3, 3, 5]|   2|
|  1|[1, 2, 3, 3, 5]|      [1, 5]|   1|
|  2|         [1, 2]|[1, 3, 3, 5]|   2|
|  2|         [1, 2]|      [1, 5]|   1|
+---+---------------+------------+----+

@udf('boolean')
def is_subset_w_dup(trans, itm):
    # a single shared iterator is consumed left to right, so every element
    # of itm must be matched by a distinct, not-yet-used element of trans
    itertrans = iter(trans)
    return all(i in itertrans for i in itm)
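
The iter trick makes a single pass over trans, and since both arrays were sorted above, that is exactly multiset containment. A quick pure-Python check of the same logic outside Spark:

def is_subset_w_dup(trans, itm):
    itertrans = iter(trans)
    return all(i in itertrans for i in itm)

print(is_subset_w_dup([1, 2, 3, 3, 5], [1, 3, 3, 5]))  # True
print(is_subset_w_dup([1, 2], [1, 3, 3, 5]))           # False
print(is_subset_w_dup([1, 3, 3, 5], [1, 3, 3, 3, 5]))  # False: only two 3s available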


dff.groupby('id', 'transaction').agg(
    sum_(when(is_subset_w_dup('transaction', 'items'), col('cost'))).alias('score')
).show()

+---+---------------+-----+
| id|    transaction|score|
+---+---------------+-----+
|  2|         [1, 2]| null|
|  1|[1, 2, 3, 3, 5]|    3|
+---+---------------+-----+
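
If the Python UDF turns out to be the bottleneck at scale, the same multiset check can be pushed into Spark's built-in higher-order functions instead (a sketch, assuming Spark 2.4+ and the column names above; no sorting is needed here):

from pyspark.sql.functions import expr, sum as sum_, when, col

# for every distinct item, transaction must hold at least as many copies
contained = expr("""
    aggregate(
        array_distinct(items),
        true,
        (acc, x) -> acc AND
            size(filter(transaction, t -> t = x)) >=
            size(filter(items, i -> i = x))
    )
""")

dff.groupby('id', 'transaction').agg(
    sum_(when(contained, col('cost'))).alias('score')
).show()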
Suresh
  • Thanks!! Works perfect. But for just 1000 rows it takes 2 minutes. I don't know how it will scale for a large dataset. – priya Apr 01 '19 at 08:16