
I want to check whether an array contains a string in PySpark (Spark < 2.4).

Example Dataframe:

column_1 <Array>           |    column_2 <String>
--------------------------------------------------
["2345","98756","8794"]    |       8794
--------------------------------------------------
["8756","45678","987563"]  |       1234
--------------------------------------------------
["3475","8956","45678"]    |       3475
--------------------------------------------------

I would like to compare the two columns column_1 and column_2. If column_1 contains the value of column_2, that value should be removed from column_1. I wrote a UDF to subtract column_2 from column_1, but it is not working:

def contains(x, y):
    try:
        sx, sy = set(x), set(y)
        if len(sx) == 0:
            return sx
        elif len(sy) == 0:
            return sx
        else:
            return sx - sy
    # in exception, for example `x` or `y` is None (not a list)
    except:
        return sx

udf_contains = udf(contains, 'string')
new_df = my_df.withColumn('column_1', udf_contains(my_df.column_1, my_df.column_2))

Expected result:

column_1 <Array>           |    column_2 <String>
--------------------------------------------------
["2345","98756"]           |       8794
--------------------------------------------------
["8756","45678","987563"]  |       1234
--------------------------------------------------
["8956","45678"]           |       3475
--------------------------------------------------

How can I do this, knowing that in some cases column_1 is [] and column_2 is null? Thank you.

  • check `udf_contains = udf(lambda x,y: [e for e in x if e != y], 'array')` – jxc Nov 13 '19 at 12:24
  • if x can be null or non-list: `udf(lambda x,y: [e for e in x if e != y] if isinstance(x, list) else x, 'array')` – jxc Nov 13 '19 at 12:28
  • @jxc I need your help :) https://stackoverflow.com/questions/58875531/concatenate-array-pyspark/58875920#58875920 – verojoucla Nov 15 '19 at 11:34
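
For what it's worth, jxc's one-liner from the comments expands to something like the sketch below; the sample DataFrame and the spelled-out ArrayType(StringType()) return type (the comment abbreviates it as 'array') are additions, not part of the comment:

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [(["2345", "98756", "8794"], "8794"), ([], None)],
    ["column_1", "column_2"])

# Keep every element that differs from column_2; pass non-list values
# (e.g. a null array) through unchanged.
udf_contains = udf(
    lambda x, y: [e for e in x if e != y] if isinstance(x, list) else x,
    ArrayType(StringType()))

df.withColumn("column_1", udf_contains("column_1", "column_2")).show()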

1 Answer


Spark 2.4.0+

Try array_remove. It has been available since Spark 2.4.0:

import org.apache.spark.sql.functions.array_remove
import spark.implicits._ // for toDF and $"..."; already in scope in spark-shell/Zeppelin

val df = Seq(
    (Seq("2345","98756","8794"), "8794"), 
    (Seq("8756","45678","987563"), "1234"), 
    (Seq("3475","8956","45678"), "3475"),
    (Seq(), "empty"),
    (null, "null")
).toDF("column_1", "column_2")
df.show(5, false)

df
    .select(
        $"column_1",
        $"column_2",
        array_remove($"column_1", $"column_2") as "diff"
    ).show(5, false)

It will return:

+---------------------+--------+
|column_1             |column_2|
+---------------------+--------+
|[2345, 98756, 8794]  |8794    |
|[8756, 45678, 987563]|1234    |
|[3475, 8956, 45678]  |3475    |
|[]                   |empty   |
|null                 |null    |
+---------------------+--------+

+---------------------+--------+---------------------+
|column_1             |column_2|diff                 |
+---------------------+--------+---------------------+
|[2345, 98756, 8794]  |8794    |[2345, 98756]        |
|[8756, 45678, 987563]|1234    |[8756, 45678, 987563]|
|[3475, 8956, 45678]  |3475    |[8956, 45678]        |
|[]                   |empty   |[]                   |
|null                 |null    |null                 |
+---------------------+--------+---------------------+

Sorry for the Scala; I suppose it is quite easy to do the same in PySpark.
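
For completeness, a rough PySpark equivalent of the snippet above (Spark 2.4.0+); `expr` is used here on the assumption that the Python `array_remove` wrapper expects a literal element rather than a column:

from pyspark.sql.functions import col, expr

df.select(
    col("column_1"),
    col("column_2"),
    # expr() lets array_remove take the element to drop from another column
    expr("array_remove(column_1, column_2)").alias("diff")
).show(5, False)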

Spark < 2.4.0

%pyspark

from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType


data = [
    (["2345","98756","8794"], "8794"),
    (["8756","45678","987563"], "1234"),
    (["3475","8956","45678"], "3475"),
    ([], "empty"),
    (None, "null")
]
df = spark.createDataFrame(data, ['column_1', 'column_2'])
df.printSchema()
df.show(5, False)

def contains(x, y):
    # Pass rows through unchanged when the array or the value is missing.
    if x is None or y is None:
        return x
    else:
        # Wrap y in a list first: set("8794") would split the string
        # into single characters {'8', '7', '9', '4'}.
        sx, sy = set(x), set([y])
        return list(sx - sy)

udf_contains = udf(contains, ArrayType(StringType()))

df.select("column_1", "column_2", udf_contains("column_1", "column_2")).show(5, False)

result:

root
 |-- column_1: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- column_2: string (nullable = true)
+---------------------+--------+
|column_1             |column_2|
+---------------------+--------+
|[2345, 98756, 8794]  |8794    |
|[8756, 45678, 987563]|1234    |
|[3475, 8956, 45678]  |3475    |
|[]                   |empty   |
|null                 |null    |
+---------------------+--------+
+---------------------+--------+----------------------------+
|column_1             |column_2|contains(column_1, column_2)|
+---------------------+--------+----------------------------+
|[2345, 98756, 8794]  |8794    |[2345, 98756]               |
|[8756, 45678, 987563]|1234    |[8756, 987563, 45678]       |
|[3475, 8956, 45678]  |3475    |[8956, 45678]               |
|[]                   |empty   |[]                          |
|null                 |null    |null                        |
+---------------------+--------+----------------------------+
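
Note that going through a set does not preserve element order (compare the second row above with the input). If the original order matters, a list comprehension along the lines of jxc's comment keeps it; a minimal variant:

def contains_ordered(x, y):
    # Same null handling as above, but preserves the input order.
    if x is None or y is None:
        return x
    return [e for e in x if e != y]

udf_contains_ordered = udf(contains_ordered, ArrayType(StringType()))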
  • thanks for your help, I just did it like this: df.select(array_remove(df.data, 1)).collect(), but I got "TypeError: 'Column' object is not callable", maybe because I use a Spark version < 2.4. I already mentioned it in my question above. – verojoucla Nov 13 '19 at 12:12
  • @verojoucla I added a Spark < 2.4 version with PySpark. Your code snippet doesn't work because set on a string returns a set of single characters, i.e. `set("abc")` > `set(['a', 'c', 'b'])` – shuvalov Nov 13 '19 at 12:34