
I want to calculate the Jaro-Winkler distance between two columns of a PySpark DataFrame. The Jaro-Winkler distance is available through the pyjarowinkler package on all nodes.

pyjarowinkler works as follows:

from pyjarowinkler import distance
distance.get_jaro_distance("A", "A", winkler=True, scaling=0.1)

Output:

1.0

I am trying to write a Pandas UDF that takes the two columns as Series and calculates the distance using a lambda function. Here's how I am doing it:

@pandas_udf("float", PandasUDFType.SCALAR)
def get_distance(col1, col2):
    import pandas as pd
    distance_df  = pd.DataFrame({'column_A': col1, 'column_B': col2})
    distance_df['distance'] = distance_df.apply(lambda x: distance.get_jaro_distance(str(distance_df['column_A']), str(distance_df['column_B']), winkler = True, scaling = 0.1))
    return distance_df['distance']

temp = temp.withColumn('jaro_distance', get_distance(temp.x, temp.x))

I should be able to pass any two string columns to the above function. Instead, I am getting the following output:

+---+---+---+-------------+
|  x|  y|  z|jaro_distance|
+---+---+---+-------------+
|  A|  1|  2|         null|
|  B|  3|  4|         null|
|  C|  5|  6|         null|
|  D|  7|  8|         null|
+---+---+---+-------------+

Expected Output:

+---+---+---+-------------+
|  x|  y|  z|jaro_distance|
+---+---+---+-------------+
|  A|  1|  2|          1.0|
|  B|  3|  4|          1.0|
|  C|  5|  6|          1.0|
|  D|  7|  8|          1.0|
+---+---+---+-------------+

I suspect this might be because str(distance_df['column_A']) is not correct. It contains the concatenated string of all row values.
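
For reference, this is easy to reproduce outside Spark: calling str() on a whole pandas Series returns its printed representation, not a single row's value (a minimal sketch with made-up values):

import pandas as pd

col = pd.Series(["A", "B", "C", "D"])
# str() renders the entire Series, index and dtype included,
# rather than one element, so that is what the lambda above passes on
print(str(col))

Output:

0    A
1    B
2    C
3    D
dtype: object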

Meanwhile, this code works for me:

@pandas_udf("float", PandasUDFType.SCALAR)
def get_distance(col):
    return col.apply(lambda x: distance.get_jaro_distance(x, "A", winkler = True, scaling = 0.1))

temp = temp.withColumn('jaro_distance', get_distance(temp.x))

Output:

+---+---+---+-------------+
|  x|  y|  z|jaro_distance|
+---+---+---+-------------+
|  A|  1|  2|          1.0|
|  B|  3|  4|          0.0|
|  C|  5|  6|          0.0|
|  D|  7|  8|          0.0|
+---+---+---+-------------+

Is there a way to do this with a Pandas UDF? I'm dealing with millions of records, so a UDF will be expensive, but still acceptable if it works. Thanks.

K. K.

2 Answers


The error comes from the function you pass to df.apply; adjusting it as follows should fix it:

@pandas_udf("float", PandasUDFType.SCALAR)
def get_distance(col1, col2):
    import pandas as pd
    distance_df  = pd.DataFrame({'column_A': col1, 'column_B': col2})
    # apply the function row-wise (axis=1) so that x is a single row, not a whole column
    distance_df['distance'] = distance_df.apply(lambda x: distance.get_jaro_distance(x['column_A'], x['column_B'], winkler = True, scaling = 0.1), axis=1)
    return distance_df['distance']

However, the Pandas df.apply method is not vectorised, which defeats the purpose of using pandas_udf over udf in PySpark. A faster, lower-overhead solution is to use a list comprehension to build the returned pd.Series (check this link for more discussion of Pandas df.apply and its alternatives):

from pandas import Series

@pandas_udf("float", PandasUDFType.SCALAR)
def get_distance(col1, col2):
    return Series([distance.get_jaro_distance(c1, c2, winkler=True, scaling=0.1) for c1, c2 in zip(col1, col2)])

df.withColumn('jaro_distance', get_distance('x', 'y')).show()
+---+---+---+-------------+
|  x|  y|  z|jaro_distance|
+---+---+---+-------------+
| AB| 1B|  2|         0.67|
| BB| BB|  4|          1.0|
| CB| 5D|  6|          0.0|
| DB|B7F|  8|         0.61|
+---+---+---+-------------+
jxc
  • Good point about `apply()` not being vectorised. +1 for that. But a list comprehension is a row-wise operation too, and is not vectorised either. – Azhar Khan Nov 23 '22 at 06:05

You can union all the data frames first, partition by the same partition key so that the rows are shuffled and distributed to the worker nodes, and then restore them before the pandas computation. Please check the example in a small toolkit I wrote for this scenario: SparkyPandas

  • You do need to post a solution and not link to your repo … please provide more context – n1tk Feb 23 '22 at 19:47