
I have been building my application in Python, but I now need to move it to a distributed environment, so I am trying to rebuild it with Spark. However, I am unable to come up with code as fast as shift in Pandas.


mask = (df['name_x'].shift(0) == df['name_y'].shift(0)) & \
       (df['age_x'].shift(0) == df['age_y'].shift(0))
df = df[~mask]

Where

mask.tolist()                                               

gives

[True, False, True, False]

The final result df will contain only two rows (the 2nd and 4th). Basically, I am trying to remove rows where the [name_x, age_x] columns duplicate the [name_y, age_y] columns.

The code above works on a Pandas DataFrame. What would be the closest PySpark code that is just as efficient, without importing Pandas?

I have looked at Window functions in Spark, but I am not sure how to apply them here.


1 Answer


shift(0) is a no-op, so shift plays no role in your code. This

import pandas as pd 

df = pd.DataFrame({
    "name_x" : ["ABC", "CDF", "DEW", "ABC"],
    "age_x": [20, 20, 22, 21],
    "name_y" : ["ABC", "CDF", "DEW", "ABC"],
    "age_y" : [20, 21, 22, 19],
})

mask1 = (df['name_x'].shift(0) == df['name_y'].shift(0)) & \
  (df['age_x'].shift(0) == df['age_y'].shift(0))
df[~mask1]

#  name_x  age_x name_y  age_y
# 1    CDF     20    CDF     21
# 3    ABC     21    ABC     19

is just equivalent to

mask2 = (df['name_x'] == df['name_y']) & (df['age_x'] == df['age_y'])
df[~mask2]

#   name_x  age_x name_y  age_y
# 1    CDF     20    CDF     21
# 3    ABC     21    ABC     19

Therefore all you need is filter:

sdf = spark.createDataFrame(df)

smask = ~((sdf["name_x"] == sdf["name_y"]) & (sdf["age_x"] == sdf["age_y"]))
sdf.filter(smask).show()
# +------+-----+------+-----+
# |name_x|age_x|name_y|age_y|
# +------+-----+------+-----+
# |   CDF|   20|   CDF|   21|
# |   ABC|   21|   ABC|   19|
# +------+-----+------+-----+

which, by De Morgan's laws, can be simplified to

(sdf["name_x"] != sdf["name_y"]) | (sdf["age_x"] != sdf["age_y"])
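
For instance (a minimal sketch; using col from pyspark.sql.functions is just another way to reference the columns), the simplified condition gives the same result:

from pyspark.sql import functions as F

# Same filter as above, written with the simplified (De Morgan) condition.
sdf.filter(
    (F.col("name_x") != F.col("name_y")) | (F.col("age_x") != F.col("age_y"))
).show()
# +------+-----+------+-----+
# |name_x|age_x|name_y|age_y|
# +------+-----+------+-----+
# |   CDF|   20|   CDF|   21|
# |   ABC|   21|   ABC|   19|
# +------+-----+------+-----+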

In general, shift can be expressed with Window functions.
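
For example (a sketch, not something you need for this particular problem), pandas' shift(1) roughly corresponds to lag over a Window, and shift(-1) to lead. Spark rows have no inherent order, so an explicit ordering column is needed; here a generated id column is assumed:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Add an ordering column, since Spark has no implicit row order
# (monotonically_increasing_id gives increasing, not necessarily consecutive, ids).
sdf_ord = sdf.withColumn("id", F.monotonically_increasing_id())

w = Window.orderBy("id")

# prev_age_x is the Spark counterpart of df['age_x'].shift(1)
sdf_ord.withColumn("prev_age_x", F.lag("age_x", 1).over(w)).show()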